On December 14, at its AI Everywhere event in New York, Intel unveiled the fifth generation of Xeon Scalable processors, aimed squarely at machine learning workloads. The new chips bring more cores, a larger cache, and a simplified chiplet layout, which Intel pitches as making them a strong choice for artificial intelligence applications. [source]
Intel claims a significant performance uplift for the new Xeons over the previous-generation Sapphire Rapids parts, particularly in AI: inference workloads are said to run up to 1.4 times faster. [source]
The new Emerald Rapids processors come with up to 64 cores and an enlarged 320 MB L3 cache. They also support faster DDR5 memory at up to 5600 MT/s, for a theoretical peak bandwidth of roughly 358 GB/s per socket. [source]
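For context, that peak bandwidth figure follows directly from the memory configuration, assuming the standard eight DDR5 channels per socket, each 64 bits (8 bytes) wide:

$$
8~\text{channels} \times 5600 \times 10^6~\tfrac{\text{transfers}}{\text{s}} \times 8~\tfrac{\text{bytes}}{\text{transfer}} \approx 358.4~\text{GB/s}
$$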
Intel's machine learning gains lean heavily on the Advanced Matrix Extensions (AMX) instructions, introduced with the previous generation and carried forward here. According to Intel, the new Xeons can serve machine learning models with up to 20 billion parameters at acceptable latency. [source]
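As an illustration only (not Intel sample code): at the framework level, AMX is usually reached implicitly rather than programmed by hand. A minimal sketch, assuming a recent PyTorch build whose oneDNN backend dispatches bfloat16 matrix multiplies to AMX tiles on Xeons that support them:

```python
# Minimal sketch: bfloat16 inference on CPU. On 4th/5th-gen Xeon,
# PyTorch's oneDNN backend routes bf16 matmuls to AMX tile instructions
# automatically; no AMX-specific code is needed at the Python level.
import torch
import torch.nn as nn

model = nn.Sequential(        # hypothetical stand-in for a real network
    nn.Linear(4096, 4096),
    nn.GELU(),
    nn.Linear(4096, 4096),
).eval()

x = torch.randn(8, 4096)

with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)              # matmuls run through AMX when available

print(y.dtype)                # torch.bfloat16
```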
Additionally, Intel emphasized the simplified package design: the new processors use fewer, larger compute tiles (two dies instead of the previous generation's four), which reduces energy consumption. [source]
However, Intel acknowledges that larger AI models, such as GPT-3 with its 175 billion parameters or GPT-4 with a rumored 1.76 trillion, may still require specialized AI accelerators. [source]
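Back-of-the-envelope weight math makes that cutoff plausible. A hedged sketch (weights only, at 2 bytes per parameter for bfloat16; it ignores activations, KV cache, and framework overhead, and the GPT-4 figure is a public rumor, not an Intel number):

```python
# Approximate weight footprint per model at bfloat16/fp16 precision.
BYTES_PER_PARAM = 2

for name, params in [("20B (Xeon target)", 20e9),
                     ("GPT-3", 175e9),
                     ("GPT-4 (rumored)", 1.76e12)]:
    gb = params * BYTES_PER_PARAM / 1e9
    print(f"{name:18} ~{gb:,.0f} GB of weights")
```

A 20-billion-parameter model needs on the order of 40 GB for its weights, which fits comfortably in a single socket's DRAM; at roughly 350 GB and 3.5 TB respectively, GPT-3-class and larger models are a different story.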
Looking ahead, Intel promises future processors with even more cores and support for faster memory, built on its Intel 3 process node: Granite Rapids and Sierra Forest, both expected in 2024. [source]
These latest announcements put Intel in direct competition with other manufacturers, including AMD and designers of Arm-based server processors, who are also actively working on new AI solutions. [source]