The rise of AI in recent years has pushed every hardware manufacturer to offer something, on a small or large scale, for that market. In the case of AMD and Intel we have examples and solutions of all kinds, but one of the most interesting for the future is AMD EPYC Turin for AI, a variant of its server CPU with additional units to speed up the execution of deep learning algorithms.
While NVIDIA has spent several years selling graphics cards and reaping the rewards of adapting them to AI, AMD's bet rests on the capabilities it gained with the purchase of Xilinx, which has expanded its portfolio of technologies that will be put to full use in the brand's future chips.
New details of AMD EPYC Turin for AI
The AIEs, or AI Engines, are chiplets that AMD intends to use in the AI-oriented variant of its future EPYC Turin server CPU, based on the Zen 5 architecture. While Intel has opted for AMX units inside the processor itself, of the same nature as NVIDIA's Tensor Cores, in combination with HBM memory, AMD's bet is different.
For Lisa Su's company, the bet is to place specialized accelerators on chiplets, with V-Cache memory stacked on top of each one of them. This means that deep and machine learning algorithms would be run by coprocessors optimized for this specific task.
Thanks to the latest leaks, we now know the raw compute we can expect from each AIE in EPYC Turin for AI and how many of them these processors could carry. According to MLID, the figure would fluctuate between 246 and 310 TOPS per AIE, so AMD could build configurable systems with up to 1700 TOPS of compute without needing a graphics card or any other type of accelerator capable of running these workloads.
To put this in perspective, the current NVIDIA RTX 4090 has a maximum capacity of 1321 TOPS, while the H100 reaches 1980 TOPS. So AMD's proposal does not fall short at all.
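As a rough sanity check on those figures (all of them rumored, not confirmed by AMD), the leaked per-AIE range and the 1700 TOPS aggregate imply a package with roughly six AIE chiplets:

```python
# Back-of-the-envelope check of the leaked figures. All values are
# rumors attributed to MLID, not official AMD specifications.
per_aie_low, per_aie_high = 246, 310   # TOPS per AIE chiplet (leak)
aggregate = 1700                       # TOPS for a full configuration (leak)

# Number of chiplets implied by the aggregate figure.
chiplets_min = aggregate / per_aie_high  # ~5.5
chiplets_max = aggregate / per_aie_low   # ~6.9
print(f"Implied chiplet count: {chiplets_min:.1f} to {chiplets_max:.1f}")

# A hypothetical six-chiplet package at 246-310 TOPS each would land at
# 1476-1860 TOPS, bracketing the leaked 1700 TOPS figure.
print(f"6 chiplets: {6 * per_aie_low} to {6 * per_aie_high} TOPS")
```

The chiplet count itself is our inference from the leaked numbers, not part of the leak.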
AIE technology comes from Xilinx
AIE technology comes from Xilinx, which AMD bought last year and recently rebranded as AMD Embedded. These units were first used in the Versal devices, and they are essentially classic neural processors: an array of cores interconnected in a matrix, in such a way that each core is pumped data and instructions by its neighboring cores and passes them on to its own neighbors in turn. Hence the name systolic, in honor of the circulatory system.
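To illustrate the neighbor-to-neighbor "pumping" idea, here is a minimal textbook-style simulation of a systolic array multiplying two matrices; this is a generic sketch of the technique, not AMD's or Xilinx's actual design:

```python
# Toy simulation of an output-stationary systolic array computing C = A @ B.
# Each processing element (PE) at grid position (i, j) only ever sees values
# arriving from its left neighbor (elements of A) and its top neighbor
# (elements of B); it multiplies them, accumulates, and forwards them on.

def systolic_matmul(A, B):
    n = len(A)
    acc = [[0] * n for _ in range(n)]     # accumulator held in each PE
    a_reg = [[0] * n for _ in range(n)]   # A value currently in PE (i, j)
    b_reg = [[0] * n for _ in range(n)]   # B value currently in PE (i, j)
    # Run enough cycles for the skewed input wavefront to cross the array.
    for t in range(3 * n - 2):
        # Sweep bottom-right to top-left so each PE reads its neighbors'
        # values from the previous cycle before they are overwritten.
        for i in reversed(range(n)):
            for j in reversed(range(n)):
                # Input from the left: a neighbor's register, or a skewed
                # element of row i of A at the array boundary (0 = no data).
                a_in = a_reg[i][j - 1] if j > 0 else (
                    A[i][t - i] if 0 <= t - i < n else 0)
                # Input from the top: likewise for column j of B.
                b_in = b_reg[i - 1][j] if i > 0 else (
                    B[t - j][j] if 0 <= t - j < n else 0)
                acc[i][j] += a_in * b_in
                a_reg[i][j], b_reg[i][j] = a_in, b_in
    return acc

print(systolic_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]
```

Note that no PE ever addresses memory or talks to a distant cell; data flows one hop per cycle, which is what makes the hardware layout so dense and regular.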
Each of Xilinx's Versal devices can carry configurations of tens or even hundreds of these cores on its die. However, the AIEs in AMD EPYC Turin for AI are an evolution adapted to work as coprocessors for a server CPU. We do not know more details yet, but it is striking that the bet of Lisa Su and her team is not to launch a specialized AI card or to reinforce their graphics cards, but to place the Xilinx AIE technology in a variant of their server CPUs.
AMD bets on AI in the CPU, but not in graphics cards
AMD's core business is microprocessors, and this is something we need to keep in mind. Although NVIDIA has shown the viability of graphics cards for running deep learning algorithms with excellent performance and efficiency, that is not the main business of Lisa Su's company. Its Radeons do not sell at the same level as its Ryzen and EPYC processors, so it is entirely natural for AMD to downplay AI on the GPU and bet on it in the CPU instead.
It is the same play Intel made with its AI-optimized Sapphire Rapids, and really the same concept: convince potential customers not to spend extra money on a graphics card for AI when that power is already offered by one of these powerful CPUs. In any case, EPYC Turin for AI is not AMD's mainstream server processor family, and only time will tell whether it is a good move by a company that has been absent from this discipline of computing for too long.