ISSCC 2023 continues to deliver, and once again it is AMD setting out what the CPUs of the future will look like. PCs and servers already rely on complex solutions, where the available physical space demands ever more area to accommodate all the cores and logic of AMD's chiplets. That means reinventing the approach in order to keep scaling in performance and transistors per square millimeter, and above all to improve efficiency and power consumption. With that in mind, Lisa Su has shown AMD's vision for the CPUs of the future: memory stacked vertically on top of the cores.
It is not a new concept, far from it. Intel introduced it years ago with Foveros 3D packaging, and now AMD has seen the opportunity to do the same in its own way. What we are about to see is the first step toward zettascale, where AMD already estimates that in just 12 years the world's top 500 servers will consume as much power as a small nuclear plant.
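To put that claim in perspective, here is a minimal back-of-the-envelope sketch in Python. The efficiency figures are hypothetical placeholders, not AMD's projections; they only show how compute efficiency (GFLOPS/W) maps to the total power draw of a zettascale machine.

```python
# Back-of-the-envelope: power draw of a zettascale (1e21 FLOPS) system
# at a given compute efficiency. The efficiency values below are
# illustrative placeholders, not figures from AMD's keynote.

ZETTAFLOPS = 1e21  # sustained floating-point operations per second

def power_megawatts(gflops_per_watt: float) -> float:
    """Total power in MW needed to sustain 1 zettaFLOPS."""
    flops_per_watt = gflops_per_watt * 1e9
    watts = ZETTAFLOPS / flops_per_watt
    return watts / 1e6

for eff in (100, 500, 2000):  # hypothetical GFLOPS/W efficiency points
    print(f"{eff:>5} GFLOPS/W -> {power_megawatts(eff):>8.0f} MW")
```

At the hypothetical 2,000 GFLOPS/W point the result lands around 500 MW, which is roughly the output of a small nuclear plant, which is why efficiency, not raw area, is the figure everyone is chasing.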
AMD and the CPUs of the Future: Memory Stacking on Cores
Moore’s Law, as scalability is understood today, is dead. It goes without saying that MCM architectures have changed everything, and keeping up with the doubling of transistors in a processor will now require something else, something that complicates everything considerably.
The main problem is that the slowdown in new nodes has changed the strategy of chipmakers, especially IBM, Intel and AMD (NVIDIA is lagging here). If you want to fit more transistors into the same space on the same node, you have two options: expand outward in area or upward in height.
As we may have noticed, sockets and CPUs keep getting bigger, and much of that space is reserved for the interposer, the logic that interconnects the dies. That approach is already reaching the limit of cost-effectiveness and physical space, both for PCs and servers. What remains is to build upward, and this is where the future will be played out: increasing height by stacking different dies.
But it will not only be cores that are stacked vertically; other chips can be stacked too, such as HBM memory, and that is exactly where AMD is working.
AMD 3D Hybrid Bond, the next step for chips
The goal is to reduce physical space, cut latency, and increase performance and bandwidth, but above all to improve efficiency in pJ/bit. For this reason, AMD is not only considering attaching HBM to its compute dies through an interposer, something it has been doing for several years with TSMC's 2.5D technology, but will also need a hybrid 3D approach, something we have already seen in other articles that it has been preparing with partners across the industry in a joint effort AMD now calls 3D Hybrid Bond.
We take this to be that concept carried over to AMD's chip design. The first objective, as mentioned, is efficiency. Without efficiency you cannot scale performance no matter how much physical space you have; vertical or horizontal, none of it matters if you cannot reduce the energy your chips consume.
As a result, 3D Hybrid Bond will be able to move data faster and far more efficiently. In fact, AMD already has a first key figure: an 85% reduction in the energy needed to access data. In other words, with this new vertical stacking technology, accessing or moving data will cost 85% less energy, a generational leap indeed.
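As a rough illustration of what pJ/bit efficiency means at the package level, the sketch below computes the power spent just on moving data at a given bandwidth, then applies the 85% cut AMD quotes. The baseline energy-per-bit value and the bandwidth are hypothetical, chosen only to make the arithmetic concrete.

```python
# Illustrative only: power spent on data movement at a given bandwidth,
# before and after an 85% cut in energy per bit. Baseline numbers are
# hypothetical, not measured figures from AMD.

def data_movement_watts(bandwidth_gbytes_s: float, energy_pj_per_bit: float) -> float:
    """Power (W) consumed moving data at the given bandwidth and energy cost."""
    bits_per_second = bandwidth_gbytes_s * 1e9 * 8
    return bits_per_second * energy_pj_per_bit * 1e-12

baseline_pj_per_bit = 2.0                               # hypothetical 2.5D-interposer figure
stacked_pj_per_bit = baseline_pj_per_bit * (1 - 0.85)   # the quoted 85% reduction
bandwidth = 1000.0                                      # GB/s, hypothetical memory link

print(f"baseline: {data_movement_watts(bandwidth, baseline_pj_per_bit):.2f} W")
print(f"3D bond : {data_movement_watts(bandwidth, stacked_pj_per_bit):.2f} W")
```

With these placeholder numbers the same 1 TB/s link drops from 16 W to 2.4 W of data-movement power, which is the kind of saving that frees up budget for more cores or higher clocks.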
Different types of packaging in the same CPU
The concept is easy to understand, but things get complicated when we look at the diagram of what a CPU will be in the near future. The first thing to understand is that we will have optical interconnect units, what AMD calls optical transceivers. These are in charge of connecting the CPU to other high-speed units, reducing latency and the energy required, all through the UCIe chip-to-chip interface.
We will have heterogeneous cores as we do now, but packaged in different dies attached to different types of memory. These in turn will carry dedicated accelerators, most likely for AI, and all of it will rely on different packaging to come together. Here 2D, 2.5D and 3D technologies will be combined across different interposers. Complexity will therefore increase exponentially, because it is no longer just a matter of logically connecting different dies as today; the current 3D V-Cache is only the first step toward an unprecedented number of vertical layers.
To get an idea, and since Intel is further ahead in this kind of vertical interconnection, the most advanced design it has shown so far uses 4 layers: two of DRAM, a substrate with CPU and GPU, and a lower floor holding the cache and the I/O, all with their different interposers that have to be connected.
AMD’s approach does not appear to be that complex in form, but it is layered. The impression is that everything vertical will be layers of memory joined by TSVs, while the bottom layer will hold all the logic elements: CPU chiplets, caches, GPU, I/O, etc. We will see in less than 12 years where this finally lands, but looking back the same 12 years, we would not have imagined vertical stacking as a technology for gaming and servers; imagine what is to come with AMD 3D Hybrid Bond and Intel Foveros 3D.
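To make the layering easier to picture, here is a toy Python model of the kind of stack described above: memory layers on top, joined to a base logic layer carrying the chiplets, caches, GPU and I/O. The layer names and counts are illustrative assumptions, not a disclosed AMD or Intel floorplan.

```python
# Toy model of a vertically stacked package as described above:
# memory layers on top, a single logic base layer underneath.
# Layer names and counts are illustrative, not an actual design.

from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str
    kind: str                     # "memory" or "logic"
    blocks: list[str] = field(default_factory=list)

stack = [
    Layer("DRAM layer 2", "memory", ["HBM dies"]),
    Layer("DRAM layer 1", "memory", ["HBM dies"]),
    Layer("Base logic layer", "logic", ["CPU chiplets", "caches", "GPU", "I/O"]),
]

# Print the stack top-down, the way a cross-section diagram would show it.
for level, layer in enumerate(stack):
    print(f"level {len(stack) - level}: {layer.name} ({layer.kind}) -> {', '.join(layer.blocks)}")
```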