Over 13 years ago, Intel Chairman Gordon Moore announced to the world at the Intel Developer Forum (IDF) that the law famously attributed to him — that computing power doubles every 18 months — would “get to some finite limits” in five generations. In the fall of 1997 the world was transitioning from .35 micron to .25 micron. I held my pen over million-dollar cheques to IBM that year, one for a chip in each process. Then came .18 micron, .13 micron, 90 nanometre (nm), 65nm and 45nm, and now, at 32nm, we are fully five generations of technology past his prediction.
Doubling the number of transistors by shrinking features, while continuing to increase clock rates as those transistors pack closer together, also increases power consumption — which, Moore noted, would generate untenable amounts of heat. It did. And it does.
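To see why, consider the textbook approximation for CMOS dynamic switching power, P ≈ αCV²f (activity factor, switched capacitance, supply voltage, clock frequency). The sketch below uses round, purely illustrative numbers — not figures for any real chip — to show how doubling transistor count (roughly doubling switched capacitance) while also raising the clock compounds into far more than double the power:

```python
# Hedged illustration of CMOS dynamic switching power, P ~ alpha * C * V^2 * f.
# All values below are illustrative round numbers, not measurements.

def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Approximate dynamic switching power in watts."""
    return alpha * c_farads * v_volts ** 2 * f_hz

# Baseline: 100 nF effective switched capacitance, 1.2 V supply, 2 GHz clock.
base = dynamic_power(0.2, 100e-9, 1.2, 2e9)

# Next generation: double the transistors (roughly double the switched
# capacitance) and raise the clock 30%, without lowering the voltage.
scaled = dynamic_power(0.2, 200e-9, 1.2, 2.6e9)

print(f"baseline: {base:.1f} W, next generation: {scaled:.1f} W")
# Power grows by 2 x 1.3 = 2.6x — and the die area it must escape
# from has shrunk, so power *density* grows even faster.
```

Historically, voltage scaling (Dennard scaling) offset much of this growth; once supply voltages stopped falling, the V² term no longer came to the rescue, and the heat problem Moore warned about arrived.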
Which is why we have slammed into the Power Wall.
Even laymen have noticed that chips aren’t getting faster any more. Instead there is a huge sucking noise as Intel, AMD and others absorb everything they can find into their systems-on-a-chip, soaking up the real estate freed as we move to smaller geometries.
Incorporating the GPU, memory and I/O into a single package at smaller geometries still increases both power density and total power. Every additional watt of power fed into a chip must be removed as waste heat after its job is done, so advanced technologies to remove heat are becoming highly valued. Just as Henry Ford began mass-producing liquid-cooled cars against great scepticism with the Model T in 1908, a century later we’re seeing liquid-cooled computers enter mass production in the data centre.
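A back-of-envelope calculation shows why liquid wins. The coolant mass flow needed to carry away P watts at a temperature rise of ΔT is ṁ = P / (c·ΔT), where c is the coolant’s specific heat. The sketch below uses standard textbook property values and an illustrative 300 W heat load (the 10 K temperature rise is also an assumption, not a spec of any product):

```python
# Hedged back-of-envelope: coolant flow required to remove a given heat load.
# mass_flow = power / (specific_heat * delta_T); property values are
# standard textbook figures, and 300 W / 10 K are illustrative choices.

def volumetric_flow_l_per_min(power_w, cp_j_per_kg_k, density_kg_m3, dt_k):
    """Volumetric coolant flow, in litres per minute, to carry power_w away."""
    mass_flow_kg_s = power_w / (cp_j_per_kg_k * dt_k)
    return mass_flow_kg_s / density_kg_m3 * 1000 * 60

# Water: c = 4186 J/(kg*K), density = 1000 kg/m^3
water = volumetric_flow_l_per_min(300, 4186, 1000, 10)

# Air: c = 1005 J/(kg*K), density = 1.2 kg/m^3
air = volumetric_flow_l_per_min(300, 1005, 1.2, 10)

print(f"water: {water:.2f} L/min, air: {air:.0f} L/min")
# Water needs well under a litre per minute; air needs on the order of
# a thousand litres per minute for the same load and temperature rise.
```

The roughly three-orders-of-magnitude gap in required volume is why data-centre racks that once relied on ever-larger fans are turning to pumped liquid.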
All of this means that chip design has become first and foremost about balancing a complete system within power consumption and heat dissipation constraints — or working against “the Power Wall”.