No less than 400 watts: a figure that worries many users, since it implies a power supply of a wattage and quality they may not be able to afford. According to the rumors, this is the scenario we will see in less than a year, so we have to dig into everything we know today and try to put it down in black and white. Why doesn't power consumption drop when a new node arrives?
TSMC’s 5nm node is not as good as they say, so Intel has a chance.
The truth is that AMD and NVIDIA are going to bet everything on TSMC's new 5nm node thanks to the improvements this lithographic process brings, now with EUV under its belt. Logic would suggest that consumption should return to the usual 250-280 watts per card, but reality is going to show the opposite.
There are two main reasons that explain both this and, to a greater or lesser extent, what is coming over the next few years. Intel, TSMC and Samsung alike are focusing on specific areas of their nodes because of the move from FinFET to GAA, which is proving to be a challenge; as a result, improvements are landing in areas where efficiency is not the first priority.
GPU Power Consumption at 5nm: a Better Node Does Not Mean Lower Power Consumption
To be specific, the big three are struggling with two issues:
- The new lithographic processes are not designed to reduce power consumption, or even to keep it in step with the area savings achieved. The density gained by shrinking transistors does not scale equivalently with power per mm². The problem here is that both AMD and NVIDIA need to manufacture chips without so-called "dark silicon", that is, without unusable parts of the die, so they need every mm² right to the edge of each die to pack in the largest possible number of transistors.
- And here comes problem number two: the competition between AMD and NVIDIA. Unlike in other generations, AMD is now a real rival to NVIDIA, so neither company is going to keep anything up its sleeve; both will push everything available in their new architectures to the limit. Such a sharp hardware jump (roughly 70-80% more transistors) is a challenge in terms of power consumption, because frequencies are bound to stay the same or even rise. What results is the best that both companies can technically offer, which, added to point one, means a huge performance increase over previous years, but also higher power consumption.
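The interaction between the two points above can be sketched with the classic first-order dynamic-power relation, P ∝ C·V²·f, using transistor count as a crude proxy for switched capacitance. All the figures below are illustrative assumptions for the sake of the argument, not vendor data:

```python
# First-order dynamic-power scaling sketch: P ~ C * V^2 * f.
# Transistor count stands in (crudely) for switched capacitance.
# Every number here is an illustrative assumption, not measured data.

def scaled_power(p_old_w, transistor_ratio, voltage_ratio, freq_ratio):
    """Estimate new dynamic power from generation-to-generation ratios."""
    return p_old_w * transistor_ratio * voltage_ratio**2 * freq_ratio

# Hypothetical case: a 300 W card, ~75% more transistors (1.75x),
# only a modest voltage reduction from the node (0.95x), same clocks (1.0x).
new_power = scaled_power(300, 1.75, 0.95, 1.0)
print(round(new_power))  # 474
```

Even with a small voltage gain from the node, the jump in transistor count dominates, which is why a better process alone does not bring consumption back down.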
So, both Navi 31 and AD102 are going to change the paradigm of the worldwide GPU market once again. The problem for the user (price and availability aside) is going to be the PSU requirements of these monsters, which are set to hit a 480-watt ceiling. Will we need 1000 or 1200 watt power supplies to keep them fed? Will the cards once again run into problems with fast, aggressive power spikes because PSUs aren't ready for them?
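A back-of-the-envelope sketch of why those 1000-1200 watt figures come up: recent high-end cards have shown brief transient spikes well above their rated board power, and a PSU has to absorb them without tripping its protections. The 2x transient multiplier and the 200 W budget for the rest of the system are assumptions for illustration, not an official sizing rule:

```python
# Hedged PSU-sizing sketch. The transient multiplier and the
# rest-of-system budget are assumptions, not vendor guidance.

def recommended_psu_watts(gpu_board_power_w, rest_of_system_w=200,
                          transient_multiplier=2.0):
    """Size a PSU so brief GPU power spikes (modeled here as roughly
    2x board power) plus the rest of the system stay within capacity."""
    return round(gpu_board_power_w * transient_multiplier + rest_of_system_w)

print(recommended_psu_watts(480))  # 1160
```

Under these assumptions, a hypothetical 480 W card lands squarely in the 1000-1200 watt PSU range the rumors point to.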