Plenty of sources explain the performance per watt of a computer. However, I wanted to investigate how accelerated computing components (notably GPUs) have become more efficient at a lower price over the years. I have therefore defined a metric, performance per watt per price-unit, and plotted it by launch date and launch price.
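The metric itself is just a ratio of ratios. As a minimal sketch, with hypothetical example numbers rather than values from the actual data set:

```python
# Sketch of the metric: performance per watt per price-unit.
# GFLOPS are single precision; prices are launch prices in euro.
# The example values below are illustrative, not real data points.

def perf_per_watt_per_euro(gflops, tdp_watts, launch_price_eur):
    """GFLOPS / W / EUR: higher means more efficient per euro spent."""
    return gflops / tdp_watts / launch_price_eur

older_card = perf_per_watt_per_euro(gflops=500, tdp_watts=150, launch_price_eur=250)
newer_card = perf_per_watt_per_euro(gflops=4000, tdp_watts=250, launch_price_eur=900)

print(f"older: {older_card:.4f} GFLOPS/W/EUR")
print(f"newer: {newer_card:.4f} GFLOPS/W/EUR")
```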
The results are as follows:
Notes:
- GFLOPS are single precision
- Prices are in euro, approximately as they were at launch date; where no launch price is known, it has not been proxied with a current price, and no plot point is shown
- Data is taken mostly from the Wikipedia article "Comparison of Nvidia graphics processing units"; collection starts from the advent of multi-core GPU architectures. Nvidia was chosen because of our professional interest in deploying CUDA; this does not constitute an endorsement.
- Data is on retail components as opposed to OEM components.
- Launch prices are often artificially high, because the new feature set appeals to enthusiasts, who are also the first movers. The price erosion over time (roughly 10% per year) has not been taken into account.
- The last two plot points are the GeForce GTX Titan and GeForce GTX 650 Ti Boost.
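The price erosion mentioned above, had it been applied, would compound annually. A minimal sketch of that assumption (the 10% figure is the loose estimate from the notes, not a measured rate):

```python
# Sketch of the ~10%-per-year price erosion noted above.
# This adjustment is NOT applied in the plots; it is shown here
# only to illustrate what the assumption would mean for a launch price.

def depreciated_price(launch_price_eur, years, annual_drop=0.10):
    """Price after compounding an annual percentage drop for a number of years."""
    return launch_price_eur * (1 - annual_drop) ** years

# e.g. a 500 EUR launch price after 3 years of ~10% annual erosion:
print(round(depreciated_price(500, 3), 2))  # → 364.5
```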
So while performance per watt has increased more than 5-fold over the observed period, performance per watt per price-unit has not kept pace: it increased almost 4-fold.
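A rough back-of-the-envelope check, using the round figures above: since performance per watt per price-unit is just performance per watt divided by price, the gap between the two growth factors implies how much the price level itself moved.

```python
# Back-of-the-envelope: (GFLOPS/W/EUR) = (GFLOPS/W) / EUR, so the
# implied price growth is the ratio of the two growth factors.
# Round figures from the text, not exact measurements:
perf_per_watt_growth = 5.0
perf_per_watt_per_eur_growth = 4.0

implied_price_growth = perf_per_watt_growth / perf_per_watt_per_eur_growth
print(implied_price_growth)  # → 1.25, i.e. prices roughly 25% higher
```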
In fact, there is even an inverse relationship between the number of cores and the performance metric:
Perhaps the high-end cards do not drop in price as much, maintaining their launch price level to finance the development of the lower-end cards.