NVIDIA GeForce GTX 580 GF110 Fermi Video Card Review
Tue, Nov 09, 2010 - 12:00 AM
A Fixed Fermi Core Called GF110 Goes Into GTX580!
This morning NVIDIA finally introduced the GeForce GTX 580 video card after many weeks of rumors and information leaks. It has only been seven months since NVIDIA released the GeForce GTX 480, but that card is officially no longer the flagship GPU in the NVIDIA arsenal. Priced at $499, the GeForce GTX 580 carries the same suggested retail price the GeForce GTX 480 launched at, but it has higher clock speeds and all 512 CUDA cores enabled thanks to a new GPU die revision.
NVIDIA has essentially gone back to the drawing board with their Fermi GF100 GPU and re-engineered it at the transistor level. Through this redesign the company was able to raise clock speeds while lowering power usage, since they could see what worked well and what didn't work at all in the original GF100. The result of their efforts is called the GF110, and that is what we will be taking a closer look at today. The GPU is still made by TSMC on their 40nm process technology, but you'll see it is a whole new beast!
What is new with the GF110 on the GeForce GTX 580? If you look at the block diagram above you won't see any changes, as NVIDIA did not issue a new diagram since the original GF100 die shot that you can see above. NVIDIA told us that the GF110 has roughly 3.05 billion transistors, that the GF100 has 3.01 billion transistors, and that the overall die size remains the same. You might be wondering how this could be true since the GeForce GTX 580 has more CUDA cores, texture units and SMs than the GeForce GTX 480. The simple answer is that NVIDIA had to disable part of the GF100 GPU core in order to launch the GTX 480 in a somewhat timely manner.

While NVIDIA was 'under the hood' making performance and power improvements, they did add two new elements that are worth pointing out. The first is that the GeForce GTX 580 now supports full-speed FP16 texture filtering, which NVIDIA said will help performance in certain texture-heavy applications. Second, the GeForce GTX 580 also supports new tile formats that improve Z-cull efficiency.
NVIDIA said that these clock-for-clock enhancements increase the GTX 580's performance versus the GTX 480 by anywhere from 5-14% depending on the benchmark.
The thing is, the GeForce GTX 580 doesn't run at the same clock speeds; it runs faster across the board! The GeForce GTX 480 had a GPU clock of 700MHz, a Stream Processor clock of 1401MHz, and a 1536MB frame buffer of Samsung GDDR5 memory clocked at 924MHz, for an effective data rate of 3696MHz. The GeForce GTX 580 has a GPU core clock of 772MHz, a Stream Processor clock of 1544MHz, and its 1536MB of GDDR5 memory now runs at 1002MHz, for an effective data rate of 4008MHz.

Looking at the picture above you can see the GeForce GTX 480 sitting below the new GeForce GTX 580. Visually the two cards are very different, and this is thanks to the improvements on the GF110 Fermi core. The GeForce GTX 580 runs cooler and uses less power despite running a core clock that is 10.3% faster than the GTX 480's!
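As a quick sanity check on the figures above, the effective data rates follow from GDDR5's quad data rate (four transfers per memory-clock cycle), and the core-clock uplift is simple percentage math. This is only an illustration of the arithmetic, not anything from NVIDIA:

```python
# GDDR5 transfers four data words per memory-clock cycle, so the
# effective data rate is the memory clock multiplied by 4.
def effective_gddr5_rate(mem_clock_mhz: float) -> float:
    return mem_clock_mhz * 4

print(effective_gddr5_rate(924))    # 3696 (MHz) - matches the GTX 480 spec
print(effective_gddr5_rate(1002))   # 4008 (MHz) - matches the GTX 580 spec

# Core-clock uplift from 700MHz (GTX 480) to 772MHz (GTX 580):
uplift_pct = (772 - 700) / 700 * 100
print(round(uplift_pct, 1))         # 10.3 (percent)
```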
Many of our readers love specification charts, and here is a good one that NVIDIA provided showing what the GTX 580 has to offer. The Thermal Design Power went from 250 Watts on the GTX 480 down to 244 Watts on the GTX 580.
One of the most interesting new features on the GeForce GTX 580 is its new power monitoring hardware. Beginning with the GTX 580, we are told NVIDIA will include dedicated hardware circuitry on the graphics card that performs real-time monitoring of current and voltage on each 12V rail (the 6-pin connector, the 8-pin connector, and the PCI Express slot itself). Code in the video card driver monitors these power levels and dynamically adjusts performance in certain stress applications, such as FurMark or OCCT, to keep the board from drawing excessive power. NVIDIA will now have to keep its drivers updated to track how certain games and applications stress the GPU.
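The mechanism described above can be sketched in pseudocode terms. NVIDIA has not published the actual driver logic, so every name, threshold, and structure here is hypothetical; the sketch only illustrates the idea of per-rail monitoring plus an application watch list:

```python
# Hypothetical sketch of the GTX 580's driver-side power capping.
# The real implementation is not public; names and values are invented
# for illustration. The board's 244W TDP is taken from the spec chart.
BOARD_POWER_LIMIT_W = 244
STRESS_APPS = {"furmark.exe", "occt.exe"}  # hypothetical watch list

def total_power(rails: dict) -> float:
    """Sum power in watts over all 12V rails, given (volts, amps) per rail."""
    return sum(volts * amps for volts, amps in rails.values())

def should_throttle(rails: dict, running_app: str) -> bool:
    """Throttle only when a known stress app pushes draw past the limit."""
    return (running_app.lower() in STRESS_APPS
            and total_power(rails) > BOARD_POWER_LIMIT_W)

# Example reading: (volts, amps) for each 12V input on the card
rails = {"6-pin": (12.0, 6.0), "8-pin": (12.0, 12.5), "pcie-slot": (12.0, 4.5)}
print(total_power(rails))                     # 276.0 watts total draw
print(should_throttle(rails, "FurMark.exe"))  # True: 276W exceeds the 244W cap
print(should_throttle(rails, "game.exe"))     # False: not on the watch list
```

Note the design implication the article raises: because throttling keys off recognized applications rather than power draw alone, the driver's watch list has to be kept up to date as new stress tools appear.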