A Fixed Fermi Core Called GF110 Goes Into The GTX 580!

GeForce GTX 580 Video Card

This morning NVIDIA finally introduced the GeForce GTX 580 video card after many weeks of rumors and information leaks. It has only been seven months since NVIDIA released the GeForce GTX 480, but that card is officially no longer the flagship GPU in the NVIDIA arsenal. Priced at $499, the GeForce GTX 580 comes in at the same suggested retail price as the GeForce GTX 480 did, but it has higher clock speeds and all 512 CUDA cores enabled thanks to a new GPU die revision.

GeForce GTX 580 Video Card Features

NVIDIA has essentially gone back to the drawing board with the Fermi GF100 GPU and re-engineered it at the transistor level. Through this redesign the company was able to turn up the clock speeds while lowering power usage, since they could see what worked well and what didn't in the original GF100 GPU. The result of their efforts is called the GF110, and that is what we will be taking a closer look at today. The GPU is still made by TSMC on their 40nm process technology, but you'll see it is a whole new beast!

GeForce GF110 GPU Die Shot

What is new with the GF110 on the GeForce GTX 580? If you look at the block diagram above you won't see any changes, as NVIDIA did not issue a new diagram; it is the same one used for the original GF100. NVIDIA told us that the GF110 has roughly 3.05 billion transistors versus 3.01 billion on the GF100, and the overall die size remains the same. You might be wondering how this can be true since the GeForce GTX 580 has more CUDA cores, texture units and SMs than the GeForce GTX 480. The simple answer is that NVIDIA had to disable part of the GF100 GPU core in order to launch the GTX 480 in a somewhat timely manner. While NVIDIA was 'under the hood' making performance and power improvements, the company added two new elements that are worth pointing out. The first is that the GeForce GTX 580 now supports full-speed FP16 texture filtering, which NVIDIA says will help performance in certain texture-heavy applications. Second, the GeForce GTX 580 supports new tile formats that improve Z-cull efficiency.

GeForce GTX 580 Video Card Differences

NVIDIA said that these clock-for-clock enhancements increase the GTX 580's performance versus the GTX 480 by anywhere from 5-14% depending on the benchmark.

GeForce GF110 GPU Die Shot

The thing is, the GeForce GTX 580 doesn't run at the same clock speeds; it runs higher! The GeForce GTX 480 had a GPU clock of 700MHz, a Stream Processor clock of 1401MHz, and its 1536MB frame buffer of Samsung GDDR5 memory was clocked at 924MHz, for an effective data rate of 3696MHz. The GeForce GTX 580 has a GPU core clock of 772MHz, a Stream Processor clock of 1544MHz, and its 1536MB of GDDR5 memory now runs at 1002MHz, for an effective data rate of 4008MHz. Looking at the picture above you can see the GeForce GTX 480 sitting below the new GeForce GTX 580. Visually the two cards look very different, and that is thanks to the improvements on the GF110 Fermi core. The GeForce GTX 580 runs cooler and uses less power despite running a core clock that is 10.3% faster than the GTX 480!
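If you want to check the math on those numbers, GDDR5 is quad-pumped, so the effective data rate is simply the memory clock times four, and the core clock delta falls out the same way. A quick sketch of our own arithmetic:

# Checking the clock and data-rate math above (our arithmetic).
gtx480 = {"core_mhz": 700, "mem_mhz": 924}
gtx580 = {"core_mhz": 772, "mem_mhz": 1002}

# GDDR5 moves four data words per memory clock cycle.
for name, card in (("GTX 480", gtx480), ("GTX 580", gtx580)):
    print(f"{name} effective data rate: {card['mem_mhz'] * 4} MHz")

core_gain = (gtx580["core_mhz"] / gtx480["core_mhz"] - 1) * 100
print(f"Core clock increase: {core_gain:.1f}%")  # ~10.3%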

GeForce GTX 580 Video Card

Many of our readers love specification charts, and here is a good one from NVIDIA that shows what the GTX 580 has to offer. The Thermal Design Power went from 250 Watts on the GTX 480 down to 244 Watts on the GTX 580.

GeForce GTX 580 Video Card Hardware Monitor

One of the most interesting new features on the GeForce GTX 580 is the new power monitoring hardware. Beginning with the GTX 580, we are told NVIDIA will include dedicated circuitry on the graphics card that performs real-time monitoring of current and voltage on each 12V rail (the 6-pin and 8-pin connectors and the PCI Express slot itself). Code in the video card driver monitors these power levels and dynamically adjusts performance in certain stress applications, such as FurMark or OCCT, to keep the board from drawing excessive power. NVIDIA will now have to keep drivers updated and track how certain games or applications stress the GPU.
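NVIDIA hasn't disclosed exactly how the driver makes these adjustments, but conceptually it is a simple feedback loop: read current and voltage on each 12V input, compute board power, and throttle when the total exceeds the board's limit. Here is a minimal sketch of that idea in Python; the logic is ours, not NVIDIA's, and every rail reading, step size and clamp below is an invented value for illustration only:

# Illustrative sketch of driver-side power capping -- NOT NVIDIA's actual
# algorithm. The rail readings, throttle step and clamp are invented values.

BOARD_POWER_LIMIT_W = 244  # GTX 580 board power from NVIDIA's spec chart

def total_board_power(rails):
    """rails: list of (volts, amps) readings for the 6-pin, 8-pin and PCIe slot."""
    return sum(volts * amps for volts, amps in rails)

def adjust_performance(perf_level, rails, step=0.05):
    """Throttle while over the cap; recover toward full speed when under it."""
    if total_board_power(rails) > BOARD_POWER_LIMIT_W:
        return max(0.5, perf_level - step)  # never stall the GPU completely
    return min(1.0, perf_level + step)

# Example: a FurMark-style load pulling ~276W across the three 12V inputs.
rails = [(12.0, 8.0), (12.0, 10.5), (12.0, 4.5)]
level = 1.0
for _ in range(5):
    level = adjust_performance(level, rails)
print(f"Performance level after throttling: {level:.2f}")  # 0.75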

A Closer Look At The GeForce GTX 580

Say goodbye to the old GeForce GTX 480 video card!

NVIDIA GeForce GTX 480 Video Card

And hello to the new GeForce GTX 580 video card that is said to be better in every possible way!

NVIDIA GeForce GTX 580 Video Card

The GeForce GTX 580 graphics card that we have on the test bench today is a dual-slot single GPU video card that measures in at 10.5" in length. The first thing you'll notice with the GeForce GTX 580 is that the 'front radiator' or 'George Foreman cooking grill' is gone as the card sports just a plain plastic fan shroud.

NVIDIA GeForce GTX 580 Video Card Back

Flipping the GeForce GTX 580 over we don't find too many interesting things, but we did notice that the PCB holes found on the GeForce GTX 480 are missing. We are told this card's GF110 core runs more efficiently and that the vapor chamber cooler is better, so maybe the cutout isn't needed. We asked NVIDIA why it is gone and got this response:

"We chose not to implement the PCB cutout on GTX 580 for a variety of reasons which I can’t disclose." - NVIDIA PR

NVIDIA GeForce GTX 580 Video Card DVI and HDMI

The NVIDIA GeForce GTX 580 GDDR5 graphics card has a pair of dual-link DVI-I outputs along with a mini-HDMI output header. Both the dual-link DVI and HDMI outputs can be used to send high-definition video to an HDTV over a single cable (audio too, if running HDMI). A regular-sized HDMI header was not used since it wouldn't fit next to the pair of DVI outputs.

NVIDIA GeForce GTX 580 Video Card PCIe Power

The NVIDIA GeForce GTX 580 video card requires a 600 Watt or greater power supply, as the card has a max board power (TDP) of 244 Watts. NVIDIA also suggests that your power supply have a minimum of 42 Amps on the +12V rail. It also requires that the power supply have one 6-pin PCI Express power connector and one 8-pin PCI Express power connector for proper connection. It should be noted that the NVIDIA minimum system power requirement is based on a PC configured with an Intel Core i7 3.2GHz CPU.
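To put that amperage figure in perspective, 42 Amps at 12 Volts works out to roughly 500 Watts of +12V capacity, which is why the 600 Watt overall rating leaves sensible headroom once the rest of the system is accounted for. A quick check of our own:

# What NVIDIA's +12V amperage requirement works out to in watts (our math).
min_amps_12v = 42
rail_voltage = 12
print(f"Required +12V capacity: {min_amps_12v * rail_voltage} W")  # 504 W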

NVIDIA GeForce GTX 580 Video Card SLI Connector

The NVIDIA GeForce GTX 580 does support SLI, and the card has a pair of SLI connectors located along the top edge. The GTX 580 series supports two-way, three-way, and quad SLI configurations.

NVIDIA GeForce GTX 580 Video Card Vapor Chamber Cooling

The next thing that needs some explaining on the GeForce GTX 580 graphics card is the cooling solution. This card still runs warm, so NVIDIA has moved to vapor chamber technology. This is nothing new to the PC industry, as other companies like AMD have used vapor chambers for a number of years now.

NVIDIA GeForce GTX 580 Video Card Vapor Chamber Cooling

NVIDIA states that the GeForce GTX 580 runs at just 47dB, and the company claims that is nearly half the perceived noise level of the GeForce GTX 480 (53dB). That also makes it quieter than the GeForce GTX 280 and the GTX 285 from a couple of generations ago! Let's take a closer look at the GTX 580 video card by taking it apart!
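Before we do, one quick aside on that 'nearly half' claim: a common psychoacoustics rule of thumb says a 10dB drop halves perceived loudness, so the 6dB drop from 53dB to 47dB is closer to a one-third reduction by that rule. Still a big improvement; here is the rough math:

# Rough perceived-loudness math using the ~10dB-per-halving rule of thumb.
gtx480_db = 53
gtx580_db = 47
loudness_ratio = 2 ** ((gtx580_db - gtx480_db) / 10)
print(f"GTX 580 perceived loudness vs. GTX 480: {loudness_ratio:.2f}x")  # ~0.66x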

Taking The GeForce GTX 580 Apart

NVIDIA GeForce GTX 580 Video Card

Since this is the first time I have actually seen a GeForce GTX 580 production card in person I had to take it apart to see what parts are being used. Okay, I really wanted to just see the GF110 GPU core and see what brand GDDR5 memory ICs were being used, but since I had to take it all the way apart I might as well walk you through the whole process. The front of the GeForce GTX 580 graphics card consists of a glossy black plastic shroud that is screwed onto a metal assembly.

NVIDIA GeForce GTX 580 Video Card

The fan shroud on the NVIDIA GeForce GTX 480 just clipped on, but on the GeForce GTX 580 it is held down by six screws with thread-locking compound on them. Once the screws are out you can gently lift the cover off. This is as far as most people will ever need to go, as you can easily use compressed air to blow the dust out of the fins if needed.

NVIDIA GeForce GTX 580 Video Card

Here you can see how the airflow exits the card; with the fan shroud on, the majority of the hot air from the GPU is exhausted out the rear of the video card. This is very different from the GeForce GTX 480, which had a radiator plate that left much of the heat inside the PC case.

NVIDIA GeForce GTX 580 Video Card

Next you can remove the four larger Phillips head spring screws that attach the heatsink to the video card. With the heatsink removed you can see the core for the very first time! If you ever want to change out the thermal paste, this is as far as you need to go.

NVIDIA GeForce GTX 580 Video Card

Here is a better look at the vapor chamber cooler that NVIDIA is using on the GeForce GTX 580 to help keep it nice and cool. You can see the copper bottom; this is the part that contains the liquid that boils off to help cool the GPU.

NVIDIA GeForce GTX 580 Video Card

NVIDIA didn't machine the entire base flat, as our base was only partly machined (see a higher resolution picture here). NVIDIA said that they have set tolerances for the base and that our HSF base finish is within specifications. We have a feeling that enthusiasts and overclockers will find that lapping this heatsink is an easy way to improve performance, as ours was not perfectly flat by any means.

NVIDIA GeForce GTX 580 Video Card

The next step is removing all 18 of the small screws that hold on the aluminum frame! Once all of these screws are out you can pull off the aluminum frame, which helps cool the board's components and doubles as the fan mount. Notice the thermal pads on the frame that help cool the voltage regulators and the GDDR5 memory ICs.

NVIDIA GeForce GTX 580 Video Card

After the aluminum frame is removed you can take out the three Phillips screws that hold the squirrel-cage fan down. Flipping it over, we expected to see a brand name and model number like we did with the GTX 480, but instead we found a foam pad to help keep noise levels down. As you can see, NVIDIA has really tried to make the GTX 580 quieter.

NVIDIA GeForce GTX 580 Video Card

Now that the card is stripped down and pretty much bare, we can take a good look at the heart of the GeForce GTX 580: the GF110 core shown above. The core is labeled GF110-375-A1, which we are guessing is the A1 stepping of the GF110 silicon. This GF110 GPU has all 512 cores enabled, whereas the GF100 used on the GeForce GTX 480 only had 480 enabled for various reasons.

NVIDIA GeForce GTX 580 Video Card

Here is the picture that our readers always request to give you an idea of the core size next to something most people are familiar with, the United States quarter!

NVIDIA GeForce GTX 580 Video Card Samsung GDDR5 ICs

Located around the GF110 GPU core are twelve Samsung GDDR5 memory chips. We were shocked to find that they have the same exact part numbers as the chips on our NVIDIA GeForce GTX 480 reference card. The part number on the memory ICs is "K4G10325FE-HC04", which means these are 0.40ns parts speed rated at 5Gbps, according to Samsung.
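Worth noting: at 1002MHz these 5Gbps-rated chips are only running at about 4Gbps per pin, so there is rated headroom left in the memory. The clocks also let you verify the card's peak memory bandwidth, since bandwidth is just the effective data rate times the bus width; a quick check of our own against NVIDIA's spec chart:

# Peak memory bandwidth from the clocks above (our math). NVIDIA's official
# figure for the GTX 580 is 192.4 GB/s, so the numbers line up.
mem_clock_mhz = 1002                  # GDDR5 base clock on the GTX 580
effective_mts = mem_clock_mhz * 4     # GDDR5 moves four transfers per clock
bus_width_bits = 384                  # from NVIDIA's spec chart
bandwidth_gbs = effective_mts * 1e6 * bus_width_bits / 8 / 1e9
print(f"Peak memory bandwidth: {bandwidth_gbs:.1f} GB/s")  # 192.4 GB/s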

NVIDIA GeForce GTX 480 Video Card

Here you can see the NVIDIA GeForce GTX 480 graphics card on top and the GeForce GTX 580 graphics card on the bottom to get an idea of how they differ. As you can see, both cards use nearly identical 10.5" Printed Circuit Boards (PCBs) and have a layout that is shockingly similar if you ask me. The GeForce GTX 580 doesn't have the solid-state capacitors located around the power phases, and it lacks the vent holes and the jumper underneath the power connectors. Other than that and a brand change on the DVI connectors, the cards are nearly identical!

The Test System

All testing was done on a fresh install of Windows 7 Ultimate 64-bit with all the latest updates installed. All benchmarks were completed on the desktop with no other software programs running. The Kingston HyperX T1 DDR3 memory modules were run in triple-channel mode at 1866MHz with 8-8-8-24 1T timings. The ATI Radeon HD 5000 series cards were all tested using CATALYST 10.10 drivers, the NVIDIA GeForce cards all used Forceware 260.89 WHQL drivers (262.99 beta drivers were used on the GTX 480 and GTX 580), and the ASUS P6X58D-E motherboard was run using BIOS 0303 with the processor at stock settings and Turbo enabled.

Windows 7 Drivers Used:
Intel Chipset Inf Update Program V9.1.1.1025
Realtek Audio Driver V6.0.1.6037 for 64-bit Windows 7 (WHQL)
Marvell Yukon Gigabit Ethernet Driver V11.10.5.3 for 32/64-bit Windows 7 (WHQL)
Marvell 9128 SATA 6Gbps Controller Driver V1.0.0.1036 for 32/64-bit Windows 7

The Video Card Test System

Here is the Intel LGA 1366 test platform:

Intel Test Platform

 Component           Brand/Model
 Processor           Intel Core i7-970
 Motherboard         ASUS P6X58D-E
 Memory              6GB Kingston DDR3 1866MHz
 Video Card          See Below
 Hard Drive          Crucial C300 256GB SSD
 Cooling             Titan Fenrir
 Power Supply        Corsair HX850W
 Chassis             None (Open Bench)
 Operating System    Windows 7 Ultimate 64-Bit

Video Cards Tested:

NVIDIA GeForce GTX 580 Video Card GPU-Z 0.4.8 Details:

NVIDIA GeForce GTX 580 Video Card GPU-Z 0.4.8 Details
NVIDIA GeForce GTX 580 Video Card GPU-Z 0.4.8 Details

Aliens vs. Predator

Aliens vs Predator D3D11 Benchmark v1.03

Aliens vs Predator D3D11 Benchmark v1.03 is a standalone benchmark test based upon Rebellion's 2010 inter-species shooter Aliens vs. Predator. The test shows xenomorph-tastic scenes using heavy tessellation among other DX11 features.

Aliens vs Predator D3D11 Benchmark v1.03

We cranked up all the image quality settings in the benchmark to the highest level possible, so we were running 4x AA and 16x AF with SSAO enabled at both 1920x1200 and 1280x1024 on all the video cards.

Aliens Vs. Predator Benchmark Results

Benchmark Results:  The NVIDIA GeForce GTX 580 was 16.2% faster than the GeForce GTX 480 in AvP with a resolution of 1920x1200.  It looks like things are off to a solid start for NVIDIA!

Batman: Arkham Asylum GOTY

Batman: Arkham Asylum

Batman: Arkham Asylum is an action-adventure stealth video game based on DC Comics' Batman for PlayStation 3, Xbox 360 and Microsoft Windows. It was developed by Rocksteady Studios and published by Eidos Interactive in conjunction with Warner Bros.

Batman: Arkham Asylum

For our testing we set everything as high as it would go, including Multi Sample Anti-Aliasing, which we set to 8x.

Batman: Arkham Asylum Benchmark Results

Benchmark Results: Batman: Arkham Asylum GOTY edition showed the GeForce GTX 580 was 18.1% faster than a GeForce GTX 480 at 1920x1200!

Just Cause 2

Just Cause 2

Just Cause 2 is a sandbox-style action video game developed by Swedish developer Avalanche Studios and Eidos Interactive, and published by Square Enix. It is the sequel to the 2006 video game Just Cause.

Just Cause 2 Game Settings

Just Cause 2 employs a new version of the Avalanche Engine, Avalanche Engine 2.0, an updated version of the engine used in Just Cause. The game is set on the other side of the world from the original, on the fictional tropical island of Panau in Southeast Asia. Rico Rodriguez returns as the protagonist, aiming to overthrow the evil dictator Pandak "Baby" Panay and confront his former boss, Tom Sheldon.

Just Cause 2 Benchmark Results

Benchmark Results: Just Cause 2 tends to do better on AMD video cards in our experience, and that proved true again here today. Still, the NVIDIA GeForce GTX 580 was 12% faster than the GeForce GTX 480 at a resolution of 1280x1024.

Metro 2033

Metro 2033

Metro 2033 is an action-oriented video game with a combination of survival horror and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky and was developed by 4A Games in Ukraine. The game is played from the perspective of a character named Artyom. The story takes place in post-apocalyptic Moscow, mostly inside the metro station where the player's character was raised (he was born before the war, in an unharmed city), but occasionally the player has to go above ground on certain missions and scavenge for valuables.

Metro 2033 Settings

This is another extremely demanding game. Settings were left at High quality with AA and AF at their lowest values (AAA and 4x AF, respectively) for each of the DirectX 9, 10 and 11 APIs. Advanced DirectX 11 settings were left at default. The section of Metro 2033 tested was the Prologue, with Fraps polling from when you climb up the ladder until you open the door to exit the metro station. This section includes many features found throughout the game: four creatures that attack you before you exit the building, dense particles, ammo in cabinets, a few computer-controlled sections and, of course, Miller, your first companion.

Metro 2033

Benchmark Results: Metro 2033 is tough on most graphics cards, but the GeForce GTX 480 and GTX 580 both do very well on this benchmark.  Here we see the GeForce GTX 580 is 14% faster than a GeForce GTX 480 at 1920x1200 resolution.


Metro 2033

One of our forum members asked us if we could run the GeForce GTX 580 and GTX 480 at the same clock speeds to see what happens to performance. This way you can see the performance boost created by just the extra CUDA cores and the architecture enhancements on the GeForce GTX 580. As you can see, the clock frequency boost is where NVIDIA is getting most of its performance increase. We are seeing a 1.33 to 1.66 FPS increase from just the extra CUDA cores, which translates to only a 3.5% boost in performance in Metro 2033 at 1920x1200.
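The math supports that conclusion: going from 480 to 512 CUDA cores is only a 6.7% increase in shader resources, and games rarely scale linearly with core count, so a 3.5% clock-for-clock gain is about what you'd expect. The rest has to come from the 10.3% clock bump and the architectural tweaks. Our quick arithmetic:

# Splitting the GTX 580's gains into cores vs. clocks (our math).
cores_480, cores_580 = 480, 512
print(f"Extra CUDA cores: {(cores_580 / cores_480 - 1) * 100:.1f}%")  # ~6.7%

core_480_mhz, core_580_mhz = 700, 772
print(f"Clock speed bump: {(core_580_mhz / core_480_mhz - 1) * 100:.1f}%")  # ~10.3%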

StarCraft II: Wings of Liberty

StarCraft II: Wings of Liberty DX11 Performance Benchmark

StarCraft II: Wings of Liberty is a military science fiction real-time strategy video game developed by Blizzard Entertainment for Microsoft Windows and Mac OS X. A sequel to the award-winning 1998 video game StarCraft, the game was released worldwide on July 27, 2010. It is split into three installments: the base game with the subtitle Wings of Liberty, and two upcoming expansion packs, Heart of the Swarm and Legacy of the Void. StarCraft II: Wings of Liberty has had a successful launch, selling three million copies worldwide in less than a month.

StarCraft II: Wings of Liberty Settings

The game StarCraft II: Wings of Liberty has no internal benchmarking tools built into the game engine, so we recorded a scene and played it back at normal speed while measuring performance with FRAPS. The screen capture above shows the system settings that we were using for StarCraft II. Notice we are running the graphics quality and textures set to Ultra and maxed out everything in the menu. We did not manually enable AA in the drivers, though, as we wanted to keep testing simple and consistent with the choices offered in the game's settings menus.

StarCraft II: Wings of Liberty Benchmark Results

Benchmark Results: StarCraft II: Wings of Liberty showed that the NVIDIA GeForce GTX 580 was again 14% faster than the GeForce GTX 480 at 1920x1200.

S.T.A.L.K.E.R.: Call of Pripyat

Stalker Call of Pripyat DX11 Performance Benchmark

The events of S.T.A.L.K.E.R.: Call of Pripyat unfold shortly after the end of S.T.A.L.K.E.R.: Shadow of Chernobyl following the ending in which Strelok destroys the C-Consciousness. Having discovered the open path to the Zone's center, the government decides to stage a large-scale operation to take control of the Chernobyl nuclear plant.

S.T.A.L.K.E.R.: Call of Pripyat utilizes the XRAY 1.6 Engine, allowing advanced modern graphical features through the use of DirectX 11 to be fully integrated; one outstanding feature being the inclusion of real-time GPU tessellation. Regions and maps feature photo realistic scenes of the region it is made to represent. There is also extensive support for older versions of DirectX, meaning that Call of Pripyat is also compatible with older DirectX 8, 9, 10 and 10.1 graphics cards.

Stalker Call of Pripyat DX11 Performance Benchmark

The game S.T.A.L.K.E.R.: CoP has no internal benchmarking tools built into the game engine, but there is a standalone benchmark available that we used for our testing purposes. The screen capture above shows the main window of the benchmark with our settings. Notice we are running Enhanced Full Dynamic Lighting "DX11" as our renderer.

Stalker Call of Pripyat Advanced Image Quality Settings

Under the advanced settings we enabled tessellation and 4x MSAA. We didn't enable ambient occlusion, as we want to use these test settings for mainstream cards down the road, and these settings should be tough enough to stress any and all DX11-enabled video cards.

Stalker Call of Pripyat Benchmark Results

Benchmark Results: S.T.A.L.K.E.R. Call of Pripyat showed that the NVIDIA GeForce GTX 580 was 16% faster than the GeForce GTX 480 and it was just behind the performance level of two Radeon HD 6870 video cards running in CrossFire.

H.A.W.X. 2 Benchmark

We wanted to include a new benchmark for this review, so Tom Clancy's H.A.W.X. 2 was added in to see how it looked. This benchmark got some attention recently for not being vendor-neutral, with claims that its heavy use of tessellation favors NVIDIA cards over AMD cards, but that doesn't matter to us here as we will be using this benchmark to look at just NVIDIA cards.

Tom Clancy's HAWX 2

Aerial warfare has evolved. So have you. As a member of the ultra-secret H.A.W.X. 2 squadron, you are one of the chosen few, one of the truly elite. You will use finely honed reflexes, bleeding-edge technology and ultra-sophisticated aircraft - their existence denied by many governments - to dominate the skies. You will do so by mastering every nuance of the world's finest combat aircraft. You will slip into enemy territory undetected, deliver a crippling blow and escape before he can summon a response. You will use your superior technology to decimate the enemy from afar, then draw him in close for a pulse-pounding dogfight. And you will use your steel nerve to successfully execute night raids, aerial refueling and more. You will do all this with professionalism, skill and consummate lethality. Because you are a member of H.A.W.X. 2 and you are one of the finest military aviators the world has ever known. H.A.W.X. 2 is due out on November 16, 2010 for PC gamers.

Tom Clancy's HAWX 2

We ran the benchmark in DX11 mode and cranked up all the antialiasing and advanced image quality settings. We also enabled hardware tessellation, as without that setting turned on the cards were getting well over 120FPS at a resolution of 1920x1200 and over 160FPS at 1280x1024. We wanted to stress the cards a bit, and enabling tessellation appeared to do the trick, as you'll see below.

Tom Clancy's HAWX 2 Benchmark Results

Benchmark Results: This benchmark looks amazing and you should really download it and try it out if you haven't done so yet. We are primarily using this benchmark to check out the performance of the GeForce GTX 580 versus the GeForce GTX 480. We are seeing a 12% performance increase with the GTX 580 over the GTX 480 at 1920x1200, and just 11% at 1280x1024.

3DMark Vantage

3DMark Vantage

3DMark Vantage is the new industry standard PC gaming performance benchmark from Futuremark, newly designed for Windows Vista and DirectX10. It includes two new graphics tests, two new CPU tests, several new feature tests, and support for the latest hardware. 3DMark Vantage is based on a completely new rendering engine, developed specifically to take full advantage of DirectX10, the new graphics API from Microsoft.

3DMark Vantage

The Extreme preset was used for testing, which runs at a resolution of 1920x1200.

3DMark Vantage Benchmark Results

Benchmark Results: 3DMark Vantage showed that the GeForce GTX 580 was 31.4% faster than the GeForce GTX 480 when run in Extreme mode.  When run in default performance mode we scored P29461 with a GPU score of 24586, in case you are wondering what the GeForce GTX 580 scored with default benchmark settings.

Unigine 'Heaven' DX11

Unigine DirectX 11 benchmark Heaven

The 'Heaven' benchmark, which uses the Unigine engine, easily shows off the full potential of DirectX 11 graphics cards. It reveals the enchanting magic of floating islands with a tiny village hidden in the cloudy skies. With its interactive mode, exploring the intricate world is within reach. Through its advanced renderer, Unigine is one of the first to set a precedent in showcasing art assets with tessellation, bringing compelling visual finesse, utilizing the technology to its full extent and exhibiting the possibilities of enriching 3D gaming. The distinguishing feature of the benchmark is hardware tessellation, a scalable technology aimed at automatic subdivision of polygons into smaller and finer pieces, so that developers can give their games a more detailed look almost free of charge in terms of performance. Thanks to this procedure, the elaboration of the rendered image finally approaches the boundary of veridical visual perception: a virtual reality conjured up by your hand.

DirectX 11 benchmark Unigine engine

We ran the Heaven v2.1 benchmark, which was just recently released, with VSync disabled but with 8x AA and 16x AF enabled to check out system performance. We ran the benchmark at 1920x1200 and 1280x1024 to see how it performed at different monitor resolutions. It should be noted that we ran the new extreme tessellation mode on this benchmark. These are the toughest settings you can run, so it should really put the hurt on any graphics card.

Unigine Heaven Benchmark

Benchmark Results: This is a great benchmark for taking a look at tessellation performance and Heaven 2.1 with the image quality settings cranked up showed that the GeForce GTX 580 was 18% faster than the GeForce GTX 480 at 1920x1200.

FurMark 1.8.2

FurMark 1.8.2

FurMark is a very intensive OpenGL benchmark that uses fur rendering algorithms to measure the performance of the graphics card. Fur rendering is especially adapted to overheat the GPU and that's why FurMark is also a perfect stability and stress test tool (also called GPU burner) for the graphics card.

FurMark 1.8.2

The benchmark was rendered in full screen mode with no AA enabled on both video cards.

Furmark Benchmark Results

Benchmark Results: FurMark showed the NVIDIA GeForce GTX 580 performing poorly, as the GTX 580's new power monitoring hardware kicked in and slowed the card down. NVIDIA said the GTX 580 dynamically adjusts performance in certain applications (FurMark and OCCT) in order to keep power within their specifications. Here we see an example of where the new hardware monitoring system hurts the card's performance. We have concerns about this, but NVIDIA said they can make changes in the driver, so if a problem application comes out down the road, a simple driver update is all that is needed to change the power level.

GeForce GTX 580 F@H Performance

NVIDIA GeForce GTX 480 Video Card Folding

For those new to F@H, Folding@home is a distributed computing project that, very simply stated, studies protein folding and misfolding. In the last few years Folding@home on graphics cards has gained popularity, as it offers great performance per watt. By utilizing the power of the GPU, Stanford has been able to leverage ever-evolving graphics technology and put it to good use trying to understand and possibly solve the problem of protein misfolding. Since the release of the NVIDIA GPU client, NVIDIA graphics cards have consistently been at the top in points per day, meaning they are the highest-producing video cards. NVIDIA has been touting Fermi as THE next big thing for GPGPU and projects like Folding@home. Many have been giddy with anticipation since NVIDIA first broke the news of Fermi way back on September 30th of 2009.

NVIDIA GeForce GTX 580 Video Card Folding

We ran the Windows GPU2 client on the GeForce GTX 480 and the GTX 580 to see how the cards performed.

                     GTX 480       GTX 580       GTX 580 OC
 Work Unit Name      264 Fs-coil   264 Fs-coil   264 Fs-coil
 Seconds per step    55            48            43
 GPU Idle Temp       51C           36C           36C
 GPU Folding Temp    80C           69C           71C
 System Load         254W          243W          247W
 Noise Level         62dB          59dB          59dB

For our test we ran the 264 Fs-coil work unit on all of the cards and found that the GeForce GTX 580 was 12.7% faster per step than a GeForce GTX 480. When we overclocked the shaders of the card up to 1740MHz we were able to improve performance further, and the seconds per step dropped by another five seconds! With this nice overclock on the 512 shaders we got a 22% F@H performance improvement over a GeForce GTX 480 reference design! As you can see, the GeForce GTX 580 is faster, quieter and uses less power than a GeForce GTX 480.
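Since seconds per step is a lower-is-better number, both percentages are figured against the GTX 480's 55-second baseline:

# F@H gains worked out from the seconds-per-step numbers in the table above.
baseline, stock, overclocked = 55, 48, 43  # GTX 480, GTX 580, GTX 580 OC
print(f"GTX 580 stock: {(baseline - stock) / baseline * 100:.1f}% faster")        # 12.7%
print(f"GTX 580 OC:    {(baseline - overclocked) / baseline * 100:.1f}% faster")  # 21.8%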

Temperature Testing

Since video card temperatures and the heat generated by next-generation cards have become an area of concern among enthusiasts and gamers, we want to take a closer look at how the NVIDIA GeForce GTX 580 graphics card does at idle and under a full load.

NVIDIA GeForce GTX 580 Video Card Idle Temperature:

NVIDIA GeForce GTX 580 Video Card GPU-Z 0.4.8 Details

As you can see from the screen shot above, in its idle state the NVIDIA GeForce GTX 580 drops the GPU core clock down to 50.6MHz and the memory clock down to 67.5MHz to help conserve power and lower temperatures. At idle on an open test bench the GeForce GTX 580's temperature was observed at 36C. The fan on the card was at 40% and nearly silent, spinning at just 1400 RPM.

NVIDIA GeForce GTX 580 Video Card Load Temperature:

NVIDIA GeForce GTX 580 Furmark

We fired up FurMark and ran the stability test at 640x480, which was enough to put the GF110 GPU core under full load in order to get the highest load temperature possible. This application also charts the temperature results so you can see how the temperature rises and levels off, which is very nice. The fan on the NVIDIA GeForce GTX 580 video card was left on auto during temperature testing. This test showed that the peak temperature on the open-air test bench was just 71C.

NVIDIA GeForce GTX 580 Furmark

Since FurMark is clearly being impacted by the new power monitoring hardware, we fired up OCCT 3.1.0, set the shader complexity to eight and let the application run for 30 minutes. We only read 71C in FurMark, but with OCCT we were able to get the board up to 85C. It looks like performance is being limited by the monitoring hardware here as well, as the FPS started out really high and then was greatly reduced once the card reached about 60C. We pushed the NVIDIA GeForce GTX 580 as hard as we could in games and other activities like F@H, and the 85C we hit in OCCT was the highest we could get the card.

Power Consumption

For testing power consumption, we took our test system and plugged it into a Kill-A-Watt power meter. For idle numbers, we allowed the system to idle on the desktop for 15 minutes and took the reading. For load numbers we measured the peak wattage used by the system while running the OpenGL benchmark FurMark 1.8.2 at 1280x1024 resolution.

Total System Power Consumption Results

Power Consumption Results: Here we see the GeForce GTX 580 using 19 Watts less power at idle and 102 Watts less at load, thanks in part to the new hardware monitoring 'fix' that kicks in on FurMark. We are doing some more benchmarking to see if we can find a replacement power consumption load test, as FurMark and OCCT are no longer fair tests, even between just NVIDIA branded cards.

GeForce GTX 580 Overclocking

A performance analysis of the GeForce GTX 580 video card wouldn't be complete without some overclocking results, so we got in touch with our friends over at EVGA and they said the latest build of their EVGA Precision software would work on the GeForce GTX 580 reference card.


NVIDIA GeForce GTX 580 Video Card Overclocking

Using the EVGA Precision software utility with the GeForce GTX 580 graphics card is a little different, though, as the shader clock slider is locked to the core clock just like on the GeForce GTX 480. We contacted NVIDIA about this and they had this to say on the issue:

"When you move the processor clock slider on GTX 480 skin on the latest GF100 Precision tool the core clock goes up by proportional amount, since it’s ½ shader clock on this architecture. This is by design, GF100’s NV clock domain is controlled solely by the processor/shader domain." NVIDIA PR

This makes sense and makes overclocking a little easier! The bad news is that those who run F@H won't be able to adjust just the shaders, as the core and shader clocks are locked together and can't be un-linked.

NVIDIA GeForce GTX 580 Video Card Overclocking

The highest overclock that we could get on the GeForce GTX 580 reference card was 886MHz on the core and 1125MHz on the GDDR5 memory. This overclock was 100% stable in games, but for some reason it would crash in longer synthetic benchmarks like 3DMark Vantage.

NVIDIA GeForce GTX 580 Video Card Overclocking

We had to back the GPU core clock down to 850MHz (1700MHz shaders), and the overclock was then 100% stable.

NVIDIA GeForce GTX 580 Video Card Overclocking

What is interesting is that we could get the core clock up to 871MHz (1742MHz shaders) and run F@H for 12+ hours with no crashes and without turning in any early work units. It looks like 3DMark Vantage puts the hurt on the Fermi core if it isn't 100% stable.

NVIDIA GeForce GTX 580 Graphics Card at 772MHz/1544MHz/1002MHz:

NVIDIA GeForce GTX 580 Video Card

NVIDIA GeForce GTX 580 Graphics Card at 850MHz/1700MHz/1125MHz:

NVIDIA GeForce GTX 580 Video Card

We saw 3DMark Vantage go up from X13292 to X14498, which is a 9% jump in performance! Let's see what it does in real games.

NVIDIA GeForce GTX 580 Graphics Card Overclocked
NVIDIA GeForce GTX 580 Graphics Card Overclocked

The jump from 772MHz to 850MHz on the core clock helped boost performance by 13% in Metro 2033 and by 10% in AvP at a resolution of 1920x1200. We'll take a 10-13% performance increase from overclocking any day! If we could adjust the GPU voltage we are certain we could have gotten a higher overclock in games. We were able to run AvP at an 886MHz core clock with full stability, but just couldn't get it stable in 3DMark Vantage! As you know, overclocking can be really frustrating!
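For reference, the clock bump alone explains most of those gains; 850MHz over the stock 772MHz is right at a 10% increase, so the games are scaling almost linearly with the core clock here. Our quick arithmetic:

# Overclocking gains from the numbers above (our math).
stock_mhz, oc_mhz = 772, 850
print(f"Core clock gain: {(oc_mhz / stock_mhz - 1) * 100:.1f}%")  # ~10.1%

vantage_stock, vantage_oc = 13292, 14498  # 3DMark Vantage Extreme scores
print(f"3DMark Vantage gain: {(vantage_oc / vantage_stock - 1) * 100:.1f}%")  # ~9.1%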

Dual LCD Display Testing

When the Fermi GPU architecture first came out, we discovered that the video cards didn't play that well with dual monitor setups. NVIDIA actually put a notice in their driver release notes about this shortly after our article was published.

GPU Runs at a High Performance Level (full clock speeds) in Multi-display Modes
This is a hardware limitation and not a software bug. Even when no 3D programs are running, the driver will operate the GPU at a high performance level in order to efficiently drive multiple displays. In the case of SLI or multi‐GPU PCs, the second GPU will always operate with full clock speeds; again, in order to efficiently drive multiple displays.

We aren't sure which driver it started with, but NVIDIA informed us that if we ran a pair of identical monitors at the same resolution, the video card would stay in an idle state. We had to test this out for ourselves.

GeForce GTX 580 2 LCD Display Idle

I fired up GPU-Z and ran a pair of identical Samsung SyncMaster monitors at their native resolution, and the GeForce GTX 580 stayed in its idle state with a GPU temperature of just 38C. The system consumed 121 Watts of power, and the fan spun at 1410RPM, putting out 54dB from half a foot away.

GeForce GTX 580 2 LCD Display Idle

Changing the resolution on one of the monitors put the video card in a full power state at all times, and the system's power consumption jumped up to 188 Watts! The temperature also rose to 58C, and the GPU fan ran at 1710RPM to help keep the card cool. This means the fan was now louder, spinning fast enough to put out 56dB.

                GTX 480 1 LCD   GTX 480 2 LCD   GTX 580 1 LCD   GTX 580 2 LCD
 Core Clock     50.0MHz         405MHz          50.6MHz         772MHz
 Mem Clock      67.5MHz         924MHz          135.0MHz        1002MHz
 Shader Clock   101.0MHz        810MHz          101.3MHz        1544MHz
 Idle Temp      47C             72C             38C             58C
 Idle Power     139W            205W            121W            188W
 Fan Speed      1734RPM         1822RPM         1410RPM         1710RPM

If you run two different types of panels, or two panels at different resolutions, you'll end up with numbers like those in the table above. Even running two different monitors at the same resolution won't keep the video card at idle, as each monitor has different internal timings and the card will run at full clocks; only a pair of identical monitors at the same resolution keeps the card idling. This is something we think is interesting, and most people still don't know about it!
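To put a number on what that limitation costs, here is the idle power delta from the table above, worked out as daily energy use for a machine that sits idle around the clock:

# What mismatched dual monitors cost at idle on the GTX 580 (table above).
idle_matched_w, idle_mismatched_w = 121, 188
delta_w = idle_mismatched_w - idle_matched_w
print(f"Extra idle draw: {delta_w} W")                                  # 67 W
print(f"Per day, if the PC idles 24/7: {delta_w * 24 / 1000:.1f} kWh")  # ~1.6 kWh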

Final Thoughts and Conclusions

NVIDIA GeForce GTX 580 and GTX 480 Video Cards

The NVIDIA GeForce GTX 580 is what we would consider a GeForce GTX 480 done right. When the GeForce GTX 480 first came out we were less than impressed, and we were one of the very few sites that didn't give it an award or praise it just because it was very fast in certain areas. Why is this? We knew something wasn't exactly right with the card, and when we used it on a daily basis in our office it would actually heat the room to the point where it was uncomfortable. Trying to cook an egg on the GeForce GTX 480 made a few people upset at NVIDIA, but no one said our findings were wrong. NVIDIA brought out a new BIOS revision and later new drivers that helped reduce the temperature issues we found when running dual monitors. Those were all just patches, though, and by no means a solution to the problem. Over the past seven months not one company wanted to send us a retail version to review because they knew our stance. This morning's release of the GeForce GTX 580, a re-engineered GF100 core with enhancements and improvements, confirms that our initial thoughts were right. That said, the GeForce GTX 580 is a whole new beast.

The NVIDIA GeForce GTX 580 video card has better performance, makes less noise, runs cooler and draws less power than the GTX 480. We honestly love this card, and it really sucks for NVIDIA that the original GeForce GTX 480 wasn't designed this way. The performance of the GeForce GTX 580 was great, and we consistently saw double-digit performance gains in many of the games and applications that we benchmarked. Overclocking performance was also good, as we were able to get another ~10% or so from a card that was already at the top of the single-GPU performance charts.

When it comes to pricing, NVIDIA informed us that the GeForce GTX 580 will be $499 and that the GeForce GTX 480 will remain in the product lineup at around $449. If you look at online retailers like Newegg, the GeForce GTX 580 is listed at $559.99 on the low end, and some of the overclocked cards are priced at $589.99. Newegg looks to be running a 10% promo code right now, but they state that it is a limited time offer. Usually NVIDIA gives out a product lineup pricing sheet when a new GPU launches, but that was not the case this time. In fact, we specifically asked for one and were not given one. Since NVIDIA won't give us an SRP list for their products, we feel that the $499 price they told the press isn't accurate and that the prices shown online are the real ones.

Newegg GeForce GTX 580 Video Card Launch Day Pricing Before 10% Off Code EMCZZYR24:

11/9/2010 3PM CST UPDATE: NVIDIA just sent us an e-mail letting us know that they felt we were a bit harsh with our pricing comments, and shared a link to TigerDirect where a Galaxy GeForce GTX 580 is available for the $499 price the press was told the card would launch at. When we looked into pricing this morning there were no $499 cards available. We also noticed that Newegg has dropped the price of the Gigabyte GeForce GTX 580 to $549.99, and after the 10% discount that card comes down to $495. Looks like $495 is the best price to be seen so far.
TigerDirect GeForce GTX 580 Video Card Launch Day Pricing:

The NVIDIA GeForce GTX 580, after the 10% off discount code, can be had for $503.99, which makes it slightly more expensive than the ATI Radeon HD 5970. The ATI Radeon HD 5970 can be found for $469.99 after rebate, so as you can see there is a $34 price difference between the two cards. The ATI Radeon HD 5970 was faster than the GTX 580 in many of the benchmarks, but remember that it is a dual-GPU solution while the GTX 580 has just one GPU. The GeForce GTX 580 has a better upgrade path, uses less power, is noticeably shorter, and doesn't have any of the multi-GPU scaling issues that can arise with CrossFire and SLI solutions. Other things to consider are PhysX, 3D Vision and superior F@H performance, plus strong SLI scaling if you add a second card. It looks like AMD needs to get the Radeon HD 6900 series out the door! The AMD Radeon HD 6900 series will consist of Cayman (single-GPU) and Antilles (dual-GPU), and we can only hope that they can compete with the GeForce GTX 580. Our prediction is that the GeForce GTX 580 will be faster than Cayman, but will cost way more. Based on our testing we have to give the GeForce GTX 580 the Editor's Choice award, as this card is fast and NVIDIA looks to have designed it right the second time around.


Legit Reviews Editor's Choice Award

Legit Bottom Line: The NVIDIA GeForce GTX 580 Video Card has better performance, makes less noise, runs cooler and draws less power than the GTX 480!