Good Riddance Fermi and Hello Kepler!

GeForce GTX 680 Video Card

This morning NVIDIA introduced the GeForce GTX 680 video card, the first card in the long-awaited GeForce 600 series and also the first desktop GPU based on the new 28nm 'Kepler' core architecture. This means that the GeForce GTX 580 has been officially dethroned after more than 16 months of rule! It's hard to believe that the GeForce GTX 580 came out back in November 2010, but it did, and the card has served gamers well. Priced at $499, the GeForce GTX 680 carries the same suggested retail price the GeForce GTX 580 launched at, but features a new GPU core with 1536 CUDA cores. The GeForce GTX 580 had just 512 CUDA cores, so with three times as many you know things are about to get very interesting.

GeForce GTX 680 Video Card Features

NVIDIA focused on three principles when designing the GeForce GTX 680: make it faster, smoother, and richer. To do so, the company developed the Kepler architecture on an optimized 28nm process and squeezed out all the performance and power enhancements it could. The GK104 is the first GPU based on this architecture and it is the core used on the GeForce GTX 680.

With a total of eight Streaming Multiprocessor (SMX) units, the GeForce GTX 680 video card has 1536 CUDA Cores. The memory subsystem of the GeForce GTX 680 consists of four 64-bit memory controllers (256-bit) with 2GB of GDDR5 memory. The base clock speed of the GeForce GTX 680 is 1006MHz and the typical Boost clock speed is 1058MHz. The typical Boost clock is based on the average GeForce GTX 680 card running a wide variety of games and applications. The actual Boost clock will vary depending on actual system conditions. GeForce GTX 680’s memory speed is an impressive 6008MHz (effective). The best part of these specifications is that NVIDIA managed to get it all on a 10-inch PCB with two 6-pin PCIe power connectors and a TDP of just 195 Watts!

NVIDIA Fermi & Kepler Differences

Compared to the GF110 GPU used on the GeForce GTX 580, the GK104 that powers the GeForce GTX 680 is a whole new beast. For starters, the core clock and shader clock now run at the same frequency. The graphics core clock is also dynamic, but we'll talk about that more on the next page. The transistor count is up by 18% to 3.54 billion, but the CUDA core count has tripled and the number of texture units has doubled. Compute performance has nearly doubled from 1581 GFLOPS to 3090 GFLOPS! You would think all of this would come at the cost of increased power consumption, but that is not the case, as the TDP has been lowered to just 195 Watts. NVIDIA has also doubled the number of displays that you can run off a single card, so four monitors are now supported. This is huge, as you can run 3D Vision Surround plus an additional accessory display off a single card. AMD Eyefinity was a multi-monitor feature that NVIDIA couldn't touch with a single card prior to this, so this helps build a richer gaming and overall PC experience.
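
Those compute figures are easy to sanity check: single-precision throughput is simply CUDA cores x 2 FLOPs per clock (one fused multiply-add) x clock speed. A quick check in Python, using the GTX 580's published 1544MHz shader clock and the GTX 680's 1006MHz base clock (remember, Kepler no longer has a separate shader clock):

```python
# Sanity check on the quoted single-precision compute numbers:
# GFLOPS = CUDA cores x 2 FLOPs/clock (fused multiply-add) x clock in GHz
def sp_gflops(cuda_cores, clock_mhz):
    return cuda_cores * 2 * clock_mhz / 1000.0

print(sp_gflops(512, 1544))   # GTX 580: 512 cores at the 1544MHz shader clock -> ~1581
print(sp_gflops(1536, 1006))  # GTX 680: 1536 cores at the 1006MHz base clock -> ~3090
```

Both results line up with NVIDIA's quoted numbers.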

GeForce GK104 GPU Die Shot

What is new with the GK104 GPU on the GeForce GTX 680? NVIDIA told us that the GK104 has roughly 3.54 billion transistors on a die that is just 294mm². This yields a transistor density of about 12 million transistors per mm², which is competitive with what AMD has done on the Radeon HD 7000 series.

NVIDIA GK104 Block Diagram

Like Fermi, Kepler GPUs are composed of different configurations of Graphics Processing Clusters (GPCs), Streaming Multiprocessors (SMs), and memory controllers. The GeForce GTX 680 GPU consists of four GPCs, eight next-generation Streaming Multiprocessors (SMX), and four memory controllers.

NVIDIA GK104 SMX Diagram

In GeForce GTX 680, each GPC has a dedicated raster engine and two SMX units. With a total of eight SMX units, the GeForce GTX 680 implementation has 1536 CUDA Cores. The CUDA cores themselves remain unchanged from the previous generation.

GeForce GTX 680 Video Card Specs

Many of our readers love specification charts, and here is a good one from NVIDIA that shows what the GTX 680 has to offer. The GeForce GTX 680's memory subsystem was completely revamped, resulting in dramatically higher memory clock speeds. Operating at a 6008MHz data rate, the GeForce GTX 680 offers the highest memory clock speed of any GPU in the industry. Tied to each memory controller are 128KB of L2 cache and eight ROP units (each ROP unit processes a single color sample). With four memory controllers, a full GeForce GTX 680 GPU has 512KB of L2 cache and 32 ROPs (i.e., 32 color samples).
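
Those totals, and the card's peak memory bandwidth, follow directly from the per-controller figures. Here is a quick check (the roughly 192GB/s bandwidth figure is derived from the 256-bit bus and 6008MHz data rate rather than quoted in the chart):

```python
# The GTX 680 memory subsystem totals, derived from per-controller figures.
controllers = 4
l2_per_controller_kb = 128
rops_per_controller = 8
bus_width_bits = controllers * 64   # four 64-bit controllers -> 256-bit
data_rate_mhz = 6008                # effective GDDR5 data rate

print(controllers * l2_per_controller_kb, "KB of L2 cache")  # 512
print(controllers * rops_per_controller, "ROP units")        # 32
# Peak bandwidth = data rate x bus width in bytes
print(f"{data_rate_mhz * 1e6 * bus_width_bits / 8 / 1e9:.1f} GB/s peak")  # ~192.3
```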

Let's take a look at the new features this card has.

NVIDIA GPU Boost, Adaptive VSync & TXAA

NVIDIA GPU Boost

Among the other features launching with the GeForce GTX 680, NVIDIA is also introducing GPU Boost. If you have been following the processor scene over the last few years, both Intel and AMD have been implementing their own versions of turbo in their processors. Intel introduced Turbo Boost with its Nehalem-based Core i7 processors and refined it with 'Sandy Bridge', while AMD debuted Turbo Core with the Phenom II X6 processors and further pushed the envelope with the FX 'Bulldozer' series. Each of these will increase the clock speed of the processor cores to a predetermined level depending on usage. NVIDIA's GPU Boost works in a very similar fashion, though it's not done with multipliers; it adjusts the frequency directly.

NVIDIA GPU Boost algorithm
Here is a quick look at the algorithm that shows the things NVIDIA takes into consideration when dynamically setting both the frequency and voltage on the GPU and Memory.

NVIDIA GPU Boost

GPU Boost is much more dynamic; there is no single predetermined level that the GPU core will be raised to. The key is that the new Kepler architecture has a TDP of only 195 Watts, but not all 3D applications will push the card to that level. The NVIDIA GeForce GTX 680 will run its 1006MHz base clock as a minimum speed in 3D applications that bring the Kepler core to the TDP limit. Here's where things get tricky: if you're running a 3D application that isn't bringing the GTX 680 up to its power limit, the clock speed will dynamically increase until the card approaches that predetermined power level! The key to this process is how much power the GeForce GTX 680 is currently drawing. If it is pulling 180 Watts you will get a smaller boost than if it is pulling only 160 Watts; GPU Boost raises the GPU frequency to bring the board power up toward the set maximum. One of the great aspects of GPU Boost is that, much like the processor turbo technologies, it all happens seamlessly in the background with no intervention by the user!
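
NVIDIA hasn't published the algorithm itself, but the behavior described above amounts to a feedback loop on measured board power. Here is a purely illustrative sketch; the 13MHz step and 1110MHz ceiling are assumptions for the example, not official figures:

```python
# Illustrative sketch of a GPU Boost style control loop. NVIDIA has not
# published its algorithm; the step size and ceiling below are assumed.
POWER_TARGET_W = 195     # GeForce GTX 680 board power target
BASE_CLOCK_MHZ = 1006    # guaranteed minimum 3D clock
MAX_BOOST_MHZ = 1110     # assumed boost ceiling for this sketch
STEP_MHZ = 13            # assumed size of one boost bin

def next_clock(clock_mhz, board_power_w):
    """Raise the clock one bin while the card draws less than its power
    target, drop one bin when it draws more, and never fall below the
    base clock in a 3D workload."""
    if board_power_w < POWER_TARGET_W and clock_mhz + STEP_MHZ <= MAX_BOOST_MHZ:
        return clock_mhz + STEP_MHZ
    if board_power_w > POWER_TARGET_W and clock_mhz - STEP_MHZ >= BASE_CLOCK_MHZ:
        return clock_mhz - STEP_MHZ
    return clock_mhz

clock = BASE_CLOCK_MHZ
for sampled_power in (160, 165, 170, 180, 200):  # Watts, sampled each interval
    clock = next_clock(clock, sampled_power)
    print(clock, "MHz")
```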

NVIDIA GPU Boost

One advantage NVIDIA GPU Boost has over the way those processors run turbo is that it works alongside overclocking. With processors you can increase the turbo multiplier, the base multiplier, or both on some motherboards. NVIDIA GPU Boost lets you overclock the GPU, and GPU Boost will then push the GTX 680 further still! It will be interesting to see how this pans out for overclockers that like to truly push their GeForce GTX 680s while using liquid nitrogen. We have been told that the GTX 680 is going to be a very friendly card for sub-zero cooling! I am definitely looking forward to seeing these cards under liquid nitrogen and other extreme cooling in the future!

EVGA Precision X

In the latest version of EVGA's Precision X software we can tweak the settings for GPU Boost. More specifically, the power target of the GeForce GTX 680 can be increased, which should give GPU Boost more headroom to work with. Out of the box, NVIDIA found that the median GPU Boost or 'Boost Clock' (the average clock frequency the GPU will run in applications that don't hit the TDP limit) is 1058MHz, which is just a hair over 5% above the base clock. According to NVIDIA, though, many of the non-TDP-limited applications they ran during internal testing saw GPU Boost take the GeForce GTX 680 to 1.1GHz or higher!

EVGA Precision X

Taking a quick look at the options we can adjust in EVGA Precision X: the Power Target as mentioned above, the GPU Clock Offset, and the Memory Clock Offset. We can take the Power Target up to 132%, which works out to a 225 Watt board power limit, a 30 Watt increase over the 195 Watt TDP. (225 Watts is the most this card can pull at the board level: 75W from each 6-pin connector plus 75W from the motherboard's PCIe slot.) So NVIDIA allows people to raise the card to its maximum board power without violating the PCIe specification. The GPU Clock Offset can slide up another 549MHz, at least in EVGA Precision; we'll play with that more on the overclocking page later in the article. The Memory Clock Offset can go up another 1000MHz.
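
That 225 Watt ceiling is just the sum of what the PCI Express spec allows each power source to deliver; a quick check:

```python
# The GTX 680's board power budget within the PCI Express spec: each
# 6-pin PCIe connector is rated for 75W and the x16 slot supplies 75W.
SLOT_W = 75
SIX_PIN_W = 75
max_board_power = SLOT_W + 2 * SIX_PIN_W   # 225W ceiling
tdp = 195
print(f"Headroom above the {tdp}W TDP: {max_board_power - tdp}W")  # 30W
```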

EVGA Precision X

Just toying with the settings at this point, call it a little teaser for our overclocking section: we slid the power target to the maximum 132%, bumped the GPU and memory clocks up 100MHz each, and opened up Heaven 3.0. Prior to the temperature hitting the point you can see above (76 degrees Celsius), GPU Boost topped out at 1215MHz! That is more than 200MHz above the stock settings! We can see in the screenshot that although the frequency dropped from 1215MHz, we are still cooking along at 1197MHz. I'm betting we can get some more out of it when we sit down for a proper overclocking session, but we'll just have to wait and see.

EVGA Precision Frame Rate Target

You can also set a frame rate target with EVGA Precision, which is essentially a frame rate limiter. If you set an FPS target (you need to restart the application after this), the app will be locked to that maximum FPS. The cool thing with the GTX 680 is that the card will lower its clock speed to match the FPS target. That makes it great for older games that run at 200FPS or more: set a frame cap at 100FPS and the card will lower clocks, voltage, and power to maintain 100FPS, wasting less power and producing less heat while also reducing tearing.
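
The limiter half of this is simple to picture. Here is a rough, hypothetical sketch of a frame-rate cap; the clock and voltage reductions happen in the driver and GPU, not in application code like this, but the idle time the sleep creates is what lets the card drop to a lower power state:

```python
import time

# Hypothetical frame-rate limiter (illustrative only, not NVIDIA's driver
# code): sleep off whatever is left of each frame's time budget.
TARGET_FPS = 100
FRAME_BUDGET = 1.0 / TARGET_FPS

def render_frame():
    pass  # stand-in for the game's actual rendering work

for _ in range(1000):  # main loop, capped at 1000 frames for the example
    start = time.perf_counter()
    render_frame()
    elapsed = time.perf_counter() - start
    if elapsed < FRAME_BUDGET:
        time.sleep(FRAME_BUDGET - elapsed)  # idle time = power and heat saved
```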

Vsync

NVIDIA has also come up with something new called Adaptive VSync, which dynamically toggles VSync off only when the frame rate drops below the display's refresh rate. Gamers should know that VSync prevents games from running above the monitor's refresh rate, typically 60 frames per second. VSync is great when your video card can consistently generate high frame rates, but not so good when you dip under the 60 FPS mark, as you get stutters when this happens. Once you go below 60FPS, VSync drops your frame rate to 30FPS, then 20FPS and so on if your frame rate is that low. When VSync takes you from 60FPS to 30FPS it's not a smooth transition, and that is why you can visually see the change in the form of a small stutter or hesitation.

Vsync

One solution is to turn VSync off for smoother frame rate transitions, but then the card can run ahead of the display and you end up with visible tearing, especially when the frame rate swings quickly.

Vsync

What Adaptive VSync does is intelligently adjust the behavior. When frame rates are high, the 60 FPS cap stays in place, but when the frame rate drops below 60 FPS, VSync is disabled and the game reverts to its native behavior. The technology uses whatever the monitor's native refresh rate is, so the cap will be 60 on some displays and 120 on others. To simplify: it caps the frame rate at the refresh rate and lets it flow freely below it.
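
The decision rule itself is easy to illustrate. This is a hypothetical sketch (the real logic lives in NVIDIA's driver) of how VSync state would follow the recent frame rate:

```python
# Hypothetical illustration of the Adaptive VSync decision rule: VSync
# stays on while the game can hold the display's refresh rate and turns
# off below it, so the frame rate degrades smoothly instead of snapping
# from 60 down to 30 FPS.
REFRESH_HZ = 60.0  # use the monitor's native refresh rate (60, 120, ...)

def vsync_enabled(recent_fps):
    return recent_fps >= REFRESH_HZ

for fps in (75.0, 62.0, 58.0, 45.0):
    print(f"{fps:5.1f} FPS -> VSync {'on' if vsync_enabled(fps) else 'off'}")
```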

TXAA

NVIDIA's new Temporal Anti-Aliasing (TXAA) feature is also worth mentioning, as it promises to make games look better with less of a performance hit. For example, it is said to provide 16x MSAA quality at the cost of 2x-4x MSAA without any blurring. TXAA is a mix of hardware anti-aliasing, a custom CG film-style AA resolve, and, in the case of 2xTXAA, an optional temporal component for better image quality. It's also a feature that will be added to the GeForce GTX 500 series cards with a driver release at a later date, so it will be available to current owners as well. TXAA will first be implemented in upcoming game titles shipping later this year.

TXAA

TXAA is available in two modes: TXAA 1 and TXAA 2. TXAA 1 offers better-than-8xMSAA visual quality with the performance hit of 2xMSAA, while TXAA 2 offers even better image quality, with performance comparable to 4xMSAA.

Here is a quick video that covers NVIDIA GPU Boost, TXAA, PhysX and Adaptive VSync.

A Closer Look At The GeForce GTX 680

NVIDIA GeForce GTX 680 Video Card

The GeForce GTX 680 graphics card on the test bench today is a dual-slot, single-GPU video card that measures 10 inches in length, half an inch shorter than the GeForce GTX 580! NVIDIA gave the GeForce GTX 680 fan shroud a more angular shape, which gives it the appearance of having been designed by military engineers trying to reduce its radar cross-section and minimize straight lines.

NVIDIA GeForce GTX 680 Video Card Back

Turning the NVIDIA GeForce GTX 680 over there is not much of interest as none of the memory chips or major components are located on the backside of the PCB. We used a pair of dial calipers and found the mounting holes around the GPU are spaced 58mm apart. This is good news for those that like to use aftermarket GPU coolers as this is a fairly common size.

NVIDIA GeForce GTX 680 Video Card Back

One item on the back of the PCB that did warrant further investigation was a small 'daughter' PCB soldered to the main PCB for the PWM controller (RT8800A Datasheet). Our GeForce GTX 680 reference card used a Richtek Technology RT8800A general-purpose 3-phase PWM controller for high-density power supplies. We were told that since power control on video cards is getting pretty advanced, NVIDIA wanted an easy way to change controllers if needed. By designing a secondary PCB with a standard footprint, we assume NVIDIA can change controller brands quickly without a complete PCB redesign. This is also great if prices on a component go up: they can switch to another brand or model and not be tied to a single company or part.

NVIDIA GeForce GTX 680 Video Card DVI and HDMI

The NVIDIA GeForce GTX 680 2GB GDDR5 graphics card has dual-link DVI-I and DVI-D outputs along with DisplayPort and HDMI outputs. The GeForce GTX 680 fully supports HDMI 1.4a, 4K monitors (3840x2160) and multi-stream audio. One of the major features of the GeForce GTX 680 is an all-new display engine that is capable of driving up to four displays at once. This means you can now run 3D Vision Surround on three displays from a single GeForce GTX 680. The fourth panel has to be used as an accessory display and cannot be part of the Surround setup.

NVIDIA GeForce GTX 680 Video Card End

The back end of the GeForce GTX 680 has a small opening to allow airflow to enter the fan.

NVIDIA GeForce GTX 680 Video Card PCIe Power

The NVIDIA GeForce GTX 680 video card requires a 550 Watt or greater power supply with a minimum of 38 Amps on the +12V rail and two 6-pin PCI Express power connectors for proper connection. It should be noted that the NVIDIA minimum system power requirement is based on a PC configured with an Intel Core i7 3.2GHz CPU. These are pretty reasonable power requirements, so that is good news for everyone!   

NVIDIA GeForce GTX 680 Video Card SLI Connector

The NVIDIA GeForce GTX 680 graphics card has full SLI support, with a pair of SLI connectors located along the top edge of the card. The GeForce GTX 680 supports 2-way, 3-way, and 4-way SLI configurations.

NVIDIA GeForce GTX 580 Video Card Vapor Chamber Cooling

Over the past several video card series, NVIDIA has spent a great deal of time improving the cooling system and the GeForce GTX 680 is no different. Three key features allow the GeForce GTX 680 to deliver a quiet gaming experience: acoustic dampening material used in the GPU fan, an embedded triple heatpipe design, and a custom fin stack that’s been shaped for better airflow.

NVIDIA GeForce GTX 680 Video Card Cooling

When it comes to noise levels, NVIDIA states that the GeForce GTX 680 runs at just 46dB. This means that the GeForce GTX 680 is 1dB quieter than the GTX 580 (47dB) and 7dB quieter than the GeForce GTX 480 (53dB). NVIDIA also told us that the GeForce GTX 680 is 5dB quieter than the AMD Radeon HD 7970 under load.

NVIDIA GeForce GTX 680 Video Card Cooling

This image shows the new triple heatpipe design embedded into the base of the GPU heatsink; the heatpipes draw heat off the GK104 GPU and are cooled by the custom fin stack.

Let's take a closer look at this new cooler on the GTX 680 video card by taking it apart!

Taking The GeForce GTX 680 Apart

Since supposedly no one has seen a GeForce GTX 680 production card in person, we took it apart to see the layout and to take a peek at the new GK104 GPU core. First we removed the black plastic shroud that is screwed onto a metal assembly on the front of the GeForce GTX 680 graphics card.

NVIDIA GeForce GTX 680 Video Card

The NVIDIA GeForce GTX 680 fan shroud is held down with six screws treated with thread-locking compound, making removal a little tricky. Once you get the six screws out you can gently lift the cover off, revealing the fin stack and some of the components on the PCB. This is as far as most people will ever need to go, as you can easily use compressed air to blow the dust out of the custom fin stack if needed.

NVIDIA GeForce GTX 680 Video Card

One of the first things we noticed is that NVIDIA ended the aluminum fin stack at an angle rather than cutting it off square at 90 degrees. NVIDIA said that by providing this extra space between the heatsink and the rear bracket, hot air from the GPU is able to flow through the heatsink and exhaust out the back of the card more efficiently, allowing the GPU to run cooler than with the heatsink placed flush against the bracket.

NVIDIA GeForce GTX 680 Video Card

Here is another look at the end of the GPU cooler fin stack showing the fin end angle which reduces noise and improves cooling performance.

NVIDIA GeForce GTX 680 Video Card

On the other end of the card you can see the power design NVIDIA is using on the GeForce GTX 680: a 4-phase design for the GPU and a 2-phase design for the 2GB of GDDR5 memory. Just looking at this picture you might miss some of the design changes NVIDIA made for improved cooling. For starters, they moved the main GPU power phases to the bottom edge of the card so the cooling fan blows air across them to keep them cooler. To make room for the power phases, NVIDIA moved the cooling fan higher up on the card, which required changing the orientation of the 6-pin power connectors. NVIDIA made a custom stacked connector to provide the room necessary to move the fan higher up on the PCB, which also allowed a half-inch reduction in overall PCB length. Acoustic dampening material is also used inside the GPU fan to minimize unwanted tones, such as blade-passing frequency tones. NVIDIA said they are pleased with the 4+2 phase design and that it is capable of delivering plenty of power for overclocking beyond 1.2GHz on the core clock if desired. As you can tell from the image above, there are unpopulated component pads in both power sections, so it looks like NVIDIA had the ability to run more phases, but didn't need to.

NVIDIA GeForce GTX 680 Video Card Bracket

Next you can remove the four larger Phillips-head spring screws attaching the heatsink to the video card. Removing the heatsink reveals the core for the very first time! If you ever want to change out the thermal paste, this is how far you need to go. As you can see from the image above, the company that assembled this card did a great job with the thermal compound application.

NVIDIA GeForce GTX 680 HSF Base

Here is a better look at the copper bottom of the GPU cooler that makes direct contact with the GK104 GPU core. NVIDIA has had issues getting the base machined flat in previous generations, but we are glad to report that this one is fully machined and flat. The surface finish is fairly rough, though, so enthusiasts and overclockers will likely find lapping this copper base plate an easy way to improve cooling performance, and the three heatpipes are hidden safely on the other side of the plate.

NVIDIA GeForce GTX 680 Video Card

The metal bracket that covers the memory modules is held down by T7 Torx screws and our screwdriver in that size has walked off, so here is a PCB shot provided by NVIDIA.

NVIDIA GK104 Core

Now that the card is stripped down we can take a good look at the heart of the GeForce GTX 680, the GK104 core shown above. The core is labeled GK104-400-A2, which we are guessing indicates an A2 stepping of the GK104 silicon. For reference, the core is about the same size as a United States quarter!

The Test System

Before we look at the numbers, let's take a brief look at the test system that was used. All testing was done using a fresh install of Windows 7 Ultimate 64-bit and benchmarks were completed on the desktop with no other software programs running.

The AMD Radeon graphics cards were tested with Catalyst 12.1 beta drivers and all of the NVIDIA graphics cards ran GeForce 295.51 Beta drivers.

Intel X79/LGA2011 Platform

Intel LGA2011 Test System
Intel LGA2011 Test System

The Intel X79 platform that we used to test all of the video cards was running the ASUS P9X79 Deluxe motherboard with BIOS 0906 that came out on 12/22/2011. The Corsair Vengeance 16GB 1866MHz quad-channel memory kit was set to 1866MHz at 1.5v with 9-10-9-27 1T memory timings. The OCZ Vertex 3 240GB SSD was run with firmware version 2.15.

The Intel X79 Test Platform

Processor: Intel Core i7-3960X
Motherboard: ASUS P9X79 Deluxe
Memory: 16GB Corsair 1866MHz
Video Card: Various
Solid-State Drive: OCZ Vertex 3 240GB
Cooling: Intel RTS2011LC
Power Supply: Corsair HX850W
Operating System: Windows 7 Ultimate 64-bit

Video Cards Tested:

NVIDIA GeForce GTX 680 GPU-Z Information:

NVIDIA GeForce GTX 680 GPU-Z
NVIDIA GeForce GTX 680 GPU-Z Idle

Batman: Arkham City

Batman: Arkham City PC Game

Batman: Arkham City is a 2011 action-adventure video game developed by Rocksteady Studios. It is the sequel to the 2009 video game Batman: Arkham Asylum, based on the DC Comics superhero Batman. The game was released by Warner Bros. Interactive Entertainment for the PlayStation 3, Xbox 360 and Microsoft Windows. The PC and OnLive versions were released on November 22, 2011.

Batman: Arkham City Game Settings

Batman: Arkham City Game Settings

Batman: Arkham City uses the Unreal Engine 3 game engine with PhysX. For benchmark testing of Batman: Arkham City we disabled PhysX to keep it fair and ran the game in DirectX 11 mode with 8x MSAA enabled and all the image quality features cranked up. You can see all of the exact settings in the screen captures above.

Batman: Arkham City Benchmark Results

Benchmark Results: In Batman: Arkham City the NVIDIA GeForce GTX 680 dominated all the other video cards we have ever tested, performing better than the mighty NVIDIA GeForce GTX 590 and even the EVGA GeForce GTX 560 Ti 2Win, both of which have two GPUs on them! The NVIDIA GeForce GTX 680 had an easy time beating the AMD Radeon HD 7970 reference card in this game title. At a resolution of 1920x1200 the GeForce GTX 680 was 19FPS faster than the Radeon HD 7970, which made it 34% faster! Not too shabby!

Batman: Arkham City Benchmark Results

Benchmark Results: We normally don't include 30-inch monitor test results, but since the NVIDIA GeForce GTX 680 has just 2GB of GDDR5 memory we wanted to see how the card performed at a resolution of 2560x1600. We didn't have time to retest all of the cards, but we did benchmark the MSI R7970 Lightning in all the games to see how it performed. As you can see, the mighty MSI R7970 Lightning graphics card with its huge factory overclock wasn't able to hang with the stock NVIDIA GeForce GTX 680 at any resolution.

Battlefield 3

Battlefield 3 Screenshot

Battlefield 3 (BF3) is a first-person shooter video game developed by EA Digital Illusions CE and published by Electronic Arts. The game was released in North America on October 25, 2011 and in Europe on October 28, 2011. It does not support versions of Windows prior to Windows Vista as the game only supports DirectX 10 and 11. It is a direct sequel to 2005's Battlefield 2, and the eleventh installment in the Battlefield franchise. The game sold 5 million copies in its first week of release and the PC download is exclusive to EA's Origin platform, through which PC users also authenticate when connecting to the game.

Battlefield 3 Screenshot

Battlefield 3 debuts the new Frostbite 2 engine. This updated Frostbite engine can realistically portray the destruction of buildings and scenery to a greater extent than previous versions. Unlike previous iterations, the new version can also support dense urban areas. Battlefield 3 uses a new type of character animation technology called ANT. ANT technology is used in EA Sports games such as FIFA, but for Battlefield 3 it has been adapted to create a more realistic soldier, with the ability to transition into cover and turn the head before the body.

This game looks great and we tested with the highest settings possible.  This means we used 'ultra' settings and really punished the cards being tested. We ran FRAPS for two minutes on the single player map called 'Rock and a Hard Place' for benchmarking.

Battlefield 3 Screenshot

Benchmark Results: The NVIDIA GeForce GTX 680 was 11.4 FPS, or 18%, faster than the AMD Radeon HD 7970 reference card! The MSI R7970 Lightning is factory overclocked to 1070MHz on the core and was still unable to catch the GeForce GTX 680 at either 1920x1080 or 1280x1024. The MSI R7970 Lightning was found to be 5.2% slower than the GeForce GTX 680 at the very popular 1920x1080 screen resolution with ultra settings.

Battlefield 3 Benchmark Results

The MSI R7970 Lightning was found to be a tad slower than the NVIDIA GeForce GTX 680 at 1920x1080 and 1280x1024, but on a 30-inch monitor at 2560x1600 it was able to pull ahead by nearly 1 frame per second. Not a significant difference by any means!

Deus Ex: Human Revolution

Deus Ex Human Revolution PC Game

Deus Ex: Human Revolution is the third game in the Deus Ex first-person role-playing video game series, and a prequel to the original game. Announced on May 27, 2007, Human Revolution was developed by Eidos Montreal and published by Square Enix. It was released in August 2011. Human Revolution contains elements of first-person shooters and role-playing games, set in a near-future where corporations have extended their influence past the reach of global governments. The game follows Adam Jensen, the security chief for one of the game's most powerful corporations, Sarif Industries. After a devastating attack on Sarif's headquarters, Adam is forced to undergo radical surgeries that fuse his body with mechanical augmentations, and he is embroiled in the search for those responsible for the attack.

Deus Ex Human Revolution Game Settings

Deus Ex Human Revolution Game Settings

Deus Ex: Human Revolution uses a modified version of Crystal Dynamics' Crystal Engine, which some of you might know as the game engine from the last Tomb Raider title. The developers made some rather hefty modifications to the engine, though, as the graphics are superb in this title.

Deus Ex Human Revolution Benchmark Results

Benchmark Results: The NVIDIA GeForce GTX 680 did very well in Deus Ex: Human Revolution, scoring just slightly higher than the EVGA GeForce GTX 560 Ti 2Win and a tad lower than the GeForce GTX 590. It easily beat the stock AMD Radeon HD 7970, but was beaten by the MSI R7970 Lightning at 1920x1080 by less than 5 FPS. Not a big difference considering the NVIDIA GeForce GTX 680 costs $499 and the MSI R7970 Lightning costs $599!

Deus Ex Human Revolution Benchmark Results

Benchmark Results: The performance gap widens at a 30-inch resolution of 2560x1600, but for an extra $100 you would hope for more than 8 FPS! 

DiRT 3

Dirt 3 PC Game

Dirt 3 (stylized DiRT 3) is a rally racing video game, the third in the Dirt branch of the Colin McRae Rally series, developed and published by Codemasters. However, the "Colin McRae" tag has been completely removed from this iteration. The game was released in Europe and North America on 24 May 2011.

Dirt 3 Game Settings

Dirt 3 Game Settings

Dirt 3 uses the Ego 2.0 game technology engine (more commonly referred to as the Ego Engine or EGO, stylized ego), a video game engine developed by Codemasters. Ego is a modified version of the Neon game engine used in Colin McRae: Dirt, developed by Codemasters and Sony Computer Entertainment using Sony Computer Entertainment's PhyreEngine cross-platform graphics engine. The Ego engine was developed to render more detailed damage and physics as well as large-scale environments.

Dirt 3 PC Game Benchmark Results

Benchmark Results: The NVIDIA GeForce GTX 680 was roughly 6 FPS slower than the EVGA GeForce GTX 560 Ti 2Win, but faster than both the AMD Radeon HD 7970 and the MSI R7970 Lightning. At a resolution of 1920x1080 the GeForce GTX 680 was 16.5% faster than the AMD Radeon HD 7970 reference card.

Dirt 3 PC Game Benchmark Results

Benchmark Results: The NVIDIA GeForce GTX 680 takes the win at 2560x1600 here as well against the MSI R7970 Lightning, which costs $100 more and has 50% more frame buffer!

H.A.W.X. 2

Tom Clancy's HAWX 2

Aerial warfare has evolved. So have you. As a member of the ultra-secret H.A.W.X. 2 squadron, you are one of the chosen few, one of the truly elite. You will use finely honed reflexes, bleeding-edge technology and ultra-sophisticated aircraft - their existence denied by many governments - to dominate the skies. You will do so by mastering every nuance of the world's finest combat aircraft. You will slip into enemy territory undetected, deliver a crippling blow and escape before he can summon a response. You will use your superior technology to decimate the enemy from afar, then draw him in close for a pulse-pounding dogfight. And you will use your steel nerve to successfully execute night raids, aerial refueling and more. You will do all this with professionalism, skill and consummate lethality. Because you are a member of H.A.W.X. 2 and you are one of the finest military aviators the world has ever known. H.A.W.X. 2 was released on November 16, 2010 for PC gamers.

Tom Clancy's HAWX 2

We ran the benchmark in DX11 mode with the image quality settings cranked up as you can see above.

Tom Clancy's HAWX 2

The H.A.W.X. 2 PC game title appears to run on seven threads, as you can see from the Task Manager shot above, which was taken on the test system running the Intel Core i7-3960X processor.

Tom Clancy's HAWX 2 Benchmark Results

Benchmark Results: The NVIDIA GeForce GTX 680 performed slower than the overclocked GeForce GTX 580 cards that we tested, which is strange.

Tom Clancy's HAWX 2 Benchmark Results

Benchmark Results: The factory overclocked MSI R7970 Lightning was able to outperform the NVIDIA GeForce GTX 680 at a resolution of 2560x1600, but fell behind at the lower resolutions.

Just Cause 2

Just Cause 2

Just Cause 2 is a sandbox style action video game developed by Swedish developer Avalanche Studios and Eidos Interactive, published by Square Enix. It is the sequel to the 2006 video game, Just Cause. 

Just Cause 2 Game Settings

Just Cause 2 employs a new version of the Avalanche Engine, Avalanche Engine 2.0, an updated version of the engine used in Just Cause. The game is set on the fictional tropical island of Panau in Southeast Asia, on the other side of the world from the original. Rico Rodriguez returns as the protagonist, aiming to overthrow the evil dictator Pandak "Baby" Panay and confront his former boss, Tom Sheldon.

Just Cause 2 Benchmark Results

Benchmark Results: Once again we see the $499 NVIDIA GeForce GTX 680 performing better than the ASUS GeForce GTX 590, which cost $699 when it came out on March 24th, 2011. It's hard to believe that one year later you are getting better performance at a significantly lower price!

Just Cause 2 Benchmark Results

Benchmark Results: The MSI R7970 Lightning was unable to keep up with the stock NVIDIA GeForce GTX 680 in Just Cause 2, even on the Dell 30-inch monitor at a resolution of 2560x1600!

Metro 2033

Metro 2033

Metro 2033 is an action-oriented video game that combines survival horror and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky and was developed by 4A Games in Ukraine. The game is played from the perspective of a character named Artyom. The story takes place in post-apocalyptic Moscow, mostly inside the metro station where the player's character was raised (he was born before the war, in an unharmed city), but occasionally the player has to go above ground on certain missions and scavenge for valuables.

Metro 2033 Settings

This is another extremely demanding game. Image quality settings were raised to Very High quality with 4x AA and 16x AF. We turned off PhysX, but turned on DOF (Depth of Field) for benchmarking.

Metro 2033

Benchmark Results: Metro 2033 is the only gaming benchmark that we ran that shows the NVIDIA GeForce GTX 680 getting beat solidly by the AMD Radeon HD 7970.

Metro 2033

Benchmark Results: Metro 2033 clearly favors AMD hardware as we have DOF and AA enabled.

3DMark 11

Futuremark 3DMark 11 Benchmark

3DMark 11 is the latest version of the world’s most popular benchmark for measuring the 3D graphics performance of gaming PCs. 3DMark 11 uses a native DirectX 11 engine designed to make extensive use of all the new features in DirectX 11, including tessellation, compute shaders and multi-threading.

Futuremark 3DMark 11 Benchmark Settings

Since Futuremark has recently released 3DMark11 we decided to run the benchmark at both the performance and extreme presets to see how our hardware would run.

3DMark11 Performance Benchmark Results:

Futuremark 3DMark 11 Benchmark Results

Futuremark 3DMark11 with the performance preset showed that the NVIDIA GeForce GTX 680 reference card was able to score nearly 9800 3DMarks, just behind the GeForce GTX 590 and well ahead of the AMD Radeon HD 7970 reference card, which scored just 8100 3DMarks!

3DMark11 Extreme Benchmark Results:

Futuremark 3DMark 11 Benchmark Results

After running Futuremark 3DMark11 with the extreme preset we again see the GeForce GTX 680 performing really well with a score of X3237. The MSI R7970 Lightning scored X3033, which isn't too far behind, but remember that this card is an overclocked Radeon HD 7970 that costs $100 more than the GeForce GTX 680.

Unigine Heaven 3.0

Unigine DirectX 11 benchmark Heaven

The 'Heaven' benchmark, built on the Unigine engine, easily shows off the full potential of DirectX 11 graphics cards. It reveals the enchanting magic of floating islands with a tiny village hidden in the cloudy skies, and with its interactive mode, exploring the intricate world is within reach. Through its advanced renderer, Unigine was one of the first to showcase art assets with tessellation, bringing compelling visual finesse and exhibiting the possibilities of enriching 3D gaming. The distinguishing feature of the benchmark is hardware tessellation, a scalable technology aimed at automatic subdivision of polygons into smaller and finer pieces, so that developers can give their games a more detailed look almost free of charge in terms of performance. Thanks to this procedure, the rendered image approaches the boundary of convincing visual realism.

DirectX 11 benchmark Unigine engine

For this benchmark we used Heaven DX11 Benchmark Version 3.0, which just came out on March 7th, 2012. We haven't run this benchmark in over a year, so it will be interesting to see how this new generation of video cards handles this benchmark.  We wanted to see how the cards would do with mild settings, so we disabled AA and AF and set the Tessellation to moderate. We ran the benchmark at 2560x1600, 1920x1200 and 1280x1024 to see how the cards would perform with a wide variety of settings.

Heaven 3.0 Benchmark Results

With moderate tessellation enabled and AA and AF disabled, the benchmark results were very close. The NVIDIA GeForce GTX 680 performed better than the MSI R7970 Lightning 3GB at both the 1280x1024 and 1920x1080 screen resolutions, but at 2560x1600 the MSI R7970 Lightning pulled ahead just slightly.

DirectX 11 benchmark Unigine engine

Many feel that tessellation is one of the most important features of game titles today and in the future, so we set tessellation to extreme and maxed out the anti-aliasing at 8x and the anisotropic filtering at 16x. These are as high as you can set the tessellation and image quality settings in Heaven DX11 Benchmark v3.0, so it will really punish these cards.

Heaven 3.0 Benchmark Results

As you can see, performance is less than half of what it was with the previous settings, and the NVIDIA GeForce GTX 680 2GB and the MSI R7970 Lightning 3GB are much closer together. Without a doubt the extra memory (frame buffer) available on the Radeon HD 7970 helps here, and the MSI R7970 Lightning was able to take the performance lead in both the 1920x1080 and 2560x1600 resolution tests.

NVIDIA Tessellation Performance

We showed NVIDIA these performance results before the launch of the GeForce GTX 680, as we had been told the GTX 680 is about 4x faster at DirectX 11 tessellation than the AMD Radeon HD 7970 and we didn't see that at all in this benchmark. This is what NVIDIA had to say:

We should have a pretty comfortable lead in Heaven 2.5 and 3.0. I’m surprised we lost in your moderate tessellation testing at 25x16. Running 8xMSAA we’re likely memory bandwidth bound as Heaven isn’t a pure tessellation benchmark. It also stresses other parts of the GPU besides just tessellation.

If you run a tessellation test like tessmark or Microsoft’s SubD11 tessellation test (which both AMD and NVIDIA  use when quoting tessellation perf) you’ll see that 4x difference in tessellation horsepower we’re referring to. - NVIDIA PR

We don't have the time to run every benchmark available, so we'll have to take NVIDIA's word on that one!

Folding & LuxMark v2.0

Many of our readers have been asking for GPGPU tests with our video card reviews. We usually run Folding@home (F@H) on our video cards, but right now it's nearly impossible to do. For the AMD Radeon HD 7970 we could only get F@H to work with Catalyst 12.2 drivers and the latest v7 client with the AMD beta Core_16 executable. After working with AMD and Stanford to get the Radeon HD 7970 up and running we then switched our attention to the NVIDIA GeForce GTX 680 only to find out that it wasn't able to run GPU folding at all right now.

GeForce GTX 680 Folding

Since running GPU folding on the NVIDIA GeForce GTX 680 is out of the question, we threw in the towel and moved on to plan B. OpenCL has certainly taken off in recent years, so we will be looking at LuxMark. LuxMark is an OpenCL benchmark tool and a great test of GPU compute performance. We used LuxMark v2.0 with the Sala benchmark scene, which contains more than 488,000 triangles. This scene was designed by Daniel "ZanQdo" Salazar and adapted for SLG2 by Michael "neo2068" Klemm.

LuxMark v2.0 Sala

This is what the Sala benchmark scene looks like when fully rendered. Let's see how the NVIDIA GeForce GTX 680 and MSI R7970 Lightning do on this scene.

NVIDIA GeForce GTX 680:

GeForce GTX 680 Folding

The NVIDIA GeForce GTX 680 scored 618 points on average in LuxMark v2.0 on the Sala benchmark scene. Not too bad, but as you can see its render is much more pixelated than the fully rendered image.

MSI R7970 Lighting:

GeForce GTX 680 Folding

The MSI R7970 Lightning rendered this scene more cleanly, as you can tell from the image above, and its final score was 1844 points on average.

LuxMark Benchmark Results

Here are the benchmark results for those that like to see a chart! The score is the result in thousands of samples per second, so as you can see the compute performance of the MSI R7970 Lightning is about three times higher than that of the NVIDIA GeForce GTX 680 in this particular benchmark.

Power Consumption

For testing power consumption, we took our test system and plugged it into a Kill-A-Watt power meter. For idle numbers, we allowed the system to idle on the desktop for 15 minutes and took the reading. For load numbers, we measured the peak wattage used by the system while running the OpenGL benchmark FurMark 1.9.2 at 640x480 resolution. We also looped the HAWX 2 benchmark three times and recorded the highest wattage seen on the meter.

GeForce GTX680 Power Class

The NVIDIA GeForce GTX 680 is NVIDIA's very first 28nm desktop graphics card and it has been designed to be very power efficient. The maximum board power is 195 Watts and the heat output of the GTX 680 is 660 BTU per hour. This is a pretty big deal, as this card has a lower power rating than the NVIDIA GeForce GTX 580 and AMD Radeon HD 7970, which are both rated at 250 Watts! The good news is that the NVIDIA GeForce GTX 680 also has reduced power supply requirements!
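
That heat output figure is just the board power expressed in different units; a quick conversion shows the two numbers are consistent:

```python
# Watts convert to BTU/hr at a factor of roughly 3.412, so the quoted heat
# output lines up with the 195W maximum board power figure.
board_power_w = 195
print(f"{board_power_w * 3.412:.0f} BTU/hr")  # ~665, in line with the quoted 660
```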

Total System Power Consumption Results

Power Consumption Results: With the NVIDIA GeForce GTX 680 video card in our test system we found that the idle power draw at the wall outlet was 97 Watts. During HAWX 2 gaming the power draw jumped up to 317 Watts, and when running Furmark we hit a peak of 353 Watts! These are pretty darn impressive power numbers and fairly consistent. For example, in Batman: Arkham City the GeForce GTX 680 peaked at 342 Watts and in Just Cause 2 we hit 343 Watts. It really depends on the game title, but we saw the power meter fluctuate between 315-345 Watts across all the game titles we tested here today.

Keep in mind the NVIDIA GeForce GTX 580 peaked at 472 Watts in Furmark and the GeForce GTX 680 peaked at just 353 Watts!  This is an impressive 119 Watt decrease in power use!

Temperature & Noise Testing

Since video card temperatures and the heat generated by next-generation cards have become an area of concern among enthusiasts and gamers, we want to take a closer look at how the graphics cards do at idle, during gaming and finally under a full load.

Video Card Temps

We recorded temperatures during several scenarios on each of the cards we tested today and the benchmark results are shown above. The GeForce GTX 680 had an idle temperature of 35C, which is mighty impressive for being such a high-end enthusiast graphics card. Firing up Furmark the card heated up to 81C and after running a few loops of the H.A.W.X. 2 benchmark the card peaked at 78C. The GPU cooler on this card is very quiet, but as you can see the temperatures do get up fairly high. We should note that the AMD Radeon HD 7970 with the reference cooler does get hotter in Furmark than the NVIDIA GeForce GTX 680!

Video Card Noise Levels

We recently upgraded our sound meter to an Extech sound level meter with ±1.5dB accuracy that meets Type 2 standards. This meter ranges from 35dB to 90dB on the low measurement range, which is perfect for us as our test room usually averages around 38dB. We measure the sound level two inches above the corner of the motherboard with 'A' frequency weighting. The microphone wind cover is used to make sure no wind is blowing across the microphone, which would seriously throw off the data.

Video Card Noise Levels

The NVIDIA GeForce GTX 680 did excellently in our audio testing. At idle the GeForce GTX 680 came in at 42.7dB and it peaked at 56.7dB in Furmark! Fan speed ramped up and down smoothly, and we observed the GTX 680 at 1110 RPM at idle, 2100 RPM in games and 2400 RPM in Furmark. We found the NVIDIA GeForce GTX 680 with the reference cooler to be quieter than the MSI R7970 Lightning with the new Twin Frozr IV GPU cooler at both idle and full load in Furmark! In games they were on par with each other!

Overclocking The GeForce GTX 680

EVGA Precision 3.0.1 Overclocking Utility

EVGA has been producing the Precision overclocking utility for years and it has without a doubt been one of our favorite tools for overclocking video cards. When we first saw the GeForce GTX 680 we were shown a demo of a beta version of the EVGA Precision 3.0.0 software, which was unlike anything we have used before due to the new core architecture of the GTX 680. You can now adjust the power target of the video card and the GPU and memory clock offsets within a certain range.

EVGA Precision 3.0.1 Overclocking Utility

EVGA Precision v3.0.1 lets you increase the power target to 132%, the GPU clock offset to 549MHz and the Memory clock offset to 1000MHz. EVGA informed us that they are seeing +150MHz on the core and +500MHz on the memory as being attainable on pretty much all their cards. They said that on some cards they have been able to take the memory over +700MHz!

EVGA Precision 3.0.1 Overclocking Utility

The utility allows you to adjust the GPU voltage from the default setting of 0.975 volts up to 1.150 volts. In all honesty, we found that setting the voltage manually doesn't do much good, because the GTX 680 overrides it at will. The good news is that the GeForce GTX 680 does a pretty good job of dynamically adjusting the voltage for the overclock you are trying to reach.

EVGA Precision 3.0.1 Overclocking Utility

After spending an afternoon with our GeForce GTX 680 we found that we were able to reach +165MHz on the core and +155MHz on the memory. This is a great overclock on the GPU core from what we have been told, but obviously not that good on the memory clock. Luckily for us the memory overclock doesn't boost performance that much and a small overclock is better than no overclock at all.

EVGA Precision 3.0.1 Overclocking Utility

With our max overclock figured out we ran Futuremark 3DMark11 with the performance preset and got exactly P11000 for our overall score!  Reaching the 11,000 mark with a single video card is pretty impressive. You can click the image above to see all the details at full size. You can see how the clock speeds change dynamically on each of the four game tests.

EVGA Precision 3.0.1 Overclocking Utility

A quick look at GPU-Z v0.6.0 shows that we hit 132.6% of the card's TDP rating, which sounds about right as we set the power target to 132%. In Futuremark 3DMark11 the GPU core clock peaked at 1275.8MHz and the memory at 1579.5MHz. We manually set the fan to 60% to ensure efficient cooling (it goes up to 85%) and found the GPU temperature only reached 74C at this fan speed.
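
For context on that 1579.5MHz memory reading: GPU-Z reports the GDDR5 command clock, and the effective data rate is four transfers per clock. Assuming the stock card reads 1502MHz in GPU-Z (which maps to the quoted 6008MHz effective rate), the conversion looks like this:

```python
# GPU-Z reports the GDDR5 command clock; the effective data rate is
# quad-pumped (four transfers per clock). The 1502MHz stock reading is
# our assumption, inferred from the quoted 6008MHz effective rate.
def effective_mhz(reported_mhz):
    return reported_mhz * 4

print(effective_mhz(1502.0))  # stock: 6008.0 MHz effective
print(effective_mhz(1579.5))  # our overclock: 6318.0 MHz effective
```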

The dynamic clocks and the way you overclock the NVIDIA GeForce GTX 680 make it an interesting adventure. For starters no two cards will perform exactly the same, so performance has the potential to vary between two board samples. We also noticed that the dynamic clock that you can reach will change with temperature, so minor variations are most certainly seen in benchmarks. One thing is certain, this will change overclocking. How will you be able to claim you reached an overclock and then repeat it? Dynamic clocks are great for the average consumer, but for the overclocker they might cause some frustration as you can't manually override the dynamic functionality of the card.

GeForce GTX680 Overclocked Benchmarks

NVIDIA GeForce GTX 680 Overclocked Results

In Futuremark 3DMark11 with the performance preset we found the GeForce GTX 680 had an overall score of P9767. When overclocked to the max we hit P11000! This is a performance gain of ~12.6% and enough to put the GeForce GTX 680 ahead of the GeForce GTX 590 and even the AMD Radeon HD 6990 OC! This isn't a real game though, so let's take a look at some real game titles.
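
That percentage checks out against the two scores; a quick one-liner to verify:

```python
# Checking the quoted overclocking gain from the 3DMark11 scores above.
stock, overclocked = 9767, 11000
print(f"{(overclocked - stock) / stock:.1%}")  # 12.6%
```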

NVIDIA GeForce GTX 680 Overclocked Results

In the game title Just Cause 2 we found that the overclock boosted the frame rate significantly! At 1280x1024 we saw a 6.6% performance improvement, followed by 8.0% at 1920x1080 and 11.7% at 2560x1600. Not a bad performance jump for free!

NVIDIA GeForce GTX 680 Overclocked Results

In Metro 2033 we saw a 6-8% performance improvement thanks to overclocking!

NVIDIA GeForce GTX 680 Overclocked Results

The final game title that we benchmarked with the GeForce GTX 680 overclocked was Battlefield 3, where we saw a 9.4% performance improvement at 2560x1600 and were able to play at that 30-inch screen resolution at above 50 FPS on average. Overall we saw a solid 8-11% performance increase from our overclock in BF3.

Final Thoughts & Conclusions

NVIDIA GeForce GTX 680 Video Card

NVIDIA told us from the start that the GeForce GTX 680 is designed for gamers. Our testing showed that this card did phenomenally well in DirectX 11 game titles and is currently the overall fastest graphics card for gaming. It didn't flat-out win every benchmark at every resolution, but it placed ahead of the AMD Radeon HD 7970 ($549) and the MSI R7970 Lightning ($599) more often than not. The NVIDIA GeForce GTX 680 also ran quieter, used less power, is shorter, and carries a lower price. If the GeForce GTX 680 had run cooler and dominated every test, it would have flat-out massacred the AMD Radeon HD 7970 'Tahiti' graphics card.

In the seven game titles that we tested today, we found the NVIDIA GeForce GTX 680 to be 14.34% faster than the stock AMD Radeon HD 7970 reference card when we compared all of the test results at a resolution of 1920x1080. Factor in that the GeForce GTX 680 costs $50, or 9%, less and you've got some pretty good reasons to go with an NVIDIA GeForce GTX 680 over an AMD Radeon HD 7900 series card. The fact that the card uses less power, is smaller and quieter, and has lower power supply requirements is icing on the cake. NVIDIA also has innovative technologies like PhysX, SLI, full stereoscopic 3D with wireless NVIDIA 3D Vision glasses, Adaptive VSync, TXAA and more.

The NVIDIA GeForce GTX 680 is unlike anything we have ever overclocked before, but after a few minutes the light bulb clicks on and you figure it out. We were able to overclock the card to where GPU Boost was hitting 1275.8MHz! This overclock gave us a solid 6-13% performance increase over the default GeForce GTX 680 clock speeds. That is a significant performance increase and one that you can notice when gaming at higher resolutions.

It appears that NVIDIA really likes the $499 price point, as the company launched the GeForce GTX 480, GeForce GTX 580 and now the GeForce GTX 680 at a suggested retail price of $499. NVIDIA says that retailers will begin selling cards tomorrow, but our sources have informed us that supply will be extremely limited at first. (Newegg does have cards available, but the ASUS cards sold out in under thirty minutes.) NVIDIA says that supply will get better in the weeks and months ahead, so expect these cards to be out of stock from time to time. For $499 the NVIDIA GeForce GTX 680 appears to be the right choice for someone looking to spend $500 on a graphics card. AMD's competing graphics cards are the Radeon HD 7950 3GB at $457 shipped and the Radeon HD 7970 at $549.99 shipped.

If there is an Achilles' heel to the GeForce GTX 680, it appears to be GPU compute performance, as it is fairly obvious that NVIDIA did not focus on compute with this desktop gaming graphics card. The company did not give us a compute performance rating, distributed computing programs like Folding@home aren't working yet, and LuxMark didn't show great results. We asked NVIDIA about this and they said that a folding update is in the works and that they are focused on compute use cases in actual games, with things like PhysX. This makes sense and helps separate the GeForce and Quadro product lines a bit better.

EVGA GeForce GTX 680 Hydro Video Card

We heard from ASUS and EVGA that they will be coming out with custom-designed cards over the next 60 days! EVGA sent over a picture of its EVGA GeForce GTX 680 Hydro Copper card as a teaser! The price isn't set on this card, but we expect it to be about $200 more than the reference card, so around $699.

Now that the NVIDIA GeForce 600 series and AMD Radeon HD 7000 series have both been announced, we can see how the graphics card selection is going to look for the rest of 2012. It appears that both companies have excellent solutions and that gamers are going to be the real winners this time around. When we published our AMD Radeon HD 7970 launch article in December 2011 we concluded by asking if AMD had released a product that could hold off NVIDIA's upcoming Kepler-based GPUs. AMD had the lead three months ago, but as you can see NVIDIA has managed to catch up and, for the most part, pass the AMD Radeon HD 7970!


Editor's Choice Award

Legit Bottom Line: We didn't expect to see NVIDIA launch the GeForce GTX 680 in March, but we are glad the card arrived early and we are very happy with the performance numbers and features!