NVIDIA Maxwell Brings GeForce GTX 980

Earlier this year NVIDIA gave us our first look at the Maxwell GPU architecture with the release of the GeForce GTX 750 Ti video card. We were thoroughly impressed by the little card and have been waiting more than half a year for the larger GPU cores to come to market. NVIDIA is ready to release the latest Maxwell GPU, GM204, which will be featured on the new GeForce GTX 980 and GeForce GTX 970 video cards coming to market today. NVIDIA calls Maxwell the most advanced GPU ever made and says development began in 2011. NVIDIA had three main design goals for the GM204 Maxwell GPU:
- Extraordinary Gaming Performance for the Latest Displays (4K and beyond as well as the emerging Virtual Reality displays)
- Incredible Energy Efficiency (designed to be 2x the performance/watt of Kepler powered GeForce GPUs)
- Dramatic Leap Forward in Lighting With Voxel Global Illumination or VXGI for short (fully dynamic global illumination at playable frame rates)
| | GTX 980 | GTX 780 | GTX 680 | GTX 580 |
|---|---|---|---|---|
| GDDR5 Memory Clock | 7,000MHz | 6,008MHz | 6,008MHz | 4,008MHz |
| Memory Bus Width | 256-bit | 384-bit | 256-bit | 384-bit |
| FP64 | 1/32 FP32 | 1/24 FP32 | 1/24 FP32 | 1/8 FP32 |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm | TSMC 40nm |
NVIDIA Maxwell GPU Architecture

Performance is a big deal for any graphics card, but NVIDIA really wanted to bring a card to market that would impress gamers with the fastest frame rates possible and also help bring 4K displays and VR technologies to the mainstream market. For many years PC displays haven't really changed. The majority of gamers are playing on a 1080p display, and worldwide the vast majority of gamers are on sub-4K displays. NVIDIA knows that the display market is going through some rather dramatic changes right now and recognizes that prices on 4K monitors aimed at gamers have been falling quickly over the past year. With 4K adoption increasing, NVIDIA knew that Maxwell would have to support 4K displays and beyond. That is why Maxwell supports HDMI 2.0 (4K resolution at 60Hz) and something called Dynamic Super Resolution (DSR), which lets you run a game at a higher resolution and then resizes the frames with a 13-tap Gaussian filter. This new form of 'downsampling' gives you the crisper visuals of a 4K panel without the need to run out and purchase a 4K display. Virtual Reality (VR) also needs higher-performing GPUs, as there are two displays instead of one, doubling the performance demands. NVIDIA had to come up with an architecture that allows consumers to go out, buy a 4K display and a triple-A game title, and enjoy them at an acceptable frame rate.

Performance is easy to get when you can go bigger and better, but NVIDIA was facing a market that wanted smaller PCs and better energy efficiency. The days of building PCs with multiple GPUs in massive full towers might very well be slowing down, and in the notebook market we are all finding that notebooks are becoming thinner. NVIDIA had to find a way to get the performance they knew the market needed, but deliver it in a package that would work in both desktops and laptops.
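NVIDIA has not published the exact filter DSR uses, but the "filter with 13 Gaussian-weighted taps, then decimate" idea can be sketched in miniature on a 1D scanline. The `sigma` value, the edge clamping and the 2:1 decimation below are our own assumptions for illustration:

```python
import math

def gaussian_taps(n=13, sigma=2.0):
    """Normalized weights for an n-tap Gaussian filter (sigma is assumed)."""
    half = n // 2
    w = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-half, half + 1)]
    total = sum(w)
    return [x / total for x in w]

def downsample(samples, factor=2, taps=None):
    """Filter then decimate a 1D signal, e.g. a 3840-wide scanline to 1920."""
    taps = taps or gaussian_taps()
    half = len(taps) // 2
    out = []
    for center in range(0, len(samples), factor):
        acc = 0.0
        for k, w in enumerate(taps):
            # Clamp the tap index so edge pixels reuse the border sample
            idx = min(max(center + k - half, 0), len(samples) - 1)
            acc += w * samples[idx]
        out.append(acc)
    return out

scanline_4k = [float(i % 16) for i in range(3840)]
scanline_hd = downsample(scanline_4k, factor=2)
print(len(scanline_hd))  # 1920
```

Because the weights are normalized, flat regions pass through unchanged while high-frequency edges are smoothed rather than aliased, which is the point of filtering before decimation.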
That is why NVIDIA took what they learned building the NVIDIA Tegra K1 and made some major changes with the Maxwell architecture to make everything work. The image above is a top-level overview of the GM204 block diagram. The GM204 GPU has 5.2 billion transistors and is double the performance of the GK104 'Kepler' GPU found on the GeForce GTX 680. NVIDIA has doubled up on the SMMs, geometry units and ROPs. The texture units were kept the same, as NVIDIA felt they were doing fine there. The GDDR5 memory clock speed was increased along with overall memory efficiency, so NVIDIA was able to stick with a 256-bit bus interface on Maxwell. The Maxwell GM204 has a new Streaming Multiprocessor (SM) design that uses the same basic architecture as the first version of Maxwell, the GM107 GPU used on the GeForce GTX 750 and 750 Ti. NVIDIA was able to come up with a more efficient design by changing the data path organization. NVIDIA moved from a 192-CUDA-core arrangement per SM on Kepler (a non-power-of-two organization) to a 128-CUDA-core arrangement per SM. This configuration aligns with the warp size to save space and power versus the Kepler design, where NVIDIA often found the SM units were not full, which hurt the efficiency of the GPU. By going down to 128 cores per SM and making some scheduler improvements, the designers at NVIDIA were able to get 40% more performance per CUDA core. NVIDIA also increased the L2 cache size from 512KB in the GK104 to 2MB in GM204! Let's move along and take a look at the GeForce GTX 980 reference card!
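As a rough sanity check on the "double the performance of GK104" claim, the published CUDA core counts (2048 for GM204, 1536 for GK104 — figures not stated in the article itself) can be combined with the claimed 40% per-core gain, ignoring clock-speed differences:

```python
# Published core counts (an assumption here: these are not in the article text)
cores_gk104 = 1536    # GeForce GTX 680
cores_gm204 = 2048    # GeForce GTX 980
per_core_gain = 1.40  # NVIDIA's claimed per-CUDA-core improvement

core_ratio = cores_gm204 / cores_gk104
overall = core_ratio * per_core_gain
print(f"{core_ratio:.2f}x the cores x {per_core_gain:.2f}x per core = {overall:.2f}x overall")
# ~1.87x from cores and efficiency alone; clock-speed gains cover the rest of the 2x claim
```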
GeForce GTX 980 4GB Reference Card

NVIDIA could have just sent us a reference card, but rather than being boring, they sent us some killer retail packaging to help launch what they are calling "The World's Most Advanced GPU." The product packaging was designed to lift off from the top and reveal the GeForce GTX 980, the end result being a display showing off the card in an upright position. NVIDIA is certainly known for its showmanship, and this is a heck of a way to open up what is arguably one of the best looking graphics cards we have ever seen. The NVIDIA GeForce GTX 980 reference card uses a similar design to the GeForce GTX 690 that came out in 2012 and was later seen on cards like the GeForce GTX Titan. NVIDIA appears to have fallen in love with this overall look, and the styling on the GeForce GTX 980 reference card is very similar to that of the GTX 690 from two years ago. You have the magnesium alloy fan housing with a trivalent-chromium-plated aluminum frame, and the GeForce logo is still LED backlit and glows NVIDIA green. The NVIDIA GeForce GTX 980 reference card measures 10.5 inches in length and takes up two PCI slots. If you are running a GeForce GTX 580, GTX 680 or GTX 780 you'll be able to drop it into your current gaming system without any issues, since the size is no larger than what we have seen in years past and the power requirements are actually lower. When it comes to the video outputs on the GeForce GTX 980 reference card, NVIDIA has gone in a new direction with the I/O bracket design and connector choices. NVIDIA went with three DisplayPort connectors, an HDMI 2.0 connector (supporting 4K@60Hz) and a single dual-link DVI output. This means that NVIDIA now offers a total of five video connections, but only four can be used simultaneously.
This new video output arrangement means that you can run three NVIDIA G-Sync enabled displays off of one GeForce GTX 980 video card if you desire to do so. If you want to run a multi-panel setup and don't want to sacrifice any image quality, you'll likely still need a 2-way or 3-way SLI setup to get the performance needed to drive the resolution of such a display configuration. NVIDIA also changed the way the exhaust ports are shaped on the I/O bracket. The ability to support HDMI 2.0 is a pretty big deal, and NVIDIA has the world's first GPU able to support it. Previous-generation GPUs supported HDMI 1.4 and could only officially drive 4K displays at 30Hz with '444' RGB pixels or at 60Hz with '420' YUV pixels. The GeForce GTX 980 now supports full-resolution '444' RGB pixels at 60Hz on 4K displays. All GM2xx Maxwell GPUs also ship with an enhanced NVENC encoder that adds support for H.265 encoding. NVIDIA claims that Maxwell's video encoder improves H.264 video encode throughput by 2.5x over Kepler and that it can encode 4K video at 60 FPS. The max resolution supported by Maxwell is 5120x3200, so get ready for displays that go way beyond Ultra HD in the years to come! The NVIDIA GeForce GTX 980 is rated at 165 Watts Thermal Design Power (TDP) and needs just a pair of 6-pin PCIe power connectors, located along the top edge of the graphics card, for proper operation. NVIDIA recommends a 500W or larger power supply, and we've seen the Add-In-Board (AIB) partners recommending a 600W or larger power supply on custom cards that are overclocked beyond the NVIDIA GTX 980 reference specifications. This is a pretty big feature of this card, as NVIDIA has really dropped the TDP on its cards over the years and is offering tremendous value when it comes to performance per Watt.
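The HDMI 2.0 capability mentioned above comes down to bandwidth, and a quick back-of-the-envelope pixel data-rate calculation shows why HDMI 1.4 could not drive 4K '444' at 60Hz. This counts raw pixel bits only; real links add blanking and encoding overhead, so actual link rates are higher:

```python
def pixel_rate_gbps(width, height, refresh_hz, bits_per_pixel=24):
    """Raw pixel data rate in Gbit/s, ignoring blanking/encoding overhead."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

rate_4k60 = pixel_rate_gbps(3840, 2160, 60)  # full-resolution 444 RGB
print(f"4K60 444 RGB needs ~{rate_4k60:.1f} Gbps of pixel data")
# ~11.9 Gbps: beyond HDMI 1.4's ~10.2 Gbps of link bandwidth,
# but comfortably within HDMI 2.0's 18 Gbps
```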
The NVIDIA GeForce GTX 580 'Fermi' graphics card had a 244W TDP rating, the NVIDIA GeForce GTX 680 'Kepler' graphics card had a 195W TDP rating, and the new NVIDIA GeForce GTX 980 'Maxwell' graphics card is just 165W TDP. NVIDIA has shaved power consumption down by nearly one-third (32.4%) over the past four years, while greatly improving performance and constantly adding new features along the way. Does the backplate on the GTX 980 reference card look familiar? NVIDIA never blessed us with a sample of the 'ultimate' graphics card known as the GeForce GTX Titan Z, but this backplate design has trickled down from that flagship card to a more affordable one, and we aren't complaining. NVIDIA included the metal backplate to dissipate heat from the VRM and GDDR5 memory ICs, protect the components on the back of the card from installation mistakes, and just make it look good. We have always been fans of backplates for aesthetic reasons alone, and we are glad that NVIDIA went with one on this reference card. The raised section at the end of the card can be removed to improve airflow. This was done to help multi-GPU users running SLI setups, where the airflow between cards is limited. NVIDIA engineers studied airflow patterns between video cards and determined that opening up this area helped bring cooler air to the adjacent fan. It won't do much of anything for those running one card, but if you have more than one GeForce GTX 980 or GTX 970 in your system you'll want to remove that part of the backplate to significantly improve the airflow between the cards. The NVIDIA GeForce GTX 980 uses an aluminum heatsink with three embedded heatpipes that help keep the Maxwell GM204 GPU nice and cool. NVIDIA says that the default GPU Boost 2.0 settings will allow the GTX 980 to boost up to its highest clock frequency and remain there as long as the GPU temperature stays at or below 80C.
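The TDP figures above make the reduction easy to verify; this trivial check uses only the numbers from the article:

```python
# TDP ratings cited in the article (Watts)
tdp = {"GTX 580 (Fermi)": 244, "GTX 680 (Kepler)": 195, "GTX 980 (Maxwell)": 165}

fermi, maxwell = tdp["GTX 580 (Fermi)"], tdp["GTX 980 (Maxwell)"]
reduction = (fermi - maxwell) / fermi * 100
print(f"TDP reduction from GTX 580 to GTX 980: {reduction:.1f}%")  # 32.4%
```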
Once you pull the GPU cooler entirely off you can see the PCB of the GTX 980 reference card along with the GM204 GPU, GDDR5 memory ICs and the power phases. NVIDIA went with a 4-phase voltage regulation circuit with integrated dynamic power balancing circuitry for the GTX 980's GM204 GPU, and there is one additional power phase for the board's GDDR5 memory. There are places for two more power phases, but it doesn't appear that NVIDIA needed them. NVIDIA says that the 4-phase power supply setup has plenty of overvolting headroom and that reference cards in their labs are able to run at speeds of up to 1400MHz when overclocked. The GTX 980 has a base clock of 1126MHz and a boost clock of 1216MHz, so NVIDIA is getting some impressive speeds out of this particular Maxwell GPU. In fact, NVIDIA says that the GeForce GTX 980 runs at higher clock frequencies out of the box than any other GPU the company has ever built. Regularly being able to overclock the GTX 980 to 1400MHz means an easy 15% boost in clock speed, which should translate to a significant improvement in gaming performance. NVIDIA also claims that the GTX 980 runs only moderately hotter and remains relatively quiet when overclocked, which is impressive if true.
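NVIDIA's quoted 1400MHz lab overclock against the stock 1216MHz boost clock works out to the roughly 15% figure mentioned:

```python
base_clock, boost_clock = 1126, 1216  # MHz, GTX 980 reference specifications
overclocked = 1400                    # MHz, NVIDIA's reported lab overclock

headroom = (overclocked - boost_clock) / boost_clock * 100
print(f"Overclocking headroom over the stock boost clock: {headroom:.0f}%")  # ~15%
```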
GeForce GTX 970 and 980 Retail Cards

NVIDIA didn't give us any GeForce GTX 970 video cards to try out, but luckily our friends at EVGA and Zotac were able to send over retail GTX 970 cards. Gigabyte also sent us our first retail GeForce GTX 980 video card, so we'll be including that in the performance testing as well. The card Gigabyte sent over is the GV-N980G1 GAMING-4GD. This card is factory overclocked and features a base clock of 1228MHz and a boost clock of 1329MHz, while the memory is kept stock at 7000MHz. The GM204 Maxwell GPU is kept cool by a Gigabyte WindForce 3X 600W GPU cooler, and the card requires a 600W or larger power supply with two 8-pin PCIe external power connectors. This dual-slot card measures in at 312mm x 129mm x 43mm (LxWxH), so at 12.28" in length it is pretty long. The next card is the ZOTAC GeForce GTX 970 AMP! Omega Edition, sold under part number ZT-90102-10P. This card is ZOTAC's mid-range GTX 970 model and features a base clock of 1102MHz and a 1241MHz boost clock. ZOTAC also overclocked the memory up to 7046MHz, so this is one of the few cards we have seen so far that has both the CUDA cores and GDDR5 memory overclocked out of the box. The ZOTAC GeForce GTX 970 AMP! Omega Edition is kept cool by the dual-fan IceStorm with ExoArmor GPU cooler, and it is big and beefy looking. It also features customized Power+ circuitry and OC Plus real-time performance tuning. ZOTAC OC Plus is an exclusive power regulation controller module that communicates directly with the GPU via an internal bus, and with the new ZOTAC FireStorm real-time overclocking software via an internal USB interface, to bring detailed real-time monitoring intelligence and overclocking capabilities. FireStorm features an all-new, easy-to-use interface with quick overclock presets and precise overclocking adjustments. This card has two 8-pin PCIe power connectors and needs a 500W or larger power supply for proper operation. The ZOTAC GeForce GTX 970 AMP!
Omega Edition is 10.5 inches in length and takes up 2.5 PCI slots due to how thick the fan is. Last, but certainly not least, is the EVGA GeForce GTX 970 with part number 04G-P4-0972-KR. This card comes with the NVIDIA GeForce GTX 970 reference clock speeds of 1050MHz base and 1178MHz boost, and is topped off by the EVGA ACX 1.0 GPU cooler. This card doesn't have many frills and as a result is just 9.5 inches in length. EVGA went with the standard configuration of two 6-pin PCIe power connectors on this card, as that should be plenty for this 145W TDP card. All three of the cards feature custom GPU coolers and happen to have black fan shrouds with differing accent colors. The Gigabyte and ZOTAC cards have backplates, but EVGA does not include one on this particular GeForce GTX 970 model. When it comes to the display outputs, only ZOTAC kept the original NVIDIA configuration for Maxwell. EVGA went with a familiar design, the one it used on the Kepler series, which means you get a pair of dual-link DVI outputs, a single DisplayPort and an HDMI 2.0 port. Nothing wrong with this, but you won't be able to run a triple G-Sync configuration on this card. Would you want to run that on a single GeForce GTX 970, though? Gigabyte went with the most robust design of the bunch: three DisplayPorts, two dual-link DVIs and an HDMI 2.0 port. The downside to all those connectors on the Gigabyte card is that there is hardly any room to exhaust hot air from the system. Let's take a quick look at some more Maxwell features and get on to testing!
NVIDIA VXGI, DSR & MFAA

Voxel Global Illumination (VXGI)

Lighting has always been one of the biggest challenges in video game graphics; to achieve photorealism it needs to be overcome. The new NVIDIA VXGI is going to be the next evolution in lighting. NVIDIA Voxel Global Illumination (VXGI) is based on the work of Cyril Crassin, which started back in 2011. When the concept was originally pioneered, it used a 3D data structure ('voxels') to capture coverage and lighting information at every point in the scene. The data structure could then be traced during the final rendering stage to determine the end result of the light bouncing around in the scene. While the original implementation was able to run on a GeForce GTX 680, it was ultimately limited. Over the last three years NVIDIA has worked on refining the algorithm and developing an implementation that can be accelerated natively by the graphics hardware; the end result of that work is VXGI. The key point of VXGI is that it renders lighting in real time on the GPU. Using a voxel grid and a cone tracing technique, VXGI calculates an approximation of global illumination (GI) and ambient occlusion, all in real time. The voxelization process is sped up by a number of new graphics features. Conservative rasterization is used to test which pixels a triangle touches, which accurately converts the 3D scene geometry into voxels. There is also a multi-projection engine that takes the geometry and re-projects it onto multiple surfaces simultaneously.

Dynamic Super Resolution (DSR)

While the cost of 4K monitors has come down dramatically since the initial wave of them, they are still too pricey for some. Without a doubt though, 4K is going to be the go-to resolution for PC gamers, as it is truly stunning to look at.
While some of us continue to save our dollars for a 4K monitor, NVIDIA is breaking out a new technology with the GeForce GTX 980/970: Dynamic Super Resolution. NVIDIA Dynamic Super Resolution is a technology that allows the GPU to render images at 4K resolution and display them on a lower-resolution display. While you're not truly getting a 4K display this way, the GPU is rendering the scene at 4K, which ultimately gives us a crisper, better-looking image. In a way this technically isn't new; people have long been turning to 'downsampling.' Downsampling is a technique used to trick the GPU into thinking it is really rendering at a higher resolution, though there are some usability and quality issues with it. In order to downsample, the end user needs to set up custom displays within the driver control panel and adjust a handful of low-level display parameters. While some are comfortable with doing this, it's not the most user-friendly operation to complete. Aside from needing the know-how to do this, there can also be a quality issue: at times artifacting can be observed on textures when certain post-processing effects are applied. NVIDIA Dynamic Super Resolution (DSR) works much like traditional downsampling, though it is considerably easier to use. The artifacting is combated by using a 13-tap Gaussian filter during the conversion to display resolution. The 13-tap Gaussian filter is a high-quality filter that will reduce or eliminate the aliasing artifacts that can be seen with simple downsampling. When it comes to the end user operating Dynamic Super Resolution, it doesn't get much easier: all that needs to be done is to turn it on or off.

Multi-Frame Sampled AA (MFAA)

4X MSAA (Multi-Sample Anti-Aliasing) can be taxing to a GPU in today's games, especially if the system is running on a 4K display.
Depending on the game, the system won't be able to deliver playable frame rates with 4X MSAA, and if you're going for a quality image, disabling the 4X MSAA really isn't an option either. NVIDIA has come up with a better option than disabling it while maintaining the image quality. Multi-Frame Sampled AA (MFAA) is a new hardware feature that alternates between multiple AA sample patterns to produce the best image quality while running ~30% faster, according to NVIDIA. MFAA alternates between different 2xAA patterns that get blended together with a temporal synthesis filter; the end result is an image that looks like a 4x MSAA image to the naked eye.
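The alternating-pattern idea can be illustrated with a toy coverage shader. The sample offsets and the simple 50/50 temporal blend below are our own assumptions for illustration, not NVIDIA's actual patterns or synthesis filter:

```python
def shade(x, y, samples):
    """Toy coverage shader: fraction of sample points inside a half-plane edge."""
    inside = sum(1 for (sx, sy) in samples if (x + sx) + (y + sy) < 1.0)
    return inside / len(samples)

# Two alternating 2xAA sample patterns (hypothetical offsets within the pixel)
PATTERN_A = [(0.25, 0.25), (0.75, 0.75)]
PATTERN_B = [(0.75, 0.25), (0.25, 0.75)]

def mfaa_pixel(x, y, frame_index, previous):
    """Alternate sample patterns per frame and blend with the prior frame's result."""
    pattern = PATTERN_A if frame_index % 2 == 0 else PATTERN_B
    current = shade(x, y, pattern)
    if previous is None:
        return current
    return 0.5 * (current + previous)  # temporal blend approximates 4x sampling

prev = None
for frame in range(2):
    prev = mfaa_pixel(0.0, 0.0, frame, prev)
print(prev)  # 0.25 — for this static edge, the same coverage a true 4x pattern gives
```

Each frame only pays the cost of 2 samples, yet the blended result for a static edge uses all four sample positions, which is where the claimed speedup over 4x MSAA comes from.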
Test System

Before we look at the numbers, let's take a brief look at the test system that was used. All testing was done using a fresh install of Windows 8.1 Pro 64-bit, and benchmarks were completed on the desktop with no other software programs running. It should be noted that we average all of our test runs. There has been some concern about people testing a cold card versus a hot card, but we've always done our testing 'hot' since the site started more than a decade ago. Video cards & drivers used for testing:
- NVIDIA GeForce 344.07 For All Maxwell/Fermi Cards and 335.23 for the Kepler Cards
- AMD CATALYST 14.5
Intel X79/LGA2011 Platform
| The Intel X79 Test Platform | |
|---|---|
| Processor | Intel Core i7-4960X |
| Motherboard | ASUS P9X79-E WS |
| Memory | 16GB Kingston 2133MHz |
| Solid-State Drive | OCZ Vertex 460 240GB |
| Cooling | Intel TS13X (Asetek) |
| Power Supply | Corsair AX860i |
| Operating System | Windows 8.1 Pro 64-bit |
| Monitor | Sharp PN-K321 32" 4K |
Batman: Arkham Origins

Batman: Arkham Origins is an action-adventure video game developed by Warner Bros. Games Montréal. Based on the DC Comics superhero Batman, it follows the 2011 video game Batman: Arkham City and is the third main installment in the Batman: Arkham series. It was released worldwide on October 25, 2013. For testing we used DirectX 11 Enhanced, FXAA High Anti-Aliasing and all the bells and whistles turned on. It should be noted that V-Sync was turned off and that NVIDIA's PhysX software engine was also disabled to ensure both the AMD and NVIDIA graphics cards were rendering the same objects. We manually ran FRAPS on the single-player game instead of using the built-in benchmark to be as real-world as we possibly could. We ran FRAPS in the Bat Cave, which was one of the only locations where we could easily run FRAPS for a couple of minutes and get somewhat repeatable results. The CPU usage for Batman: Arkham Origins was surprisingly low, with just 10% of the Intel Core i7-4960X being used by this particular game title. You can see that the bulk of the work is being done by one CPU core. Benchmark Results: The NVIDIA GeForce GTX 680 reference card was able to average 34.4 FPS in Batman: Arkham Origins, which is a respectable score for a video card that is nearly 2.5 years old. The NVIDIA GeForce GTX 980 reference card came in at 51.4 FPS on average in a cold state and 49.4 FPS in a hot state. The GeForce GTX 680 didn't have much variance between hot and cold runs, so it looks like GPU temperatures and NVIDIA GPU Boost 2.0 are playing a big role in the performance numbers we are seeing on cards today. Benchmark Results: When you look at performance over time, the GeForce GTX 980 started out at 45 FPS and never dropped below that threshold during the benchmark run on our Ultra HD (3840x2160) test setup.
Battlefield 4

Battlefield 4 is a first-person shooter video game developed by EA Digital Illusions CE (DICE) and published by Electronic Arts. It is a sequel to 2011's Battlefield 3 and was released on October 29, 2013 in North America. Battlefield 4's single-player campaign takes place in 2020, six years after the events of its predecessor. Tensions between Russia and the United States have been running at a record high. On top of this, China is also on the brink of war, as Admiral Chang, the main antagonist, plans to overthrow China's current government; if successful, the Russians will have full support from the Chinese, bringing China into a war with the United States. This game title uses the Frostbite 3 game engine and looks great. We tested Battlefield 4 with the Ultra graphics quality preset, as most discrete desktop graphics cards can easily play with this IQ setting at 1080P and we still want to be able to push the higher-end cards down the road. We used FRAPS to benchmark each card with these settings on the Shanghai level. Battlefield 4 is more CPU-intensive than any other game we benchmark with, as 25% of the CPU is used up during gameplay. You can see that six threads are being used and that the processor is running in Turbo mode at 3.96GHz more often than not. Benchmark Results: In Battlefield 4 with Ultra settings at 3840x2160 we were able to average 33.78 FPS on the GeForce GTX 980 reference card versus 30.07 FPS on the Sapphire R9 290X Vapor-X. The NVIDIA GeForce GTX 680 reference card from March 2012 had a tough time keeping up at 4K and averaged 17.8 FPS. Benchmark Results: The GeForce GTX 980 ran BF4 pretty smoothly with these settings, and if you reduce the image quality just slightly you'll be able to stay above 30 FPS most of the time at 3840x2160. We dropped under 30 FPS in a couple of spots during our gameplay, but for the most part we were above the 30 FPS threshold.
Crysis 3

Like the others, Crysis 3 is a first-person shooter developed by Crytek, using their CryEngine 3. Released in February 2013, it is well known to make even powerful systems choke. It has probably the highest graphics requirements of any game available today. Unfortunately, Crytek didn't include a standardized benchmark with Crysis 3. While the enemies will move about on their own, we attempt to keep the same testing process for each run. Crysis 3 has a reputation for being highly resource-intensive. Most graphics cards will have problems running Crysis 3 at maximum settings, so we settled on no AA with the graphics quality mostly set to Very High with 16x AF. We disabled V-Sync and left the motion blur amount on medium. Crysis 3 appeared to run for the most part on just 3 CPU threads and used up about 15-18% of our Intel Core i7-4960X processor with these settings. Notice that the processor speed was at 3.53GHz and we very seldom, if ever, saw the processor go into Turbo mode in Crysis 3. Benchmark Results: The NVIDIA GeForce GTX 680 reference card really struggles in Crysis 3 at 3840x2160 and can only average 12 FPS. The new GeForce GTX 980 reference card is at nearly 20 FPS, so there is a big difference between the two cards in Crysis 3. The GeForce GTX 780 also gets beat, but the GeForce GTX 780 Ti we are using has a nice factory overclock on it and was actually faster than the GeForce GTX 980 once again. Benchmark Results: It is extremely tough to get identical FRAPS runs in Crysis 3, but there is nothing out of the ordinary here.
Far Cry 3

Far Cry 3 is an open world first-person shooter video game developed by Ubisoft Montreal and published by Ubisoft for Microsoft Windows, Xbox 360 and PlayStation 3. It is the sequel to 2008's Far Cry 2. The game was released on December 4th, 2012 for North America. Far Cry 3 is set on a tropical island found somewhere at the intersection of the Indian and Pacific Oceans. After a vacation goes awry, player character Jason Brody has to save his kidnapped friends and escape from the islands and their unhinged inhabitants.
Far Cry 3 uses the Dunia Engine 2 game engine with Havok physics. The graphics are excellent and the game really pushes the limits of what one can expect from mainstream graphics cards. We set the game to 2x MSAA Anti-Aliasing and ultra quality settings. Far Cry 3 appears to be like most of the other games we are using to test video cards: it uses up about 20% of the processor and runs on multiple cores.
Benchmark Results: The Sapphire R9 290X Vapor-X OC averaged 26.16 FPS, which is good, but the NVIDIA GeForce GTX 980 was slightly better with an average of 28.57 FPS.
Benchmark Results: Some small variations here and there, but no big frame drops on any of the cards to report. The GeForce GTX 980 was almost able to stay above 25 FPS during our entire benchmark run.
Metro Last Light
Metro: Last Light is a first-person shooter video game developed by Ukrainian studio 4A Games and published by Deep Silver. The game is set in a post-apocalyptic world and features action-oriented gameplay with a combination of survival horror elements. It uses the 4A Engine and was released in May 2013. Metro: Last Light was benchmarked with very high image quality settings with SSAA set to off and 4x AF. These settings are tough for entry-level discrete graphics cards, but are more than playable on high-end gaming graphics cards. We benchmarked this game title on the Theater level.
We again found around 20% CPU usage in Metro: Last Light. Benchmark Results: In Metro: Last Light the GeForce GTX 980 averaged 38.08 FPS versus 36.84 FPS for the PowerColor PCS+ AXR9 290X and 35.5 FPS for the ASUS Poseidon GeForce GTX 780 video card. The GeForce GTX 680 2GB reference card is starting to show signs of age at 4K resolutions, as the best it could average was 22.04 FPS. Benchmark Results: No big performance dips or spikes that are out of the ordinary here! We were happy to see the GTX 980 stay above 30 FPS for the entire benchmark run!
Thief

Thief is a series of stealth video games in which the player takes the role of Garrett, a master thief in a fantasy/steampunk world resembling a cross between the Late Middle Ages and the Victorian era, with more advanced technologies interspersed. Thief is the fourth title in the Thief series, developed by Eidos Montreal and published by Square Enix on February 25, 2014. We ran Thief with the image quality settings set at normal with VSYNC disabled. Thief appears to run on the six physical cores of the Intel Core i7-4960X processor and averages around 17-24% CPU usage from what we could tell from the CPU utilization meter built into the Windows 8.1 task manager. Benchmark Results: The NVIDIA GeForce GTX 980 came in with an average benchmark run of 48.33 FPS in Thief. The PowerColor PCS+ Radeon R9 290X averaged 42.34 FPS, and the now 'old' GeForce GTX 680 reference card came in at the back of the pack with an average of 24.80 FPS. These results are with normal image quality settings. Not bad performance on a game title that came out in Q1 2014. Benchmark Results: The performance-over-time chart showed that the GTX 980 video card never dipped below 35 FPS during the benchmark run.
3DMark 2013

3DMark Fire Strike Benchmark Results - For high-performance gaming PCs. Use Fire Strike to test the performance of dedicated gaming PCs, or use the Fire Strike Extreme preset for high-end systems with multiple GPUs. Fire Strike uses a multi-threaded DirectX 11 engine to test DirectX 11 hardware.
Fire Strike Benchmark Results:
Benchmark Results: The 3DMark Fire Strike benchmark had the NVIDIA GeForce GTX 980 video card coming in with an overall score of 11,327, which was actually just higher than a factory-overclocked GeForce GTX 780 Ti! The NVIDIA GeForce GTX 680 reference card scored 6,279, so the GTX 980 is nearly twice as fast.
Temperature & Noise Testing

Temperatures are important to enthusiasts and gamers, so we took a bit of time and did some temperature testing on the NVIDIA GeForce GTX 980 video card. NVIDIA GeForce GTX 980 Idle Temps: At idle we found the GPU core temperature was 35C. We were hoping to see some VRM temperatures, but we were told by W1zzard over at TechPowerUp that the voltage controller has no support for VRM readouts. When gaming we hit 80-81C, which is the default GPU temperature target for the GeForce GTX 980, so the performance was capped there. Our room temperature was 70F (21C), so we were a bit disappointed to find that the temperature target was limiting performance. For example, with the card at all default settings we fired up a game title and ran it for 90 seconds. When the game started, the GPU clock speed on the GeForce GTX 980 was running at 1252MHz, but after about 20 seconds the clock speed started to slowly decline and then stabilized at 1138.7MHz, as you can see above. We tried our best to benchmark all the game titles and benchmarks that we tested today in an 'active' or 'hot' state, as that is the performance you'll be seeing when you game for more than a minute or two.
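The clock-decline behavior we observed can be mimicked with a toy feedback loop. The step size and the control rule below are our own simplification for illustration, not NVIDIA's actual GPU Boost 2.0 algorithm:

```python
def boost_clock_step(clock_mhz, temp_c, temp_target=80.0,
                     step_mhz=13.0, min_clock=1126.0, max_clock=1252.0):
    """Toy boost model: step the clock down while over the temperature target,
    back up when there is thermal headroom, and hold at the target."""
    if temp_c > temp_target:
        return max(clock_mhz - step_mhz, min_clock)
    if temp_c < temp_target:
        return min(clock_mhz + step_mhz, max_clock)
    return clock_mhz

clock = 1252.0  # starting clock we observed when the game launched
# Hypothetical per-sample GPU temperatures as the card heats up under load
for temp in [70, 75, 81, 81, 81, 81, 81, 81, 81, 80]:
    clock = boost_clock_step(clock, temp)
print(f"stabilized at {clock:.1f} MHz")
# The clock ratchets down toward a steady value once the card sits at its
# temperature target, much like the decline to ~1138.7MHz we measured.
```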
We test noise levels with an Extech sound level meter that has ±1.5dB accuracy and meets Type 2 standards. This meter ranges from 35dB to 90dB on the low measurement range, which is perfect for us as our test room usually averages around 36dB. We measure the sound level two inches above the corner of the motherboard with 'A' frequency weighting. The microphone wind cover is used to make sure no wind is blowing across the microphone, which would seriously throw off the data. The NVIDIA GeForce GTX 980 is a pretty quiet card, but it is bittersweet, as we were performance capped due to the card running at its default thermal threshold when gaming. If you were to increase the temperature target, gaming performance would increase, but so would the temperature and power consumption.