NVIDIA Introduces The GeForce 9800 GTX
Video cards are launching left and right these days and it’s been a confusing time for many consumers. Trying to tell people that many of the latest GeForce 8 series cards have the same core as the GeForce 9 series isn’t an easy thing to do. To complicate things even more, NVIDIA has been reducing the frame buffer size on the latest video cards. This is confusing not only to consumers, but to the media as well. Today is no different, so sit back and we will try to walk you through what is going on.
The video card being launched today is the GeForce 9800 GTX, which packs 754 million transistors manufactured on the 65nm process. It is the direct replacement for the GeForce 8800 GTX, which has been the high-end workhorse for NVIDIA since 2006. The GeForce 9800 GTX sounds like it isn’t too much of an upgrade over the GeForce 8800 GTX, but NVIDIA claims this is the most complex GPU it has ever created, and the specifications can be seen below. The GeForce 8800 GTX has a 768MB frame buffer on a 384-bit bus, while the just-released GeForce 9800 GTX has only a 512MB frame buffer on a 256-bit bus. On paper this doesn’t seem to be an improvement, but NVIDIA says it has optimized the G92 core so much that it doesn’t need a larger frame buffer. NVIDIA actually said that internal lab testing showed a larger frame buffer doesn't significantly improve performance.
The reference design features 128 processor cores operating at 1688MHz, which produces an astounding 432 GigaFLOPs of processing power. Each processor core can be dynamically allocated to vertex, pixel, and geometry operations for the utmost efficiency in GPU resource allocation and maximum flexibility in load balancing shader programs. Working alongside the processor cores are 64 texture processors (eight per shader block), each capable of one addressing and filtering operation per clock. With a peak bilinear fillrate of 43.2 gigatexels per second, it offers unprecedented texturing performance for a single GPU. The chip features sixteen render back-end units (ROPs) with full support for 128-bit high-dynamic-range rendering and NVIDIA’s exclusive 16x Coverage Sampling Antialiasing (CSAA) algorithm. The ROP compression system has also been enhanced to improve performance at extreme resolutions such as 2560x1600. The enhanced compression helps keep memory usage in check and improves performance in high-resolution, antialiased scenarios.
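NVIDIA's headline throughput figures check out with simple arithmetic. The sketch below assumes the usual counting of 2 FLOPs per shader core per clock (one multiply-add) and one bilinear-filtered texel per texture unit per core clock:

```python
# Sanity-check the quoted GeForce 9800 GTX throughput figures.

shader_cores = 128
shader_clock_mhz = 1688    # shader (processor core) clock
core_clock_mhz = 675       # base clock driving the texture units
texture_units = 64         # one bilinear texel per unit per clock

# 2 FLOPs per core per clock (one multiply-add) yields the quoted figure.
gflops = shader_cores * shader_clock_mhz * 2 / 1000
print(f"Shader throughput: {gflops:.0f} GigaFLOPs")          # 432

gigatexels = texture_units * core_clock_mhz / 1000
print(f"Bilinear fillrate: {gigatexels:.1f} gigatexels/s")   # 43.2
```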
Both Palit and XFX sent us graphics cards, but NVIDIA forgot to mail out a third card for triple SLI testing. They overnight mailed us a card that should arrive today, so for now let's take a look at just these two cards.
GeForce 9800 GTX Features
The GeForce 9800 GTX uses a dual-slot design that requires two 6-pin PCIe power connectors for proper operation. If power is not properly hooked up, the video card will sound an alarm that you will definitely hear. When it comes to power supplies, a single GeForce 9800 GTX needs at least a 450-watt power supply unit for proper operation. For each additional GeForce 9800 GTX added to the system, 156 watts more is recommended, as that is the maximum board power each card can consume. This means those who want to run triple SLI with three of these cards will need at least a 762W power supply, according to NVIDIA. If you plan on running triple SLI, make sure you have six 6-pin PCIe connectors available; many older power supplies didn’t come with that many power headers and will need adapters. NVIDIA also did something with this launch that we have never seen a video card company do: it suggested that a single GeForce 9800 GTX be paired with at least an Intel Core 2 Duo E4500 CPU, and SLI or 3-way SLI setups with a Core 2 Duo E8500. While this is not shocking, it is interesting to see CPU suggestions for video cards.
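NVIDIA's PSU guidance works out to a simple budget: a 450W base for the first card plus 156W of maximum board power for each additional card. A quick sketch of the math:

```python
def recommended_psu_watts(num_cards: int, base_watts: int = 450,
                          watts_per_extra_card: int = 156) -> int:
    """Minimum PSU wattage per NVIDIA's guidance: 450 W covers the
    system with one card; each extra card adds its 156 W max board power."""
    return base_watts + (num_cards - 1) * watts_per_extra_card

for n in (1, 2, 3):
    print(f"{n} card(s): {recommended_psu_watts(n)} W")
# 3 cards -> 450 + 2 * 156 = 762 W, matching NVIDIA's triple-SLI figure
```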
On the rear of the GeForce 9800 GTX one will find the usual dual dual-link, HDCP-enabled DVI-I outputs for connection to analog and digital PC monitors and HDTVs, plus a 7-pin analog video-out port that supports S-Video directly, along with composite and component (YPrPb) outputs via an optional dongle.
The GeForce 9800 GTX also provides native support for HDMI output, using a certified DVI-to-HDMI adapter in conjunction with the built-in SPDIF audio connector. This solution differs from ATI's in that it requires an audio connection from the motherboard or sound card to the video card. Both ATI's and NVIDIA's solutions send the outgoing audio through the HDMI cable to the TV or receiver, but only NVIDIA's needs an internal connection to make this happen.
Once the audio connection is taken care of, all one needs to do is attach the DVI-to-HDMI adapter to one of the two dual-link, HDCP-enabled DVI-I outputs. After the SPDIF audio connection has been made and the adapter is used with an HDMI cable, you'll be able to output video to an HDTV.
Under The GeForce 9800 GTX Heat Spreader
With the massive heat spreader removed, we can see what the PCB and layout looks like on the GeForce 9800 GTX. Plenty of thermal paste was installed as you can tell!
Here, we can see the SLI connectors that will be used for SLI and Triple-SLI for those that want to add another one or two cards. Also take note of the LED indicator, which tells you if the power connectors are properly attached.
Just below the two 6-pin PCIe power connectors, you can see all the solid state capacitors that are used to make sure the card has just the right amount of power.
With the thermal paste cleared off the 675MHz G92 core, we can see what powers this beast. In case you don't know, the core on the GeForce 8800 GTS 512MB is labeled G92-400-A2, which differs from the core on the GeForce 8800 GT series, labeled G92-270-A2. The GeForce 9800 GX2 that we reviewed a couple weeks ago has a core labeled G92-450-A2. This core is G92-420-A2, yet another variant! The GDDR3 memory ICs on our card were Samsung K4J52324QE-BJ08 parts, which are rated at 1200MHz, so we should be able to overclock the memory beyond the 1100MHz reference clock.
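One note on memory clock notation: GDDR3 transfers data on both clock edges, so the 1100MHz reference clock is typically quoted as 2200MHz effective in spec sheets. A quick sketch of the math, including the resulting bandwidth on the 256-bit bus:

```python
# GDDR3 is double data rate: the "effective" spec-sheet figure is
# twice the actual memory clock.
memory_clock_mhz = 1100           # GeForce 9800 GTX reference clock
effective_mhz = memory_clock_mhz * 2
print(f"Effective rate: {effective_mhz} MHz")      # 2200

# Peak memory bandwidth = bus width (bytes) * effective transfer rate
bus_width_bits = 256
bandwidth_gbs = bus_width_bits / 8 * effective_mhz / 1000
print(f"Peak bandwidth: {bandwidth_gbs:.1f} GB/s")  # 70.4
```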
Palit GeForce 9800 GTX Box & Bundle
Palit might be a new name to many enthusiasts, but they have been around since 1988 and are only now trying to break into the enthusiast graphics card market. Palit is one of the largest providers of graphics cards in the world and has the capacity to produce 2 million graphics cards and motherboards per month. The GeForce 9800 GTX that Palit sent over is based on the NVIDIA reference design. The retail box has the usual frog that you will either love or hate.
The back of the retail box is pretty boring as it lists five technical specifications of the card in twelve different languages.
- NVIDIA GeForce chipset
- Advanced display pipeline with full nView capabilities
- Complete DirectX support, including DirectX 10.0 and lower
- Full OpenGL 2.1 and lower support
- Drivers included for Windows 2000, XP, XP (64-bit) and Vista
The entire back of the box has those five features listed, which seems like a waste of space if you ask us.
The bundle with the Palit GeForce 9800 GTX includes a single DVI-to-VGA adapter and a video output cable. The driver disc includes NVIDIA Forceware drivers and some other software, like VDOTool for unlocking overclocking features. Palit also included the full DVD version of Lara Croft Tomb Raider: Anniversary, which runs $27.99 plus shipping at stores like Newegg and Best Buy, making the bundle very solid.
XFX GeForce 9800 GTX Box & Bundle
The retail packaging on the XFX GeForce 9800 GTX graphics card is car themed, with the GeForce 9 series number made from car parts. The air filter and carbon fiber racing muffler with blue flames coming out look pretty sweet. The core clock frequency and bundled game are clearly labeled on the front of the box, so you know what you are getting before you open it.
The rear of the packaging lists the features of the card for both gameplay and a premium home theater experience. The racing car theme is also extended to the rear of the box with what looks like a turbocharger in the top left corner.
The bundle included with the XFX GeForce 9800 GTX 512MB video card includes two DVI-to-VGA adapters, a DVI-to-HDMI adapter, two 6-pin PCI Express power adapters, an SPDIF audio cable, an S-Video cable and installation instructions. The driver disc includes NVIDIA Forceware drivers and some other software, like CoolBits for unlocking overclocking features. Also included is the full DVD version of THQ's Company of Heroes and another disc with the DirectX 10 game update, which is worth installing since you just purchased a DirectX 10 ready graphics card. The PC version of Company of Heroes runs $19.90 plus shipping through our price engine. Lastly, the bundle also includes the 'Do Not Disturb' door hanger found with the other Alpha Dog series cards.
Now that we know what the features of the card are, what the retail box looks like and what comes with it we can look at performance numbers!
The Test System
The test system was running Windows Vista Ultimate 64-bit with all available Microsoft updates including the hotfixes required for enthusiast video cards to run correctly. ATI CATALYST 8.3 drivers were used on all the Radeon HD graphics cards and NVIDIA Forceware 174.12 drivers were used on all GeForce graphics cards except for the GeForce 8800 GTX and GeForce 9800 GTX as they used Forceware 174.74 drivers. All results shown in the charts are averages of at least three runs from each game or application used.
The Palit GeForce 9800 GTX GPU-Z Screen Shot:
The XFX GeForce 9800 GTX GPU-Z Screen Shot:
The Video Cards:
- EVGA GeForce 9800 GX2 (600MHz/2000MHz)
- Palit GeForce 9800 GTX (675MHz/2200MHz)
- XFX GeForce 9800 GTX (675MHz/2200MHz)
- EVGA GeForce 9600 GT (740MHz/1950MHz)
- Palit GeForce 9600 GT (700MHz/2000MHz)
- XFX GeForce 9600 GT (625MHz/1800MHz)
- EVGA GeForce 8800 GTS 512MB (670MHz/1944MHz)
- XFX GeForce 8800 GT 512MB (625MHz/1.8GHz)
- XFX GeForce 8800 GT 256MB (650MHz/1.6GHz)
- ATI Radeon HD 3870 X2 - 1GB (825MHz/1.800GHz GDDR3)
- ATI Radeon HD 3870 - 512MB (777MHz/1.126GHz GDDR4)
- Diamond Radeon HD 3850 - 512MB(669MHz/1.658GHz GDDR3)
- ATI Radeon HD 3850 - 256MB (669MHz/1.658GHz GDDR3)
- ATI Radeon HD 3650 - 256MB (722MHz/1.584GHz GDDR3)
- Sapphire Radeon HD 3450 - 256MB (600MHz/1.000GHz GDDR2)
All of the video cards were tested on the Intel X38 Express Test platform, which is loaded with the latest and greatest hardware. The Intel Core 2 Quad QX9650 'Yorkfield' processor was used for testing as it proved to be the best desktop processor when it comes to game performance. The test system was also loaded with 4GB of memory and water cooled to ensure throttling of the processor or memory wouldn't cause any issues.
|Intel Test Platform|
Intel Core 2 Quad QX9650
4GB OCZ Reaper PC2-6400
Western Digital SATA 250Gb
Corsair Nautilus 500
Windows Vista Ultimate
Now that we know exactly what the test system is, we can move along to performance numbers. It should be noted that since both the Palit and XFX cards use the same PCB, BIOS, memory ICs, clock speeds and cooling solution, we plotted them as a single data point. There is no performance difference between the two cards.
Tomb Raider: Anniversary
Lara Croft Tomb Raider: Anniversary is the eighth release in the Tomb Raider series. It is a remake of the original Tomb Raider game from 1996 and includes all of the original 'worlds' from Tomb Raider. Created with an advanced version of the engine used for Lara Croft Tomb Raider: Legend, the gameplay mechanics, artificial intelligence and level puzzles of Tomb Raider: Anniversary are now more refined, in-depth and complex. The PC version of this title was released in North America on June 5, 2007.
Tomb Raider: Anniversary has a setting for full-screen anti-aliasing that allows NVIDIA cards to run 16xQ AA and ATI cards to run 4x AA. The highest anti-aliasing settings were used on both sets of cards at 1920x1200 to see how the cards do on this game title, which actually comes bundled with many of the GeForce 9 series cards.
Results: This benchmark is a new one here at Legit Reviews and it has interesting results at 16xQ AA as the GeForce 8800 GTX does better than the GeForce 9800 GTX at 1920x1200. This is because the GeForce 8800 GTX has a larger frame buffer, which helps in cases like this at high resolutions with extreme AA settings.
Company of Heroes
Company of Heroes (CoH) is a real-time strategy (RTS) computer game developed by Relic Entertainment that was released on September 14, 2006. On May 29, 2007 Relic released a patch for Company of Heroes that supports DirectX 10 and we used the latest patch to test DirectX 10 game performance. Company of Heroes is set during World War II where the player follows a military unit, known as Able Company, as they fight their way through some of the greatest and bloodiest battles, including the D-Day landings at Normandy.
Benchmark Results: Company of Heroes is tough on graphics cards with the latest DirectX 10 patch installed. The XFX/Palit GeForce 9800 GTX was faster than the GeForce 8800 GTX.
World in Conflict
World in Conflict (also known as WiC or WIC) is a real-time tactical video game developed by Massive Entertainment and published by Sierra Entertainment for Windows and the Xbox 360. The game was released in North America on September 18, 2007 and was included in our testing as a recent DirectX 10 game title. It also has a threaded engine with multi-core processor support, which is ideal for this testing. In World in Conflict, players defend their country, their hometown, and their families in the face of a Soviet-led World War III, delivering an epic struggle of courage and retribution. You are a field commander leading the era's most powerful military machines in the heroic effort to turn back the invasion, one city and suburb at a time. WiC was tested using the most recent patch available, patch number 002. Let's get on to the benchmarking!
Results: With medium quality settings the GeForce 9800 GTX was faster than the GeForce 8800 GTX and looked even better at higher resolutions, but this is medium quality! Let's check out high quality.
Results: At a resolution of 1600x1200 with high quality settings the GeForce 8800 GTX pulls ahead and leaves the GeForce 9800 GTX in the dust. At 1600x1200 with high quality settings we know it is CPU limited at 50FPS as Quad-SLI and CrossFireX all come in at 50FPS and were removed from the chart. The Intel QX9650 might be one of the fastest processors money can buy, but it just happens to be limiting performance here.
S.T.A.L.K.E.R.: Shadow of Chernobyl
S.T.A.L.K.E.R.: Shadow of Chernobyl uses the 'X-ray Engine' to power the graphics. It is a DirectX 8/9 Shader Model 3.0 graphics engine. Up to a million polygons can be on-screen at any one time, which makes it one of the more impressive engines on the market today. The engine features HDR rendering, parallax and normal mapping, soft shadows, widescreen support, weather effects and day/night cycles. As with other engines that utilize deferred shading (such as Unreal Engine 3 and CryENGINE2), the X-ray Engine does not support anti-aliasing with dynamic lighting enabled. However, a "fake" form of anti-aliasing can be enabled with the static lighting option; this format utilizes a technique to blur the image to give the false impression of anti-aliasing. The game takes place in a thirty square kilometer area, and both the outside and inside of this area is rendered to the same amount of detail.
The game was benchmarked with full dynamic lighting and maximum quality settings at 1920x1200 and 1280x1024 resolutions.
Benchmark Results: The XFX GeForce 9800 GTX wasn't able to keep up with the XFX GeForce 8800 GTX in S.T.A.L.K.E.R., which is a shock. Remember both video cards used ForceWare 174.74 drivers and were tested on the same system with identical settings.
BioShock

BioShock is a game published by 2K Boston/2K Australia and designed by Ken Levine. The game is a PC and Xbox 360 title released on August 21, 2007 in North America. BioShock is a first-person shooter with role-playing customization elements that was developed using Unreal Engine 3.0 and is a multithreaded DirectX 10 game title.
Since 2K didn't include a benchmark script or utility in BioShock, I used FRAPS version 2.9.2 to capture frame rates for 240 seconds in scenes that I personally selected from the game. If you don't know anything about BioShock, let me set the scene for you: after your plane crashes into icy uncharted waters, you discover a rusted bathysphere and descend into Rapture, a city hidden beneath the sea. Constructed as an idealistic society for a hand-picked group of scientists, artists and industrialists, the idealism is no more. Now the city is littered with corpses, wildly powerful guardians roam the corridors as little girls loot the dead, and genetically mutated citizens ambush you at every turn. Let's look at the benchmark results!
Benchmark Results: In BioShock the GeForce 9800 GTX was just a frame or two faster than the GeForce 8800 GTX, which is nothing too impressive for a new card that can barely beat one released back in 2006.
Crysis

Crysis is a science fiction first-person shooter computer game developed by Crytek and published by Electronic Arts. It was released on November 15, 2007 in the United States. The game is based on the CryENGINE2 game engine, an extended version of CryENGINE, the engine behind the hit game Far Cry a number of years ago.
The full retail version of Crysis was used with patch 1.2 for benchmarking. FRAPS was used over the internal benchmark utility to help avoid driver enhancements. Legit Reviews has only NVIDIA data for this game, as we recently updated the game to version 1.2 and picked a new scene to run FRAPS on; we didn't have time to re-test all the cards for this article, but managed to get a number of the high-end cards tested.
Results: With the latest patch of Crysis all of the high end cards look great and the GeForce 9800 GTX seems to be an improvement over the GeForce 8800 GTX with high quality settings.
Results: Very high quality settings push any graphics card to the limits of what it can do even with no AA, and here we see how the cards do at 1920x1200. The GeForce 9800 GTX was found to be 15% faster than the GeForce 8800 GTX with these settings.
Call of Duty 4
Call of Duty 4: Modern Warfare is a first-person shooter developed by Infinity Ward and published by Activision for the Xbox 360, PlayStation 3 and PC. It is the fourth installment in the Call of Duty video game series. It was announced on April 25, 2007 and released on November 6, 2007 in North America. The single-player game can be completed in well under seven hours, but the graphics are awesome.
Call of Duty 4: Modern Warfare runs on a proprietary graphics engine and has features such as true world-dynamic lighting, HDR lighting effects, dynamic shadows and depth of field. "Bullet penetration" is calculated by the engine, taking into account things such as surface type and entity thickness. Certain objects, such as cars and some buildings, are destructible. This makes distinguishing cover from concealment important, as meager protection such as wooden fences and thin walls no longer provides sufficient protection. A bullet's speed and stopping power are decreased after penetrating an object, and this decrease is calculated realistically depending on the thickness and surface of the object penetrated. The game also makes use of a physics engine, which was not implemented in previous Call of Duty titles for the PC. Death animations are a combination of pre-set animations and ragdoll physics. Some mistook the game's graphics for DirectX 10, but it is stated that the graphics use DirectX 9.
Results: The GeForce 9800 GTX couldn't pull ahead of the GeForce 8800 GTX at 1920x1200 resolution with 4x AA enabled. The GeForce 8800 GTX was found to be 16% faster at 1920x1200!
3DMark06

3DMark06 is the worldwide standard in advanced 3D game performance benchmarking and the latest version in the popular 3DMark series! 3DMark06 tests include all-new HDR/SM3.0 graphics tests, advanced SM2.0 graphics tests, AI and physics driven single and multi-core CPU tests, and a collection of comprehensive feature tests to reliably measure next-generation gaming performance today.
Default 3DMark06 settings were used for testing, so a resolution of 1280x1024 was used.
Benchmark Results: The GeForce 9800 GTX did great in 3DMark06 and was found to be 9.7% faster than the GeForce 8800 GTX.
Lightsmark

Lightsmark is a new benchmark/demo with real-time global illumination and penumbra shadows created by Stepan Hrbek. Lightsmark version 1.3 was used as it contains new rendering paths for ATI Radeon HD 2xxx and 38xx series graphics cards. Before version 1.3 was released, the ATI Radeon HD 38xx series video cards were unable to render objects in the benchmark.
- realtime global illumination
- realtime penumbra shadows
- realtime color bleeding
- infinite light bounces
- fully dynamic HDR lighting
- 220,000 triangles in scene
It should be noted that ATI questioned our use of this benchmark, as the developer is a private individual who never contacted developer relations at ATI. ATI also made it clear to us that although this benchmark uses global illumination, it is not similar to the DX10.1 demo that ATI has been showing. ATI isn't sure how the benchmark is made, as they have not had time to look into it, but if it's rendering to cube maps then it's likely that performance could be increased if the app used DX10.1's indexed cube maps.
To be fair to both sides, we contacted the creator of Lightsmark 2007, and Stepan Hrbek had this to say:
"I developed it with 3 Radeon and 3 GeForce cards, randomly switching them, there are no special optimizations, IMHO it's fair. I bought all 6 cards, no gifts.. Small unfairness is only in quality. The same shader on Nvidia card produces smoother shadows but I don't give Nvidia any bonus points for quality, only fps is measured. It uses completely new technique where part of calculation runs on CPU, drivers were not optimized for it for years, so it's possible that it hits some unoptimized driver code. But both companies are in the same situation."
Since we ran the test, we will go ahead and include it, but what the results mean is up in the air.
Benchmark Results: Lightsmark 2007 v1.3 seems to love raw speed and with the higher clock frequencies on the GeForce 9800 GTX it did very well in this benchmark.
Temperature Testing

Since video card temperatures and the heat generated by next-generation cards have become an area of concern among enthusiasts and gamers, we want to take a closer look at how these cards generate heat under load.
With the ATI Tool open, the video card was rendering the object in the picture above at an average 1073 frames per second. This object puts the GPU at 99% load, which is great for heating up the card. We let it run for half an hour and it was enough to get the GeForce 9800 GTX up to 69 degrees Celsius from 54 degrees Celsius. It should be noted that this was done on the open test bench with the motherboard laying flat, so expect slightly higher temperatures with the card installed in a case. Let's take a look to see how the GeForce 9800 GTX does against other graphics cards.
Benchmark Results: The GeForce 9800 GTX does really well at load temperatures compared to the GeForce 8800 GTX. It's obvious that the GeForce 9800 GTX has a better cooling solution.
Overclocking

To overclock the Palit and XFX GeForce 9800 GTX graphics cards we ended up using VDOTool, which comes with the Palit graphics card.
The stock settings are shown above and we aimed to overclock from there using this easy software tool.
We were able to take the core from 675MHz all the way up to 800MHz and the memory from 1100MHz to 1200MHz with little effort. The most impressive part of the overclock, though, was the improvement on the shaders: we were able to overclock them from 1688MHz to an impressive 2025MHz! This was the highest stable overclock we could get. We got a few benchmarks to run at 800/1250/2100, but the system would crash or lock up during testing. These are great overclocking results!
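Expressed as percentage gains over the reference clocks, the overclocking headroom breaks down like this:

```python
# Overclocking headroom as a percentage gain over reference clocks.
reference = {"core": 675, "memory": 1100, "shader": 1688}
overclock = {"core": 800, "memory": 1200, "shader": 2025}

for domain in reference:
    gain = (overclock[domain] / reference[domain] - 1) * 100
    print(f"{domain}: {reference[domain]} -> {overclock[domain]} MHz "
          f"(+{gain:.1f}%)")
# core +18.5%, memory +9.1%, shader +20.0%
```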
Using S.T.A.L.K.E.R. to test out the overclock, we noticed a 15.4% performance improvement on the GeForce 9800 GTX! The GeForce 9800 GTX overclocks really well from what we have found on our two cards.
Power Consumption and Final Thoughts
For testing power consumption, we took our test system and plugged it into a Seasonic Power Angel. For idle numbers, we allowed the system to idle on the desktop for 15 minutes and took the reading. For load numbers, we measured the peak wattage used by the system while running the game World in Conflict at 1600x1200 with medium graphics quality.
Power Consumption Results: Looking at total system power consumption, the GeForce 9800 GTX fits in alongside the GeForce 8800 GTS 512, consuming roughly the same amount of power. This is not at all surprising given that both cards are built around the G92 core and use similar clock speeds. The GeForce 9800 GTX used far less power than the GeForce 8800 GTX, partly because it has less memory and a more efficient core than the G80.
This is by far one of the toughest cards that we have had to draw a conclusion on since we started reviewing video cards when the site started back in 2002. The GeForce 9800 GTX is not faster across the board than the GeForce 8800 GTX that came out back in 2006 as the benchmarks showed. To many readers this may be a disappointment and we'd be lying if we said it doesn't bother us too. When it comes to just raw gaming performance the GeForce 9800 GTX is more of an evolutionary advancement than revolutionary. If you are looking to play at high resolutions with high anti-aliasing it seems that the GeForce 8800 GTX is still the best choice in the majority of the game titles we looked at here today. This shouldn't come as a shock to many as the GeForce 8800 GTX does have more memory on a wider bus.
Gamers and enthusiasts that recently picked up the GeForce 8800 GTS 512MB video card don't have anything to be upset about with the launch of the GeForce 9800 GTX. The GeForce 9800 GTX is slightly faster, but with nearly the same specifications many GeForce 8800 GTS 512MB owners will be able to overclock up to these levels without much trouble at all. The GeForce 8800 GTS 512MB is going to have to drop in price, as the GeForce 9800 GTX at $299-$349 will cut into the same general pricing market, so that card should offer a very sexy price versus performance ratio in the coming weeks.
When it comes to price we have something better to talk about, as the GeForce 9800 GTX has an MSRP between $299 and $349, far less than the nearly $600 the GeForce 8800 GTX commanded when it came out a couple years ago. In all honesty the GeForce 9800 GTX costs far less to produce, so it should cost less! The GeForce 9800 GX2 that we reviewed a few weeks ago is still the enthusiast card to have, and the ATI Radeon HD 3870 X2 just got some tough competition.
The Palit and XFX GeForce 9800 GTX video cards both are great video cards with really no difference between them other than the bundle and price tag. The XFX GeForce 9800 GTX had by far the best bundle of the two, but it will cost you a bit more when you go to check out. If you are wondering where SLI and Triple SLI performance numbers are at you'll have to wait. NVIDIA forgot to send us a third graphics card, but they overnight mailed us one yesterday. Now we just have to re-test it and re-do all the charts!
Legit Bottom Line: The GeForce 9800 GTX doesn't offer a big jump in gaming performance over the GeForce 8800 GTX, but the price is right.