When NVIDIA introduced the GeForce GTX Titan back in February, they unleashed a video card that quickly became recognized as the fastest single-GPU card on the desktop market. The GeForce GTX Titan uses the GK110 'Kepler' GPU along with 6GB of GDDR5 memory to show gamers how current game titles are meant to be played. No doubt the GeForce GTX Titan runs circles around the competition, but it also costs $999, and that is out of reach for most mainstream and enthusiast gamers. Shortly after the NVIDIA GeForce GTX Titan was released, we began hearing rumors of a less costly card with fewer stream processors and less memory to help fill the huge price gap between the GeForce GTX 680 at $449 and the GTX Titan at $999. The rumor mill was calling that card the GeForce GTX Titan LE, but it appears that was wrong. NVIDIA had other plans and introduced the GeForce GTX 780 last week, the first GeForce 700 series graphics card to be released!
If you take a look at the image above and compare the GeForce GTX Titan (top) to the GeForce GTX 780 (bottom), you should be able to figure out where this is all headed. The NVIDIA GeForce GTX 780 might be the first video card to be released in the GeForce GTX 700 series, but it's not really groundbreaking. It's basically a GeForce GTX Titan with some things on the GK110 GPU disabled and half the memory. The best part about the new NVIDIA GeForce GTX 780 is that it comes with a suggested retail price of $649, which makes it 35% less than the cost of a GeForce GTX Titan. The question is, what did they disable?
| | GTX Titan | GTX 780 | GTX 680 |
|---|---|---|---|
| Memory Amount | 6144MB GDDR5 | 3072MB GDDR5 | 2048MB GDDR5 |
| Memory Bus Width | 384-bit | 384-bit | 256-bit |
| Memory Bandwidth | 288.4 GB/s | 288.4 GB/s | 192.26 GB/s |
| Suggested Power Supply | 600W | 600W | 550W |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm |
Here is a quick chart that breaks down the key specifications for those who like to see everything in a table. The GeForce GTX 780 has a base clock of 863MHz and a Boost clock of 902MHz, which is higher than the clocks on a Titan, but notice that it has fewer stream processors (CUDA cores). The GeForce GTX 780 has two SMX units (Streaming Multiprocessors) disabled, which means it has 2304 CUDA cores versus 2688 on Titan. The number of texture units also decreases from 224 on Titan to 192 on the GeForce GTX 780.
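The core-count arithmetic above is easy to sanity-check. A quick sketch, assuming the Kepler GK110 figures of 192 CUDA cores and 16 texture units per SMX (the SMX counts themselves follow from the numbers in the text):

```python
CORES_PER_SMX = 192    # CUDA cores per Kepler SMX (assumed GK110 figure)
TEX_UNITS_PER_SMX = 16  # texture units per Kepler SMX (assumed GK110 figure)

titan_smx = 14               # SMX units active on the GTX Titan
gtx780_smx = titan_smx - 2   # two SMX units disabled on the GTX 780

print(titan_smx * CORES_PER_SMX)       # 2688 CUDA cores (Titan)
print(gtx780_smx * CORES_PER_SMX)      # 2304 CUDA cores (GTX 780)
print(titan_smx * TEX_UNITS_PER_SMX)   # 224 texture units (Titan)
print(gtx780_smx * TEX_UNITS_PER_SMX)  # 192 texture units (GTX 780)
```

The math lines up with the spec sheet on both cards.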
NVIDIA reduced the memory down to just 3GB on the GTX 780, but notice that the clock speed is still 6008MHz (effective) on a 384-bit memory interface. This means the memory bandwidth is still 288.4 GB/s and the peak texture fill rate is 165.7 GTexels/s. The reduction in memory might be a concern to some gamers, so we'll be addressing those concerns by benchmarking the GeForce GTX Titan, GTX 780, GTX 680 and GTX 580 at 5760x1080, 2560x1600 and 1920x1080. This wide range of resolutions should cover the frame buffer performance questions that many have. We believe that Ultra HD resolutions are where gaming is headed, so we will be headed in that direction with our high-end GPU reviews.
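Those two headline figures follow directly from the clocks and unit counts already given. A minimal sketch using the standard formulas (effective memory clock times bus width for bandwidth, base clock times texture units for fill rate):

```python
# Memory bandwidth: effective data rate x bus width, converted to bytes
effective_clock_hz = 6008e6   # 6008MHz effective GDDR5 data rate
bus_width_bits = 384
bandwidth_gbs = effective_clock_hz * (bus_width_bits / 8) / 1e9
print(round(bandwidth_gbs, 1))  # 288.4 GB/s

# Peak texture fill rate: base clock x number of texture units
base_clock_hz = 863e6   # 863MHz base clock
texture_units = 192
fill_rate_gtexels = base_clock_hz * texture_units / 1e9
print(round(fill_rate_gtexels, 1))  # 165.7 GTexels/s
```

Since the GTX 780 keeps the full 384-bit bus and the same 6008MHz memory clock as Titan, halving the memory capacity costs it nothing in bandwidth.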
The NVIDIA GeForce GTX 780 video card shares the same general PCB design and GPU cooler as the GeForce GTX Titan. The GeForce GTX 780 looks sharp thanks to a silver aluminum casing for the cover and a clear polycarbonate window that allows you to see the vapor chamber and dual-slot heatsink used on the card.
The GeForce GTX logo on the edge of the board is also LED backlit just like the one on the GeForce GTX 690 and Titan. NVIDIA suggests a 600W or greater power supply for the GTX 780 and you’ll need both an 8-pin and a 6-pin PCIe power connector for proper operation.
The back of the NVIDIA GeForce GTX 780 doesn't have a backplate, and you can't see any of the GDDR5 memory ICs that make up the card's 3GB frame buffer.
NVIDIA is using Samsung K4G20325FD-FC03 GDDR5 memory chips on the GeForce GTX 780.
Display outputs include two dual-link DVIs, one HDMI and one DisplayPort 1.2 connector. This video card easily supports 4K resolution monitors and can drive up to four monitors concurrently. Keep in mind that the GeForce GTX 580 supported just two monitors, so NVIDIA has really been improving its monitor support and display connectivity options over the past couple of years.
Now that we have the basics covered, let’s take a quick look at the test setup and move along to the game benchmarks!