NVIDIA GeForce GTX 750 Ti 2GB Video Cards Arrive

geforce-gtx-maxwell

NVIDIA today announced the new GeForce GTX 750 Ti and GTX 750 video cards, which are particularly interesting as they are the first cards based on NVIDIA's new Maxwell graphics architecture. NVIDIA has been developing Maxwell for a number of years and has decided to launch the new technology first on entry-level discrete graphics cards in the $119 to $149 price range. NVIDIA heavily focused on performance per watt with Maxwell and it clearly shows, as the GeForce GTX 750 Ti 2GB video card measures just 5.7-inches in length with a tiny heatsink and doesn't require any internal power connectors!

nvidia-geforce-gtx750ti

NVIDIA GeForce GTX 750 Series Cards:

maxwell-gpu

NVIDIA says that the Maxwell architecture used on the new GM107 (the GPU core used on the GeForce GTX 750 and 750 Ti) has allowed them to produce the most efficient GPU ever built. Kepler was revolutionary for performance per watt when it came out, and so is Maxwell. Kepler was manufactured on the 28nm process and was essentially an upgrade from the previous Fermi architecture that was done on the 40nm process node. NVIDIA has taken everything they've learned from their Tegra line of mobile processors and the Kepler desktop and enterprise processors and put all that expertise to good use in Maxwell. Maxwell delivers 135% more performance per core and twice the performance per watt. NVIDIA added more control logic to the CUDA cores; each block of control logic now manages fewer cores, which keeps those cores better utilized. The image above shows a very high-level block diagram that highlights the fact that NVIDIA greatly reduced the shader block sizes into smaller clusters with added control logic. This allows them to better control the CUDA cores on each particular Maxwell GPU and really helps improve the efficiency.

kepler-versus-maxwell

Here is a nice table that shows a high-level comparison of the GM107 Maxwell GPU versus the previous generation GK107 Kepler GPU. The GM107 'Maxwell' GPU has 640 CUDA cores with 1.87 billion transistors and a die size of 148mm2. The NVIDIA GK107 'Kepler' GPU had just 384 CUDA cores with 1.3 billion transistors and a die size of 118mm2. The really cool thing is that NVIDIA's GM107 has higher performance specs and a slightly lower TDP even though it has 570 million more transistors on the same 28nm manufacturing process!
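The deltas in that comparison table are easy to sanity check. Here is a quick back-of-the-envelope sketch in Python (the raw figures are NVIDIA's; the percentages are just arithmetic):

```python
# Figures from NVIDIA's GM107-vs-GK107 comparison table.
gm107 = {"cores": 640, "transistors_b": 1.87, "die_mm2": 148}
gk107 = {"cores": 384, "transistors_b": 1.30, "die_mm2": 118}

core_gain = gm107["cores"] / gk107["cores"] - 1                   # ~67% more CUDA cores
extra_transistors_m = (gm107["transistors_b"] - gk107["transistors_b"]) * 1000  # ~570M more
die_growth = gm107["die_mm2"] / gk107["die_mm2"] - 1              # ~25% larger die

print(f"{core_gain:.0%} more CUDA cores")
print(f"{extra_transistors_m:.0f}M additional transistors")
print(f"{die_growth:.0%} larger die")
```

So the CUDA core count grew far faster than the die area did, which is where the efficiency story comes from.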

maxwell-gains

NVIDIA says that gamers generally upgrade their video card every four years on average and that the new GeForce GTX 750 Ti will give those people a huge boost in power efficiency. NVIDIA believes that the GeForce GTX 750 Ti will be about a 2x performance upgrade at a fraction of the power use for those still using a GeForce GTX 550 Ti video card from 2010.

mini-itx-gaming

NVIDIA has taken notice of the boom in Mini-ITX gaming PCs and thinks that the GeForce GTX 750 Ti is a perfect match for someone looking to build a 1080P small form factor (SFF) gaming PC, as you need just a 300W power supply and there is no 6-pin power connector needed on the GeForce GTX 750 Ti reference card. Some manufacturers will be putting a 6-pin PCIe power connector on their boards, but from what we can tell it shouldn't be needed even when overclocking. Why is that? NVIDIA has placed overclocking restrictions on the GeForce GTX 750 Ti, and right now the BIOS only allows the CUDA cores to be overclocked by 135MHz. The memory isn't locked down, so the sky is the limit there.

best-in-class

NVIDIA originally planned for the GeForce GTX 750 Ti to be in direct competition with the AMD Radeon R7 260X, but it appears that last week's launch of the AMD Radeon R7 265 2GB video card at $149 spoiled that. The NVIDIA GeForce GTX 750 Ti looks like it should perform as fast or faster than the AMD Radeon R7 260X across the board, and it should do it cooler and quieter.

maxwell-tdp

NVIDIA says that in this match-up (R7 260X versus GTX 750 Ti) the typical board power in an average gaming scenario will drop from 115W to 60W while at the same time delivering better performance.
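Those two board power figures imply a large efficiency gap on their own. A minimal sketch of the arithmetic (Python; the wattages are NVIDIA's claims, the ratios are just derived from them):

```python
# NVIDIA's claimed typical board power, R7 260X vs GTX 750 Ti.
r7_260x_watts = 115
gtx_750ti_watts = 60

power_reduction = 1 - gtx_750ti_watts / r7_260x_watts   # ~48% less board power
min_perf_per_watt_gain = r7_260x_watts / gtx_750ti_watts  # ~1.92x, assuming merely equal performance

print(f"{power_reduction:.0%} lower board power")
print(f">= {min_perf_per_watt_gain:.2f}x performance per watt")
```

Since NVIDIA claims the GTX 750 Ti is also faster, the 1.92x figure is a floor, not the full performance-per-watt gain.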

gtx750ti-specs

NVIDIA says that the first graphics cards to use the first-generation Maxwell architecture were designed to do more with less, so keep that in mind when looking at the specifications here. The NVIDIA GeForce GTX 750 Ti has 640 CUDA cores with a base clock of 1020MHz and a boost clock of 1085MHz. The GTX 750 Ti supports NVIDIA GPU Boost 2.0 technology, which means that if the thermal and power parameters are okay the card can clock higher than this. The GeForce GTX 750 Ti uses a 128-bit GDDR5 memory bus and you have the option of purchasing a 1GB or 2GB model. The memory is clocked at 1350MHz (5400MHz effective), which gives you about 86.4 GB/s of memory bandwidth. The NVIDIA GeForce GTX 750 Ti should be used with a 300W or greater power supply as it has a TDP rating of just 60 Watts. The GeForce GTX 750 Ti supports DVI, VGA (D-Sub), HDMI and DisplayPort video outputs, so you'll see a variety of options in the retail market. The one thing we need to point out is that you might want to go out of your way to find a model with DisplayPort 1.2 if you plan on purchasing an NVIDIA G-Sync monitor, as it is required for G-Sync to work. You'll need to look for an Add-In-Board (AIB) partner that has DisplayPort 1.2 on their card. It would have been nice if NVIDIA hadn't made DisplayPort 1.2 optional on these entry-level gaming graphics cards, but it is too late now. The GeForce GTX 750 Ti also does not support HDMI 2.0.
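The 86.4 GB/s bandwidth figure falls straight out of the bus width and the effective data rate. A quick sketch of the math (Python, assuming the usual GDDR5 convention of a 4x effective rate over the command clock):

```python
# GDDR5 bandwidth back-of-envelope for the GTX 750 Ti.
bus_width_bits = 128
effective_rate_mtps = 5400   # 1350 MHz command clock x 4 (GDDR5 quad data rate)

bytes_per_transfer = bus_width_bits / 8                 # 16 bytes across the bus
bandwidth_gbps = bytes_per_transfer * effective_rate_mtps / 1000

print(f"{bandwidth_gbps:.1f} GB/s")  # 86.4 GB/s
```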

gtx750-specs

The NVIDIA GeForce GTX 750 is basically the same as the GTX 750 Ti except it has fewer CUDA cores, just 1GB of GDDR5 memory and a 5W lower TDP. NVIDIA reduced the number of CUDA cores from 640 to 512, which is a 20% reduction. The number of texture units drops from 40 to 32 and the number of ROPs remains the same at 16. NVIDIA is exploring the idea of releasing a 2GB version of the GeForce GTX 750, but right now there will just be a 1GB variant. The NVIDIA GeForce GTX 750 will cost $119.99 at launch and will be competing in the marketplace against the AMD Radeon R7 260X 1GB, which shares the same $119.99 MSRP.

geforce-lineup

The NVIDIA GeForce GTX 750 and 750 Ti will replace the GeForce GTX 650 Ti series in the GeForce line-up. This means that they will be above the GeForce GTX 650 and below the GeForce GTX 660 in terms of price and performance.

geforce-gtx-750ti-cards

Today Legit Reviews will be looking at the NVIDIA GeForce GTX 750 Ti 2GB reference card along with custom cards by ASUS and MSI.

NVIDIA Maxwell Architecture

 

GeForce_GTX_750_Ti_Block_Diagram_FINAL

The block diagram for Maxwell looks familiar, doesn't it? The GM107 Maxwell GPU in its full implementation contains one Graphics Processing Cluster (GPC), five Maxwell Streaming Multiprocessors (SMM), and two 64-bit memory controllers (128-bit total). This is the exact GPU that is used on the GeForce GTX 750 Ti. The GPC includes a Raster Engine, and each of the five Streaming Multiprocessors (SMM) houses a PolyMorph Engine along with the texture units. The Streaming Multiprocessors are what contain the CUDA cores. ROPs are still aligned with L2 cache slices and memory controllers.

GeForce_GTX_750_Ti_SM_Diagram_FINAL

Here is a closer look at one of the Streaming Multiprocessors (SMM), and inside you can see that NVIDIA has reorganized the various blocks to make Maxwell as power efficient as possible. It should be noted that this is the first time we have seen SMMs, as these units used to be called SMs (or SMX on Kepler). Rather than a single block of 192 shaders like before, the new SMMs are divided into four distinct blocks that each have a separate instruction buffer, warp scheduler and 32 dedicated, non-shared CUDA cores. This partitioning simplifies the design and scheduling logic, saving area and power, and reduces computation latency.

NVIDIA GeForce GTX 750 Ti Video Card Tear Down

 

nvidia-geforce-gtx750ti

The NVIDIA GeForce GTX 750 Ti 2GB reference card is very small at just 5.7-inches in length. It is a dual-slot card due to the round GPU cooler that is used, with a black PCB and fan shroud. It should be noted that there is no SLI interconnect on this card, and that is because the GeForce GTX 750 and 750 Ti both do not support NVIDIA SLI technology. The NVIDIA GeForce GTX 650 Ti was also like this, as NVIDIA doesn't think SLI is needed at the sub-$150 price point. It also lets them upsell with SLI and use it to differentiate between their cards as you go up the product stack.

nvidia-geforce-gtx750ti-power

NVIDIA included a spot for a 6-pin power header, but no header is installed. The GeForce GTX 750 Ti 2GB doesn't need any internal system power, as this 60W TDP card gets more than enough power through the PCI Express 3.0 slot in which it resides. NVIDIA went with SK Hynix GDDR5 memory chips on their reference card.

nvidia-geforce-gtx750ti-video

When it comes to video outputs, the GeForce GTX 750 Ti has a pair of Dual-Link DVI (DVI-I and DVI-D) connectors and a single mini-HDMI connection. Why NVIDIA didn't put DisplayPort on their first Maxwell reference card is beyond us as you need DisplayPort 1.2 to run an NVIDIA G-Sync monitor. We think this card would be perfect for a G-Sync setup! The good news is that board partners can use DisplayPort if they so choose.

nvidia-geforce-gtx750ti-back

The back of the GeForce GTX 750 Ti is pretty barren with no major components present. The GPU mounting holes appear to be unchanged, so good news for those water cooling enthusiasts.

NVIDIA-GM107-GPU

Here is a shot of the NVIDIA GM107 Maxwell GPU with the GPU cooler removed from the reference card. The GPU is labeled GM107-400-A2 and was made in week 49 of 2013.

nvidia-geforce-gtx750ti-quarter

Here is the GM107 GPU with a US quarter dollar sitting next to it to give you an idea of the size. The GM107 'Maxwell' GPU has 640 CUDA cores with 1.87 Billion transistors and a die size of 148mm2. This makes it about 25% larger than the GK107 'Kepler' GPU that it directly replaces.

ASUS GTX750Ti-OC-2GD5 and MSI N750Ti TF 2GD5/OC

In addition to the NVIDIA GeForce GTX 750 Ti reference card, we'll be taking a look at two partner boards.

asus-gtx750ti-oc

ASUS sent out the GTX750Ti-OC-2GD5, which will retail for $154.99 shipped. It is a fully custom card with a factory overclock. The ASUS card has dual fans for improved cooling and Super Alloy Power (SAP) components to ensure long-lasting performance. ASUS set the core clock to 1072MHz instead of 1020MHz, which is 52MHz faster than the NVIDIA GeForce GTX 750 Ti 2GB reference card. ASUS did not overclock the memory and left it at 1350MHz.

msi-750ti-gaming

MSI sent over the N750Ti TF 2GD5/OC Gaming for us to take a look at, and it retails for $169.99 shipped, but there is currently a $10 rebate on it. This is a fully custom card that is also factory overclocked. MSI went all out on this card with a Twin Frozr GPU cooler that has twin fans and a heatsink with two heatpipes. The clock speeds on this card are 13MHz higher than the ASUS card above, so you are looking at 1085MHz on the core clock. The memory is set to 1350MHz, so all three of the GeForce GTX 750 Ti cards that we are looking at today are running the same memory speeds.

gtx750ti-aib-video-cards

Both ASUS and MSI are going with a red and black color scheme, so in terms of appearance they are very similar. The main difference between the cards is their length.

gtx750ti-aib-back

Flipping the cards over we can see how drastically different the PCB lengths are. The ASUS GTX750Ti-OC-2GD5 is 8.5-inches in overall length, but the PCB is just 7-inches long. The MSI N750Ti TF 2GD5/OC Gaming graphics card is 10-inches in overall length with an 8.75" PCB.

gtx750ti-aib-video-outputs

When it comes to video outputs there are some minor differences. ASUS has two DVI, a mini-HDMI and a VGA (D-Sub) output, whereas MSI went with DVI-D, VGA (D-Sub) and mini-HDMI connections. Not a huge difference, but if you wanted to run two displays with DVI connectors the ASUS card is the way to go. We are shocked that neither card has DisplayPort 1.2 support. This means that none of the three NVIDIA GeForce GTX 750 Ti cards that we have in our possession will support NVIDIA G-Sync displays. Where is the DisplayPort love? Putting VGA video outputs on $159 graphics cards in 2014 doesn't make much sense to us, especially when there are more features to be had if DisplayPort was added instead. The board partners are missing an opportunity here to differentiate themselves.

gtx750ti-aib-coolers

One of the major differences that is worth talking about is the GPU coolers. MSI went with their full-fledged Twin Frozr GPU cooler that has heatpipes and nice cooling fin arrays. It is a really nice looking GPU cooler for a 60W TDP card. ASUS on the other hand didn't use their full DirectCU cooling solution and opted for something called DC Lite, which is basically a large aluminum block with no heatpipes and rather large, chunky cooling fins. There will most certainly be a difference in cooling performance between these two cards, and that likely means a performance difference as well: when one card runs drastically cooler than the other, it can sustain higher core clock speeds thanks to NVIDIA GPU Boost 2.0 technology. Sure, there might only be a 13MHz clock difference on paper, but that likely won't be the case when gaming.

aib-power

The last difference that we wanted to point out between these cards is that ASUS went with a 6-pin power connector, whereas MSI skipped it (even though the PCB supports one) and went with a dual-BIOS design instead.

These cards have the same price and color scheme, but there are a number of key design differences between them.

Test System

Before we look at the numbers, let's take a brief look at the test system that was used. All testing was done using a fresh install of Windows 8.1 Pro 64-bit and benchmarks were completed on the desktop with no other software programs running. It should be noted that we average all of our test runs. There has been some concern about people testing a cold card versus a hot card, but we've always done our testing 'hot' since the site started more than a decade ago.

Video Cards & Drivers used for testing:

Intel X79/LGA2011 Platform

video-card-test-rig

The Intel X79 platform that we used to test all of the video cards was running the ASUS P9X79 Deluxe motherboard with BIOS 1501 that came out on 01/15/2014. We went with the Intel Core i7-4960X Ivy Bridge-E processor to power this platform as it is PCIe 3.0 certified, so all graphics cards were tested with PCI Express Gen 3 enabled. The Kingston HyperX 10th Anniversary 16GB 2400MHz quad-channel memory kit was set to XMP Profile #2. This profile defaults to 2133MHz with 1.65v and 11-12-12-30 1T memory timings. The OCZ Vertex 460 240GB SSD was run with the latest firmware available. A Corsair AX860i digital power supply provides clean power to the system and is also silent, as the fan hardly ever spins up. This is critical to our testing as it lowers the ambient noise level of the room and gives us more accurate sound measurements than the old Corsair AX1200 power supply that we used from 2012 until this year, which had a loud fan that always ran.

gpu-test-system-specs

Here are the exact hardware components that we are using on our test system:

The Intel X79 Test Platform

Component          Brand/Model
Processor          Intel Core i7-4960X
Motherboard        ASUS P9X79-E WS
Memory             16GB Kingston 2133MHz
Video Card         Various
Solid-State Drive  OCZ Vertex 460 240GB
Cooling            Intel TS13X (Asetek)
Power Supply       Corsair AX860i
Operating System   Windows 8.1 Pro 64-bit
Monitor            Sharp PN-K321 32" 4K

NVIDIA GeForce GTX 750 Ti Reference Card GPU-Z Information:

750ti-gpuz

ASUS GeForce GTX 750 Ti OC Video Card GPU-Z Information:

asus-750ti-gpuz

MSI GeForce GTX 750 Ti Gaming OC Video Card GPU-Z Information:

msi-750ti-gpuz

 

Batman: Arkham Origins

BatmanOrigins-SS

Batman: Arkham Origins is an action-adventure video game developed by Warner Bros. Games Montréal. Based on the DC Comics superhero Batman, it follows the 2011 video game Batman: Arkham City and is the third main installment in the Batman: Arkham series. It was released worldwide on October 25, 2013.

For testing we used DirectX 11 Enhanced, FXAA High anti-aliasing and all the bells and whistles turned on. It should be noted that V-Sync was turned off and that NVIDIA's PhysX software engine was also disabled to ensure both the AMD and NVIDIA graphics cards were rendering the same objects. We manually ran FRAPS in the single-player game instead of using the built-in benchmark to be as real-world as we possibly could. We ran FRAPS in the Bat Cave, which was one of the only locations where we could easily run FRAPS for a couple of minutes and get it somewhat repeatable.

batman-cpu-utilization

The CPU usage for Batman: Arkham Origins was surprisingly low, with just 10% of the Intel Core i7-4960X being used by this particular game title. You can see that the bulk of the work is being done by one CPU core.

batman-fps

Benchmark Results: Right off the bat we can see the NVIDIA GeForce GTX 750 Ti easily beats the AMD Radeon R7 260X when it comes to performance. The factory overclocked GeForce GTX 750 Ti cards by ASUS and MSI perform better than the reference card, and the MSI GeForce GTX 750 Ti 2GB Gaming card was able to outperform the AMD Radeon R7 265 2GB video card in Batman: Arkham Origins! We were shocked to see such a big performance difference between the MSI GeForce GTX 750 Ti and ASUS GeForce GTX 750 Ti, as there is just a 13MHz difference on paper between the cards' core and boost clock speeds.

different-boost-clocks

We dug a little deeper and found that when gaming the ASUS card was actually running at 1215MHz with the GPU reaching 57C, and the MSI card was running at 1241MHz at just 45C. It looks like that better GPU cooler was able to reward MSI with an additional 13MHz of overhead due to the lower voltages and temperatures on the GM107 GPU.

batman-time

Benchmark Results: We won't be showing the AIB cards performance over time, but you can check out the reference cards in the chart above. The AMD Radeon R7 265 2GB and NVIDIA GeForce GTX 750 Ti 2GB reference cards are pretty similar.

Battlefield 4

bf4-screenshot

Battlefield 4 is a first-person shooter video game developed by EA Digital Illusions CE (DICE) and published by Electronic Arts. It is a sequel to 2011's Battlefield 3 and was released on October 29, 2013 in North America. Battlefield 4's single-player campaign takes place in 2020, six years after the events of its predecessor. Tensions between Russia and the United States have been running at a record high. On top of this, China is also on the brink of war, as Admiral Chang, the main antagonist, plans to overthrow China's current government; and, if successful, the Russians will have full support from the Chinese, bringing China into a war with the United States.

bf4-settings

This game title uses the Frostbite 3 game engine and looks great. We tested Battlefield 4 with the Ultra graphics quality preset as most discrete desktop graphics cards can easily play with this IQ setting at 1080P and we still want to be able to push the higher-end cards down the road. We used FRAPS to benchmark each card with these settings on the Shanghai level.

bf4-cpu-utilization

Battlefield 4 is more CPU intensive than any other game that we benchmark with, as 25% of the CPU is used up during gameplay. You can see that six threads are being used and that the processor is running in Turbo mode at 3.96GHz more often than not.

bf4-fps

Benchmark Results: In Battlefield 4 with Ultra settings the AMD Radeon R7 265 pulled ahead, but the NVIDIA GeForce GTX 750 Ti cards were above the 30FPS mark with these aggressive image quality settings and the game was very playable.  It should be noted that we did not test with the Mantle API and stuck with DirectX 11 to ensure everything was tested on the same API.

bf4-time

Benchmark Results: The cards shadowed one another very closely in BF4, but the AMD Radeon R7 265 was clearly ahead for the entire time.

Crysis 3

crysis3-SS

Like the others, it is a first-person shooter developed by Crytek using their CryEngine 3. Released in February 2013, it is well known to make even powerful systems choke. It has probably the highest graphics requirements of any game available today. Unfortunately, Crytek didn't include a standardized benchmark with Crysis 3. While the enemies will move about on their own, we attempt to keep the same testing process for each test.

crysis3-settings

crysis3-settings2

Crysis 3 has a reputation for being highly resource intensive. Most graphics cards will have problems running Crysis 3 at maximum settings, so we settled on 4x MSAA with the graphics quality mostly set to Very High with 16x AF. We disabled v-sync and left the motion blur amount on medium.

crysis3-cpu-utilization

Crysis 3 appeared to run for the most part on just 3 CPU threads and used up about 15-18% of our Intel Core i7-4960X processor with these settings. Notice that the processor speed was at 3.53GHz and we very seldom, if ever, saw the processor go into turbo mode on Crysis 3.

crysis3-fps

Benchmark Results: The NVIDIA GeForce GTX 750 Ti was able to take the lead over the AMD Radeon R7 260X in Crysis 3 by about a single FPS.

 

crysis3-time

Benchmark Results: The AMD Radeon R7 260X 2GB and NVIDIA GeForce GTX 750 Ti 2GB were very similar in Crysis 3.

DayZ

 

DayZ-SS

DayZ is a multiplayer open world survival horror video game in development by Bohemia Interactive and the stand-alone version of the award-winning mod of the same name. The game was test-released on December 16, 2013, for Microsoft Windows via the digital distribution platform Steam and has sold over 1 million copies in early alpha testing. The game runs on a branch of the Take On Helicopters engine (part of the Real Virtuality engine) and got our attention as it is neither an AMD nor an NVIDIA backed game title. Bohemia Interactive, NVIDIA and AMD all confirmed that no optimizations are in the game yet, so we figured this would be an interesting game title to purchase and try out.

dayz-settings

We ran DayZ with the image quality settings fairly cranked up with the quality set to very high.

dayz-cpu-utilization

DayZ appears to be running on the six physical cores of the Intel Core i7-4960X processor and averages around 17-24% CPU usage from what we were able to tell from the CPU utilization meter that is built into the Windows 8.1 task manager.

dayz-fps

Benchmark Results: The AMD Radeon R7 260X 2GB was found to be 1FPS slower than the NVIDIA GeForce GTX 750 Ti video cards. Higher clock speeds didn't help much in this game title, but remember it isn't optimized for NVIDIA or AMD yet.

dayz-time

Benchmark Results: DayZ was a royal pain to benchmark as the multiplayer online game is very buggy, you have zombies trying to kill you, and other players also like to kill you. We originally wanted to run around housing areas, but we couldn't stay alive, and running into and out of houses was impossible to reproduce because one time the doors on a house would be open and the next time you got into the server they'd be closed. We opted to run in an open area next to some railroad tracks, and as a result the FPS is higher than in more graphics intensive areas and the performance was relatively flat. We really wanted to include some DayZ performance numbers though, and for $29.99 it never hurts to try something new.

Far Cry 3

Farcry3 Game Screenshot

Far Cry 3 is an open world first-person shooter video game developed by Ubisoft Montreal and published by Ubisoft for Microsoft Windows, Xbox 360 and PlayStation 3. It is the sequel to 2008's Far Cry 2. The game was released on December 4th, 2012 for North America. Far Cry 3 is set on a tropical island found somewhere at the intersection of the Indian and Pacific Oceans. After a vacation goes awry, player character Jason Brody has to save his kidnapped friends and escape from the islands and their unhinged inhabitants.

FarCry 3 Quality Settings

FarCry 3 Video Quality

Far Cry 3 uses the Dunia Engine 2 with Havok physics. The graphics are excellent and the game really pushes the limits of what one can expect from mainstream graphics cards. We set the game to 8x MSAA anti-aliasing and ultra quality settings.

fc3-cpu-utilization

Far Cry 3 appears to be like most of the other games we are using to test video cards and uses up about 20% of the processor and is running on multiple cores.

fc3-fps

Benchmark Results: The NVIDIA GeForce GTX 750 Ti graphics card was 20% faster than the AMD Radeon R7 260X 2GB, and the overclocked cards were just a single FPS behind the AMD Radeon R7 265!

fc3-time

Benchmark Results: Some small variations here and there, but no big frame drops on any of the cards to report back about.

Metro Last Light

 

MetroLL-SS

Metro: Last Light is a first-person shooter video game developed by Ukrainian studio 4A Games and published by Deep Silver. The game is set in a post-apocalyptic world and features action-oriented gameplay with a combination of survival horror elements. It uses the 4A Game engine and was released in May 2013.

MetroLL-settings

Metro: Last Light was benchmarked with very high image quality settings with the SSAA set to 2x and 4x AF. These settings are tough for entry level discrete graphics cards, but are more than playable on high-end gaming graphics cards. We benchmarked this game title on the Theater level.

metroll-cpu-utilization

We again found around 20% CPU usage on Metro: Last Light.

metro-fps

Benchmark Results: In Metro: Last Light the MSI GeForce GTX 750 Ti 2GB was found to be about 19% faster than the AMD Radeon R7 260X and averaged 25 FPS at 1920x1080.

metro-time

Benchmark Results: No big performance dips or spikes that are out of the ordinary here!

3DMark 2013

3Dmark Fire Strike Benchmark Results - For high performance gaming PCs

Use Fire Strike to test the performance of dedicated gaming PCs, or use the Fire Strike Extreme preset for high-end systems with multiple GPUs. Fire Strike uses a multi-threaded DirectX 11 engine to test DirectX 11 hardware.

3DMark Fire Strike

 

Fire Strike Benchmark Results:

3dmark

Benchmark Results: The 3DMark Fire Strike benchmark has the AMD Radeon R7 265 scoring 4,690 3DMarks and the MSI GeForce GTX 750 Ti 2GB Gaming OC card coming in lower at 4,227 3DMarks. The performance spread between the three GeForce GTX 750 Ti cards is pretty small, with just a 200-point difference between the reference card and the fastest overclocked model.

Temperature & Noise Testing

Temperatures are important to enthusiasts and gamers, so we took a bit of time and did some temperature testing on all three of the GeForce GTX 750 Ti cards.

NVIDIA GeForce GTX 750 Ti Reference Card Temps:

750ti-idle

750ti-load

ASUS GeForce GTX 750 Ti OC Video Card Temps:

asus-750ti-idle

asus-750ti-load

MSI GeForce GTX 750 Ti Gaming OC Video Card Temps:

msi-750ti-idle

msi-750ti-load

The NVIDIA GeForce GTX 750 Ti has some crazy low thermal numbers, and it looks like NVIDIA greatly improved their energy efficiency with Maxwell. The NVIDIA reference card with its puny little GPU cooler topped out at just 58C! The ASUS GeForce GTX 750 Ti 2GB card's DC Lite GPU cooler design wasn't impressive at all; it performed just 1C better than the reference card's cooler at load and actually 3C worse at idle. The MSI GeForce GTX 750 Ti 2GB Gaming OC card's cooler was nothing short of amazing, averaging 20C at idle and just 45C at load. Some high-end cards idle way above 45C, so it is very impressive to see such low numbers!

temp-testing

The Sapphire Dual-X R7 265 2GB video card runs at 55C and the MSI GeForce GTX 750 Ti 2GB Gaming OC was 10C cooler when gaming. Clearly NVIDIA has won the thermal battle for now.

Sound Testing

We test noise levels with an Extech sound level meter that has ±1.5dB accuracy that meets Type 2 standards. This meter ranges from 35dB to 90dB on the low measurement range, which is perfect for us as our test room usually averages around 36dB. We measure the sound level two inches above the corner of the motherboard with 'A' frequency weighting. The microphone wind cover is used to make sure no wind is blowing across the microphone, which would seriously throw off the data.

noise-testing

All three of the NVIDIA GeForce GTX 750 Ti cards that we tested were very quiet. The NVIDIA GeForce GTX 750 Ti reference card was the quietest of the bunch and hit just 39.5dB when gaming, which is excellent. The ASUS and MSI cards were right around 40.0dB though and that isn't much louder for those dual-fan cards. You shouldn't be able to hear this card if you have a gaming case that is loaded with a bunch of fans.

Power Consumption

geforce-gtx-750ti-cards

For testing power consumption, we took our test system and plugged it into a Kill-A-Watt power meter. For idle numbers, we allowed the system to idle on the desktop for 15 minutes and took the reading. For load numbers, we ran Battlefield 4 at 1920x1080 and recorded the peak gaming reading on the power meter.

power-consumption

Power Consumption Results: The entire platform with the NVIDIA GeForce GTX 750 Ti 2GB reference card installed was consuming 105 Watts at idle and hit a maximum of 236 Watts when gaming. These are very impressive power numbers and are 35 Watts lower than the power consumed with the AMD Radeon R7 260X. The MSI and ASUS cards are factory overclocked with an additional fan, so it shouldn't be a shock that they consume slightly more power. Using just 3W more at idle and 8W more at load isn't that bad though. NVIDIA is the winner when it comes to power efficiency, and the power drop from the NVIDIA GeForce GTX 650 Ti BOOST is staggering.

Dual Monitor Power Consumption

One of the things that we noticed with some of the current AMD Radeon graphics cards is that they aren't as power efficient as NVIDIA GeForce cards when it comes to multi-monitor setups. This is something we don't often touch on in our video card reviews, but we wanted to see how the NVIDIA GeForce GTX 750 Ti does, as it uses a new architecture and we wanted to check how it supported more than one display.

dual-monitors

In the GPU-Z screenshots above we have the Sapphire Dual-X R7 265 2GB OC w/ BOOST running with one monitor on the left and two monitors on the right. Just hooking up a second monitor causes the power draw to go up, and many people don't fully understand this. Having to push pixels and manage the clocks of two displays puts more strain on the GPU, so the core and memory clock speeds are raised to handle it. You also need more voltage at idle to run the higher clock speeds, and that means more heat and sometimes higher fan speeds. You can clearly see that the GPU idle temperature went up by 11C and the fan speed went up 1% as a result of hooking up a second display to the video card.

gtx750-dual-monitor

Here are the GPU-Z shots for the NVIDIA GeForce GTX 750 Ti 2GB reference card showing one monitor on the left and two monitors on the right. As you can see, the NVIDIA GeForce GTX 750 Ti clock speeds, voltage and fan speeds don't change when a second monitor is hooked up. The only change is roughly a 5% increase in memory controller load and about a 0.2% higher TDP (power consumption) as a result of the higher memory controller load. So there was a 1C increase in temperature and a 2W increase in power consumption, which is minor compared to the 20W or higher difference seen on comparable cards from AMD.

|               | GTX 750 Ti | GTX 750 Ti | R7 260X   | R7 260X    | R7 265    | R7 265     |
|---------------|------------|------------|-----------|------------|-----------|------------|
| # of Displays | 1          | 2          | 1         | 2          | 1         | 2          |
| Core Clock    | 135.0 MHz  | 135.0 MHz  | 300.0 MHz | 300.0 MHz  | 300.0 MHz | 400.0 MHz  |
| Mem Clock     | 202.5 MHz  | 202.5 MHz  | 150.0 MHz | 1625.0 MHz | 150.0 MHz | 1400.0 MHz |
| Idle Temp     | 22C        | 23C        | 25C       | 34C        | 26C       | 37C        |
| Idle Power    | 105W       | 107W       | 112W      | 132W       | 113W      | 139W       |
| Fan Speed     | 33%        | 33%        | 20%       | 20%        | 20%       | 21%        |
| Fan Noise     | 38.7 dB    | 38.7 dB    | 38.6 dB   | 38.8 dB    | 38.7 dB   | 38.8 dB    |

As you can see there is a pretty big difference in power consumption and temperatures when it comes to adding a second display with an NVIDIA versus AMD graphics card solution. Right now NVIDIA is clearly in the lead.
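The math behind that difference is simple. Here is a quick sketch that recomputes the idle deltas from the table above (the readings are copied straight from our GPU-Z testing; the dictionary layout is just for illustration):

```python
# Idle readings from the table above as (one display, two displays) pairs
cards = {
    "GTX 750 Ti": {"power_w": (105, 107), "temp_c": (22, 23)},
    "R7 260X":    {"power_w": (112, 132), "temp_c": (25, 34)},
    "R7 265":     {"power_w": (113, 139), "temp_c": (26, 37)},
}

for name, readings in cards.items():
    extra_w = readings["power_w"][1] - readings["power_w"][0]  # added idle watts
    extra_c = readings["temp_c"][1] - readings["temp_c"][0]    # added idle degrees C
    print(f"{name}: +{extra_w}W and +{extra_c}C with a second display")
```

Running this shows the GTX 750 Ti giving up just 2W and 1C for the second display, while the two Radeon cards give up 20W/9C and 26W/11C respectively.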


NVIDIA GeForce GTX 750 Ti Overclocking

We installed the EVGA Precision X v4.2.1 overclocking utility to see how far the NVIDIA GeForce GTX 750 Ti 2GB video card could be overclocked! You can use whatever software utility you like for overclocking; this is just the one we used today.

evga-precisionx

In case you forgot, the NVIDIA GeForce GTX 750 Ti card is clocked at 1020 MHz base and 1085 MHz boost and the memory is clocked at 1350MHz (5400MHz effective). Let's see how much higher we can get a fully enabled GM107 Maxwell GPU with 640 CUDA cores!

evga-precisionx-oc

The NVIDIA GeForce GTX 750 Ti is pretty locked down when it comes to overclocking. There is no way to increase the power target beyond 100%, and all the cards are currently limited to a +135MHz GPU clock offset. We won't be showing all the cards being overclocked today as they can ALL easily run with the GPU clock offset at +135MHz. We then slowly increased the memory clock offset to see how far we could go before the card would become unstable. We left the temperature target at 80C as this card is far from reaching that.

overclock-gpuz

We ended up with the GPU clock offset at +135MHz and the memory clock offset at +700MHz before we started to encounter stability issues from the memory clock frequency. This overclock meant that we were running at up to 1298MHz on the core at times thanks to NVIDIA GPU Boost 2.0, and 1700MHz (6800MHz effective) on the 2GB of GDDR5 memory. Not bad, and it looks like Maxwell should be pretty good when it comes to overclocking!

NVIDIA GeForce GTX 750 Ti Stock:

3dmark-750ti-stock

NVIDIA GeForce GTX 750 Ti Overclocked:

3dmark-750ti-stock-oc

By overclocking the NVIDIA GeForce GTX 750 Ti 2GB reference card we were able to take the score of 4029 and raise it up to 4555. This is a 526 point increase in our overall 3DMark score, which represents a performance gain of roughly 13.1 percent. Not bad for cranking the core clock as high as the card's BIOS will allow and raising the memory up just shy of where it starts to artifact.
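The gain is simple arithmetic on the two 3DMark scores, and the "effective" memory speed is just GDDR5's quad-pumped data rate; a quick sketch using the numbers from our runs:

```python
# 3DMark scores from the stock and overclocked runs shown above
stock_score = 4029
oc_score = 4555

gain_points = oc_score - stock_score
gain_pct = 100 * gain_points / stock_score
print(f"+{gain_points} points ({gain_pct:.1f}% faster)")

# GDDR5 transfers data four times per clock, so the effective
# data rate is 4x the real memory clock reported by GPU-Z
mem_clock_mhz = 1700
print(f"{4 * mem_clock_mhz} MHz effective memory speed")
```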


Final Thoughts and Conclusions

geforce-gtx-750ti-cards

The NVIDIA GeForce GTX 750 Ti is the very product that lets us see what NVIDIA has been working on with Maxwell. The general gaming performance of the GeForce GTX 750 Ti wasn't that exciting, but what NVIDIA has done with power efficiency is pretty damn amazing. The NVIDIA GeForce GTX 750 Ti 2GB reference card is just 5.7-inches long with a small GPU cooler, yet it was quiet and we could never get the card over 60C! We also like that this card has just a 60W TDP and doesn't require any extra power connectors, so you can easily upgrade older systems without really having to worry about the power supply. Well, NVIDIA does suggest at least a 300W power supply be in the system.

| Reference Card Specifications | GeForce GTX 750 Ti            | GeForce GTX 750               |
|-------------------------------|-------------------------------|-------------------------------|
| Chip                          | GM107                         | GM107                         |
| CUDA Cores                    | 640                           | 512                           |
| Base Clock                    | 1020 MHz                      | 1020 MHz                      |
| Boost Clock                   | 1085 MHz                      | 1085 MHz                      |
| Memory Configuration          | 1 GB - 2 GB                   | 1 GB                          |
| Memory Speed                  | 5.4 Gbps                      | 5.4 Gbps                      |
| Memory Bandwidth              | 86.4 GB/s                     | 80 GB/s                       |
| Power Connectors              | None                          | None                          |
| Outputs                       | DL-DVI-I, DL-DVI-D, Mini-HDMI | DL-DVI-I, DL-DVI-D, Mini-HDMI |
| TDP                           | 60 W                          | 55 W                          |
| SLI Options                   | None                          | None                          |
| MSRP At Launch                | $139 - $149                   | $119                          |

Here is a quick table of the GeForce GTX 750 Ti and GeForce GTX 750 specifications in case you missed something along the way.
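For reference, the 86.4 GB/s bandwidth figure falls straight out of the memory speed. This sketch assumes GM107's 128-bit memory bus, which isn't listed in the table above:

```python
# Peak memory bandwidth = per-pin data rate x bus width / 8 bits per byte
data_rate_gbps = 5.4    # GDDR5 data rate per pin (from the spec table)
bus_width_bits = 128    # GM107 memory bus width (assumed, not in the table)

bandwidth_gbs = data_rate_gbps * bus_width_bits / 8
print(f"{bandwidth_gbs:.1f} GB/s")  # matches the 86.4 GB/s GTX 750 Ti figure
```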

newegg-pricing

When it comes to pure gaming performance, the AMD Radeon R7 265 2GB video card at $149 easily wins that battle with higher overall performance for 1080p gaming. The NVIDIA GeForce GTX 750 Ti starts at $149 and most of the custom cards will be in the $159 to $175 ballpark. NVIDIA doesn't win the price versus performance battle, but they do win in pretty much everything else. When it comes to temperatures, power efficiency, noise levels, power supply requirements and things of that nature, the NVIDIA GeForce GTX 750 Ti really shines. The ASUS GTX750Ti-OC-2GD5 that we looked at today is available for $154.99 shipped on Amazon, and the MSI GAMING N750Ti TF 2GD5/OC is available for $163.98 shipped after a $10 rebate on Newegg and includes Assassin's Creed Liberation HD.

If we had to pick a winner between the MSI N750Ti TF 2GD5/OC Gaming and the ASUS GTX750Ti-OC-2GD5, it would have to be the card from MSI. The MSI Twin Frozr GPU cooler is hands down superior, and that means the graphics card was able to boost to higher than advertised clock speeds. You also get an Assassin's Creed Liberation HD game voucher, so for about $9 more you get better cooling, higher clock speeds, better performance and a game title. The ASUS card looks like it was designed for overclocking as it has a 6-pin PCIe power header on it, but that really isn't needed since NVIDIA limited the overclocking on this card to +135MHz on the core. You don't need any additional power to do that, so for now it is a wasted feature that doesn't really do anything for you. If NVIDIA allows for additional overclocking headroom, that might change.

At the end of the day we really like what NVIDIA has done with power efficiency, and we can't wait to see what happens when you get a few thousand CUDA cores on one Maxwell GPU!

LR Recommended Award


Legit Bottom Line: The NVIDIA GeForce GTX 750 Ti 2GB doesn't up the ante when it comes to gaming performance, but it offers improvements in all the little areas, and that adds up to some big changes!