NVIDIA GeForce GTX 750 Ti Maxwell Performance Numbers Leaked?

Last week we told you that numerous sites were reporting that the NVIDIA GeForce GTX 750 Ti would be powered by the first Maxwell GPU and that it would be released in February 2014. According to SweClockers, the NVIDIA GTX 750 Ti graphics card is expected to be released on February 18th. Today on Coolaler.com some benchmark numbers have been posted of what is said to be the GeForce GTX 750 Ti. Soothepain on Coolaler writes that the benchmark numbers came from a reliable source, but numbers this far ahead of launch are usually totally fake. Add to that the fact that this is a new GPU architecture, so anyone who really does have a card is running extremely early beta drivers. So, with the caveat that these numbers are likely false, they show the NVIDIA GeForce GTX 750 Ti performing about 10 to 15% slower than the GeForce GTX 660. Does that shock anyone?

[Image: NVIDIA GTX 750 Ti performance benchmark results]

It will be interesting to look back at these numbers in a month to see if they were entirely made up or what.

  • GZ77r

    Even if the benchmarks are correct, the technology platform should allow it to scale quickly as driver and software optimizations mature. With an onboard CPU and unified memory, it may have a lot of extra potential that is hard to gauge in a straight-up benchmark at this time.

  • basroil

    Still a 20% improvement over the 650 Ti, and on par with the Boost version, so it's a step up. I still hoped for the usual 50% improvement, though, so perhaps it will get better as they figure out how to use the ARM chip.

  • Serpent of Darkness

    Techpowerup.com indicates, or at least hints, that an ARM-like CPU (from Project Denver) will sit on the graphics card's PCB. I suspect this is NVIDIA's attempt at a parallel approach, in a sense, to AMD Mantle, on top of the fact that it will use Unified Memory shared with the main CPU's buffer, implemented at the hardware level. Right now the bottleneck in current and upcoming PC games is at the CPU. Without needing to create another proprietary API at the software level, NVIDIA can reduce the overhead on the main CPU by redirecting instructions to the CPU on the graphics card. The Denver CPU would communicate with the CPU on the motherboard like a middle-man between the GPU and the system CPU.

    I also theorize that AMD will start doing the same thing: putting APU-like CPUs on the PCBs of their graphics cards for increased performance. The number of streaming processors, the memory, and the GPU frequency would then no longer be the only major factors; the core frequency and core count of the on-card APU would matter as well. If it isn't in an R300 series, we will probably see this in the R400 series. I think AMD will try to resuscitate its CPU line by selling CPUs with more cores at higher core frequencies, but with higher power consumption. Eventually, AMD CPUs for non-business consumers will come with 8 to 16 cores or more: Haswell-E will come with 8 cores, an upcoming 2014 Bulldozer or Piledriver refresh could come with 12 to 16, and AMD Opteron will surpass the 16-core count for servers. I suspect AMD will push this because AMD Mantle should increase FPS performance as core count goes up and more cores are utilized. If a 4-core CPU (3 cores for Mantle, 1 for the main thread) causes a 3x to 4x increase in FPS performance, a 6-core CPU (5 cores for Mantle, 1 for the main thread) will probably push performance to the 4x to 5x mark, and Haswell-E (7 cores for Mantle, 1 for the main thread) will probably push above the 5x mark.

    Well, this is all in theory. Since AMD Mantle is bare-metal programming, developers can direct, at the software level, which instructions go to which cores: dedicate core 1 to DRAM, dedicate video rendering to cores 2, 3, 4, 5, 6, 7, and so on. This reduces the input load on core 0, in a sense.

    The main point about the results above is that there isn't a big performance gap between the 750 Ti and the GTX 660; the overall average difference across all results is only 21.26%. Consider a hypothetical situation: say the difference in performance between the GTX 650 and GTX 660 is 62.146%. In that context, jumping from a GTX 650 to a GTX 660 gains you up to 62.146%, while jumping from a GTX 650 to a GTX 750 Ti would be a 40.886% gain in theoretical performance, probably at the same price as the GTX 650. That would mean a theoretical GTX 860 variant will probably pass the GTX 660 by another 20% to 40% theoretical FPS gain for the same price as a GTX 660, and if a GTX 780 Ti is a 129.25% gain in theoretical performance over a GTX 660, a GTX 880 Ti would be roughly 40% to 60% higher in performance output than the GTX 780 Ti.

    If this holds, and the difference in performance between the AMD 7970 GHz and the R9 290X was roughly 30%, and the R9 290X and GTX 780 Ti differ by 2% to 20%, then an R9 290X v2.0 with 3072 streaming cores (all cores open) would drop that gap to about 10%, with NVIDIA still being king of the single-GPU solutions. Maxwell could dominate the graphics and PC gaming market in 2014 to 2015 "if" AMD doesn't have a counter-product in the same generation that pushes another 30% to 50% above the R9 290X as its flagship graphics card. A GTX 880 at an 860 MHz core clock could have a CUDA core count of roughly 3083; at a 980 MHz core clock, roughly 3513. Again, this is all theoretical.

    • Ronnie

      Don’t tell me that you actually wrote this essay yourself and didn’t copy it from somewhere else?!

      • dotburst

        He does say “Techpowerup.com indicates…” at the beginning.

        • Ronnie

          He edited his post later on and added the “Techpowerup.com indicates” line, because I’m pretty sure it wasn’t there when I replied back then.
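One nit on the percentage math in the long comment above: relative-performance gains compose multiplicatively, not by subtraction. A minimal Python sketch using the comment's own hypothetical figures (62.146% for GTX 650 → GTX 660, and the 21.26% average deficit of the 750 Ti versus the 660 — these are the commenter's assumptions, not measured data):

```python
# Chain relative-performance ratios multiplicatively.
# All input figures are the comment's hypotheticals, not benchmarks.

def compose_gain(*ratios):
    """Multiply a chain of relative performance ratios together."""
    total = 1.0
    for r in ratios:
        total *= r
    return total

gtx660_over_650 = 1.62146        # hypothetical: 660 is 62.146% faster than 650
gtx750ti_over_660 = 1 - 0.2126   # hypothetical: 750 Ti ~21.26% slower than 660

ratio = compose_gain(gtx660_over_650, gtx750ti_over_660)
print(f"750 Ti vs 650: {ratio - 1:.1%} gain")  # ~27.7% under these assumptions
```

Under those assumptions the chained result comes out lower than the 40.886% obtained by subtracting the percentages directly, which is worth keeping in mind when stacking the GTX 860/880 extrapolations the same way.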