NVIDIA GeForce GTX 780 Ti vs AMD Radeon R9 290X at 4K Ultra HD

The Best Single-GPU Card For 4K Gaming

We often get asked by readers what the best video card is for gaming on a single 4K display or an NVIDIA Surround or AMD Eyefinity multi-display setup. We addressed this question in October with a look at the just-released AMD Radeon R9 290X versus the NVIDIA GeForce GTX 780 on our Sharp PN-K321 32″ 4K Ultra HD monitor at 3840 x 2160. A ton has changed over the past two months: NVIDIA launched its new flagship card, the GeForce GTX 780 Ti, and both AMD and NVIDIA have released new drivers that improve gaming performance.

4K displays also continue to come down in price, as just this week Dell announced a pair of new 4K displays that are more affordable than ever. The new Dell UltraSharp 24 Ultra HD Monitor (UP2414Q) runs $1,399 and uses an IPS panel that is factory-tuned for Adobe RGB (wide gamut) with more than acceptable viewing angles. That price point should be low enough to get more gamers playing at 4K! The Sharp PN-K321 4K LED-backlit display that we have is getting pretty hard to find, but the ASUS PQ321Q has dropped to $3,323 shipped (note that the Sharp and ASUS 31.5-inch 4K monitors are the same internally). Not everyone has $3,300 to spend on a monitor, but a good number of gamers are willing to pay around $1,000 for a 4K monitor, and we expect to see 24″ and 28″ 4K monitors with TN panels for $1,000 or less in 2014.


For this comparison we have two new video cards that we are going to be using. On the NVIDIA side we'll be using the EVGA GeForce GTX 780 Ti Superclocked 3GB w/ ACX Cooling, which is sold under part number 03G-P4-2884-KR for $729.99 shipped, or $719.99 shipped if you want the NVIDIA reference cooler. This card is factory overclocked and runs a 1006MHz base clock, a 1072MHz boost clock and 7000MHz on the 3GB of GDDR5 memory. It comes bundled with Assassin's Creed IV Black Flag, Tom Clancy's Splinter Cell Blacklist Deluxe Edition and Batman: Arkham Origins. EVGA did a wonderful job on this card, and it makes for a perfect example of what one can expect from one of the best GeForce GTX 780 Ti cards on the market.

The AMD Radeon R9 290X has been out since October, and we will be testing a retail Sapphire Radeon R9 290X 4GB that came from Newegg. This card retails for $549.99 plus shipping, which is $180 less than the EVGA GeForce GTX 780 Ti SC. There are no custom AMD Radeon R9 290X cards on the market yet, so you are stuck with factory clock speeds and the reference GPU cooler if you want AMD's flagship single-GPU card right now. The clock speeds on the Sapphire Radeon R9 290X 4GB are 1000MHz on the core and 5000MHz on the 4GB of GDDR5 memory.

The odd thing is that neither of these cards is in stock at Amazon or Newegg right now, even though both have been out for a number of weeks. It is still tough to find some of these high-end cards!


Both the Radeon R9 290X and the GeForce GTX 780 Ti had no issues being set up and run at 4K Ultra HD resolutions on our Windows 7 64-bit test system. Everything just worked once the latest driver was installed, and we were up and running at the monitor's native resolution of 3840×2160.


So, the only thing left is to get to benchmarking! We'll be skipping all the synthetic benchmarks for this review and will be running seven game titles, manually benchmarking each of them with FRAPS. We'll be looking at minimum, average and maximum frame rates, as well as charting out the first 90 seconds of each benchmark run in its entirety. This is a little different from how we normally benchmark video cards, but it should give you a better look at what is going on in the game titles, as we did experience some stutters, freezing and a ton of tearing.
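If you want to see how figures like these fall out of a frame-time capture, here is a minimal sketch in Python of one way to do it. It assumes FRAPS's frametimes CSV layout (a header row, then one row per frame with the frame number and a cumulative timestamp in milliseconds), and the file name is a placeholder; it buckets frames into one-second windows and reports the minimum, average and maximum FPS.

```python
import csv

def fps_stats(path, window=1.0):
    # Read the cumulative per-frame timestamps (ms) and convert to seconds.
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)                                        # skip header row
        times = [float(row[1]) / 1000.0 for row in reader]

    # Count how many frames complete inside each one-second window.
    fps, start, count = [], times[0], 0
    for t in times:
        if t - start < window:
            count += 1
        else:
            fps.append(count)
            start, count = t, 1
    if count:
        fps.append(count)

    return min(fps), sum(fps) / len(fps), max(fps)

# "fraps_frametimes.csv" is a placeholder name for a FRAPS capture.
lo, avg, hi = fps_stats("fraps_frametimes.csv")
print(f"min {lo} / avg {avg:.1f} / max {hi} FPS")
```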

  • Mnn

    The comparison is not exactly fair. The 290X has to be overclocked too. The 780 Ti ACX SC is already clocked almost to the max out of the box, while the 290X still has about 15% OC headroom.

  • jiggy

    The real difference is that he's not even rendering the games in native 4K, just upscaling them from 1080p to 4K, which (hence the 4) is four times 1080p. He could have cranked up the resolution scale in BF3; then maybe we would have gotten an actual idea of what the performance really is. You would need two of these cards for REAL 4K gaming. 200% resolution scale in BF3 is equal to 4K, and that would really chew up memory. Might even need the Titan.

  • Chris

    Pointless review. So you took a reference card and put it up against a non-reference card that is OC'ed. I don't know what you were trying to prove… I could have told you what would happen without even running the tests.

  • Fred Barton

    For top-end cards, how are these companies getting away with charging such high prices for cards that can barely run these games? 25/30 fps? Bloody hell, that's nearly unplayable. I wouldn't bother playing until something that can actually run these games comes out.

    • iostream

      It's 4K, not 1080p.

  • Johnny Danger

    I just bought a 780 Ti Superclocked and it's a beast. I tried a friend's 290X and decided to go NVIDIA once again. AMD's software is terrible, and the 290X doesn't feel as smooth; plus, I love PhysX. I don't mind paying extra for something that is truly better.

  • TF108

    How strange that Nvidia comes out on top in all of them, except a couple where AMD barely wins. Even in Battlefield 4, which was made on an AMD 7990 with 6GB. NOBODY BELIEVES THAAAAT! NVIDIA FANBOYS!!!!!!!! This site and Nvidia's are one and the same!!! 3,000 points lower!!!!!

  • Brian Berneker

    Incidentally, those on OS X who are looking for 4K will need to upgrade to Mavericks (10.9) if they want 4K support via HDMI. I picked up a 4K Seiki 39″ TV for a steal, but I couldn't get 4K resolution on 10.8.5 without hacking the pixel clock limitation. Apparently OS X had limited the drivers, but this has changed in 10.9 (which, it's fair to say, has some bugs).

    4K on OS X can be done, but be prepared to use additional tools like SwitchResX and Terminal for full functionality like HiDPI if you want to stay on 10.8.x.

  • Brian Berneker

    Anyone using OS X may want to beware of late-model GTX 780 cards. There is apparently a revision on newer cards that makes OpenGL and OpenCL crash on OS X, leaving apps that use them broken.

    This is bittersweet: it means NVIDIA is actively updating its hardware, but neither the latest NVIDIA web driver nor the one built into 10.9 (Mavericks) or 10.8.x (Mountain Lion) recognizes the new cards properly.

    If you are planning to drop $600+ on a new GTX 780, you may want to give this some thought ahead of time. The next driver release will hopefully solve things, but for now I'd hold off.

  • Silviu

    The Nvidia GeForce GTX 780 Ti is the fastest gaming GPU on the planet!
    Fair tests: R9 290X (Uber = OC) vs GTX 780 Ti (OC).
    And AMD R9 290X stock vs GTX 780 Ti stock: the GTX 780 Ti will still be better, but not by much!
    Thank you LegitReviews for these benchmarks; now I know what card I will get for my Christmas present: a GTX 780 Ti!
    Thank you Nvidia for showing us "gamers" again that you are the future and the leader in gaming!
    And I would also like to thank AMD for the best-buy GPU of the year, the AMD Radeon R9 290 (without the X).

    • Kevin Lepp

      Fastest GPU on the planet, with limitations. Run multi-display with AA and she won't be so fast no mo >.<

      I've long since learned not to give anyone the benefit of the doubt over the internet, SO I shall let you in on a secret…

      If that were the case, the Titan wouldn't be priced nearly twice as high :p

      Of course, common sense is hardly a common commodity, they say :p

  • Fiberton

    It is using NVAPI for the 780 Ti… Fine… Mantle is here soon :)

    • Athenasius

      There is a huge misconception surrounding NVAPI and what it does.

      NVAPI is a driver interface that merely allows access to certain sensors on the card, plus the ability to use certain NVIDIA-only extensions and techniques, i.e. SLI, 3D Vision and some control over DX rendering. You still have to go through the thick layer of DirectX for the majority of rendering, which is why it does not give the performance benefits of Mantle.

      Mantle allows the application to directly control GPU memory and rendering through a thin abstraction layer that is not tied to any architecture, though it does require a base set of features supported in hardware. This is where the performance improvements come from: the programmers know better how their application needs to handle memory and rendering (like how consoles are programmed).

      With D3D and GL, the driver developers write the memory management and render handling code in a black box that the application sends commands to. The application developer has no control over it; it can be adjusted by working with the driver team, but never controlled by the application (which is why day-one driver updates for AAA games are needed for best results).

      GL does have some vendor-specific extensions that bypass the black box to improve the draw-call limit, but Mantle is more than just a draw-call improvement. That is where Carmack was wrong in some ways when he said GL can do what Mantle can, and it is why developers are more interested in supporting Mantle than GL.

      Mantle is currently only available for GCN because AMD decided to base Mantle's feature set on the capabilities of a compute-based GPU using the GCN architecture. GCN supports many more features and is far more general-purpose than the older VLIW-series cards, which lack the hardware abilities to support Mantle's core features. This is why AMD stated that a card being DirectX 11 capable does not mean it will support Mantle.

      Mantle is very forward-thinking and would have been hindered if it had to support the 5000 and/or 6000 series of cards; it was designed from the ground up to leverage more general-purpose, compute-oriented GPUs.

      Whether NVIDIA would be capable of supporting Mantle's core feature set on Kepler is still uncertain; Andersson himself said it will depend on Kepler's hardware feature set. Given that Kepler, like GCN, is far more general-purpose than the older generations of cards, it comes down to whether NVIDIA decides to implement a Mantle driver for Kepler, which would be no different from writing a DirectX driver: take the feature set and what each feature is required to do, then implement the code that converts each Mantle API feature into Kepler hardware instructions.

      At least that's how I understand it from all the videos and presentations.
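To make the draw-call argument above concrete, here is a toy sketch of the two submission models. Every class and method name below is hypothetical (this is not the real Mantle or Direct3D API); it only illustrates why paying driver validation on every draw call costs more than recording commands cheaply and validating once at submit time.

```python
import time

DRAWS = 100_000

class BlackBoxDriver:
    """Stand-in for a D3D/GL-style driver: state validation and command
    translation happen inside the driver on every single draw call."""
    def draw(self, vertex_count):
        for _ in range(50):                 # fixed per-call driver overhead
            pass

class CommandBuffer:
    """Stand-in for a Mantle-style command buffer: the application records
    draws cheaply, with no driver work per call."""
    def __init__(self):
        self.commands = []
    def draw(self, vertex_count):
        self.commands.append(("draw", vertex_count))

class ThinQueue:
    """One validation/translation pass for the whole recorded buffer."""
    def submit(self, cmd_buf):
        for _ in range(50):                 # overhead paid once, not per draw
            pass
        return len(cmd_buf.commands)

t0 = time.perf_counter()
driver = BlackBoxDriver()
for _ in range(DRAWS):
    driver.draw(3)
print(f"driver-managed model: {time.perf_counter() - t0:.3f}s for {DRAWS} draws")

t0 = time.perf_counter()
cb, queue = CommandBuffer(), ThinQueue()
for _ in range(DRAWS):
    cb.draw(3)
queue.submit(cb)
print(f"explicit model:       {time.perf_counter() - t0:.3f}s for {DRAWS} draws")
```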

  • Kevin Lepp

    Ouch, just finished the review and realized you are already running out of frame buffer at 4K, which is actually quite a bit lower than multi-monitor resolutions. I assumed my resolution was about 4K, but apparently it's quite a bit beyond it. So for multi-monitor users there is only one card, and it's the card that wasn't even in the review lol. I game at 4680×2560 and I've got enough VRAM to last me the next few years, I'm hoping.

    Glad I sold my 780s (probably wouldn't have, had NVIDIA not removed the ability to use quad SLI, saying it was only for the Titan line, and then allowed it on the new 780s lol).

    Lying greedy pigs. They might as well give it back to the poor guys who bought 780s, but they won't, because it would be admitting they messed up. They're absolutely shameless.

  • Kevin Lepp

    How the hell can you say either is the best for 4K gaming when a Titan is within 1-5% of them in framerate (so negligible it's almost funny), yet has double the VRAM? Today's games such as BF4 have already hit 5100MB of VRAM at under 4K resolution (albeit not by much), so perhaps you should test the above cards once the VRAM runs out? Or at least weigh that before you call something "the best card for UHD resolutions", because that's so misplaced it isn't funny. You're basing it on framerate alone, at under max settings. At least add a disclaimer for people who don't think for themselves that says "best card as long as you don't exceed 4K, play games that come out after now, or use AA for extreme eye candy".

    Seriously. No one argues the Titan is expensive, but did you ever consider there is a damn good reason that A) NVIDIA still charges $1k for it and B) the newer cards weren't given the VRAM it has?

  • Anonymus

    Let's not forget about Mantle, guys!

    • Kevin Lepp

      Mickey Mantle?

  • tabby

    Is 4x MSAA really necessary when you are already running at 4K? It seems to me that if you turn it off, you can get respectable performance from both cards.

    • Kevin Lepp

      People always say this. That's like saying: is 500hp really necessary? Seems to me you can already do a quarter mile in 12 seconds with 400. Define necessary. You aren't dealing with your average mom-and-pop PC gamers; enthusiasts want the best/more, etc.

      • tabby

        By necessary I mean whether there is a noticeable visible difference. If the quality gain is non-existent, or minor at best, then there is no point at all in turning on the AA…

        • Teddy Steves

          It just depends on the game. Aliasing is really only noticeable in the far distance, but 4x really doesn't hurt frame rates that badly on the 780 Ti Superclocked. The only game that doesn't run at 30 fps or better at 4K, with or without 4x AA, is Crysis 3. You've got to play that one at 1440p for smooth gameplay, or at 1600p with no AA.

    • Teddy Steves

      It just depends on the game. Some need it, some do not, but for the ones that do, 4x is the most they need. BF4 at 4K with 4x AA looks amazing. So do Crysis 1, NFS Shift and F1 2012.

  • yushady

    Buy an R9 for Bitcoin mining.

  • Nvidia

    AMD fanboys, out! So a nice all-weather heater when you buy AMD?

  • Socius

    What you should have added to the review was a GTX Titan matched to the same clock speed as the 780 Ti, to see whether the extra SMX on the 780 Ti or the extra 3GB of VRAM on the Titan ends up being better for 4K gaming.

  • Quentin

    It's really not useful to simply test each game at one particular graphics level. Find the graphics level that gives acceptable performance, and then tell us about it.

    • Lordz

      The other question is: what fan speed was the 290X running at?
      To be fair, if they had the fan at 70% it would be loud as hell, but it would give zero throttling, so you could at least see what the card is capable of (especially once it's cooled correctly by the vendors).

  • Serpent of Darkness

    1. Take these benchmarks with a grain of salt. Unless third-party benchers are using AA past 16x, or MLAA on the AMD side as a substitute for FXAA, "max" isn't always max. It's just the max settings in a game's options, not the max settings in the NVIDIA Control Panel or AMD CCC. Third-party benchers typically don't push things to the true max, and I'm not accusing LegitReviews.com of that, only informing others. When you can choose options beyond the scope of the game, it's more valid to call that "max." Third-party benchers never really push 8xMSAA on NVIDIA or AMD cards.
    2. For a little under $200.00 more, you're going to get 2 to 15 FPS more. "Practical" and "economical" aren't among the first ten words I'd use to describe the GTX 780 Ti.
    3. This is a reference R9 290X graphics card versus a non-reference GTX 780 Ti. The validity of this benchmark is no different from comparing a GTX 680 to an AMD 7990. It isn't an apples-to-apples comparison, so I wouldn't take it seriously.
    4. On transistor counts between the GTX 780 Ti and the R9 290X: the GTX 780 Ti's count is greater. Remember, the R9 290X is only about 35% faster than the 7970 GHz Edition. Between the GTX 780 Ti and the GTX Titan there's only about a 15% difference in performance. The R9 290X was in competition with the GTX 780: the GTX 780 was about 15% faster than the 7970 GHz, and the GTX Titan about 30% faster. Since the R9 290X outperformed the GTX Titan, the GTX 780 Ti came onto the scene. With that logic in mind, the GTX 780 Ti should be roughly 10 to 13% faster than the R9 290X. Is it? At 2 to 15 FPS, the answer is no: for most titles the average is less than 13%, though in BF4 at max settings the difference is 24%, and BioShock Infinite comes in at 20% for the GTX 780 Ti.
    5. The GTX 780 Ti finally gives full D3D 11.1/11.2 support and DirectCompute 5.0. Wait a minute, wasn't DirectCompute 5.0 originally found on AMD graphics cards in prior generations? So the GTX 780 Ti doesn't need CUDA to win???
    6. For those who don't fully understand: AMD Mantle will only work in games that support it, so you're looking at BF4, Star Citizen, Thief and others. It's going to increase the draw-call rate by, at most, 9x. So if current games only use 1x or 2x today's draw-call rate, in certain games you could see a sharp increase in FPS, but that's not the whole story. What will happen is this: in games that use Mantle, developers can increase the graphical detail, add more features and improve the physics. So even though you're getting 40 FPS with Mantle in Star Citizen, you're getting 40 FPS with much more detail and eye candy from a graphics perspective. It also means NVIDIA cards will suffer a relative drop in FPS because they don't utilize Mantle: what takes 40 FPS to push on an AMD card could be 20 FPS on an NVIDIA card. A lot of people don't realize there is a native language on the core processor that has to be translated into API languages; the fact that Mantle will let developers code closer to the hardware's native language is what will improve an AMD card's performance under Mantle.
    7. AMD may still want to push its own HSL in a few years. They've been wanting to get away from the iron grip Microsoft's D3D has on the API space.
    8. There are too many GTX 780 Ti variants on the market. If supply goes up and sales go down, the price will come down with them. Consumers aren't going to purchase an R9 290X for $550 and then suddenly also go buy a non-reference GTX 780 Ti or a GTX Titan Black at a $700 to $1,100 price tag; that's unrealistic. Since the customer base is limited and the economy isn't improving, consumer purchasing power is naturally going to take a dive. So in a sense this is a gamble for NVIDIA: if they produce too many units and not enough revenue for all these non-reference GTX 780 Tis, they will start to lose money. A major portion of their revenue comes from gaming graphics, server graphics, workstation graphics and tablets. They aren't in the business of manufacturing monitors, storage drives, sound cards, peripherals, motherboards, etc., so in a sense they are more limited than AMD. AMD does CPUs, APUs, server CPUs, server graphics, workstation graphics, gaming graphics, and collaborations with other companies on TrueAudio, AMD Mantle, etc. AMD has SeaMicro, which does storage. They just did the consoles for Microsoft and Sony. So when you put AMD down, you need to take into account that AMD does a lot more in the tech industry than NVIDIA.

    • Quinn FitzGerald

      STUFF THAT'S ACTUALLY RELEVANT TO THE ARTICLE:
      1. Yup, happily agree.
      2. Currently you also get a much cooler and quieter card (at the same performance level), which does matter to some people. I personally don't understand why, but it does.
      3. It compares the best the 290X currently has to offer. If you were going to buy a high-end GPU today, and you had to choose based only on performance, the 780 Ti offers a higher performance cap thanks to having custom coolers out. (And AMD really needs to remake their stock cooler; they aren't making cards like the 4870 anymore.)

      RANDOM CRAP YOU THREW IN:
      4. Transistor counts don't (directly) mean anything when it comes to playing games; they matter for manufacturing.
      5. Why does it matter who it comes from? It gets us to feature parity. What does this have to do with the article?
      6. Mantle isn't going to change a game from approximately equal to AMD being twice as fast. Probably 10-20% in most cases (based on current knowledge). What is impressive is that it will do so with lower CPU utilization, and therefore lower power.
      7. *yawn* AMD can push whatever they want; DX and D3D are going to stick around for a while yet, at least until the next release of consoles. What does this have to do with the article again?
      8. You know how supply and demand works, congratulations! What does this have to do with the review again?

      NVIDIA profit breakdown:
      1. GPUs
      2. Licensing/settlement payouts

      NVIDIA losses:
      1. Anything with Tegra in it
      2. Pretty much every other part of NVIDIA

      Now, SeaMicro DOES NOT DO STORAGE… they just don't.

      Anyhow, saying AMD has a slower current card (given WHAT IS OUT ON THE MARKET) than NVIDIA isn't putting AMD down. It is stating a fact.

  • Sol

    Let's say you actually buy one of these cards, and that you throw down another load of cash for a decent 4K monitor. Do you think it's going to be enough for you to game at 4K? Didn't think so. This discussion will be repeated when the cards are better at 4K and when most of us can afford the upgrade. For now, all I can say is that I'm satisfied with a 770 or a 280X at 1440p.

    • Socius

      I think the general idea is that anyone who can afford something like ASUS's 4K monitor at $3,500 can afford two or three GTX 780 Tis. It's just like how OLED TVs are launching at $10k. Early adopters always pay a ton more.

      • 4K

        The Seiki Digital SE39UY04 39-inch TV/monitor is available on Amazon and eBay for around $500 and up.

        • Socius

          Which is a 30Hz monitor at 4K, meaning it'd be perfectly suited to the 290X or 780 Ti.

        • 4K

          It will do 60Hz if you use HDMI 2.0, which is available on eBay.

        • Socius

          Err… no. That's not how it works. The device has to be compliant with the HDMI 2.0 spec, and it is not. Using an HDMI 2.0 compliant cable won't magically change what the device is capable of. If, some time down the line, a firmware update can add HDMI 2.0 support, then yes. But as of now it is limited to 30Hz.

        • 4K

          What are you talking about? The Seiki Digital SE39UY04 39-inch goes up to 120Hz. I play games on a 290X at 4K with this monitor at up to 60Hz, using an HDMI 2.0 cable from eBay.

        • Socius

          No, you don't. It does 120Hz at 1080p only; it has limited bandwidth. You cannot get 60Hz on that display at 4K resolution. No point arguing with me. Just ask on Seiki Digital's Facebook page and they will confirm it. Or simply Google it.
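The pixel-clock arithmetic backs this up. HDMI 1.4 tops out at a 340 MHz TMDS clock, while the standard CTA-861 timings for 3840x2160 need a 297 MHz pixel clock at 30 Hz and 594 MHz at 60 Hz; the short check below is just that arithmetic, offered as an illustrative sketch.

```python
TMDS_LIMIT_MHZ = 340        # HDMI 1.4 maximum TMDS clock
BITS_PER_PIXEL = 24         # 8 bits per channel, RGB

# CTA-861 pixel clocks for 3840x2160, blanking included.
MODES = {30: 297.0, 60: 594.0}

for hz, clock_mhz in MODES.items():
    gbps = clock_mhz * BITS_PER_PIXEL / 1000
    verdict = "fits HDMI 1.4" if clock_mhz <= TMDS_LIMIT_MHZ else "needs HDMI 2.0"
    print(f"3840x2160 @ {hz} Hz: {clock_mhz:.0f} MHz pixel clock, "
          f"{gbps:.2f} Gbit/s -> {verdict}")
```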

    • tabby

      The 770 and the 280X are not very well suited to 1440p. There are definitely going to be issues when you are playing newer games in 2014.

  • athenasius

    What was the core clock of the 780 Ti while it was running games? Even if the base clock was 1006MHz and the stated boost was 1072MHz, it's known that Kepler cards will boost beyond their stated boost clock if they are cool enough, going anywhere up to 1200MHz on the core.

    Could you do a run with boost disabled and a clock-for-clock comparison between the two cards? It would be interesting to see a proper comparison of both chips, with the R9 290X properly cooled as well to prevent it from throttling.

    • john

      The point being? This review pits a superclocked 780 Ti against a reference 290X and it barely makes it, and we know the 290X throttles like a little bitch… It's just a product bench, not a technical one…

    • Nathan Kirsch

      You can see that here on the FC3 page – http://www.legitreviews.com/nvidia-geforce-gtx-780-ti-vs-amd-radeon-r9-290x-at-4k-ultra-hd_130055/6

      It looks like the card was running at 1150MHz tops when being tested.

      • athenasius

        Very interesting. If we assume the 780 Ti most likely ran the same boost clock in all runs, then for a ~15% clock advantage over the 290X (assuming the R9 290X was not throttling during the runs), the 780 Ti only showed an ~11% performance advantage over the R9 290X.

        That essentially makes the R9 290X more powerful clock for clock, but we shall wait and see whether this holds once the custom boards are released with adequate cooling.

        Next up: getting NVIDIA to support Mantle so gaming can advance as a whole :)
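Taking the thread's numbers at face value (about 1150MHz observed boost on the 780 Ti, 1000MHz on the R9 290X, and an ~11% average performance gap), the per-clock arithmetic works out as sketched below; keep in mind Darren's caveat further down that comparing different architectures per MHz is rough at best.

```python
# Figures quoted in this thread; treat them as rough observations.
boost_780ti_mhz = 1150
clock_290x_mhz = 1000
perf_gap = 0.11                 # 780 Ti average lead over the 290X

clock_advantage = boost_780ti_mhz / clock_290x_mhz - 1      # 0.15
perf_per_clock = (1 + perf_gap) / (1 + clock_advantage)     # 780 Ti vs 290X

print(f"clock advantage: {clock_advantage:.1%}")            # 15.0%
print(f"780 Ti performance per MHz, relative to the 290X: {perf_per_clock:.3f}")
# < 1.0, i.e. the 290X does slightly more work per clock on these numbers
```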

  • WiseManPlayingPC

    I'd say they both suck at 4K. But the winner here is the R9, because it's cheaper, and we should all punish NVIDIA for their high prices.

    • looniam

      Oh yeah, and reward AMD for releasing a card that comes only with a crappy reference cooler, throttles like hell because of the heat, or needs a $100 water block… no thanks.

      • john

        I think AMD got its fair share of an ass-whipping for the reference design…

        • plsboy

          Well, I've got to say, no one buys a $600 reference card unless you're planning on using water cooling…

      • Victor

        Hey back off, that amd card has been saving me hundreds on my heating bill this winter.

      • Ken

        I have a 290 overclocked and haven’t had any throttling issues even during extended gaming. No heat problems either. Reference cooler and all. Then again, I have a sufficiently cooled case.

    • basroil

      Not at all; when your cheapest monitor is $1,400, a pair of $650 cards and a pair of $600 cards isn't too different.

    • Anon

      Both suck at 4K? Compared to what??

  • Agustin Borgato

    The best comparison would be a reference 780 Ti vs a reference 290X; here the 780 Ti is already overclocked.

    • Mike DeGeorge

      The difference on the core is 72MHz between the 780 Ti and the 290X, according to Legit Reviews' numbers.

      • Agustin Borgato

        If we overclocked the 290X, the difference between the two would be lower, or even zero. When you are comparing GPUs it always has to be at stock settings (GTX 780 Ti = 875-928MHz core / 1750MHz memory; 290X = 1000MHz core / 1250MHz memory). It doesn't make any sense to test an OC'ed GPU against a stock one; it's like comparing a GTX 680 Lightning to a stock HD 7970. It's absurd.

        • Nathan Kirsch

          Guess AMD needs to let their partners release their custom cards!

        • 200380051

          You're missing the point. Either compare reference cards, or compare custom/OC'd ones. We've seen the reference card matchups. Now wait for both sides' custom designs, yeah? You wasted your time making this review: when AMD releases their custom cards, you'll have to do this top-end 4K matchup again, and then it will be relevant.

          Anyway, thanks for hinting at how the custom 290Xs will beat the Tis :)

        • Nathan Kirsch

          No worries and no time wasted… I've had multiple readers ask for this comparison; they wanted to see what both sides had to offer with their top cards right now. When will the custom 290X boards be out? AMD hasn't told us, so it looks like we are many weeks away.

        • Agustin Borgato

          Okay, I will be waiting for the 290X Lightning or DCII vs GTX 780 Ti ACX SC review; they will be head to head in terms of performance.

        • Mario Štajduhar

          No, they will not. The R9 290X will take the lead because of its larger memory bus and better clock-for-clock performance. The author's terminology in this article is very, very poor. I am just curious what will happen when the WHQL drivers come out. I believe it will be a story very similar to when the 7970/680 generation was out. Apples to apples, as I would say: the "Ti" is 33% more expensive, but falls behind compared to the R9 (when the cards are at the same GPU and memory clock speeds and using the official drivers). Period.

        • Miikka Nystrom

          Coming from someone who interns for Microsoft: the "larger memory bus" is a gimmick, because the R9 290X is not even powerful enough to run under circumstances that would require the larger memory bus, thus making it inefficient. This is not as much of a problem with the 780 Ti.

        • basroil

          A solid 1000MHz clock on an R9 290X is already an overclock, considering even the press samples struggle to hold more than 925MHz for any extended period. Looks like both cards are about 75MHz above normal.

        • Agustin Borgato

          That was solved by a driver update; the fan profile went from 40% to 47%, and the GPU's load frequency is now 1000MHz all the time.

        • meh

          hahahaha

      • Darren Rushworth-Moore

        You can't compare two different architectures on clock speeds alone.

        However, this is the EVGA SC, an expensive card made more expensive: the base clock goes from 875MHz stock to 1006MHz, and the boost from 928MHz stock to 1072MHz, so the difference is about 144MHz, double what you are quoting, from a stock 780 Ti to this SC 780 Ti.

    • Victor

      The 780 Ti has more overclocking potential than the 290X. I'd like to see benches with both overclocked to their max, to show how much better the 780 Ti truly is.

      • Liam Tormey

        Once the R9 290 and 290X get decent coolers, we'll be able to overclock a lot more, because the card runs so darn hot.

    • intrepix

      I'll wait for the 4GB versions, which are coming soon, and watch the "inflated" prices drop to where they should be. The retailers are keeping the pressure on the manufacturers to keep their "suggested retail prices" from even being mentioned.

    • nem

      review powered by NVIDIA LOL^^