Top Game Developers Answer – 4K or NVIDIA G-Sync Monitor?

Are you a gamer thinking about upgrading your monitor? Ultra HD 4K monitors look amazing, but they cost a small fortune. For example, the ASUS PQ321Q 4K Ultra HD display runs $3,499, and that is far more than most are willing to pay. NVIDIA today announced G-Sync, and this new technology brings very apparent improvements to those with GeForce GTX video cards by synchronizing the monitor to the output of the GPU, instead of the GPU to the monitor, resulting in a tear-free, faster, smoother experience that redefines gaming.


NVIDIA already has industry support from monitor makers, with ASUS, BenQ, Philips and ViewSonic already signed up to produce G-Sync enabled monitors. ASUS plans to release its G-Sync-enhanced VG248QE gaming monitor in the first half of 2014 and has the price set at $399. The non-G-Sync version is currently available for $279.99, so the price delta between the non-G-Sync model and the G-Sync model appears to be about $120. So, that leads to the question: which monitors will gamers be looking toward most?

Gaming Industry Panel

NVIDIA was able to get John Carmack, Johan Andersson, and Tim Sweeney on stage to take questions, and they were asked whether they would rather have a monitor with 4K or with G-Sync. John Carmack, co-founder of id Software, was the first to answer, and he quickly said NVIDIA G-Sync; the others didn’t disagree. NVIDIA G-Sync monitors are certainly more affordable, but keep in mind G-Sync can be used in a 4K panel!

  • charlgamer

    Not gonna buy a new monitor with the same resolution. I am waiting for 4K with G-Sync.

  • Mopar

    Interesting, was it not nVidia that just a month ago was proclaiming the future of PC gaming was 4K displays? Wonder what it will be next month? Also, this is a panel of game developers paid to be on stage, so of course they will agree. Interesting, however, as Carmack did a video interview when Rage released and explained that 60 FPS was all the devs needed to hit for a great gaming experience.

    • Jono

      The last piece of this story… “but keep in mind G-Sync can be used in a 4K panel!”
      People are so negative.

  • Saif

    TBH I think it’s time for peace, because it would be best for all of us and for them. NVIDIA should approach AMD about this technology and make some kind of agreement that they both like. There would be less price dropping, but everybody would still get what they want and everybody would profit more than ever. I hope I can spread the word until they see this, because it’s worth it.

    • FUCK AMD

      I speak for the entire internet when I say, FUCK AMD, nobody cares about AMD. Nvidia is superior in every way possible, from hardware and software to features. Why should Nvidia give a fuck about AMD? They’re no longer competition. They just exist… NVIDIA! FANBOY AND PROUD! WANNA FIGHT PUSSY????!!!!

      • saif

        Well, obviously AMD has beaten the shit out of Nvidia this year thanks to console sales and Kaveri, plus their new Hawaii GPUs, and there is still more to come when Mantle gets here, which is a new innovation we haven’t seen in years. It could boost your performance by up to 45%, plus AMD is a better price/performance company by no less than 30%, I’d say. Obviously AMD is more creative. AMD! FANBOY AND PROUD! WANNA FIGHT PUSSY????!!!!!

  • Jim Maki

    You probably don’t remember the USRobotics Sportster 14.4K modem – it would only go 14.4 with another USRobotics Sportster at the other end (BBS or FTP or whatever); otherwise it was 2400 baud. Yeah, it died on the ash heap of dial-up modems, very near the bottom of the pile.

    Similarly, I welcome the innovation, but there is no way they will corner the PC gaming video card and monitor market before others adapt, imo.

  • Polaco

    So, proprietary technology that only works when bound to a specific GPU vendor? God! Why?!

    • Nathan Kirsch

      No stuttering or tearing, and a smooth game experience. It looks good in person!

      • Cloud W Omega

        That is not the problem here. It’s that you MUST use a certain brand. It’s not open.

        • gonchuki

          Exactly this. How about creating an open standard like AMD does? All we have seen from Nvidia ever since they bought Ageia is more and more proprietary, closed “technologies” being thrown around. Why the quotes? Because where I come from, that’s what we call “vendor lock-in”.

        • basroil

          Where’s AMD’s “open standard”? I haven’t seen one yet; the only ones AMD has ever been a part of are through Khronos, of which Nvidia is also a member (Intel too).

        • Kayle Lang

          What about OpenCL? I know Nvidia is part of it, but AMD is actually pushing it while Nvidia is pushing PhysX.

        • basroil

          No, AMD is only one of the players, and the trademark doesn’t even belong to any graphics company; it belongs to Apple. They all pretty much just submit tweaks they made for their own systems for inclusion in the standard anyway; it helps them claim they are ahead of the competition (but usually doesn’t help). And all of it is just a lower-level implementation of DirectCompute anyway (which came out a whole six months earlier), so you could say that it’s actually Microsoft pushing OpenCL.

          And Nvidia is actually pushing CUDA, not PhysX, for computation.

        • gon

          Mantle will be an open standard; it’s one of the first things AMD said during their presentation. OpenCL might not be AMD’s own proposal, but they are backing it, as opposed to CUDA, which is unavailable to anyone else who might want to implement it. Same with PhysX, for which they even purposefully disable GPU acceleration if they detect any AMD card in your system, and which has historically been crippled to run badly on CPUs so that they have a better selling point.
          It’s probably dead simple for Nvidia to make PhysX depend on OpenCL instead of CUDA, but they seem to like their walled garden a lot lately.

        • basroil

          Will be != is. By the time Glide was open, it had already died, and since neither the Xbox nor the PS4 supports it, the chances of wide adoption are slim outside major studios.

          As for OpenCL vs. CUDA, they are not directly interchangeable, and switching PhysX to OpenCL would require quite a few rewrites and workarounds. CUDA is mostly used for professional applications where portability between vendors is less important, so it doesn’t really affect anyone.

        • Roman Cruz

          Since when is Mantle an open standard? It only works on GCN GPUs, which means it only works on AMD cards.

        • Serpent of Darkness

          This is incorrect. AMD Mantle, the new low-level API, can be used by both AMD and NVidia. Why? It’s simple: both cards can use OpenGL. So what does GCN have to do with it? GCN, the architecture, is more optimized for D3D11.0, D3D11.1, D3D11.2 and OpenGL. NVidia’s Kepler architecture is optimized for D3D9.0 and D3D11.0, but it can also use OpenGL. As for D3D11.1, there’s no full support for it on the Kepler architecture, and zero support for the D3D11.2 API. Short and simple: AMD Mantle isn’t going to be proprietary, but AMD’s GCN will have more bang for the buck.

        • Roman Cruz

          You seem to have a flawed understanding of Mantle. It’s a _replacement_ for DX and GL, not an extension for either of them. It allows for coding directly to the metal, something that DX11 and OpenGL are incapable of doing.

          Hell, it won’t even work on an Xbox One:

          http://www.extremetech.com/gaming/168671-xbox-one-will-not-support-amds-mantle-and-ps4-is-also-unlikely-is-mantle-doa

        • Hung Mai

          HSA. Remember that.

    • basroil

      We don’t know yet whether it will stay proprietary or not. Basically, it’s monitor-side frame buffering with variable frame rate capabilities. The computer side seems to just be a form of frame pacing combined with non-VESA video output, so there’s no reason to think AMD couldn’t be included if their graphics cards support non-VESA protocols over DP (it’s only supported on DisplayPort, probably because of the packet system DP uses).

      • Serpent of Darkness

        I think calling it “frame buffering” is an inaccurate way to describe it. From what I understand, have seen of it, and have heard from others, it’s more like G-Sync is actually regulating the refresh rate on the fly on the monitor’s end, modulated at a frame-by-frame pace. Meaning, when the GPU tells the monitor that the time spent on frame 458 is 13.4 ms, the monitor with G-Sync in it will say, hey, instead of waiting for the frame at 16.67 ms, it’s coming at 13.4 ms. The monitor then tells the GPU it’s waiting for the next frame to display. 13.4 ms = 74.62 FPS. It displays the next frame, frame B, at 73 FPS, or 13.70 ms. As FPS goes down in magnitude, the time to produce a frame goes up in milliseconds. If the refresh rate of the monitor is 60 Hz, then the G-Sync unit will probably continue to work at 16.67 ms, and the GPU will continuously render images at 60 FPS until it drops in value. Since FPS and the time to produce frames are regulated by the hardware on the GPU, it’s not going to be a surprise that this works in sync with both components (the GPU and G-Sync).
        The reason I don’t think it’s acting as another frame buffer is that this would imply the image is being stored on the chip before it’s displayed on the monitor. So essentially, the GPU writes the frame to its frame buffer, sends it to the G-Sync core, which stores the image in its own frame buffer, and then displays the frame in 16.67 ms or more. If this were true, then latency on the G-Sync chip would come into play. If the latency on the Tegra4 chip goes up, this could be a problem, and the middleman could cause the very thing it’s trying to prevent: micro-stutters and runt frames.
        The problem right now is that when the GPU writes a frame to the buffer and sends it to the monitor, the monitor decides when the frame is displayed over a fixed interval of time. At 60 Hz (60 Hz = 60 FPS max), that interval is 16.67 ms. Frame A gets written to the frame buffer, then gets sent to the display. While frame A is displayed, frame B is already in the frame buffer and being sent to the monitor. Now, if the time spent sending the frame from the frame buffer to the monitor exceeds that 16.67 ms interval, the frame is either dropped and held for the next refresh, or displayed anyway. This is where the problem begins. Any frame rate above 60 FPS will meet the interval of time needed to display frames and push the next frame out of the frame buffer. When you start dipping below 60 FPS, frames start to get dropped, overlapped, or torn by an incoming frame (a rough simulation of this fixed-refresh behavior is sketched below, after this thread).
        G-Sync will probably stay proprietary. An NVidia GPU is required for this to work, and only on certain new 144 Hz monitors. I’m not even going to go into the frustration that old monitors can’t use it, and that you need to open your monitor to install and use the product, which has a retail price of $399.99. There are so many reasons why it should stay proprietary, and so many signs point to NVidia and Asus wanting you to spend more money on their products when the economy is unstable right now. In addition, since the issue of micro-stutters and tearing is starting to seem more like a monitor issue now, and not a GPU issue, I wouldn’t be surprised if monitor manufacturers come out with similar gimmicks.
        In truth, this reminds me of Dynamic Framerate Control found in the beta version of the RadeonPro software. You can argue that it and G-Sync aren’t the same thing, and I will agree to an extent, but they both basically control FPS in some manner. The problem isn’t the cards, it’s the monitors. It’s just that NVidia found a way to fix it on the monitor end, while AMD uses software to produce similar results. With Dynamic Framerate Control, you lock in a single FPS value, and the only thing that changes is the time to produce frames, but you’re getting all frames sent to the monitor at the same refresh rate. So the monitor can basically anticipate when a frame is coming if the pace never changes (a frame-cap sketch in the same spirit also follows after the thread). DFC will actually deliver 100% of frames, but you can start to see some latency when the time to produce frames goes up. It won’t create tearing or micro-stutters because the frames aren’t overlapping, tearing, or being dropped.
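
To make the frame-timing discussion in the thread above concrete, here is a minimal sketch in Python. The frame times are hypothetical and the code only illustrates the general idea of fixed versus variable refresh, not NVIDIA's actual hardware: on a fixed 60 Hz panel a finished frame waits for the next 16.67 ms tick, while a variable-refresh panel shows it the moment it is ready. It also prints the frame-time/FPS conversion used above (e.g. 13.4 ms ≈ 74.6 FPS).

    import math

    REFRESH_MS = 1000 / 60  # fixed 60 Hz refresh interval, ~16.67 ms

    # Hypothetical per-frame render times in milliseconds (illustrative, not measured data)
    frame_times_ms = [13.4, 13.7, 16.0, 20.0, 25.0, 14.0]

    finish_ms = 0.0  # running timestamp at which the GPU finishes each frame
    for i, t in enumerate(frame_times_ms):
        finish_ms += t
        # Variable refresh: the panel updates the moment the frame is finished.
        variable_shown = finish_ms
        # Fixed refresh: the finished frame waits for the next 16.67 ms tick.
        fixed_shown = math.ceil(finish_ms / REFRESH_MS) * REFRESH_MS
        print(f"frame {i}: {t:5.2f} ms to render ({1000 / t:5.1f} FPS) | "
              f"fixed 60 Hz shows it at {fixed_shown:6.2f} ms, "
              f"variable refresh at {variable_shown:6.2f} ms")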
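
And here is a minimal sketch of a software frame-rate cap in the spirit of the Dynamic Framerate Control comparison above. The 60 FPS target, the function names, and the simulated 5 ms of work are illustrative assumptions, not RadeonPro's actual implementation: each frame is padded out to a fixed target time so frames are delivered at a steady pace that a fixed-refresh monitor can anticipate.

    import time

    TARGET_MS = 1000 / 60  # cap delivery at 60 FPS: one frame every ~16.67 ms

    def render_frame():
        """Stand-in for the game's real rendering work (here it just takes ~5 ms)."""
        time.sleep(0.005)

    def run_capped(frames=5):
        for i in range(frames):
            start = time.perf_counter()
            render_frame()
            work_ms = (time.perf_counter() - start) * 1000
            if work_ms < TARGET_MS:
                # Pad the frame out to the target time so frames leave at a steady pace.
                time.sleep((TARGET_MS - work_ms) / 1000)
            total_ms = (time.perf_counter() - start) * 1000
            print(f"frame {i}: work {work_ms:5.2f} ms, delivered after {total_ms:5.2f} ms")

    run_capped()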