AMD Mantle & TrueAudio Patch for THIEF Tested

Mantle & TrueAudio are a pair of new features used by AMD on select Radeon R7 and R9 video cards that are Graphics Core Next (GCN) compatible. Mantle is a new graphics API that AMD says provides a performance increase in games that are Mantle-enabled.

AMD Mantle API

Battlefield 4 was the first game to support Mantle, and Legit Reviews found that Mantle did provide a significant performance increase with just a driver update and a patch from EA DICE. As a new and unproven technology, it is one worth keeping an eye on while it matures.

AMD TrueAudio

AMD’s TrueAudio technology is a feature of the AMD Radeon HD 7790, R7 260, and R9 290 series graphics cards. These cards include a dedicated audio processor, which allows audio processing to be offloaded from the CPU. This processor handles audio compression and filtering, speech processing and recognition, simulation of audio environments, and creation of 3D sound effects.
AMD Radeon TrueAudio Cards

Other than needing one of the Radeon graphics cards that supports Mantle and TrueAudio, no additional hardware is necessary. TrueAudio works with your existing audio solution; software developers implement TrueAudio support, allowing the processing to be switched from the CPU to the card's Xtensa audio cores.

While Battlefield 4 was the first game to support Mantle, Thief by Square Enix is the first game to support both Mantle and TrueAudio. TrueAudio is implemented to enhance environmental sound effects. This was done by taking a snapshot of the echo characteristics of a real-world location and importing it into software, which translates it into how sound should react within the game's environments.
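
The “snapshot” described above is what audio engineers call an impulse response, and applying it to in-game sound is a convolution reverb. As a rough illustration of that idea (a minimal, hypothetical NumPy sketch of generic convolution reverb; this is not AMD's TrueAudio interface, and the function name and sample data are made up), the kind of work being offloaded looks something like this:

    # Illustrative sketch only: generic convolution reverb, not the TrueAudio API.
    # TrueAudio's role is to run this sort of DSP work on the GPU's dedicated
    # audio cores instead of the CPU.
    import numpy as np

    def apply_room_reverb(dry_signal, impulse_response):
        """Convolve dry audio with a measured room impulse response (the echo 'snapshot')."""
        wet = np.convolve(dry_signal, impulse_response)   # apply the room's echo characteristics
        wet = wet[:len(dry_signal)]                       # trim the reverb tail to the input length
        peak = np.max(np.abs(wet))
        return wet / peak if peak > 0 else wet            # normalize to avoid clipping

    # Hypothetical example data: one second of noise and a decaying echo snapshot
    sample_rate = 48000
    dry = np.random.randn(sample_rate)
    impulse = np.exp(-np.linspace(0.0, 8.0, sample_rate // 2))
    wet = apply_room_reverb(dry, impulse)

In a TrueAudio-enabled title, the developer's audio middleware would dispatch this sort of convolution to the card's Xtensa cores rather than running it on the CPU.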

Let’s take a look at how Mantle affects the performance of Thief, and whether TrueAudio can enhance the game's audio.

  • John Lindsay

    Have you run Thief with Mantle on the on-die APU? And btw, what speed is the RAM running at? The 5800 loves 1866 or even 2133.

    • basroil

      Won’t really matter, the issue with the 5800 isn’t memory, it’s that the CPU is absolute garbage for gaming. After you add in the bloated AMD drivers, the CPU just doesn’t have enough power to do the physics, realtime compiled shaders, etc.

      • John Lindsay

        apparently you’re just a paid intel shill with all your comments. try using one at 720p (that’s HD) with fast ram – intel has nothing in the retail channel which can match it with onboard gfx.

        • basroil

          Tell us one game where an AMD chip running at the same speed as an intel Core i series chip actually does better on AMD. (hint, it doesn’t exist because AMD prefers low throughput per core per clock with more cores, while games are not perfectly parallel even with parallel physics, collision, and rendering).

          It’s not an AMD vs intel thing, it’s an AMD Athlon-days vs AMD FX-days thing, where AMD just picked the wrong architecture to pursue.

        • John Lindsay

          anything which uses the retail APUs from either IHV – amd is so far ahead of intel it’s not even funny.

        • basroil

          Nobody cares about APU performance. AMD APU chips only see major benefits when paired with a high-end GPU anyway; that’s how Mantle is set up. So any way you slice it, Mantle is a crutch for AMD, letting them get away with sub-par CPU performance and drivers.

  • Strider

    Just dropping this here real quick to counter some of the BS “fanboy” comments.

    Mantle does not require an AMD CPU to see a performance increase; many people on “reasonable” Intel chips have seen performance boosts in both supported titles when running a supported AMD GPU. Mantle will help with ANY CPU limitations, no matter whose name is on it.

    To say this is a gimmick by AMD to avoid making better drivers or CPUs is pure and utter fanboy nonsense. There is nothing wrong with AMD drivers, just as there is nothing wrong with Nvidia drivers. Neither one is “better”; in the big picture they have both had their share of issues in the past. However, modern fanboy mentality seems to bring long-term memory problems with it.

    As far as the CPUs go, to use the 4770K and 8350 as examples, they game on par with one another in almost all gaming situations when the supporting hardware is comparable. The differences are often measured in single-digit FPS numbers, with few exceptions. In BF4, for example, the performance difference between the two is moot. I own both of these processors and both are fantastic gamers, pure and simple.

    Both processors see a benefit from Mantle, and of course that benefit can increase as you move down the line on both sides of the fence.

    I will leave it at that. Keep calm and game on! =]

    • XionEternum

      I like you, in almost too many ways so I’m just going to say it and go…
      (Name, avatar, common-fucking-sense speech, and attitude)

  • asd

    “to avoid making better drivers and CPUs”. Mantle can count as a driver. And about CPUs, even with Intel there’s a performance gain, so you don’t make any sense. Either way, enjoy free performance.

  • basroil

    “So far, both games that support Mantle have shown a good increase in performance once Mantle is implemented.”

    Hardly… you need to use AMD CPUs to see any performance increase; any reasonable Intel chip sees practically no difference. Mantle isn’t really a performance boost, it’s a cheating mechanism for AMD to avoid making better drivers and CPUs.

    • Steven Kean

      Actually, the Battlefield 4 test was done on an Intel 3570K and still received a good performance boost (15.8%). Thief wasn’t tested on the 3570K, however that is next to be done; sadly there are only so many hours in a day to get testing done across multiple CPUs.

      • basroil

        “Actually, the Battlefield 4 test was done on an Intel 3570K and still received a good performance boost (15.8% boost)”

        - Except AMD had originally promised console-level performance (i.e. 5x or so over the same hardware with Windows/Linux and an abstraction layer). As for BF4, even the developers admit the issue is that their software taxes the CPU in strange ways (usage spikes), and TechReport found some strange issues with AMD drivers taking up a ton of CPU power (and time locking other things out) while Nvidia’s drivers are far thinner. If AMD focused on driver optimization to reduce bloat and excess time spent in the driver, it could probably improve performance just as much as Mantle, without forcing developers to spread their teams thinner.

        • XionEternum

          Intel fanboy troll likes trying to quote people trying to make a point of flawed logic when there is none? Well let’s see what’s happened thus far due to AMD’s recent innovations:
          Microsoft has unveiled DX12 as a new low-level API to reduce overhead and be compatible with nearly everything in the past three generations of GPUs.
          OpenGL with support from nVidia, Intel, AND AMD has unveiled much the same thing with expectations for even broader compatibility.
          Now, let’s talk about your parroting of other sources:
          AMD drivers aren’t bloated and do not spike the CPU any more than nVidia. I’ve got both an AMD/AMD and an Intel/nVidia platform. I am monitoring both after having optimized both myself. Both have the latest drivers for their respective GPUs, one is a GTX560ti-448 Ultra Classified and the other is an XFX Black R9-290. I am looking at the software volumes of the drivers and neither are very different. AMD has more ‘extras’ on the side that interact with it yes, but these are situational for their respective purposes such as video playback smoothing. Yes this means slightly more CPU usage, but only when these features are being used. Your sources either didn’t know this, or didn’t bother mentioning it.
          Now, for future reference:
          Stop fanboying over shit. Both companies are working their hardest to do what they think is best for their stock value. AMD’s been behind thanks to the bullshit fiasco over Bulldozer, which was never made for synthetic benchmarks, mind you, but for actual real-world use and future software development, while nVidia and Intel are sitting high enough that all they care to do is develop and release marginally better generations of products. Case in point? nVidia claimed that a single GTX680 pulled the same fps on the Unreal4 demo that 3-way SLI GTX580s did. Well, for the first month after the GTX600-series launched, it did. However, after that month a global driver update shot the GTX500-series back up with marginal improvements for the GTX600-series. Suddenly GTX580s were near the same fps as GTX680s if not over in some cases. Reason I stuck with the 448. Another example is Haswell being a marginal improvement over Ivy-Bridge at best. And here we have AMD having to push software development forward to match their architectural design and free multi-core CPUs from the shackles of DirectX with the intent to nudge others into mirroring their innovation.
          I don’t care if you use Intel and nVidia. They have decent hardware for a reasonable price. But you cannot parade them around when all they do is marginal improvements to their existing hardware. Hell, back when I was getting into enthusiast PC building almost a decade ago, Intel and nVidia were the go-to for productivity builds. Since nVidia completely abandoned their double-precision CUDA on their consumer-grade cards, CUDA acceleration has all but died in favor of OpenCL, which works best on AMD’s GCN architecture. And rewatch the AMD conference. They never promised console-level performance. That was hype from review sites I don’t give a single nano-fuck about, nor should anyone else. They promised “low-level API to reduce driver overhead on the CPU to fully utilize GPU performance” in summary. In other words: Powerful CPUs with little overhead to begin with, won’t see much improvement. Slower multi-threaded CPUs will see larger improvements. This was then rumor-linked to their low-speed 8-core CPUs on the new consoles. What AMD made public, is all any of us consumers KNOW they claimed and said.

        • basroil

          “due to AMD’s recent innovations: Microsoft has unveiled DX12 as a new low-level API to reduce overhead and be compatible with nearly everything in the past three generations of GPUs.”

          You really think they were able to do that in the 6 months since Mantle came out? No, they were probably working on it since before DX11 was finalized.

          “AMD drivers aren’t bloated and do not spike the CPU any more than nVidia.”

          There’s a dozen sources that state otherwise. DX11 overhead is lower in nVidia systems than AMD ones. When it comes to OGL, then the situation is reversed, but considering 99% of desktop games use DX, no need to care about OGL.

          “Another example is Haswell being a marginal improvement over Ivy-Bridge at best.”

          Hardly, especially in the enterprise software market where AVX optimized software is common. When it comes to gaming the two architectures are very similar in performance though. Compared to AMD though, either one is a beast, and single thread performance can be 50% better for the same clock speed.

          “And rewatch the AMD conference. They never promised console-level performance. That was hype from review sites”

          They specifically state 9x higher draw calls (which has yet to be shown on any system; not even 2x has been seen yet). Even AMD’s tech demo (Star Swarm) only gets about a 30% improvement! To date, Mantle has not actually met any of its objectives, from performance to availability. When DX12 comes out in 2015, it’s going to be free and easy to download and use; Mantle currently hides behind an NDA wall, which shows just how useless it is right now.

        • XionEternum

          Predictable and nitpicky as all hell.
          You think Mantle just came out of nowhere, but DX12 hasn’t? If a low-level API can be developed between DICE and AMD in X amount of time, then someone as big as Microsoft with support from nVidia should be able to do it in half the time.

          Once again, I am running both platforms. I’ve optimized both myself. Neither is noticeably more demanding than the other whether at idle, running DX9-11, or running OpenGL (test sample Unigine Heaven). At most +/-5% on a single core of an 1100T @ 4GHz. Your sources aren’t optimizing properly if they are getting different results. But most reviewers don’t optimize and just plug in the hardware, install express drivers and run the benchmarks. I consistently get better results than most for the hardware I’ve tested.

          This was not about enterprise performance. This was about consumer performance. Haswell has been heavily marketed for consumer use over enterprise use. It’s been launched first on consumer platforms. If you’re going to pull that excuse, then you’re clutching at straws.

          I said to rewatch the AMD conference, I never mentioned the Star Swarm demo. Star Swarm is Oxide’s tech demo, not AMD’s. And thank you for bringing it up, since they themselves claim significant gains in batch counts on the ALPHA version of Mantle. This tech demo performs far worse on DX11 no matter the GPU or CPU used. Keep in mind, this is JUST a tech demo, but you’re trying to make a point of it even though I never mentioned it in the first place.

          Both Mantle and DX12 are behind an NDA. Mantle is available in an open beta state for the games that support it, however none of the games that do right now have a high demand for CPU performance as previously stated. Both are free so I have no clue what you’re even going on about here. DX12 will NOT come to Win7, so unless Win9 is a vast improvement over Win8, MOST gamers won’t get to use it. Mantle is out in a state that is still being optimized and improved. It is available for all GCN-based AMD GPUs, and if nVidia was willing to take the time to look at the API, maybe they could use it too. But they’re not even going to try. But here, once again we come back to: You’re being a nitpicky entitled brat if you think small differences like this matter one goddamn. The fact DX12 will not be on Win7 matters. The fact OpenCL works better than CUDA-accel and is being adopted more and works better on AMD, for productivity this matters. The fact Mantle in an unfinished state does offer an improvement matters. The fact not all sources of benchmarking optimize their system matters. The fact there has been only marginal consumer-level performance gains in the past few generations of CPU from both sides matters. And finally, the fact that AMD is pushing for a revolution in computing with HSA while Intel and nVidia are content with marginal gains in performance matters a LOT in the long term.

          I do not fanboy either. I would gladly be on Intel/nVidia’s side if they were innovating in some way comparable to AMD’s innovation trend. I don’t deny their performance at all, but considering how much AMD accomplishes given how small they are compared to Intel and nVidia, I am beyond impressed. Anyone else should be too. That doesn’t preclude them from using Intel and/or nVidia though. You can prefer those brands while still appreciating the innovations of AMD.

          Now if IBM were to… *cringe*

    • psxlover

      That probably means that the bottleneck is your CPU and not your GPU. Coupling a high-end graphics card with an average CPU is not a good idea.

    • bullmer

      tell me what dx driver can do what mantle is able to do, can nvidia’s dx driver do the same things as mantle? stop spouting nonsense