AMD Mantle API Real World BF4 Benchmark Performance On Catalyst 14.1


AMD Mantle Real World Performance

AMD has been talking up the potential performance increase that their new Mantle technology will provide, and we have all seen the numbers that AMD and EA DICE have been releasing. AMD has finally released the Catalyst 14.1 beta drivers, so now we can do our own independent testing to check out performance with and without Mantle enabled.

AMD Mantle API

We were given early access to the Catalyst 14.1 beta driver, and there are a few hoops to jump through during the installation process. We were able to complete a successful install, but it looks like the installer is causing some issues for AMD. Here is a look at the Catalyst Control Center, which confirms we are now running Catalyst driver package 14.1.

AMD Catalyst Control Center 14.1

Currently, there are only two ways to test the AMD Mantle API. You can use the latest build of the highly popular Battlefield 4 game title, or there is a new synthetic benchmark called StarSwarm that is now available to download and use. We tried both benchmark options on what we consider a mainstream gaming platform (an Intel Core i5-3570 'Ivy Bridge' processor with a Sapphire Radeon R7 260X video card).

Let's see if the numbers that have been released are pure marketing, or whether Mantle performs as well as we're told it does.

  • mariano guntin

Note: this isn't really a "review" or a "real world benchmark." It's using an R7 260X, and right now Mantle is optimized for the R9 290, 260, 270 and 280; you've got to wait for a new driver. So this article is garbage.

  • mariano guntin

I read some brats talking about "CPU, GPU, bottlenecking, etc."... Mantle isn't made for that. Mantle is a low-level API that uses fewer draw calls, making it faster than DX. Draw calls, that's the key of Mantle.
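
To put rough numbers on the draw-call argument: every draw call costs CPU time inside the API runtime and driver before the GPU ever sees it, and a thinner API simply does less work per call. Below is a minimal back-of-the-envelope sketch in C++; the per-call costs are invented round numbers chosen for illustration, not measured figures from Mantle or Direct3D.

```cpp
// Hypothetical illustration of per-draw-call CPU overhead. The costs are
// made-up orders of magnitude, not real Mantle or Direct3D measurements.
#include <cstdio>

int main() {
    const int objects = 5000;           // draw calls issued per frame
    const double thickCallUs = 20.0;    // assumed cost of a "thick" API call (validation, state translation)
    const double thinCallUs  = 2.0;     // assumed cost when the API does less per call

    const double thickMs = objects * thickCallUs / 1000.0;
    const double thinMs  = objects * thinCallUs / 1000.0;

    // At 60 FPS the CPU has ~16.6 ms per frame for everything, not just draws.
    printf("thick API: %.1f ms of CPU per frame spent issuing draws\n", thickMs);
    printf("thin  API: %.1f ms of CPU per frame spent issuing draws\n", thinMs);
    return 0;
}
```

Under these assumed costs, 5,000 draws eat 100 ms of CPU per frame through the thick path but only 10 ms through the thin one; that per-call overhead, rather than raw GPU power, is what the low-level-API argument is about.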

  • anon

I'm an Nvidia fanboy, and it was about time someone proved that all the horsepower of our GTX 690, 780 Ti, or 7990 and 290X is LOST IN TRANSLATION due to, basically, poor code optimization because of schedules, a.k.a. the accounting/marketing department. Mantle is proof of that; direct access with an optimized API makes the difference. Kudos to AMD.

  • Serpent of Darkness

StarSwarm:
With D3D: avg of 11 FPS (3x R9 290X)
With Mantle: avg of 55 FPS (1x R9 290X)
BF4:
With D3D: avg is 60 FPS (3x R9 290X)
With Mantle: avg is 45 FPS (2x R9 290X)
i7-4960X @ 4.6 GHz
65.5 GB RAM at roughly 1300 MHz
3x R9 290X in Tri-FireX.

  • Gruia

Please make the drivers stable.
The 260X BSODs with all of the ATI drivers (it only works with the ones released by the card manufacturer).

  • V8351

Just been playing BF4 with Mantle for the last two hours on an ASUS Rampage IV with an ASUS Matrix 7970. Didn't notice any jump in FPS, but something very interesting did happen: less load on the video card, resulting in much lower card temps.
Everything maxed out on Ultra on a 27″ Samsung at 2560, and the air-cooled Matrix didn't go over 56 degrees C.

    • BillyJ

Rampage IV... yeah, that doesn't really say much; you could have a Celeron on it for all we know. But I'm guessing you're sporting an i7. Also, what did you play exactly? Singleplayer I'm guessing, otherwise I'd have to say "you're doing it wrong," well, at least some part of it...

Fun fact: Catalyst 14.1 beta also brings performance improvements to BF4 D3D11... so, yeah...

  • 00

Retest with an older generation of CPUs like a Core 2 Duo, or slower new ones, like a Pentium Gxxx or FX 4xxx.

  • 00

    Retest with i3 xxx

  • godrilla

The age of brute force computing is hopefully over, or at least this is the start of a transition toward efficient coding.

    • basroil

      Mantle IS brute force computing…

      • john

Yes, because optimizing an API is brute force... geez, you Nvidia fanboys are as stupid as they come... Brute force means moar coarz, moar ghz, moar anything... This is a software update that reveals the hidden potential of existing hardware... where is the brute force in that, you brute??

        • clementl

Technically he's correct. Mantle removes a lot of the abstraction layer by requiring fewer CPU calls. Thus it gives you access to raw GPU power without being bottlenecked by the CPU.

        • BillyJ

I think there's a misinterpretation of what each of you means by brute force. I think what the replying guy wanted to say is that it's brute force because the iGPUs and dGPUs get a chance to shine and work to their actual potential, and not 40-80% of it like in most other situations, due to the CPU feeding the GPU more efficiently and promptly, and due to the GPU itself making better use of its hardware.

        • basroil

I'm talking about brute force as in they prioritize call throughput rather than forcing developers to actually consider what needs to be called. It also enables lazy design like in StarSwarm, where everything is made parallel on the GPU even though there are plenty of techniques that would work faster and look better on any CPU. The point of Mantle is to enable direct hardware access, but the end result will be that AAA studios and geeked-out indies keep optimizing like always, while lazier programmers will abuse the system to calculate things the hard way because they aren't good enough to make an optimization.

        • mariano guntin

The point of Mantle isn't cosmetic changes like "let's use less CPU and more GPU, seems good." Mantle is a low-level API; low-level means hard to program and more precise in general terms. But talking about the facts of how it works, the objective of Mantle is making fewer draw calls; fewer draw calls, faster rendering. Similar to Glide (I think) from the 3dfx cards back in the late '90s.

  • Reinaldo Camargo Filho

    Why don’t you guys share the driver with us?

    • Nathan Kirsch

      It isn’t ours to share and the installer isn’t ready for the public. The public version will be out shortly and I’ve asked AMD to give us an update on when exactly that will take place.

  • derpyderp

Should have used an AMD CPU/APU, since Mantle also allows for HSA and certain new communication between their processors and GPUs. It also scales to more CPU cores even if the game doesn't support it, so you should be getting a lot more performance increase if you use all AMD hardware. ~Derp

  • gigilangostino

StarSwarm, this fake bench that doesn't make any use of geometry instancing (the very natural choice for this test; you could resolve one frame of this benchmark with a few dozen calls to the API, but of course nobody mentioned this trivial solution to the problem, not Oxide Games, not the reviewer) and uses 3-5 draw calls for the rendering of one star (yes, the bright dots, the ones you maybe thought were a single texture for all the stars). That overuse of calls is obscene, as if the developers were a group of insane monkeys that didn't know how to do their jobs.

Ah, and the effects, you know, the "temporal AA" they implement in their engine multiplies the number of calls by around 3-4x compared to without it (yes, the basic star consumes one draw call without temporal AA, which is already a lot for this pitiful effect, BUT you see 3-5 draw calls per star with temporal AA).

An AA effect that multiplies the calls, a draw call spent on every star when it's an entity in the game that you could render with other techniques (e.g. a projected texture for ALL the stars), and of course, zero use of the most basic technique for any game with many objects: instancing.

Wonderful (blow)job, Oxide Games. AMD thanks you for your dishonesty.

    • Marios145

Don't feed the troll?
Please?

    • athenasius

No heavy instancing because they have made the ships more dynamic; you need individually generated models to make each object dynamic.

      • basroil

Not really; even in a worst-case scenario you don't need a hundred versions of the same ship, you just need to instance the equal parts. I.e., if all your ships have the same glowing thrusters, render those separately from the 5 long wings and 10 stubby ones (see the sketch below). There's always something you can do, if you aren't being paid by AMD to show off their new system, that is.

The StarSwarm Mantle test is basically the PhysX vs. CPU test: completely biased, using outdated or even ridiculous techniques to slow down the "competing" version.
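
The "instance the equal parts" suggestion is ordinary geometry instancing: bucket objects that share a mesh and issue one instanced draw per bucket, with per-object transforms supplied in an instance buffer. Here is a minimal sketch of that bucketing logic in C++; the Ship struct and the draw counting are illustrative stand-ins, not Star Swarm's or any real engine's code.

```cpp
// Sketch of how instancing collapses draw calls: group objects that share a
// mesh and submit one instanced draw per group instead of one draw per object.
// Illustrative only -- no real graphics API is called here.
#include <cstdio>
#include <map>
#include <vector>

struct Ship { int meshId; float x, y, z; };  // each ship keeps its own transform (and AI state on the CPU)

int main() {
    std::vector<Ship> fleet;
    for (int i = 0; i < 5000; ++i)
        fleet.push_back({i % 7, float(i), 0.0f, 0.0f});  // 7 distinct hull meshes in the fleet

    // Naive path: one draw call per ship.
    const int naiveDraws = (int)fleet.size();

    // Instanced path: bucket per-instance data by mesh, one draw per bucket.
    std::map<int, std::vector<const Ship*>> buckets;
    for (const Ship& s : fleet) buckets[s.meshId].push_back(&s);

    int instancedDraws = 0;
    for (const auto& bucket : buckets) {
        // In Direct3D 11 this would be one DrawIndexedInstanced() call with
        // bucket.second.size() instances and their transforms in an instance buffer.
        (void)bucket;
        ++instancedDraws;
    }

    printf("per-object draws: %d, instanced draws: %d\n", naiveDraws, instancedDraws);
    return 0;
}
```

Note that instancing shares only the mesh: position, orientation, and per-unit AI all live outside it, so independently moving ships do not by themselves rule the technique out (articulated parts like turrets can go into their own, also instanceable, buckets).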

        • BillyJ

QQ some moar, you nVious tool.

They made the engine render that way because they intended it to, and because that is how most of the stuff they implemented has to be; they want a real sense of dynamic space filled with actual objects, not 2D sprites with zero values up to the point that they get close enough. Get a grip, get a clue, get a real job; fanboy crusades don't count.

        • basroil

StarSwarm is NOT a game, it's a demo and a biased benchmarking tool. Everything is pretty low-poly (especially their convex hull "shields"), and instead they take to rendering everything in the field rather than just rendering to a reasonable depth and applying optimizations past that. You won't be able to tell if your 1px dot has 1 vertex or 100,000, so why force the engine to render excess information? The demo looks like crap anyway.

        • Serpent of Darkness

I wonder what your rant is going to be after Star Citizen is released, and the FPS gap between Nvidia and AMD becomes a mile long with AMD Mantle enabled? AMD is performing better than Nvidia in this scenario. You're right that StarSwarm is a demo and a biased benchmarking tool. The part that you take out of proportion and context is that it's a tool that compares D3D to AMD Mantle. It's not a tool that compares Nvidia to AMD. So your "hate-rant" on AMD Mantle has no justification, and everybody knows you're a known Nvidia fanboy on this site. In general, all benchmarking tools favor brand X over brand Y, or brand Z over brand X, either because the brand pays the third-party benchers royalties and provides them with free goodies, or because the hardware is made for it at the hardware design level.

        • Serpent of Darkness

Correction: it's not really a benchmark. The StarSwarm demo is a "total game engine." This is stated in the FAQ section on Steam.

        • basroil

"The part that you take out of proportion and context is that it's a tool that compares D3D to AMD Mantle. It's not a tool that compares Nvidia to AMD."

Who the hell ever claimed otherwise? I specifically only stated it was propaganda FOR AMD's NEW SYSTEM, not against competitors. In fact, my gripes are that AMD cards don't seem to benefit from it and can actually show significant performance drops. Take 1080p BF4 benching on an overclocked 2700K/3770K/4770K/4960X: BF4 shows drops of up to 14% compared to DX11, and BF4 isn't even coded for DX11 exclusively (which would increase performance a bit).

        • Rick La Rose

100K draw calls are not ridiculous or outdated. What is ridiculous and outdated is DirectX, as it cannot properly handle many draw calls (a parallel workload) seeing as DirectX is still built with mostly single cores in mind (serial execution).
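
The serial-versus-parallel point is about where command buffers get built. Here is a minimal sketch of the threading model that Mantle-style APIs expose, in plain C++ with stand-in types (CommandList here is just a struct, not any real API object): several threads record commands independently, and only the final, cheap submission is serialized.

```cpp
// Sketch of multi-threaded command recording: workers build command lists in
// parallel, one thread submits them. CommandList is a stand-in, not a real API.
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct CommandList { std::vector<int> draws; };  // stand-in for recorded GPU commands

void record(CommandList& cl, int first, int count) {
    // Each worker records its slice of the scene independently; nothing here
    // touches the GPU, so no shared driver lock is required.
    for (int i = 0; i < count; ++i) cl.draws.push_back(first + i);
}

int main() {
    const int objects = 10000, workers = 4;
    std::vector<CommandList> lists(workers);
    std::vector<std::thread> threads;

    for (int w = 0; w < workers; ++w)
        threads.emplace_back(record, std::ref(lists[w]),
                             w * objects / workers, objects / workers);
    for (std::thread& t : threads) t.join();

    // Submission stays serial but cheap: the expensive recording work was
    // spread across cores instead of funneled through one thread.
    int total = 0;
    for (const CommandList& cl : lists) total += (int)cl.draws.size();
    printf("recorded %d draws across %d threads\n", total, workers);
    return 0;
}
```

Direct3D 11's deferred contexts aimed at the same pattern, but scaling in shipping drivers was famously limited, which is part of why multi-core command recording became a headline Mantle feature.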

    • Serpent of Darkness

It's amazing how nobody in the Green Camp has used the StarSwarm "game engine" (with D3D and extreme settings) to test their own setups. I don't see it stated anywhere on a third-party bench site, or on Steam, that StarSwarm is exclusive to AMD users only... See where they stand before they go off engaging that accelerator to their mouths and fingers, voicing their dislike toward AMD Mantle. If they could pull 90 FPS on a single GTX card at extreme settings on the D3D API, then the rants of the Green Camp would have more weight in their arguments. Otherwise, these rants are nothing more than hot air...

      • basroil

No need, the community did it and found that GPU utilization on Nvidia/Intel systems is less than 50% while Mantle utilization is 100%. There are no bottlenecks in the DX benchmark, only bad programming. Nvidia can't expect to hit 90 FPS if the program caps them on purpose through proprietary bad design. That said, a 780 Ti gets about 40 FPS with CPU and GPU at 50% each, which means even the DX pathways are 50% slower for Nvidia because of something the developers did (even BF4 shows a nearly equal frame rate for equivalent specs regardless of Nvidia or AMD). With all this put together, you can expect an unbiased frame rate between 120 and 160 FPS for a 780 Ti, or about 100 FPS for a 770, and I'm sure that with real tweaking of the DX11 paths AMD chips would also show increased performance, to the point that Mantle only improves really crappy CPUs like the A series.

    • Rick La Rose

Geometry instancing leads to copies of the same model being made in order to provide the illusion of individually dynamic, separately rendered objects. In reality, they're simply copies of one another.

What Star Swarm does is allow each unit to be individual and to have access to its own AI and physics.

As such, turrets move independently, and targets acquired by the turrets are handled individually with their own independent AI functions.

Geometry instancing is a workaround for the draw call problems with DirectX. Star Swarm shows you the real thing. It's like we're all talking about pure French cheese made by hand in France and you're bringing in the fact that Cheez Whiz exists as some sort of argument. Why don't we all just eat Cheez Whiz? Maybe because it isn't the same?

  • Guest

If Mantle catches on I won't need to upgrade my 2500K.

    • Eddytion

      Intel’s gonna be mad :D

      • john

Why Intel? I don't get it; it improves performance on Intel too. I think AMD should talk to Intel and release Mantle for Intel GPUs too... then somebody will be raving mad... guess who? :)

      • Venus

Exactly, these cock-sucking companies forcing us to upgrade each year. I can only imagine Intel staff pulling their hair out in madness.

    • derpyderp

Actually, you should be using an AMD CPU/APU to get the full performance increase from Mantle. What these tests fail to show or realize is that Mantle works for the CPU/APU as well as the GPU. We're only seeing half the utilization in these benchmarks, as someone doesn't fully understand how Mantle works or what all it supports.

      • john

I'm installing 14.1 right now and I have an A10-7850, without a dGPU... I was playing BF4 yesterday at medium & high in Full HD and it was smooth (2.4 GHz memory), both campaign and multiplayer. It was really nice :)... Can't wait to test the Mantle update now :D

    • derpyderp

Also, if this were being benched on the new Linux kernel, it would be as much as a 60-90% increase in performance. All in due time...

    • JackD

No need; that great 2011 quad, with a 7870 XT, gets an average of 43.5 FPS in StarSwarm on the exact same setup as in the article. I have seen similar for an 8350 with a 7950. This is good work from AMD. So keep your CPU; I will be keeping mine anyway.

  • Dean

Applying a decent overclock would yield just as much of a performance gain. Not impressed, AMD.

    • quasibaka

So if you do a decent OC + Mantle you get even more perf, right, right??

      • http://kougeru.tumblr.com/ Kougeru

        Shhhh…that’s too much for him to process

      • Nathan Kirsch

        Steve is going to be doing some overclocking for you guys tonight to see what happens.

    • Jon

      You’re missing the big picture. You can now buy a cheaper CPU and get the same kind of performance. It’s a big deal, unless you’re an nVidia fanboy who can’t admit AMD has ever done anything right.

      • Peng Tuck Kwok

Exactly. I don't understand why there would be people who hate that mid- to low-end gaming rigs get a performance boost. Come on guys, we're all gamers here. Benefits like these are good for everyone.

      • gugu

Wait, you mean the same way AMD fanboys say "PhysX is crap"?

        • X

PhysX is crap, and it was crap before Nvidia bought it. Brilliant concept, but no real mainstream games took it on. The same could become of Mantle. SLI gamer btw. ;)

      • Ricky Payne

Or if you have an old rig and not much to spend on hardware. I have a Q6600 and upgraded my Nvidia 8400 GS to an R7 260X on Black Friday. With BF4, Mantle took me from 45-55 FPS on mixed medium and low settings to a constant 60 FPS on mixed high and medium @ 1080p. Glad I spent my $130 GPU budget on my first AMD GPU. Hard to beat AMD's price-to-performance ratio.

    • BillyJ

      LOL *facepalm*

  • anonymous

    Some results from others:

    http://pclab.pl/art55953-2.html – Single Player results
    http://pclab.pl/art55953-3.html – MultiPlayer results on different CPUs

    • Ern

Any idea what specific video cards they used in this article? I'm assuming the 780 GHz is the Gigabyte card. Is it up against a reference 290X? I'm surprised the 780 beat out a 290X with Mantle on a 4770K. Making me rethink my soon-to-be purchase.

      • BillyJ

Those slides look like a load of horse sh!t; either they tested erroneously, had a fluke, or are moronic. Nowhere in Hell would overclocking a high-end CPU yield you 25% more frames under those circumstances. Are you freaking kidding me? And you people believe that crap so fast; get a grip...

  • Mark

Ahhh, this is a really lopsided build, with a powerful gaming CPU and an entry-level graphics card.
I'm surprised there are gains at all, to be honest.
Most gamers spend twice as much on their graphics card as they do on their CPU.
You'll find a ton of GTX 770 + 3570K builds and a ton of GTX 780 + 3770K builds.

    • derpyderp

      None of which can use Mantle lol

    • R Valencia

      3770K is not a mainstream CPU.

      • basroil

In gaming and workstations it certainly is. It only costs $50-70 more than the 3570 yet gives you 50% more power in non-gaming tasks.

But Mark is nuts if he thinks people are spending twice as much on the GPU as the CPU, considering the 770 costs as much as a 3770K, not twice as much. For twice you're looking at a 780 Ti.

        • R Valencia

          Not in Steam stats.

        • basroil

Steam stats also say that 4% of users have the HD 4000 as their primary CPU, and we both know that's a load of bull (since that's a GPU, not a CPU).

Actually, Steam doesn't release CPU types, only cores and speeds, so you have no way to tell the 3770K and 3570K apart. It does list GPUs though, and the first Mantle-capable GPU is in the 21st spot for DX11 GPUs... Kind of sad, actually; more people use an Nvidia 630M than an ATI 7970.

Another fun fact: Linux is overrated, because Steam has more users on 8/8.1 than all users not on it or Windows 7 combined (21% to 18%, of which 11% use XP or Vista).

        • R Valencia

From the 2008 era: http://www.xbitlabs.com/articles/cpu/display/pentium-e2160_2.html
The majority of Intel CPU sales are less pricey CPUs.

  • John Smith

A 3570K ($200 CPU) and an R7 260X ($120 GPU) is not a real-world platform for gamers in any way, shape, or form.
If anything, this is the opposite of that.
A typical gaming desktop would have a GPU that costs 1.5x-2.0x the CPU,
i.e. a 3570K + an R9 280X or an R9 290,
or an i3-3220 with an R9 270.

    • Billy

This would result in even better results, correct? If you keep the same CPU but increase the power of the GPU, there would be an even greater bottleneck.

    • http://kougeru.tumblr.com/ Kougeru

Most people I know have CPUs that cost the same as, or $100 MORE than, their GPUs. GPU prices tend to drop faster.

      • Erm

        They did not know how to build a good gaming PC, then…

        • dirge

Maybe in general, but certain games such as the new SimCity and StarCraft 2 use only two cores, so your shiny new graphics card will be waiting on your CPU most of the time. I went with an i5 2500 and a GTX 460 for StarCraft 2. The i5 gave me a great FPS boost, but even still, the GTX 460 was waiting on it most of the time. This resulted in me trying to overclock my CPU, but first I had to get a K version, a different and more expensive motherboard, and also a better cooling solution. Even then, my GPU was still waiting on the CPU most of the time, and bear in mind the GTX 460 isn't very up to date (although I've recently gotten an R9 270X to replace it because the Nvidia card lost its fan and the heat had some side effects).

        • BillyJ

Blizzard can't code its games worth sh!t...

        • Coboltjet

So an AMD FX 8350 @ 4.2 GHz, 32GB RAM @ 1866 MHz, and 2x Sapphire Radeon Dual-X R9 270X 4GB GDDR5 (won the second one in a Sapphire gold competition), with 2x 480GB RevoDrive3 X2 and 4x Seagate ST3000DM001 in two RAID 0 configurations, sitting on an ASUS M5A99FX PRO R2.0 powered by a Corsair CX750M.

I spent more on my GPU than my CPU, but I don't have a good gaming PC?

    • Nacelle

The problem with your logic is that not everyone is very smart. I've talked to a large number of people who think the way to gauge how good a graphics card is, is by the amount of memory it has. These are the re-re's that would buy the setup listed.

    • john

Why not a single Kaveri?? I play BF4 on high in Full HD smoothly... Amazing to say the least... Whereas on DX I could only play smoothly on medium.

  • Austin Ray

    Wait. So where did you get this?

  • Edward Turvey

Thanks for the write-up. Very interesting, and glad that you used a mid-grade system with a 3570K. So even on a lower-performance card like the 260X, there is still a performance boost. I imagine GPU-heavy systems, where the CPU is the limiting factor, will see the biggest performance increases.