AMD Ryzen - Ready To Put The Pressure On Intel

Today marks AMD's grand re-entry into the high-end processor market, which it abandoned years ago after finding it was no longer competitive there. AMD's exit from high-performance desktop computing was a shock to many and left Intel with no competition. With Intel alone in the desktop processor market, it has been free to dominate the ever-growing PC market. While the PC industry in general has been slowing down year after year, the PC gaming industry is booming and topped $30 billion in 2016. Desktop computers are mainly used for gaming and content creation these days, and by not offering a high-end processor AMD let Intel rule the roost for over half a decade. Without competition, prices on Intel's flagship processors have gone up and generational IPC gains have usually been no more than 10% annually. Gamers have been dreaming about the day AMD would return to the market, hoping it would lower prices and push the chip designers to do better than the 5-10% annual IPC improvements we've all become accustomed to. After leaving the market, AMD went back to the drawing board and came up with a new 'clean-sheet' x86 microarchitecture called 'Zen' that they feel can power flexible, high-performance platforms used for content creation during the work day and hardcore gaming in the off hours. One of the other key design goals with Zen was to create an architecture they could build on for the future: Zen will be around for some time, and Zen 2 and Zen 3 are already on the roadmap. AMD Ryzen processors will be released in three series: Ryzen 7, Ryzen 5 and Ryzen 3. The AMD Ryzen 7 1800X, Ryzen 7 1700X and Ryzen 7 1700 are being released today; each is a $300+ 8-core, 16-thread part.
- AMD Ryzen 7 CPU Landing Page on Amazon
- Ryzen 7 1800X – $499 – Amazon.com
- Ryzen 7 1700X – $399 – Amazon.com
- Ryzen 7 1700 – $329 – Amazon.com
| Product Line | Model | Cores | Threads | Base Clock (GHz) | Boost Clock (GHz) | Included Cooler | TDP (Watts) | On Sale |
|---|---|---|---|---|---|---|---|---|
| Ryzen 7 | 1700 | 8 | 16 | 3.0 | 3.7 | Wraith Spire | 65 | Now |
| Ryzen 5 | 1600X | 6 | 12 | 3.6 | 4.0 | Wraith Spire | 95 | Q2 |
| Ryzen 5 | 1500X | 4 | 8 | 3.5 | 3.7 | Wraith Spire | 65 | Q2 |
AMD Zen Architecture, SenseMI and XFR Technologies

With the Zen microarchitecture used in Ryzen processors, AMD has achieved a greater than 52% IPC improvement over its previous desktop design. This is thanks to a 1.75x larger instruction scheduler window, 1.5x greater issue width and resources, and a new micro-op cache that lets frequently-accessed micro-operations bypass the L2 and L3 caches. On top of that, AMD implemented simultaneous multithreading (SMT) and even built a neural network-based branch prediction unit to ensure the right instructions and pathways are prepared for future workloads. The Zen architecture is built around what AMD calls a CPU Complex (CCX), a 4-core, 8-thread module. AMD can place more than one CCX on a package, which is exactly what it did to get the 8-core, 16-thread processors in the Ryzen 7 series. Each core has 64KB of L1 instruction cache, 32KB of L1 data cache and 512KB of dedicated L2 cache, and each CCX has 8MB of L3 cache shared across its four cores. In processors with more than one CCX, AMD connects the CCXes with its high-speed Infinity Fabric (this fabric also handles system memory, I/O, PCIe and more). From what we have heard, AMD has been tinkering with connecting up to four of these complexes together, so at some point a 16-core, 32-thread processor is entirely possible and will likely make its way to the desktop market when the time is right. Each Ryzen processor features AMD SenseMI technology, essentially a web of interconnected sensors accurate to 1mA, 1mV, 1mW and 1°C with a polling rate of 1000 per second, allowing real-time adjustments to the processor's behavior. This data enables improved power use of the cores, management of speculative cache fetches and even AI-based branch prediction.
Thanks to real-time power, temperature and load data from the Infinity Fabric, AMD Precision Boost can adjust the processor's clock speeds on the fly in 25MHz steps. This fine-grained clock tuning means you stay at the highest possible clock speed for a given workload. AMD also has a new feature called Extended Frequency Range (XFR) on all Ryzen processors with the -X suffix at the end of the model number. XFR raises the maximum Precision Boost frequency on systems using high-end CPU cooling solutions (think high-end air cooling or most any liquid cooling loop). This additional headroom is granted automatically: the SenseMI data tracks the processor's distance to the pre-set thermal junction limits and gives you additional clock frequency when headroom is available. We asked AMD about the thresholds. What is the rough temperature range at which XFR kicks in? It starts to kick in below 60C Tj and gradually allows faster frequencies down to roughly 25C Tj. What is the rough temperature at which XFR remains stable at 4.1GHz (i.e. no more 'dithering' between 4.0/4.1GHz)? Roughly 25C Tj, but that will depend on heatsink performance, system ambient temperature and the workload being run. Let's take a look at the test system and then move on to the benchmarks!
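To make the 25MHz stepping and temperature-scaled XFR behavior described above concrete, here is a purely hypothetical sketch. The thresholds (60C and ~25C Tj) come from AMD's answers above, but the linear scaling and function names are our own illustrative assumptions, not the actual SenseMI algorithm.

```python
# Illustrative sketch only -- NOT AMD's real boost algorithm.
STEP_MHZ = 25  # Precision Boost adjusts frequency in 25 MHz steps

def boosted_clock(allowed_mhz):
    """Snap an allowed frequency down to the nearest 25 MHz step."""
    return (allowed_mhz // STEP_MHZ) * STEP_MHZ

def xfr_target(base_boost_mhz, xfr_max_mhz, tj_c):
    """Grant extra XFR headroom as the die runs cooler.

    Assumed (hypothetical) model: no XFR at 60C Tj or above,
    full XFR headroom at ~25C Tj, scaling linearly in between.
    """
    if tj_c >= 60:
        return base_boost_mhz
    frac = min(1.0, (60 - tj_c) / (60 - 25))
    return boosted_clock(base_boost_mhz + frac * (xfr_max_mhz - base_boost_mhz))

# 1800X numbers: 4.0 GHz boost, 4.1 GHz XFR ceiling
print(xfr_target(4000, 4100, 25))  # cool enough for the full 4.1 GHz
print(xfr_target(4000, 4100, 65))  # too warm, plain 4.0 GHz boost
```

This also illustrates the "dithering" AMD mentions: at intermediate temperatures the target lands on intermediate 25MHz steps between 4.0 and 4.1GHz.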
Our CPU Test Systems

Before we look at the numbers, let's take a brief look at the test systems used. All testing was done on a fresh install of Windows 10 Pro Anniversary Update 1607 build 14393.10 64-bit, and benchmarks were completed on the desktop with no other software running. We tested on six different desktop platforms (Intel Z77, Intel Z97, Intel Z270, Intel X99, AMD AM4 and AMD AM3+) in this article, so we'll just quickly touch on each, as all shared common parts (CPU cooler, video card, SSD, power supply) and differed only in board, processor, memory kit and memory timings. The AMD AM4 platform used to test the Ryzen 7 series processors was running the MSI X370 XPower Gaming Titanium motherboard with UEFI 117, which came out on 2/23/2017. The Corsair Vengeance 16GB 4000MHz DDR4 dual-channel memory kit was manually set to 2933MHz with 14-14-14-45 1T memory timings, as we wanted to test with one of the most popular clock frequencies sold today. We used an NVIDIA GeForce GTX 1080 8GB Founders Edition video card with GeForce 376.33 WHQL drivers on all of the systems, along with a Corsair AX860i digital power supply and a Corsair Force MP500 480GB PCIe SSD. It should be noted that we used both a Noctua air cooler and a Corsair Hydro Series H110 water cooler for this review; the Noctua arrived first and was used for all CPU benchmarks, and we switched to the liquid cooler for overclocking. The Intel Z270 platform used to test the Intel LGA1151 processors was running the Gigabyte Aorus Z270X-Gaming 5 with UEFI F5e, which came out on 12/28/2016. Its Corsair Vengeance 16GB 4000MHz DDR4 dual-channel memory kit was manually set to 3000MHz with 15-15-15-36 1T memory timings.
We also used the Corsair AX860i digital power supply, Corsair Hydro Series H105 water cooler and Crucial MX300 1050GB SSDs on all of the desktop systems.
Intel LGA1151 Test Platform:
- Intel Core i7-7700K
- Gigabyte Z270X-Gaming 5
- 16GB Vengeance 3000MHz DDR4
- GeForce GTX 1080 FE
- Crucial MX300 1050GB
- Corsair H105
- Corsair K70 RGB
- Corsair M65 Pro
- Corsair AX860i
- ASUS VE278Q 27"
- Windows 10 64-Bit
Memory Bandwidth Benchmarks
SiSoftware Sandra 2016 SP3 Memory Bandwidth: link

SiSoftware Sandra 2016 is a utility that includes remote analysis, benchmarking and diagnostic features for PCs, servers, mobile devices and networks. This test has been popular for CPU and memory benchmarks for well over a decade, and it is one of the easiest benchmarks out there to run.
AIDA64 5.80 Memory & Cache Benchmark: link

AIDA64 is an industry-leading system information tool, loved by PC enthusiasts around the world, which not only provides extremely detailed information about hardware and installed software, but also helps users diagnose issues and offers benchmarks to measure performance. Memory Bandwidth Results Summary: Memory actually turned out to be tricky for us on the new AMD AM4 platform for Ryzen processors. We were using an AMD X370-based motherboard to test the Ryzen 7 1800X and found that many common clock frequencies we've used for years on Intel boards are not supported on these boards. Plugging in the same Corsair Vengeance LPX 3000MHz DDR4 CL15 kit that we've run on all of our DDR4 boards in the past, we found it couldn't run on this board at 3000MHz or CL15. We could run it at either 2933MHz or 3200MHz, and when we tried both we learned that 3200MHz wasn't stable on the Ryzen 7 1800X CPU and MSI X370 XPower Gaming Titanium motherboard we were using. Maybe it was an early UEFI support issue, or maybe the memory controller on our particular processor didn't like running that fast. We ended up settling on 2933MHz with CL14 timings, as we couldn't set CL15 timings on our board. It turns out that if you want to run a DDR4 memory kit over 2667MHz on any board (B350 or X370), you'll be running in 'Geardown' mode, which requires even numbers for the CAS latency. AMD explained it to us this way:
“When running 2667 or higher for stability we run in memory in Geardown mode so CS is driven for 2 memclks, and UMC and PHY need to be in sync regarding even clocks for cmd/add bus setup/hold time, other parameters that have to be even include Tcwl, Twr, etc. So, when in geardown mode, CAS latency has to be even number as a logical requirement.”

So, once we figured out what we could and couldn't run, we got 2933MHz running with CL14 timings just fine. Memory bandwidth performance in SiSoftware Sandra 2016 SP3 was impressive at 34.89 GB/s, which is better than any Intel platform we tested with dual-channel DDR4 memory running at 3000MHz with CL15 timings. Performance in the AIDA64 Memory & Cache benchmark also showed memory bandwidth comparable to what Intel has to offer on its latest platforms. Once you get past the initial quirks of how memory works on this platform and get it running, the bandwidth is really solid. That said, memory latency was much higher than we expected: nearly twice as slow as the latest Intel platforms. The memory latency benchmark measures the typical delay from when the CPU issues a read command to when the data arrives in the CPU's integer registers, and Ryzen is unexpectedly slow here. The maximum memory bandwidth looks great, but actually getting at that data is slower than we expected on this new platform.
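The Geardown rule quoted above boils down to a simple constraint that explains why our CL15 kit had to run at CL14. Here is a small helper reflecting that rule; the function name and threshold constant are ours for illustration, and in practice the platform firmware enforces this, not user code.

```python
# Geardown-mode rule per AMD's statement: at memory clocks of 2667 MHz and
# up, CAS latency (and parameters like tCWL and tWR) must be even numbers.
GEARDOWN_THRESHOLD_MHZ = 2667

def valid_cas(memory_mhz, cas_latency):
    """Return True if the CAS latency is allowed at this memory clock."""
    if memory_mhz >= GEARDOWN_THRESHOLD_MHZ:
        return cas_latency % 2 == 0  # Geardown mode: even CAS only
    return True  # below the threshold, odd CAS latencies are fine

print(valid_cas(2933, 15))  # False: CL15 is rejected in Geardown mode
print(valid_cas(2933, 14))  # True: which is why we settled on 2933 CL14
print(valid_cas(2400, 15))  # True: odd CAS is fine below 2667 MHz
```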
Memory Subsystem Performance

So, Ryzen has solid memory bandwidth, but the latency scores made us want to look closer at how Ryzen's integrated memory controller and new cache hierarchy work. Ryzen 7 processors have a really interesting design: the 16MB L3 cache is sliced into two 8MB halves, due to the way each CCX connects to its own L3. So, while AMD advertises that Ryzen 7 CPUs have 16MB of L3 cache, it isn't all in one location. You have 8MB of L3 per CCX plus 2MB of L2 per CCX, which adds up to 20MB of cache on a Ryzen 7 processor: 8+8 = 16MB of L3, plus 2+2 = 4MB of L2, for 20MB total. We also learned the L2 caches are not shared by all cores in a CCX. Each core has its own 512KB, for a total of 2MB of L2 per CCX. The L3 is shared by all cores in the CCX, so a single core can allocate into the entire 8MB L3. That means a single core can access up to 512KB of L2 and 8MB of L3, even though the die has 4MB of L2 and 16MB of L3. Complicated, but a neat design if it works well. Let's take a look at some memory and cache latency testing using Linear Forward and Full Random workloads. Cache Latency Results Summary: These results are totally unexpected, as it looks like the 16MB of L3 on the Ryzen 7 1800X behaves like just 8MB, and the 8MB belonging to the other CCX isn't even there. It doesn't even act like an "L4" cache... Very weird. Applications like Blender, Lightroom and video transcoding use little to no cross-CCX communication. 7-Zip (real-world compression of mixed data) and games do, so this might be a weak point for Ryzen. When we showed AMD these results, they pointed out that when you're not hitting in the L3 cache, their prefetchers look like they're beating Intel's for predictable (linear forward) access patterns, hiding a lot of memory latency.
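The cache arithmetic above can be double-checked in a few lines. The per-core and per-CCX sizes are from AMD's published Zen specifications; the function itself is just illustrative math.

```python
# Ryzen 7 cache arithmetic: 512KB L2 per core, 8MB L3 per 4-core CCX.
CORES_PER_CCX = 4
L2_PER_CORE_KB = 512
L3_PER_CCX_MB = 8

def cache_totals(ccx_count):
    """Return (total L2 MB, total L3 MB) for a given number of CCXes."""
    l2_mb = ccx_count * CORES_PER_CCX * L2_PER_CORE_KB / 1024
    l3_mb = ccx_count * L3_PER_CCX_MB
    return l2_mb, l3_mb

l2, l3 = cache_totals(2)  # Ryzen 7: two CCXes on the package
print(f"L2: {l2} MB, L3: {l3} MB, total: {l2 + l3} MB")  # 4 + 16 = 20 MB
```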
One of the neat things about the Ryzen platform is that you can disable cores: if you want to run just four cores on a Ryzen 7 series processor, you can select a 4+0 or 2+2 core setup. We thought some cache was attached per core, so disabling cores should change our benchmark results. Running all four cores on one half of the die (4+0) might have disabled the cache on one of the CCXes, and we proved that to be the case here, as there were no results in the 192MB and 256MB block sizes. The tool's maximum tested block size is LLC * 16, so with an 8MB L3 (8MB per CCX) the maximum block size the test will run to is 128MB. Since we were getting results up to 256MB with all cores enabled, it looks like both 8MB blocks of L3 were being accessed. This is a good sign, but there is a significant performance hit after 8MB for some reason. It would be nice to benchmark the speed between the two CCXes, as they are independent and connected to each other only through the SDF (Scalable Data Fabric). Could the interconnect between the two CCXes be bottlenecking performance? We aren't sure of a way to test that, but it sure looks like it.
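The LLC * 16 reasoning above is easy to verify: seeing valid results at the 256MB block size implies the tool saw a 16MB last-level cache, i.e. both CCXes' L3 slices. The helper name here is ours; the multiplier is as the text describes the tool's behavior.

```python
# The latency tool tests block sizes up to LLC * 16.
def max_block_mb(llc_mb):
    """Largest block size (MB) the test will run to for a given LLC size."""
    return llc_mb * 16

print(max_block_mb(8))   # one CCX's 8MB L3 visible -> tests stop at 128 MB
print(max_block_mb(16))  # both CCXes' L3 visible  -> tests run to 256 MB
```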
Real World Benchmarks
Dolphin 5.0 x64 Emulator Benchmark: link

The long-awaited Dolphin 5.0 release happened in 2016, and thanks to a major cleanup of the codebase, Dolphin has reached a new level of efficiency, powered by a revitalized dynamic recompiler. Dolphin is considered by many to be the best Nintendo Wii emulator for the PC, and it also works for GameCube titles. We are running the official Dolphin 5.0 benchmark, as it maps more closely to real-world Dolphin performance than the previous version, which was extremely floating-point heavy. We feel this is a pretty good general CPU benchmark for real-world performance, as emulation workloads are something most gamers will run at one point or another. We benchmark the standard Wii homebrew application and run it with the speed limit set to 'unlimited' and the External Frame Buffer set to 'real', in case you want to run this on your personal system.
Agisoft Photoscan 1.2.6 x64 - 2D to 3D Image Manipulation Benchmark: link

Agisoft PhotoScan is a stand-alone software product that performs photogrammetric processing of 2D digital images and generates 3D spatial data to be used in GIS applications, cultural heritage documentation and visual effects production, as well as for indirect measurements of objects of various scales. We use the 50 images from the 'Building' sample data download page for our benchmark and take the total time to complete four steps with default settings: Align Photos, Build Dense Cloud, Build Model and Build Texture.
KeyShot 6.3 - 3D Rendering and Animation: link

KeyShot 3D rendering and animation software is one of the fastest, easiest ways to create amazing, photographic visuals of your 3D data. We installed KeyShot 6.3 to do some benchmarking and real-world stress testing using the camera_benchmark.bip scene that is included with the application. This benchmark renders an 800x554 pixel image with a continuous sample rate and shows the frames per second (FPS) at which the scene is being rendered. The scene has nearly 42,000 triangles and does a good job of using all available cores.
Blender 2.78a Open Source 3D Creation Benchmark: link

Blender is the free and open source 3D creation suite.
Media Encoding & Encryption Benchmarks
HandBrake v1.0.1 - link

HandBrake is an open-source, GPL-licensed, multiplatform, multithreaded video transcoder available for macOS, Linux and Windows. It is popular today as it allows you to transcode multiple input video formats to the H.264 output format and is highly multithreaded. We used Big Buck Bunny as our input file, which has become one of the world standards for video benchmarks. For our benchmark scenario we took a standard 2D 4K (3840x2160) 60 FPS clip in MP4 format and used HandBrake 1.0.1 to do two things. First, we used the new Fast 1080p30 preset to shrink it down to a 1920 x 1080 clip to reduce the file size, something people often do to save space when putting movies onto mobile devices. We also ran the workload using the Normal preset, as it puts the CPU under a higher load than the Fast 1080p30 preset.
X264 HD Encoding - link

The x264 HD Benchmark is a reproducible measure of how fast your machine can encode a short HD-quality video clip into a high-quality x264 video file. It's nice because everyone running it uses the same video clip and software. The video encoder (x264.exe) reports a fairly accurate internal benchmark (in frames per second) for each pass of the video encode, and it also uses multi-core processors very efficiently. All these factors make this an ideal benchmark for comparing different processors and systems. We are using x264 HD v5.0.1 for this test. Media Encoding Benchmark Results Summary: AMD has shown that the Ryzen 7 1800X does well in HandBrake, but we found it to be slightly behind the Intel Core i7-6900K in our tests. We benchmark two workloads because we found that transcoding to a different resolution than the original video didn't put the CPU at 100% load across all threads, while in the 'Normal' legacy test the resolution stays the same and the CPU is fully utilized. In the Fast 1080p30 test the 1800X was 2.8% slower, but it was 26% slower when all the CPU threads were being used. Strangely enough, the AMD Ryzen 7 1800X beat the Intel Core i7-6900K in the x264 benchmark on the important second pass, but got its lunch handed to it on the quicker first pass. Media encoding is certainly a strong point for Ryzen 7, and many will be purchasing one of these processors as it can save a ton of rendering/processing time!
VeraCrypt 1.19 - link

VeraCrypt is an open-source disk encryption software brought to you by IDRIX and is a fork of the discontinued TrueCrypt 7.1a utility. The developers claim that weaknesses found in TrueCrypt have been resolved in the VeraCrypt project. This is a popular utility for people who don't want to use BitLocker, Microsoft's built-in encryption tool for Windows 10. Encryption Benchmark Results Summary: If encryption is something you do, you'll find having more cores and threads to be very beneficial, as you can see from the results above. The AMD Ryzen 7 1800X finished with a score of 5.6 GB/s on the standard AES benchmark, which is just ahead of a stock Intel Core i7-7700K, but far behind the Intel Core i7-6900K.
3DMark & Cinebench
Futuremark 3DMark 2.2.3509 - link

3DMark is a popular gaming performance benchmark that includes everything you need to benchmark your PC, whether you're gaming on a desktop PC, laptop, notebook or tablet. 3DMark includes seven benchmark tests, and we'll be running 'Sky Diver', which is aimed at gaming laptops and mid-range PCs.
Maxon Cinebench R15.038 - link

CINEBENCH is a real-world, cross-platform test suite that evaluates your computer's performance capabilities. CINEBENCH is based on MAXON's award-winning animation software Cinema 4D, which is used extensively by studios and production houses worldwide for 3D content creation. MAXON software has been used in blockbuster movies such as Iron Man 3, Oblivion, Life of Pi, Prometheus and many more. 3DMark and Cinebench Benchmark Results Summary: The AMD Ryzen 7 1800X performs exceptionally well in the overall 3DMark score, boosted by a high Physics score that drives up the total! The Cinebench scores are impressive on the multi-threaded CPU test, with decent single-threaded performance. The Intel Core i7-2700K 'Sandy Bridge' processor debuted more than five years ago, and AMD can finally beat one of those processors even when it's overclocked to 4.5GHz. Those with older Intel Ivy Bridge and Sandy Bridge systems can finally see value in upgrading, as single-threaded performance is going to be the same or better and multi-threaded performance is at an entirely new level here in 2017. That said, OpenGL performance in Cinebench with the AMD Ryzen 7 1800X and the NVIDIA GeForce GTX 1080 video card was lower than we hoped at 112.67 FPS. The Intel Core i7-6900K gets 168.48 FPS, making the 6900K ~49.5% faster in OpenGL performance. Let's hope this isn't an indication of what is going to happen in game titles on the next page.
Discrete GPU Gaming Performance
Thief

Thief is a series of stealth video games in which the player takes the role of Garrett, a master thief in a fantasy/steampunk world resembling a cross between the Late Middle Ages and the Victorian era, with more advanced technologies interspersed. Thief is the fourth title in the series, developed by Eidos Montreal and published by Square Enix on February 25, 2014. We picked this game title for CPU testing as it is known to scale well with CPUs. We use the game's built-in benchmark and test with the default settings plus these changes: exclusive fullscreen, VSync off, 1920 x 1080, 60Hz.
Grand Theft Auto V

Grand Theft Auto V, currently one of the hottest PC games, was finally released for the PC on April 14, 2015. Developed by Rockstar, it is set in 2013 in the city of Los Santos. It utilizes the Rockstar Advanced Game Engine (RAGE), which Rockstar has been using since 2006, with multiple updates for technology improvements. We picked this game title for CPU testing as it is known to scale well with CPUs. We use the game's built-in benchmark and test with the default settings plus these changes: VSync off, 1920 x 1080, 60Hz.
Deus Ex: Mankind Divided

Deus Ex: Mankind Divided is an action role-playing stealth video game developed by Eidos Montreal and published by Square Enix. Set in a cyberpunk-themed dystopian world in 2029, two years after the events of Human Revolution, Mankind Divided features the return of Adam Jensen from the previous game, Deus Ex: Human Revolution, with new technology and body augmentations. The game was released on August 23rd, 2016 for PC, and we are using it to show DX12 performance on the CPUs that we tested. DX12 removes much of the CPU overhead, so we wanted to see what happens to performance on DX12 game titles as well. We use the game's built-in benchmark and test with the default settings plus these changes: DX12 enabled, exclusive fullscreen, VSync off, 1920 x 1080, 60Hz, medium graphics. Discrete Gaming Benchmark Results Summary: On the three game titles we tested, the AMD Ryzen 7 1800X showed somewhat disappointing results. In Thief at 1920 x 1080 with Normal image quality settings and VSync disabled, we averaged 108.8 FPS. This was 47% faster than the 'old' flagship AMD FX-9590 5GHz processor, but 38% slower than a stock Intel Core i7-7700K 'Kaby Lake' processor. Ryzen 7 processors in general fall about 15% short of Kaby Lake in IPC, but this is a larger performance drop than we expected. In Grand Theft Auto V, the AMD Ryzen 7 1800X was 42% faster than the AMD FX-9590, but 18% slower than a stock Intel Core i7-7700K. In Deus Ex: Mankind Divided we are looking at CPU performance in a DX12 title, so we expect the results to be tighter since DX12 reduces the CPU overhead. Here the Ryzen 7 1800X was 22% faster than the AMD FX-9590, but 22% slower than a stock Intel Core i7-7700K. This is more where we expected Ryzen's performance to be in game titles.
AMD made it clear to us that they don't expect to win in game benchmarks. Our results show that they don't, so we are happy that they didn't make any false claims here. AMD thinks they have become competitive for gamers, though, as they've moved the system bottleneck for gaming from the CPU back to the graphics card. AMD furthermore noted that they think most gamers buying a $300+ processor will be gaming at higher than a 1920 x 1080 screen resolution. Once you move to a 1440P or 4K display, they feel the performance difference between Intel and AMD isn't unreasonable, as you'll again be GPU limited due to the screen resolution rather than CPU limited, as the slides of AMD's internal results show. All that said, AMD believes that they can improve the low 1080P performance numbers with game engine optimizations and provided these statements for us from game developers:
“Oxide games is incredibly excited with what we are seeing from the Ryzen CPU. Using our Nitrous game engine, we are working to scale our existing and future game title performance to take full advantage of Ryzen and its 8-core, 16-thread architecture, and the results thus far are impressive. These optimizations are not yet available for Ryzen benchmarking. However, expect updates soon to enhance the performance of games like Ashes of the Singularity on Ryzen CPUs, as well as our future game releases.” - Brad Wardell, CEO Stardock and Oxide

"Creative Assembly is committed to reviewing and optimizing its games on the all-new Ryzen CPU. While current third-party testing doesn’t reflect this yet, our joint optimization program with AMD means that we are looking at options to deliver performance optimization updates in the future to provide better performance on Ryzen CPUs moving forward." – Creative Assembly, Developers of the Multi-award Winning Total War Series

Let's take a look at 1440P and 4K results for the game titles we tested on the next page.
Discrete GPU Gaming Performance - Part 2

AMD claims that the Ryzen 7 series processors perform comparably to many of the Intel Broadwell-E processors and slightly behind the Intel Kaby Lake processors when gaming at 2560 x 1440 and 3840 x 2160 resolutions, so we re-tested several processors to see if that is true. These numbers are slightly different from those on the previous page, as we completely re-did our testing, used a newer video card driver and ran fewer runs due to time constraints.
Grand Theft Auto V
Deus Ex: Mankind Divided

Discrete Gaming Benchmark Results Summary: It looks like AMD is telling the truth: once you game at a 4K resolution, the minimum, average and maximum frame rates are all pretty close to one another. For straight 1080P gaming, the Intel Kaby Lake processors still lead the way, and we were shocked to see how well the small, low-cost Intel Core i3-7350K did on these three older game titles. This is just a small look at games, but it gives you a general idea of overall gaming performance on Ryzen. Many gamers think more cores will be beneficial in the years to come, but it will take a long time for 16-thread games to become the norm. Let's take a look at power consumption.
Power Consumption

No review is complete without a look at power, and the AMD Ryzen 7 1800X actually did really well considering it is an 8-core processor with a 95W TDP. The AMD Ryzen 7 1800X system used 44.2W at idle, which is impressive as that includes the MSI X370 motherboard, NVIDIA GeForce GTX 1080 FE video card, SSD and HDD. The system topped out at 174W in Handbrake and 273W when playing Thief at 1080P.
AMD Ryzen 7 1800X and 1700 Temperatures

We used AIDA64 to look at temperatures, as Ryzen Master wasn't given to us while we were doing our performance testing. We aren't sure how accurate the AIDA64 numbers are, but we'll let you take a look. With the Noctua air cooler we were getting idle temps of around 39C and load temps of 52C on the Ryzen 7 1800X in a 10-minute AIDA64 stability test. On the AMD Ryzen 7 1700 we were getting 37C idle temps and 46C load temps. Yes, we know the elapsed time is different for each processor, but the temps flat-line pretty quickly on air cooling. If these temperatures are accurate, we are pretty impressed! We'll fire up Ryzen Master and look at temps soon.
AMD Ryzen 7 1800X Overclocking

AMD didn't leave much headroom in the Ryzen 7 1800X, as they were trying to wring out all the clock speed they could to be competitive with Intel. We still wanted to see just how far we could get with overclocking, though, and gave it a shot. At stock voltage we managed to get up to 3.9GHz on all cores with full stability, which meant slower single-threaded performance: when you overclock all cores you no longer get the boost up to 4.0GHz or the XFR boost of up to 4.1GHz on two cores. This is why in Cinebench we went up to 1720 points on the multi-threaded CPU test but dropped to 154 points on the single-threaded test, from stock numbers of 1635 points multi-threaded and 163 points single-threaded. By increasing the CPU voltage to 1.46V from our board's 'auto' setting of 1.34V, we managed to get up to 4.1GHz on all cores. This took our multi-threaded score up to 1814 points and took single-threaded performance right back to where it was out of the box, since 4.1GHz is the top clock speed by default with boost and XFR. Running 4.2GHz was possible on our system, but it wasn't stable in long video encoding benchmarks; we were up to 1.5V and stopped increasing the voltage at that point. These numbers put the AMD Ryzen 7 1800X right on the heels of the Intel Core i7-6950X in Cinebench on both multi-threaded and single-threaded CPU performance. The all-core overclock also improved gaming performance, as you can see from the chart above. A 4.7% gain is close to what we'd consider a significant performance boost, so we will take it! Playing Thief at 1080P at stock settings had an overall system power draw of 273 Watts; that increased to 281 Watts at 3.9GHz and again to 293 Watts at 4.1GHz. So, power went up by about 20 Watts while gaming with the overclock.
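As a quick sanity check on the cost/benefit of the overclock, the percentage figures follow directly from the measured power numbers above:

```python
# Verifying the overclocking power figures from our Thief 1080P testing:
# 273 W at stock vs 293 W at 4.1 GHz all-core.
def pct_change(before, after):
    """Percentage change from 'before' to 'after'."""
    return (after - before) / before * 100

print(f"{pct_change(273, 293):.1f}%")  # prints 7.3%
```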
A 4.7% performance gain for a 7.3% power increase may not be worth it for those worried about power consumption. Even overclocked to 4.1GHz on all cores, the Ryzen 7 1800X still trails the Intel Core i7-2700K in 1080P gaming performance, which is still hard for us to understand when the multi-threaded performance is so good and the single-threaded performance in applications like Cinebench matches a 2700K. Really odd, but the numbers don't lie! Update 03/02/2017 2pm CT: Using the Corsair Hydro Series H110i Extreme Performance Liquid CPU Cooler ($134.99 shipped), we managed to hit 4.2GHz on the Ryzen 7 1800X, but we needed to run 1.48-1.50V on the CPU and it was only partially stable. Benchmarks like Cinebench and games couldn't run, but we were able to pull off a CPU-Z run at 4.2GHz, so that is better than nothing, right? Let's wrap up this review!
Final Thoughts and Conclusions

AMD sees Ryzen 7 processors as the perfect fit for multi-threaded (nT) work with slightly lower single-threaded (1T) performance, and our testing shows that is true for the most part. In applications where all cores are being used, the 8-core, 16-thread Ryzen parts usually do exceptionally well. There were some instances where lightly threaded applications like Dolphin offered lower than expected performance, and even a heavily threaded application like PhotoScan left us a little disappointed after seeing the impressive Cinebench, Blender and Handbrake results that AMD has been showing off for months. If there is one area of the AMD Ryzen processors that is a disappointment and needs to be improved upon, it would most certainly be 1080P gaming performance. We only tested a few game titles in this CPU review, but the Ryzen 7 1800X was 17-38% slower than the stock Intel Core i7-7700K 'Kaby Lake' quad-core processor. The AMD Ryzen 7 1800X just can't compete at 1080P where the GPU bottleneck is minimal, but when you increase the screen resolution to 1440P or 4K and shift the bottleneck from the CPU to the GPU, Ryzen performs at basically the same level. AMD never once said that they'd beat Intel in gaming benchmarks, but they did say they would have comparable 4K gaming performance, and that appeared true in our tests using a single NVIDIA GeForce GTX 1080 graphics card. What remains to be seen is whether that will hold true with multiple graphics cards in SLI or CrossFire, as well as with upcoming next-generation video cards like AMD's Vega GPUs. The state of gaming on AMD Ryzen right now is that at 1080P with a high-end graphics card the performance is significantly behind Intel, but at 1440P and 4K we would call it comparable. AMD has since informed us that they think game-engine optimizations from developers will help the 1080P situation.
“Oxide games is incredibly excited with what we are seeing from the Ryzen CPU. Using our Nitrous game engine, we are working to scale our existing and future game title performance to take full advantage of Ryzen and its 8-core, 16-thread architecture, and the results thus far are impressive. These optimizations are not yet available for Ryzen benchmarking. However, expect updates soon to enhance the performance of games like Ashes of the Singularity on Ryzen CPUs, as well as our future game releases.” - Brad Wardell, CEO Stardock and Oxide

"Creative Assembly is committed to reviewing and optimizing its games on the all-new Ryzen CPU. While current third-party testing doesn’t reflect this yet, our joint optimization program with AMD means that we are looking at options to deliver performance optimization updates in the future to provide better performance on Ryzen CPUs moving forward." – Creative Assembly, Developers of the Multi-award Winning Total War Series

AMD firmly believes that they can improve gaming performance with Ryzen optimizations, as all the games we tested were optimized on Intel, so they feel the testing is one-sided right now. They also have a Windows driver coming in approximately one month that will help performance, as the Windows High Precision Event Timer (HPET) isn't playing nicely with the SenseMI sensors that poll the CPU status every millisecond.
“AMD expects to create a simplified solution in the next month to set a Windows profile that pairs the optimum performance experienced in the High Performance power plan, with the energy efficiency experienced in the Balanced power plan.” - AMD

AMD sent us a list of recommendations for getting the best performance out of Ryzen, and those tips can be seen below:
- Use a fresh OS image of Windows. We’ve seen performance improvements with a clean install of Windows vs. a re-used install that was originally configured for another processor.
- Make sure there are no CPU temperature or frequency monitoring tools running in the background. Real-time measurement can impact performance by up to 5%.
- Make sure the system has Windows High Precision Event Timer (HPET) disabled. HPET can often be disabled in the BIOS. Alternatively: from Windows, open an administrative command shell and type: bcdedit /deletevalue useplatformclock – this can improve performance by 5-8%.
- AMD Ryzen Master requires HPET for its accurate measurements, so it is important to disable HPET again if you installed and used Ryzen Master prior to game benchmarking.
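Since the HPET setting can silently come back after a Ryzen Master install, it may be worth checking it from an administrative prompt with `bcdedit /enum {current}` before each benchmarking session. As a rough sketch, a small script could flag whether the `useplatformclock` entry is still set; this assumes the English-locale `bcdedit` output format, and the sample text below is illustrative only:

```python
import subprocess

def hpet_forced(bcdedit_output: str) -> bool:
    """Return True if the boot entry forces the platform clock (HPET).

    Parses text in the style of `bcdedit /enum {current}` output, where a
    `useplatformclock        Yes` line means Windows is forced onto HPET.
    """
    for line in bcdedit_output.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0].lower() == "useplatformclock":
            return parts[1].lower() in ("yes", "true")
    # No useplatformclock entry at all means HPET is not being forced.
    return False

def check_current_boot_entry() -> bool:
    """Query bcdedit on Windows (requires an elevated command prompt)."""
    out = subprocess.run(
        ["bcdedit", "/enum", "{current}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return hpet_forced(out)

if __name__ == "__main__":
    # Hypothetical sample output, for illustration only.
    sample = """Windows Boot Loader
-------------------
identifier              {current}
device                  partition=C:
description             Windows 10
useplatformclock        Yes
"""
    print("HPET forced:", hpet_forced(sample))
```

If the check comes back true, running `bcdedit /deletevalue useplatformclock` (as in the tip above) and rebooting should clear it.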
- Ryzen 7 1800X – $499 – Amazon.com
- Ryzen 7 1700X – $399 – Amazon.com
- Ryzen 7 1700 – $329 – Amazon.com