Intel’s newest line of desktop processors brings with it a number of changes designed to sway favor with performance enthusiasts. These new parts bring Intel’s consumer processors up to eight cores, with higher frequencies, better thermal conductivity, and extra hardware security fixes for Spectre and Meltdown. The only catch is that you’re going to need a large wallet and a big cooler: both price and power consumption hit new highs this time around.

Coffee Lake Refresh: A Refresher

Our coverage of Intel’s announcement last week went into detail about the three new processors being launched today, but here’s a quick reminder of the latest silicon on the market.

Today, three CPUs are being launched: the 8-core Core i9-9900K capable of hitting 5.0 GHz out of the box, an 8-core Core i7-9700K that’s a bit cheaper, and a 6-core Core i5-9600K that on paper looks like it could be a killer purchase.

Intel 9th Gen Core
AnandTech        Price  Cores  TDP   Freq (Base/Turbo)  L3     L3/Core  DDR4  iGPU    iGPU Turbo
Core i9-9900K    $488   8/16   95 W  3.6 / 5.0 GHz      16 MB  2.0 MB   2666  GT2     1200 MHz
Core i7-9700K    $374   8/8    95 W  3.6 / 4.9 GHz      12 MB  1.5 MB   2666  GT2     1200 MHz
Core i5-9600K    $262   6/6    95 W  3.7 / 4.6 GHz      9 MB   1.5 MB   2666  GT2     1150 MHz
8th Gen
Core i7-8086K    $425   6/12   95 W  4.0 / 5.0 GHz      12 MB  2.0 MB   2666  24 EUs  1200 MHz
Core i7-8700K    $359   6/12   95 W  3.7 / 4.7 GHz      12 MB  2.0 MB   2666  24 EUs  1200 MHz
Core i5-8600K    $258   6/6    95 W  3.6 / 4.3 GHz      9 MB   1.5 MB   2666  24 EUs  1150 MHz
Core i3-8350K    $179   4/4    91 W  4.0 GHz            8 MB   2.0 MB   2400  24 EUs  1150 MHz
Pentium G5600    $93    2/4    54 W  3.9 GHz            4 MB   2.0 MB   2400  24 EUs  1100 MHz

The new halo product is the Core i9-9900K, Intel’s first mainstream desktop socketed processor to carry the Core i9 name. This is an eight-core, sixteen-thread processor, Intel’s first in this product line. It offers a base frequency of 3.6 GHz and a peak turbo frequency of 5.0 GHz – which is actually a two-core turbo, as we go into below. This is an overclockable processor, allowing users to push the frequency if the cooling is sufficient, and although the memory controller is still rated at DDR4-2666, higher-speed memory should work in almost every chip. The Core i9-9900K also gets a fully-enabled cache, with 2 MB available per core for a chip-wide total of 16 MB. There is also integrated graphics, the same UHD 630 we saw on the previous generation. This all comes in at a $488 suggested retail price, although no cooler is bundled.

The Core i7 now sits in the ‘middle’ of the stack, but the Core i7-9700K is seemingly no slouch. Intel has done away with hyper-threading on this part, giving it eight cores and eight threads only; however, it does have a base frequency of 3.6 GHz and a turbo frequency of 4.9 GHz. For this part Intel has reduced the L3 cache to 1.5 MB per core, which might have an effect on some software, but the processor is overclockable and features the same DDR4-2666 support as the Core i9. The $374 suggested retail price is a bit easier to digest for sure, with the user safe in the knowledge that no two threads are sharing resources on a single core. This chip will be an interesting comparison against the last-generation Core i7-8700K, which has two fewer cores but does have hyper-threading.

The Core i5-9600K suddenly becomes the baby overclocking chip, but still commands a $262 price, a few dollars more than the Core i5-8600K but in exchange for extra frequency and all the extras listed later in this article. For the money, this chip has a base frequency of 3.7 GHz and a turbo frequency of 4.6 GHz, along with the same DDR4-2666 support and UHD 630 graphics.

All three parts are the first entrants in Intel’s 9th Generation Core product line, and under the hood they feature a refresh of the Coffee Lake architecture we saw in the 8th Generation Core products. They are built on Intel’s 14nm++ manufacturing node, the latest version of the process, which prioritizes high frequency and performance. The key highlights of this set of three processors, aside from all being overclockable, come down to what Intel has done under the hood.

Per Core Turbo Ratios

In our information escapades, we were able to obtain the per-core turbo values for each processor. Intel still classifies this information as ‘proprietary’, so it does not distribute it. However, Intel’s partners are more than happy to give us the information, given that it has to be coded into the system BIOS anyway.

The big uplift here is that 5.0 GHz turbo. In our Core i7-8086K review, where Intel was happy to promote that chip as its first 5.0 GHz product, the fact that the 5.0 GHz value applied to only a single core was actually a downside – no matter how we tested the processor, there was usually enough running on more than one core that no user would realistically see 5.0 GHz at all. We only ever saw it flick up momentarily while the system sat idle. The fact that the Core i9-9900K now sustains it across two cores means that we are far more likely to see this high frequency in our single-threaded testing.

More Coffee, Less Caffeine: Hyper-Threading and L3 Cache

All this aside, it would appear that Intel is also forgoing hyper-threading on most of its processors. The only Core processors to get hyper-threading will be the Core i9 parts, and perhaps the Pentiums as well. This is partly to help make the product stack more linear, so that cheaper chips do not tread on the toes of the more expensive ones (e.g. though unlikely, a quad-core with hyper-threading might outperform a six-core without it). The other angle is one of the recently discovered side-channel attacks that can occur when hyper-threading is in action. By disabling hyper-threading on the volume production chips, this security issue is no longer present. It also ensures that threads on those chips never compete for per-core resources.

One of the more interesting dissections of the new 9th Generation product is in the L3 cache per core for the different models. In previous generations, the Core i7 parts had 2 MB of L3 cache per core, while the Core i5 had 1.5 MB per core, and the Core i3 was split between some parts with 2 MB and others with 1.5 MB. This time around, Intel is only putting the full cache on the top Core i9 parts, and is reducing the Core i7 to 1.5 MB of L3 per core. This will have a slight knock-on effect on performance, which will be an interesting metric to test when we get the processors in hand.

Integrated Graphics

One topic that Intel has not focused on much for several generations (since Broadwell, really) is integrated graphics. All the chips announced for the 9th Generation family will still have the same GT2 configuration as the 8th Generation, including the new Core i9 parts; officially these come under the 8+2 designation. Intel still believes that having a form of integrated graphics on these high-end, overclockable processors is a value addition to the platform. The only downside is the performance, which won’t be winning any awards soon.

The graphics will still be labelled as UHD Graphics 630, and use the same drivers as the 8th gen family.

Coffee Lake Refresh: Learning from the GPU Companies

Intel’s 9th Generation Core family is built around the Coffee Lake platform, and as the processors have not had any microarchitectural changes, they are refreshes of the 8th Generation parts with the product stack laid out a little differently. For those keeping track, Coffee Lake was already a rehash of Kaby Lake, which was an update to Skylake – so we are on Skylake Refresh Refresh Refresh, making for what is essentially the same 2015 core microarchitecture now going into 2018 (and beyond).

Intel's Core Architecture Cadence
Core Generation  Microarchitecture      Process Node  Release Year
2nd              Sandy Bridge           32nm          2011
3rd              Ivy Bridge             22nm          2012
4th              Haswell                22nm          2013
5th              Broadwell              14nm          2014
6th              Skylake                14nm          2015
7th              Kaby Lake              14nm+         2016
8th              Kaby Lake-R            14nm+         2017
                 Coffee Lake-S          14nm++        2017-2018
                 Kaby Lake-G            14nm+         2018
                 Coffee Lake-U/H        14nm++        2018
                 Whiskey Lake-U         14nm++        2018
                 Amber Lake-Y           14nm+         2018
                 Cannon Lake-U          10nm          2017*
9th              Coffee Lake Refresh    14nm**        2018
Unknown          Ice Lake (Consumer)    10nm?         2019?
                 Cascade Lake (Server)  14nm**        2018
                 Cooper Lake (Server)   14nm**        2019
                 Ice Lake (Server)      10nm          2020
* Single CPU For Revenue
** Intel '14nm Class'

Intel has promised that its 10nm manufacturing process will ramp through 2019, and has already announced that it will introduce Ice Lake for servers on 10nm in 2020, after another run of 14nm with Cooper Lake in 2019. On the consumer side, the status is still in limbo – with any luck, the next generation of consumer parts will be a proper update to the microarchitecture, regardless of the process node.

I’ve had an 8-Core for Years!

Depending on where you draw the line for ‘consumer’ processors, technically we have had 8-core Intel CPUs in the high-end desktop space for a number of years. The Core i7-5960X was released in August 2014, featuring eight Haswell cores on the HEDT platform, with quad-channel DDR4-2133 memory and 40 PCIe lanes at 140 W. Back then, on Intel’s 22nm process, the die size was around 355.52 mm2.

Back when Intel launched the first Coffee Lake processors, the 6+2 die design of the i7-8700K was around ~151 mm2, an increase of ~26mm2 over the 4+2 design of the i7-7700K (~125mm2). Back then, that was a jump from Intel’s official 14+ to 14++ manufacturing nodes, which due to a relaxed fin pitch made everything a bit bigger anyway.

But if we take ~26 mm2 as the going rate for adding a pair of cores, then we can predict that the 8+2 design of the Core i9-9900K should come in at around ~177 mm2, a 17% larger die. At 177 mm2 including integrated graphics, this would be half the size of the Core i7-5960X, although with only half the memory controllers and far fewer PCIe lanes too. Even so, it is a sizeable decrease.

Naïvely one might suggest that a 17% increase in die area should directly translate into a 17% increase in price. A 17% increase on the tray price of the Core i7-8700K puts it in the region of $420, whereas the official price is $488 for the K-equivalent processor. Given how Intel bins its chips (the same die can be sold for half as much under a different name), it is hard to say how much the $488 price improves profit margins, although it is widely expected that it does.
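This back-of-the-envelope math is easy to sanity check; the figures below are the approximate die sizes and tray price quoted above, not measured values:

```python
# Back-of-the-envelope die-size and price scaling, using the approximate
# figures quoted in the text (not measured values).
i7_7700k_area = 125.0  # 4+2 die, ~mm^2
i7_8700k_area = 151.0  # 6+2 die, ~mm^2

# Adding two cores cost roughly this much area last generation:
two_core_delta = i7_8700k_area - i7_7700k_area  # ~26 mm^2

# Predicted 8+2 die for the Core i9-9900K:
i9_9900k_area = i7_8700k_area + two_core_delta     # ~177 mm^2
area_increase = i9_9900k_area / i7_8700k_area - 1  # ~17%

# Naive assumption: price scales linearly with die area.
i7_8700k_tray = 359.0
naive_9900k_price = i7_8700k_tray * (1 + area_increase)  # ~$421, vs $488 MSRP
```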


Die sizes from Wikichip

If we look at die sizes of the top-end chips, through the decade of quad cores the die size was actually decreasing, from over 260 mm2 for the quad-core Nehalem down to 125 mm2 for Kaby Lake. It has since steadily increased as more and more cores have been added. It is hard to imagine Intel happily spending 260+ mm2 on a mainstream silicon die today on its latest manufacturing process.

Over on the next page, we’ll cover Spectre/Meltdown fixes and discuss the updates to Intel’s STIM strategy.

Pages In This Review

  1. Coffee Lake Refresher
  2. Spectre, Meltdown, and STIM
  3. Test Bed and Setup
  4. 2018 and 2019 Benchmark Suite: Spectre and Meltdown Hardened
  5. CPU Performance: System Tests
  6. CPU Performance: Rendering Tests
  7. CPU Performance: Office Tests
  8. CPU Performance: Encoding Tests
  9. CPU Performance: Web and Legacy Tests
  10. Gaming: World of Tanks enCore
  11. Gaming: Final Fantasy XV
  12. Gaming: Shadow of War
  13. Gaming: Civilization 6
  14. Gaming: Ashes Classic
  15. Gaming: Strange Brigade
  16. Gaming: Grand Theft Auto V
  17. Gaming: Far Cry 5
  18. Gaming: Shadow of the Tomb Raider
  19. Gaming: F1 2018
  20. Gaming: Integrated Graphics
  21. Power Consumption
  22. Overclocking
  23. Conclusions and Final Words

The Spectre and Meltdown vulnerabilities made quite a splash earlier this year, forcing makers of hardware and software to release updates in order to tackle them. There are several ways to fix the issues, including software, firmware, and hardware updates. Each generation of product is slowly implementing fixes, including some of the new 9th Generation processors.

At this point Intel has split the list into six variants covering the different types of vulnerability. For all processors launched from mid-2018 onwards, here is what the fix table looks like:

Spectre and Meltdown on Intel
AnandTech                                         SKX-R / 3175X  CFL-R          Cascade Lake   Whiskey Lake   Amber Lake
Spectre Variant 1    Bounds Check Bypass          OS/VMM         OS/VMM         OS/VMM         OS/VMM         OS/VMM
Spectre Variant 2    Branch Target Injection      Firmware + OS  Firmware + OS  Hardware + OS  Firmware + OS  Firmware + OS
Meltdown Variant 3   Rogue Data Cache Load        Firmware       Hardware       Hardware       Hardware       Firmware
Meltdown Variant 3a  Rogue System Register Read   Firmware       Firmware       Firmware       Firmware       Firmware
Variant 4            Speculative Store Bypass     Firmware + OS  Firmware + OS  Firmware + OS  Firmware + OS  Firmware + OS
Variant 5            L1 Terminal Fault            Firmware       Hardware       Hardware       Hardware       Firmware

The new 9th Generation processors, listed as CFL-R (Coffee Lake Refresh), implement hardware fixes for Variant 3 (Rogue Data Cache Load) and Variant 5 (L1 Terminal Fault).

Because the new chips required new masks for manufacturing anyway, Intel was able to make these changes. The goal of moving the fixes into hardware is that the processor is always protected, regardless of OS or environment, with the hope that the overhead a software fix would otherwise create is lessened when done in hardware.

(S)TIM: Soldered Down Processors

The desktop processors we use today are built from a silicon die (the smart bit), a package substrate (the green bit), a heatspreader (the silver bit), and a material that helps transfer heat from the silicon die to the heatspreader. The quality of the bond between the die and the heatspreader through this thermal interface material is a key component in the processor’s ability to shed the heat generated when using it.

Traditionally there are two different types of thermal material: a heat conductive paste, or a bonded metal. Both have positives and negatives.

The heat conductive paste is a universal tool – it can be applied to practically any manufactured processor, and is able to deal with a wide range of changing conditions. Because metals expand with temperature, when a processor is used and gets hot, it expands – and so does the heatspreader. The paste can easily deal with this, which allows paste-based processors to live longer and in more environments. Using a bonded metal typically reduces the amount of thermal cycling a part can endure, as the metal also expands and contracts, but not in a fluid way. This might mean the processor has a rated lifespan of several years, rather than a dozen. However, the bonded metal solution performs a lot, lot better – metal conducts heat better than silicon-based pastes – though it is slightly more expensive (a dollar or two per unit, at most, when the materials and manufacturing are taken into account).

Thermal Interface
Intel                 Celeron  Pentium  Core i3  Core i5  Core i7  Core i9 / HEDT
Sandy Bridge LGA1155  Paste    Paste    Paste    Bonded   Bonded   Bonded
Ivy Bridge LGA1155    Paste    Paste    Paste    Paste    Paste    Bonded
Haswell / DK LGA1150  Paste    Paste    Paste    Paste    Paste    Bonded
Broadwell LGA1150     Paste    Paste    Paste    Paste    Paste    Bonded
Skylake LGA1151       Paste    Paste    Paste    Paste    Paste    Paste
Kaby Lake LGA1151     Paste    Paste    Paste    Paste    Paste    -
Coffee Lake 1151 v2   Paste    Paste    Paste    Paste    Paste    -
CFL-R 1151 v2         ?        ?        ?        K-series = Bonded
AMD
Zambezi AM3+    Bonded          Carrizo AM4    Bonded
Vishera AM3+    Bonded          Bristol R AM4  Bonded
Llano FM1       Paste           Summit R AM4   Bonded
Trinity FM2     Paste           Raven R AM4    Paste
Richland FM2    Paste           Pinnacle AM4   Bonded
Kaveri FM2+     Paste/Bonded*   TR TR4         Bonded
Carrizo FM2+    Paste           TR2 TR4        Bonded
Kabini AM1      Paste
*Some Kaveri Refresh were bonded

In our Ryzen APU delidding article, we went through the process of removing the heatspreader and conductive paste from a popular low cost product, and we showed that replacing that paste with a bonded liquid metal improved temperatures, overclocking, and performance in mid-range overclocks. If any company wants to make enthusiasts happy, using a bonded metal is the way to go.

For several years, Intel has stated that it is there for enthusiasts. In the distant past, as the table above shows, Intel provided processors with a soldered bonded-metal interface and was happy to do so. In recent times, however, the whole product line was pushed onto heat conductive paste for a number of reasons.

As Intel kept saying that it still cared about enthusiasts, a number of users were concerned that Intel was getting itself confused – some believed that Intel treated ‘enthusiasts’ and ‘overclockers’ as two distinct, non-overlapping categories. It is what it is, but now Intel has returned to applying STIM and wants to court overclockers again.

Intel has officially confirmed that new 9th generation processors will feature a layer of solder making up the TIM between the die and the IHS. The new processors with solder include the Core i9-9900K, the Core i7-9700K and Core i5-9600K.

As we’ll show in this review, the combination of STIM and other features is of great assistance when pushing the new processors to their overclocking limits. Intel’s own overclocking team at the launch event temporarily hit 6.9 GHz using exotic sub-zero coolants such as liquid nitrogen.

One of the worst-kept secrets this year has been Intel’s Z390 chipset. If you believe everything the motherboard manufacturers have told me, most of them have been ready for this release for several months, which is why around 55 new motherboards are hitting the market this month and into the next.

The Z390 chipset is an update to Z370, and both types of motherboards will support 8000-series and 9000-series processors (Z370 will need a BIOS update). The updates are similar to the updates seen with B360: native USB 3.1 10 Gbps ports, and integrated Wi-Fi on the chipset.

Intel Z390, Z370 and Z270 Chipset Comparison
Feature                               Z390   Z370   Z270
Max PCH PCIe 3.0 Lanes                24     24     24
Max USB 3.1 (Gen2/Gen1)               6/10   0/10   0/10
Total USB                             14     14     14
Max SATA Ports                        6      6      6
CPU PCIe Config (all three)           x16, x8/x8, or x8/x4/x4
Memory Channels                       2      2      2
Intel Optane Memory Support           Y      Y      Y
Intel Rapid Storage Technology (RST)  Y      Y      Y
Max Rapid Storage Technology Ports    3      3      3
Integrated 802.11ac Wi-Fi MAC         Y      N      N
Intel Smart Sound                     Y      Y      Y
Integrated SDXC (SDA 3.0) Support     Y      N      N
DMI                                   3.0    3.0    3.0
Overclocking Support                  Y      Y      Y
Intel vPro                            N      N      N
Max HSIO Lanes                        30     30     30
ME Firmware                           12     11     11

The integrated Wi-Fi uses CNVi, which allows the motherboard manufacturer to use one of Intel’s three companion RF (CRF) modules as a PHY, rather than using a potentially more expensive MAC+PHY combo from a different vendor (such as Broadcom). I have been told that implementing a CRF adds about $15 to the retail price of the board, so we are likely to see some vendors experiment with mid-price models with and without Wi-Fi using this method.


ASRock Z390 Phantom Gaming-ITX/ac

For the USB 3.1 Gen 2 ports, Type-A ports are supported natively, while motherboard manufacturers will have to use re-driver chips to support Type-C reversibility. These come at extra cost, as one might expect. It will be interesting to see how manufacturers mix and match the Gen 2, Gen 1, and USB 2.0 ports on the rear panels now that they have a choice. I suspect it will come down to signal integrity of the traces on the motherboard.


MSI MEG Z390 Godlike

For the Z390 chipset and motherboards, we have our usual every-board overview post, covering every model the manufacturers would tell us about. Interestingly, there is going to be a mini-ITX board with Thunderbolt 3, and one board with a PLX chip! There are also some motherboards with Realtek’s 2.5G Ethernet controller – now if only we also had consumer-grade switches.

As per our processor testing policy, we take a premium category motherboard suitable for the socket, and equip the system with a suitable amount of memory running at the manufacturer's maximum supported frequency. This is also typically run at JEDEC subtimings where possible.

It is noted that some users are not keen on this policy, stating that sometimes the maximum supported frequency is quite low, or that faster memory is available at a similar price, or that the JEDEC speeds can be prohibitive for performance. While these comments make sense, ultimately very few users apply memory profiles (XMP or otherwise) as they require interaction with the BIOS, and most users will fall back on JEDEC-supported speeds – this includes home users as well as industry customers who might want to shave a cent or two off the cost, or stay within the margins set by the manufacturer. Where possible, we will extend our testing to include faster memory modules, either at the same time as the review or at a later date.

Test Setup
Platform       CPUs                            Motherboard (BIOS)                Cooler                   Memory
Intel 9th Gen  i9-9900K / i7-9700K / i5-9600K  ASRock Z370 Gaming i7 (P1.70)     TRUE Copper              Crucial Ballistix 4x8GB DDR4-2666
Intel 8th Gen  i7-8086K / i7-8700K / i5-8600K  ASRock Z370 Gaming i7 (P1.70)     TRUE Copper              Crucial Ballistix 4x8GB DDR4-2666
Intel 7th Gen  i7-7700K / i5-7600K             GIGABYTE X170 ECC Extreme (F21e)  Silverstone AR10-115XS*  G.Skill RipjawsV 2x16GB DDR4-2400
Intel 6th Gen  i7-6700K / i5-6600K             GIGABYTE X170 ECC Extreme (F21e)  Silverstone AR10-115XS*  G.Skill RipjawsV 2x16GB DDR4-2133
Intel HEDT     i9-7900X / i7-7820X / i7-7800X  ASRock X299 OC Formula (P1.40)    TRUE Copper              Crucial Ballistix 4x8GB DDR4-2666
AMD 2000       R7 2700X / R5 2600X / R5 2500X  ASRock X370 Gaming K4 (P4.80)     Wraith Max*              G.Skill SniperX 2x8GB DDR4-2933
AMD 1000       R7 1800X                        ASRock X370 Gaming K4 (P4.80)     Wraith Max*              G.Skill SniperX 2x8GB DDR4-2666
AMD TR4        TR 1920X                        ASUS ROG X399 Zenith (0078)       Enermax Liqtech TR4      G.Skill FlareX 4x8GB DDR4-2666
GPU            Sapphire RX 460 2GB (CPU tests); MSI GTX 1080 Gaming 8G (gaming tests)
PSU            Corsair AX860i / Corsair AX1200i
SSD            Crucial MX200 1TB
OS             Windows 10 x64 RS3 1709, Spectre and Meltdown patched
*VRM supplemented with SST-FHP141-VF 173 CFM fans

We must thank the following companies for kindly providing hardware for our multiple test beds. Some of this hardware is not in this test bed specifically, but is used in other testing.

Hardware Providers
  • Sapphire RX 460 Nitro
  • MSI GTX 1080 Gaming X OC
  • Crucial MX200 + MX500 SSDs
  • Corsair AX860i + AX1200i PSUs
  • G.Skill RipjawsV, SniperX, FlareX
  • Crucial Ballistix DDR4
  • Silverstone coolers
  • Silverstone fans

Spectre and Meltdown Hardened

In order to keep our testing up to date, we have to update our software every so often to stay relevant. In our updates we typically implement the latest operating system, the latest patches, the latest software revisions, and the newest graphics drivers, as well as add new tests or remove old ones. As regular readers will know, our CPU testing revolves around an automated test suite, and depending on how the newest software works, the suite either needs to change, be updated, have tests removed, or be rewritten completely. The last time we did a full rewrite, it took the best part of a month, including regression testing (testing older processors).

One of the key elements of our testing update for 2018 (and 2019) is that our scripts and systems are designed to be hardened against Spectre and Meltdown. This means making sure that all of our BIOSes are updated with the latest microcode, and that all the operating system updates are in place. In this case we are using Windows 10 x64 Enterprise 1709 with the April security updates, which enforce Smeltdown (our combined name) mitigations. Users might ask why we are not running Windows 10 x64 RS4, the latest major update that doesn't delete your data – this is due to some new features that are giving uneven results. Rather than spend a few weeks learning to disable them, we’re going ahead with RS3, which has been widely used.

Our previous benchmark suite was split into several segments depending on how the test is usually perceived. Our new test suite follows similar lines, and we run the tests based on:

  • Power
  • Memory
  • Office
  • System
  • Render
  • Encoding
  • Web
  • Legacy
  • Integrated Gaming
  • CPU Gaming

Depending on the focus of the review, the order of these benchmarks might change, or some may be left out of the main review. All of our data will reside in our online benchmark database, Bench, for which there is a new ‘CPU 2019’ section for all of our new tests.

Within each section, we will have the following tests:

Power

Our power tests consist of running a substantial workload for every thread in the system, and then probing the power registers on the chip to find out details such as core power, package power, DRAM power, IO power, and per-core power. This all depends on how much information is given by the manufacturer of the chip: sometimes a lot, sometimes not at all.
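As a concrete example of what ‘probing the power registers’ can look like: on Linux, Intel chips expose their RAPL energy counters through the powercap sysfs interface. The sketch below samples the package-domain counter twice and converts the delta into watts; the sysfs path is the standard location, but the exact domain layout varies by chip, and this is our illustration rather than the exact tooling used in the review.

```python
# Sketch: sample package power via Linux's RAPL powercap interface
# (Intel-only). Illustrative, not the review's actual tooling.
import time

RAPL_PKG = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package domain

def read_energy_uj(path=RAPL_PKG):
    # The counter is a monotonically increasing energy value in microjoules.
    with open(path) as f:
        return int(f.read())

def watts_from_counters(e0_uj, e1_uj, seconds):
    # microjoules -> joules, divided by the sampling interval
    return (e1_uj - e0_uj) / 1e6 / seconds

def package_watts(interval=1.0):
    """Average package power over `interval` seconds, in watts."""
    e0 = read_energy_uj()
    time.sleep(interval)
    e1 = read_energy_uj()
    return watts_from_counters(e0, e1, interval)
```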

We are currently running POV-Ray as our main test for Power, as it seems to hit deep into the system and is very consistent. In order to limit the number of cores for power, we use an affinity mask driven from the command line.
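An affinity mask is just a bitfield with one bit per logical core. A minimal sketch of building one for the first n cores (the POV-Ray invocation itself is omitted; both Windows’ `start /affinity` and Linux’s `taskset` accept such a mask):

```python
# An affinity mask is a bitfield with one bit per logical core.
# This builds the hexadecimal mask for the first n cores.
def affinity_mask(n_cores):
    if n_cores < 1:
        raise ValueError("need at least one core")
    return hex((1 << n_cores) - 1)

# e.g. restrict a run to 4 threads (mask 0xf):
#   start /affinity f povray.exe ...   (Windows cmd)
#   taskset 0xf povray ...             (Linux)
```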

Memory

These tests involve disabling all turbo modes in the system, forcing it to run at base frequency, and then running both a memory latency checker (Intel’s Memory Latency Checker works equally well on both platforms) and AIDA64 to probe cache bandwidth.

Office

  • Chromium Compile: Windows VC++ Compile of Chrome 56 (same as 2017)
  • PCMark10: Primary data will be the overview results – subtest results will be in Bench
  • 3DMark Physics: We test every physics sub-test for Bench, and report the major ones (new)
  • GeekBench4: By request (new)
  • SYSmark 2018: Recently released by BAPCo, currently automating it into our suite (new, when feasible)

System

  • Application Load: Time to load GIMP 2.10.4 (new)
  • FCAT: Time to process a 90 second ROTR 1440p recording (same as 2017)
  • 3D Particle Movement: Particle distribution test (same as 2017) – we also have AVX2 and AVX512 versions of this, which may be added later
  • Dolphin 5.0: Console emulation test (same as 2017)
  • DigiCortex: Sea Slug Brain simulation (same as 2017)
  • y-Cruncher v0.7.6: Pi calculation with optimized instruction sets for new CPUs (new)
  • Agisoft Photoscan 1.3.3: 2D image to 3D modelling tool (updated)

Render

  • Corona 1.3: Performance renderer for 3dsMax, Cinema4D (same as 2017)
  • Blender 2.79b: Render of bmw27 on CPU (updated to 2.79b)
  • LuxMark v3.1 C++ and OpenCL: Test of different rendering code paths (same as 2017)
  • POV-Ray 3.7.1: Built-in benchmark (updated)
  • CineBench R15: Older Cinema4D test, will likely remain in Bench (same as 2017)

Encoding

  • 7-zip 1805: Built-in benchmark (updated to v1805)
  • WinRAR 5.60b3: Compression test of directory with video and web files (updated to 5.60b3)
  • AES Encryption: In-memory AES performance. Slightly older test. (same as 2017)
  • Handbrake 1.1.0: Logitech C920 1080p60 input file, transcoded into three formats for streaming/storage:
    • 720p60, x264, 6000 kbps CBR, Fast, High Profile
    • 1080p60, x264, 3500 kbps CBR, Faster, Main Profile
    • 1080p60, HEVC, 3500 kbps VBR, Fast, 2-Pass Main Profile

Web

  • WebXPRT3: The latest WebXPRT test (updated)
  • WebXPRT15: Similar to 3, but slightly older. (same as 2017)
  • Speedometer2: Javascript Framework test (new)
  • Google Octane 2.0: Deprecated but popular web test (same as 2017)
  • Mozilla Kraken 1.1: Deprecated but popular web test (same as 2017)

Legacy (same as 2017)

  • 3DPM v1: Older version of 3DPM, very naïve code
  • x264 HD 3.0: Older transcode benchmark
  • Cinebench R11.5 and R10: Representative of different coding methodologies

Linux (when feasible)

When in full swing, we wish to return to running LinuxBench 1.0. This was in our 2016 test, but was ditched in 2017 as it added an extra complication layer to our automation. By popular request, we are going to run it again.

Integrated and CPU Gaming

We have recently automated around a dozen games at four different performance levels. A good number of games will have frame time data, however due to automation complications, some will not. The idea is that we get a good overview of a number of different genres and engines for testing. So far we have the following games automated:

AnandTech CPU Gaming 2019 Game List
Game                       Genre           Release   API          IGP            Low             Medium           High
World of Tanks enCore      Driving/Action  Feb 2018  DX11         768p Minimum   1080p Medium    1080p Ultra      4K Ultra
Final Fantasy XV           JRPG            Mar 2018  DX11         720p Standard  1080p Standard  4K Standard      8K Standard
Shadow of War              Action/RPG      Sep 2017  DX11         720p Ultra     1080p Ultra     4K High          8K High
F1 2018                    Racing          Aug 2018  DX11         720p Low       1080p Med       4K High          4K Ultra
Civilization VI            RTS             Oct 2016  DX12         1080p Ultra    4K Ultra        8K Ultra         16K Low
Ashes: Classic             RTS             Mar 2016  DX12         720p Standard  1080p Standard  1440p Standard   4K Standard
Strange Brigade*           FPS             Aug 2018  DX12/Vulkan  720p Low       1080p Medium    1440p High       4K Ultra
Shadow of the Tomb Raider  Action          Sep 2018  DX12         720p Low       1080p Medium    1440p High       4K Highest
Grand Theft Auto V         Open World      Apr 2015  DX11         720p Low       1080p High      1440p Very High  4K Ultra
Far Cry 5                  FPS             Mar 2018  DX11         720p Low       1080p Normal    1440p High       4K Ultra
*Strange Brigade is run in DX12 and Vulkan modes

For our CPU Gaming tests, we will be running on an NVIDIA GTX 1080. For the pure CPU benchmarks, we use an RX460 as we now have several units for concurrent testing.

In previous years we tested multiple GPUs on a small number of games – this time around, due to a Twitter poll I did which turned out exactly 50:50, we are doing it the other way around: more games, fewer GPUs.

One comment we get every now and again is that automation isn’t the best way of testing – there’s a higher barrier to entry, and it limits the tests that can be done. From our perspective, despite taking a little while to program properly (and get it right), automation means we can do several things:

  1. Guarantee consistent breaks between tests for cooldown to occur, rather than variable cooldown times based on ‘if I’m looking at the screen’
  2. It allows us to simultaneously test several systems at once. I currently run five systems in my office (limited by the number of 4K monitors, and space) which means we can process more hardware at the same time
  3. We can leave tests to run overnight, very useful for a deadline
  4. With a good enough script, tests can be added very easily

Our benchmark suite collates all the results and streams data to a central storage platform as the tests are running, which I can probe mid-run to update numbers as they come through. This also acts as a sanity check in case any of the data looks abnormal.
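A hypothetical sketch of that loop – run a test, stream its result to the central store immediately, then enforce a fixed cooldown – might look like this (all names here are made up for illustration):

```python
# Hypothetical sketch of the automation loop: run each test, stream its
# result to a central store immediately, and enforce a fixed cooldown
# between tests. All names here are invented for illustration.
import json
import time

def run_suite(tests, store_path, cooldown_s=60):
    """tests is a list of (name, callable) pairs; results are appended
    to store_path as JSON lines so a mid-run probe can read them."""
    for name, fn in tests:
        result = fn()  # run one benchmark to completion
        with open(store_path, "a") as store:
            store.write(json.dumps({"test": name, "result": result}) + "\n")
        time.sleep(cooldown_s)  # consistent cooldown, not "am I watching?"
```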

We do have one major limitation, and it rests on the side of our gaming tests. We are running multiple tests through one Steam account, some of which (like GTA) are online only. As Steam only lets one system play on an account at a time, our gaming script probes Steam’s own APIs to determine whether we are ‘online’ or not, and runs offline tests until the account is free to be logged in on that system. Depending on the number of games we test that absolutely require online mode, it can be a bit of a bottleneck.
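For illustration, Steam’s Web API (ISteamUser/GetPlayerSummaries) reports a `personastate` field, where 0 means the account is offline; a scheduling sketch along those lines, with the actual network fetch left out, could look like this (function names are our own):

```python
# Illustrative sketch: decide whether the shared Steam account is free.
# Steam's ISteamUser/GetPlayerSummaries endpoint returns a player summary
# with a `personastate` field (0 = offline). The network fetch is left
# out; plug in an HTTP client and API key. Function names are our own.

def account_is_free(player_summary):
    """True when the shared account is not logged in anywhere."""
    return player_summary.get("personastate", 0) == 0

def pick_next_test(online_tests, offline_tests, player_summary):
    """Prefer an online-only game when the account is free; otherwise
    keep the node busy with an offline-capable test."""
    if account_is_free(player_summary) and online_tests:
        return online_tests[0]
    return offline_tests[0] if offline_tests else None
```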

Benchmark Suite Updates

As always, we do take benchmark requests. It helps us understand the workloads that everyone is running and plan accordingly.

A side note on software packages: we have had requests for tests on software such as ANSYS, or other professional grade software. The downside of testing this software is licensing and scale. Most of these companies do not particularly care about us running tests, and state it’s not part of their goals. Others, like Agisoft, are more than willing to help. If you are involved in these software packages, the best way to see us benchmark them is to reach out. We have special versions of software for some of our tests, and if we can get something that works, and relevant to the audience, then we shouldn’t have too much difficulty adding it to the suite.

Our System Test section focuses significantly on real-world testing, user experience, with a slight nod to throughput. In this section we cover application loading time, image processing, simple scientific physics, emulation, neural simulation, optimized compute, and 3D model development, with a combination of readily available and custom software. For some of these tests, the bigger suites such as PCMark do cover them (we publish those values in our office section), although multiple perspectives is always beneficial. In all our tests we will explain in-depth what is being tested, and how we are testing.

All of our benchmark results can also be found in our benchmark engine, Bench.

Application Load: GIMP 2.10.4

One of the most important aspects of user experience and workflow is how fast a system responds. A good test of this is to see how long it takes for an application to load. Most applications these days, when on an SSD, load fairly instantly, however some office tools require asset pre-loading before becoming available. Most operating systems employ caching as well, so when certain software is loaded repeatedly (web browser, office tools), it can be initialized much quicker.

In our last suite, we tested how long it took to load a large PDF in Adobe Acrobat. Unfortunately this test was a nightmare to program for, and didn’t transfer over to Win10 RS3 easily. In the meantime we discovered an application that can automate this test, and we put it up against GIMP, a popular free and open-source photo editing tool, and the major alternative to Adobe Photoshop. We set it to load a large 50MB design template, and perform the load 10 times with 10 seconds in-between each. Because of caching, the first 3-5 results are often slower than the rest and the time to cache can be inconsistent, so we take the average of the last five results to show CPU processing on a cached load.
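The warm-cache averaging can be expressed as a small harness. Here `load_fn` is a placeholder for whatever actually launches and closes GIMP (AppTimer handles that for us), so this is a sketch of the methodology, not our exact tooling:

```python
import time

def average_of_last(times, keep=5):
    """Average the final `keep` samples; earlier runs are treated as
    cache-warming and discarded."""
    return sum(times[-keep:]) / keep

def cached_load_average(load_fn, runs=10, pause=10.0, keep=5):
    """Time `runs` back-to-back application loads with a pause between
    each, then average only the cache-warm tail of the results."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        load_fn()                       # placeholder for launching GIMP
        times.append(time.perf_counter() - start)
        time.sleep(pause)
    return average_of_last(times, keep)
```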

AppTimer: GIMP 2.10.4

Application loading is typically single thread limited, but we see here that at some point it also becomes core-resource limited. Having access to more resources per thread in a non-HT environment helps the 8C/8T and 6C/6T processors get ahead of both of the 5.0 GHz parts in our testing.

FCAT: Image Processing

The FCAT software was developed to help detect microstuttering, dropped frames, and runt frames in graphics benchmarks when two accelerators were paired together to render a scene. Due to game engines and graphics drivers, not all GPU combinations performed ideally, which led to this software affixing a color to each rendered frame and dynamically recording the raw data using a video capture device.

The FCAT software takes that recorded video, which in our case is 90 seconds of a 1440p run of Rise of the Tomb Raider, and processes that color data into frame time data so the system can plot an ‘observed’ frame rate, and correlate that to the power consumption of the accelerators. This test, by virtue of how quickly it was put together, is single threaded. We run the process and report the time to completion.
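In essence, the color overlay turns the capture into a run-length decoding problem: count how long each overlay color persists in the captured video, divide by the capture rate, and you recover per-frame display times. A minimal sketch, assuming a fixed 60 Hz capture and illustrative color names:

```python
from itertools import groupby

def frame_times_from_colors(captured_colors, capture_fps=60.0):
    """Collapse the per-captured-frame overlay colors into runs; each
    run's length divided by the capture rate gives one game frame's
    on-screen time, returned in milliseconds."""
    return [len(list(run)) / capture_fps * 1000.0
            for _, run in groupby(captured_colors)]
```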

FCAT Processing ROTR 1440p GTX980Ti Data

FCAT is another single thread limited scenario, and it looks like the new 9th gen parts do very well here. The 9700K and 9900K get the same time, split by milliseconds.

3D Particle Movement v2.1: Brownian Motion

Our 3DPM test is a custom built benchmark designed to simulate six different particle movement algorithms of points in a 3D space. The algorithms were developed as part of my PhD, and while they ultimately perform best on a GPU, they provide a good idea of how instruction streams are interpreted by different microarchitectures.

A key part of the algorithms is the random number generation – we use relatively fast generation which ends up implementing dependency chains in the code. The upgrade over the naïve first version of this code solved for false sharing in the caches, a major bottleneck. We are also looking at AVX2 and AVX512 versions of this benchmark for future reviews.

For this test, we run a stock particle set over the six algorithms for 20 seconds apiece, with 10 second pauses, and report the total rate of particle movement, in millions of operations (movements) per second. We have a non-AVX version and an AVX version, with the latter implementing AVX512 and AVX2 where possible.

3DPM v2.1 can be downloaded from our server: 3DPMv2.1.rar (13.0 MB)
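The reported figure is simply the summed movement rate across the six 20-second algorithm windows; a sketch of the arithmetic, with the window length as a parameter:

```python
def movement_rate(movements, seconds=20.0):
    """Particle movements per second, in millions, for one
    algorithm's timed window."""
    return movements / seconds / 1e6

def total_rate(per_algorithm_movements, seconds=20.0):
    """3DPM reports the combined rate across all six algorithms."""
    return sum(movement_rate(m, seconds) for m in per_algorithm_movements)
```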

3D Particle Movement v2.1

With a non-AVX code base, the 9900K shows the IPC and frequency improvements over the R7 2700X, although in reality it is not as big of a percentage jump as you might imagine. The processors without HT get pushed back a bit here.

3D Particle Movement v2.1 (with AVX)

When we factor in AVX2/AVX512, the Skylake-X processors go off into a world of their own. The 9900K gets a bigger jump on the R7 2700X, more in line with what we expect, and the Core i7-9700K gets a boost as well.

Dolphin 5.0: Console Emulation

One of the popular requested tests in our suite is to do with console emulation. Being able to pick up a game from an older system and run it as expected depends on the overhead of the emulator: it takes a significantly more powerful x86 system to be able to accurately emulate an older non-x86 console, especially if code for that console was made to abuse certain physical bugs in the hardware.

For our test, we use the popular Dolphin emulation software, and run a compute project through it to determine how close to a standard console system our processors can emulate. In this test, a Nintendo Wii would take around 1050 seconds.

The latest version of Dolphin can be downloaded from https://dolphin-emu.org/

Dolphin 5.0 Render Test

Dolphin is another single thread limited scenario, where Intel processors have historically done well. Here the 9900K nudges out the 9700K by a second.

DigiCortex 1.20: Sea Slug Brain Simulation

This benchmark was originally designed for simulation and visualization of neuron and synapse activity, as is commonly found in the brain. The software comes with a variety of benchmark modes, and we take the small benchmark which runs a 32k neuron / 1.8B synapse simulation, equivalent to a Sea Slug.

Example of a 2.1B neuron simulation

We report the results as the ability to simulate the data as a fraction of real-time, so anything above a ‘one’ is suitable for real-time work. Out of the two modes, a ‘non-firing’ mode which is DRAM heavy and a ‘firing’ mode which has CPU work, we choose the latter. Despite this, the benchmark is still affected by DRAM speed a fair amount.
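The score is just simulated time over wall-clock time; a tiny illustration of how to read it:

```python
def realtime_fraction(simulated_seconds, wall_seconds):
    """DigiCortex-style score: simulated time per unit of wall-clock
    time. A value above 1.0 means the neuron/synapse network is being
    simulated faster than real time."""
    return simulated_seconds / wall_seconds
```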

DigiCortex can be downloaded from http://www.digicortex.net/

DigiCortex 1.20 (32k Neuron, 1.8B Synapse)

DigiCortex runs high on CPU performance and memory bandwidth, but it seems here that a 6-core Ryzen can match the 8-core 9900K pretty easily. The 8700K/8086K seem to do better on this test as well.

y-Cruncher v0.7.6: Microarchitecture Optimized Compute

I’ve known about y-Cruncher for a while, as a tool to help compute various mathematical constants, but it wasn’t until I began talking with its developer, Alex Yee, a researcher from NWU and now software optimization developer, that I realized that he has optimized the software like crazy to get the best performance. Naturally, any simulation that can take 20+ days can benefit from a 1% performance increase! Alex started y-cruncher as a high-school project, but it is now at a state where Alex is keeping it up to date to take advantage of the latest instruction sets before they are even made available in hardware.

For our test we run y-cruncher v0.7.6 through all the different optimized variants of the binary, including the AVX-512 optimized builds. The test is to calculate 250m digits of Pi, and we run both the single-threaded and multi-threaded versions.

Users can download y-cruncher from Alex’s website: http://www.numberworld.org/y-cruncher/
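A hedged sketch of how one might script a single variant and scrape its timing. The binary path, the command-line arguments, and the exact wording of the output line are all assumptions for illustration, not y-cruncher's documented interface:

```python
import re
import subprocess

def pi_250m_time(binary, output=None):
    """Run one optimized y-cruncher binary variant and pull the total
    computation time out of its console output. `output` may be
    supplied directly (e.g. for testing or re-parsing saved logs)."""
    if output is None:
        # Hypothetical invocation; check the real CLI before relying on it.
        output = subprocess.run([binary, "bench", "250m"],
                                capture_output=True, text=True).stdout
    # Assumed output format: "Total Computation Time: 12.345 seconds"
    m = re.search(r"Total Computation Time:\s*([\d.]+)", output)
    return float(m.group(1)) if m else None
```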

y-Cruncher 0.7.6 Single Thread, 250m Digits
y-Cruncher 0.7.6 Multi-Thread, 250m Digits

As y-cruncher has AVX2/AVX512 benefits, we see the Skylake-X processors again go off in their own little world. In multi-threaded, it takes 8 cores in the 9900K/9700K to get beyond a 6-core AVX512 enabled part.

Agisoft Photoscan 1.3.3: 2D Image to 3D Model Conversion

One of the ISVs that we have worked with for a number of years is Agisoft, who develop software called PhotoScan that transforms a number of 2D images into a 3D model. This is an important tool in model development and archiving, and relies on a number of single threaded and multi-threaded algorithms to go from one side of the computation to the other.

In our test, we take v1.3.3 of the software with a good sized data set of 84 x 18 megapixel photos and push it through a reasonably fast variant of the algorithms, which is still more stringent than our 2017 test. We report the total time to complete the process.

Agisoft’s Photoscan website can be found here: http://www.agisoft.com/

Agisoft Photoscan 1.3.3, Complex Test

Photoscan is a task that seems to benefit from both high throughput and strong single threaded performance, and in this case it looks like having HT disabled helps as well.

Rendering is often a key target for processor workloads, lending itself to a professional environment. It comes in different formats as well, from 3D rendering through rasterization, such as games, or by ray tracing, and invokes the ability of the software to manage meshes, textures, collisions, aliasing, physics (in animations), and discarding unnecessary work. Most renderers offer CPU code paths, while a few use GPUs and select environments use FPGAs or dedicated ASICs. For big studios however, CPUs are still the hardware of choice.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: Performance Render

An advanced performance based renderer for software such as 3ds Max and Cinema 4D, the Corona benchmark renders a generated scene as a standard under its 1.3 software version. Normally the GUI implementation of the benchmark shows the scene being built, and allows the user to upload the result as a ‘time to complete’.

We got in contact with the developer who gave us a command line version of the benchmark that does a direct output of results. Rather than reporting time, we report the average number of rays per second across six runs, as the performance scaling of a result per unit time is typically visually easier to understand.
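Converting the six timed runs into the average rays-per-second figure is straightforward; a sketch with illustrative numbers, assuming we have each run's ray count and duration:

```python
def corona_rate(rays_per_run, seconds_per_run):
    """Average rays per second across the benchmark runs. A rate
    scales linearly with performance, which makes charts easier to
    read than time-to-complete."""
    rates = [r / s for r, s in zip(rays_per_run, seconds_per_run)]
    return sum(rates) / len(rates)
```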

The Corona benchmark website can be found at https://corona-renderer.com/benchmark

Corona 1.3 Benchmark

Corona is a fully multithreaded test, so the non-HT parts fall a little behind here. The Core i9-9900K blasts through the AMD 8-core parts with a 25% margin, and taps on the door of the 12-core Threadripper.

Blender 2.79b: 3D Creation Suite

A high profile rendering tool, Blender is open-source allowing for massive amounts of configurability, and is used by a number of high-profile animation studios worldwide. The organization recently released a Blender benchmark package, a couple of weeks after we had narrowed our Blender test for our new suite, however their test can take over an hour. For our results, we run one of the sub-tests in that suite through the command line – a standard ‘bmw27’ scene in CPU only mode, and measure the time to complete the render.

Blender can be downloaded at https://www.blender.org/download/

Blender 2.79b bmw27_cpu Benchmark

Blender has an eclectic mix of requirements, from memory bandwidth to raw performance, but like Corona, the processors without HT fall a bit behind here. The high frequency of the 9900K pushes it above the 10C Skylake-X part and AMD's 2700X, but behind the 1920X.

LuxMark v3.1: LuxRender via Different Code Paths

As stated at the top, there are many different ways to process rendering data: CPU, GPU, Accelerator, and others. On top of that, there are many frameworks and APIs in which to program, depending on how the software will be used. LuxMark, a benchmark developed using the LuxRender engine, offers several different scenes and APIs.


Taken from the Linux Version of LuxMark

In our test, we run the simple ‘Ball’ scene on both the C++ and OpenCL code paths, but in CPU mode. This scene starts with a rough render and slowly improves the quality over two minutes, giving a final result in what is essentially an average ‘kilorays per second’.

LuxMark v3.1 C++
LuxMark v3.1 OpenCL

POV-Ray 3.7.1: Ray Tracing

The Persistence of Vision ray tracing engine is another well-known benchmarking tool, which was in a state of relative hibernation until AMD released its Zen processors, to which suddenly both Intel and AMD were submitting code to the main branch of the open source project. For our test, we use the built-in benchmark for all-cores, called from the command line.

POV-Ray can be downloaded from http://www.povray.org/

POV-Ray 3.7.1 Benchmark

The Office test suite is designed around more industry-standard tests that focus on office workflows, video conferencing, and some synthetics, but we also bundle compiler performance into this section. For users that have to evaluate hardware in general, these are usually the benchmarks that most consider.

All of our benchmark results can also be found in our benchmark engine, Bench.

PCMark 10: Industry Standard System Profiler

Futuremark, now known as UL, has developed benchmarks that have become industry standards for around two decades. The latest complete system test suite is PCMark 10, which upgrades over PCMark 8 with updated tests and more OpenCL invested in use cases such as video streaming.

PCMark splits its scores into about 14 different areas, including application startup, web, spreadsheets, photo editing, rendering, video conferencing, and physics. We post all of these numbers in our benchmark database, Bench, however the key metric for the review is the overall score.

PCMark10 Extended Score

As a general mix of a lot of tests, the new processors from Intel take the top three spots, in order. Even the i5-9600K goes ahead of the i7-8086K.

Chromium Compile: Windows VC++ Compile of Chrome 56

A large number of AnandTech readers are software engineers, looking at how the hardware they use performs. While compiling a Linux kernel is ‘standard’ for the reviewers who often compile, our test is a little more varied – we are using the Windows instructions to compile Chrome, specifically a Chrome 56 build from March 2017, as that was when we built the test. Google quite handily gives instructions on how to compile with Windows, along with a 400k-file download for the repo.

In our test, using Google’s instructions, we use the MSVC compiler and ninja developer tools to manage the compile. As you may expect, the benchmark is variably threaded, with a mix of DRAM requirements that benefit from faster caches. Data procured in our test is the time taken for the compile, which we convert into compiles per day.
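The chart's metric is just the compile wall time inverted into a daily rate; a sketch of the conversion:

```python
SECONDS_PER_DAY = 86400.0

def compiles_per_day(compile_seconds):
    """Convert one Chromium compile's wall time into the
    compiles-per-day rate reported in the chart."""
    return SECONDS_PER_DAY / compile_seconds
```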

Compile Chromium (Rate)

Pushing the raw frequency of the all-core turbo seems to work well in our compile test.

3DMark Physics: In-Game Physics Compute

Alongside PCMark is 3DMark, Futuremark’s (UL’s) gaming test suite. Each gaming test consists of one or two GPU-heavy scenes, along with a physics test that is indicative of when the test was written and the platform it is aimed at. The main overriding tests, in order of complexity, are Ice Storm, Cloud Gate, Sky Diver, Fire Strike, and Time Spy.

Some of the subtests offer variants, such as Ice Storm Unlimited, which is aimed at mobile platforms with an off-screen rendering, or Fire Strike Ultra which is aimed at high-end 4K systems with lots of the added features turned on. Time Spy also currently has an AVX-512 mode (which we may be using in the future).

For our tests, we report in Bench the results from every physics test, but for the sake of the review we keep it to the most demanding of each scene: Ice Storm Unlimited, Cloud Gate, Sky Diver, Fire Strike Ultra, and Time Spy.

3DMark Physics - Ice Storm Unlimited
3DMark Physics - Cloud Gate
3DMark Physics - Sky Diver
3DMark Physics - Fire Strike Ultra
3DMark Physics - Time Spy

The older Ice Storm test didn't much like the Core i9-9900K, pushing it back behind the R7 1800X. For the more modern tests focused on PCs, the 9900K wins out. The lack of HT is hurting the other two parts.

GeekBench4: Synthetics

A common tool for cross-platform testing between mobile, PC, and Mac, GeekBench 4 is an ultimate exercise in synthetic testing across a range of algorithms looking for peak throughput. Tests include encryption, compression, fast Fourier transform, memory operations, n-body physics, matrix operations, histogram manipulation, and HTML parsing.

I’m including this test due to popular demand, although the results come across as overly synthetic; many users put a lot of weight behind it because it is compiled across different platforms (albeit with different compilers).

We record the main subtest scores (Crypto, Integer, Floating Point, Memory) in our benchmark database, but for the review we post the overall single and multi-threaded results.

Geekbench 4 - ST Overall

Geekbench 4 - MT Overall

With the rise of streaming, vlogs, and video content as a whole, encoding and transcoding tests are becoming ever more important. Not only are more home users and gamers needing to convert video files into something more manageable, for streaming or archival purposes, but the servers that manage the output also juggle data and log files with compression and decompression. Our encoding tasks are focused around these important scenarios, with input from the community for the best implementation of real-world testing.

All of our benchmark results can also be found in our benchmark engine, Bench.

Handbrake 1.1.0: Streaming and Archival Video Transcoding

A popular open source tool, Handbrake is the anything-to-anything video conversion software that a number of people use as a reference point. The danger is always on version numbers and optimization, for example the latest versions of the software can take advantage of AVX-512 and OpenCL to accelerate certain types of transcoding and algorithms. The version we use here is a pure CPU play, with common transcoding variations.

We have split Handbrake up into several tests, using a Logitech C920 1080p60 native webcam recording (essentially a streamer recording), and converting it into two types of streaming formats and one for archival. The output settings used are:

  • 720p60 at 6000 kbps constant bit rate, fast setting, high profile
  • 1080p60 at 3500 kbps constant bit rate, faster setting, main profile
  • 1080p60 HEVC at 3500 kbps variable bit rate, fast setting, main profile

Handbrake 1.1.0 - 720p60 x264 6000 kbps Fast
Handbrake 1.1.0 - 1080p60 x264 3500 kbps Faster
Handbrake 1.1.0 - 1080p60 HEVC 3500 kbps Fast

7-zip v1805: Popular Open-Source Encoding Engine

Out of our compression/decompression tool tests, 7-zip is the most requested and comes with a built-in benchmark. For our test suite, we’ve pulled the latest version of the software and we run the benchmark from the command line, reporting the compression, decompression, and a combined score.

It is noted in this benchmark that the latest multi-die processors have very bi-modal performance between compression and decompression, performing well in one and badly in the other. There are also discussions around how the Windows Scheduler is implementing every thread. As we get more results, it will be interesting to see how this plays out.

Please note, if you plan to share out the Compression graph, please include the Decompression one. Otherwise you’re only presenting half a picture.

7-Zip 1805 Compression
7-Zip 1805 Decompression
7-Zip 1805 Combined

WinRAR 5.60b3: Archiving Tool

My compression tool of choice is often WinRAR, having been one of the first tools a number of my generation used over two decades ago. The interface has not changed much, although the integration with Windows right click commands is always a plus. It has no in-built test, so we run a compression over a set directory containing over thirty 60-second video files and 2000 small web-based files at a normal compression rate.

WinRAR is variable threaded but also susceptible to caching, so in our test we run it 10 times and take the average of the last five, leaving the test purely for raw CPU compute performance.

WinRAR 5.60b3

AES Encryption: File Security

A number of platforms, particularly mobile devices, are now offering encryption by default with file systems in order to protect the contents. Windows based devices have these options as well, often applied by BitLocker or third-party software. In our AES encryption test, we used the discontinued TrueCrypt for its built-in benchmark, which tests several encryption algorithms directly in memory.

The data we take for this test is the combined AES encrypt/decrypt performance, measured in gigabytes per second. The software does use AES instructions for processors that offer hardware acceleration, however not AVX-512.

AES Encoding

While more the focus of low-end and small form factor systems, web-based benchmarks are notoriously difficult to standardize. Modern web browsers are frequently updated, with no recourse to disable those updates, and as such there is difficulty in keeping a common platform. The fast paced nature of browser development means that version numbers (and performance) can change from week to week. Despite this, web tests are often a good measure of user experience: a lot of what most office work is today revolves around web applications, particularly email and office apps, but also interfaces and development environments. Our web tests include some of the industry standard tests, as well as a few popular but older tests.

We have also included our legacy benchmarks in this section, representing a stack of older code for popular benchmarks.

All of our benchmark results can also be found in our benchmark engine, Bench.

WebXPRT 3: Modern Real-World Web Tasks, including AI

The company behind the XPRT test suites, Principled Technologies, has recently released the latest web-test, and rather than attach a year to the name have just called it ‘3’. This latest test (as we started the suite) has built upon and developed the ethos of previous tests: user interaction, office compute, graph generation, list sorting, HTML5, image manipulation, and even goes as far as some AI testing.

For our benchmark, we run the standard test which goes through the benchmark list seven times and provides a final result. We run this standard test four times, and take an average.

Users can access the WebXPRT test at http://principledtechnologies.com/benchmarkxprt/webxprt/

WebXPRT 3 (2018)

WebXPRT 2015: HTML5 and Javascript Web UX Testing

The older version of WebXPRT is the 2015 edition, which focuses on a slightly different set of web technologies and frameworks than are in use today. This is still a relevant test, especially for users interacting with not-the-latest web applications in the market, of which there are a lot. Web framework development is often very quick but with high turnover: frameworks are quickly developed, built upon, used, and then developers move on to the next, and adjusting an application to a new framework is a difficult, arduous task, especially with rapid development cycles. This leaves a lot of applications as ‘fixed-in-time’, and relevant to user experience for many years.

Similar to WebXPRT3, the main benchmark is a sectional run repeated seven times, with a final score. We repeat the whole thing four times, and average those final scores.

WebXPRT15

Speedometer 2: JavaScript Frameworks

Our newest web test is Speedometer 2, which is an aggregated test over a series of JavaScript frameworks that does three simple things: build a list, enable each item in the list, and remove the list. All the frameworks implement the same visual cues, but obviously apply them from different coding angles.

Our test goes through the list of frameworks, and produces a final score indicative of ‘rpm’, one of the benchmarks internal metrics. We report this final score.

Speedometer 2

Google Octane 2.0: Core Web Compute

A popular web test for several years, but now no longer being updated, is Octane, developed by Google. Version 2.0 of the test performs the best part of two-dozen compute related tasks, such as regular expressions, cryptography, ray tracing, emulation, and Navier-Stokes physics calculations.

The test gives each sub-test a score and produces a geometric mean of the set as a final result. We run the full benchmark four times, and average the final results.
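The two aggregation steps can be sketched directly: a geometric mean over the sub-test scores for one run's final result, then a plain average over our four runs:

```python
import math

def octane_score(subtest_scores):
    """Geometric mean of the sub-test scores, which is how Octane
    derives its final figure (it rewards balanced performance)."""
    return math.exp(sum(math.log(s) for s in subtest_scores)
                    / len(subtest_scores))

def averaged_runs(final_scores):
    """We run the full benchmark four times and average the finals."""
    return sum(final_scores) / len(final_scores)
```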

Google Octane 2.0

Mozilla Kraken 1.1: Core Web Compute

Even older than Octane is Kraken, this time developed by Mozilla. This is an older test that does similar computational mechanics, such as audio processing or image filtering. Kraken seems to produce a highly variable result depending on the browser version, as it is a test that is keenly optimized for.

The main benchmark runs through each of the sub-tests ten times and produces an average time to completion for each loop, given in milliseconds. We run the full benchmark four times and take an average of the time taken.

Mozilla Kraken 1.1

3DPM v1: Naïve Code Variant of 3DPM v2.1

The first legacy test in the suite is the first version of our 3DPM benchmark. This is the ultimate naïve version of the code, as if it was written by a scientist with no knowledge of how computer hardware, compilers, or optimization work (which in fact, it was at the start). This represents a large body of scientific simulation out in the wild, where getting the answer is more important than it being fast (getting a result in 4 days is acceptable if it’s correct, rather than sending someone away for a year to learn to code and getting the result in 5 minutes).

In this version, the only real optimization was in the compiler flags (-O2, -fp:fast), compiling it in release mode, and enabling OpenMP in the main compute loops. The loops were not configured for function size, and one of the key slowdowns is false sharing in the cache. It also has long dependency chains based on the random number generation, which leads to relatively poor performance on specific compute microarchitectures.
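The dependency-chain problem is easy to illustrate with a minimal linear congruential generator (the constants here are the common Numerical Recipes values, not necessarily what 3DPM uses): each output requires the previous state, so no amount of core width or vector hardware can compute the stream out of order.

```python
def lcg_stream(seed, n, a=1664525, c=1013904223, m=2**32):
    """Minimal LCG: a strictly serial recurrence. Every value depends
    on the previous one, which is exactly the kind of dependency chain
    that throttles wide out-of-order cores in naive simulation code."""
    out = []
    x = seed
    for _ in range(n):
        x = (a * x + c) % m     # next state needs the current state
        out.append(x)
    return out
```

Optimized RNGs sidestep this by keeping independent streams per thread (or per lane), trading a little statistical bookkeeping for instruction-level parallelism.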

3DPM v1 can be downloaded with our 3DPM v2 code here: 3DPMv2.1.rar (13.0 MB)

3DPM v1 Single Threaded
3DPM v1 Multi-Threaded

x264 HD 3.0: Older Transcode Test

This transcoding test is super old, and was used by Anand back in the day of Pentium 4 and Athlon II processors. Here a standardized 720p video is transcoded with a two-pass conversion, with the benchmark showing the frames-per-second of each pass. This benchmark is single-threaded, and between some micro-architectures we seem to actually hit an instructions-per-clock wall.

x264 HD 3.0 Pass 1
x264 HD 3.0 Pass 2

Albeit different to most of the other commonly played MMO or massively multiplayer online games, World of Tanks is set in the mid-20th century and allows players to take control of a range of military based armored vehicles. World of Tanks (WoT) is developed and published by Wargaming who are based in Belarus, with the game’s soundtrack being primarily composed by Belarusian composer Sergey Khmelevsky. The game offers multiple entry points including a free-to-play element as well as allowing players to pay a fee to open up more features. One of the most interesting things about this tank based MMO is that it achieved eSports status when it debuted at the World Cyber Games back in 2012.

World of Tanks enCore is a demo application for a new and unreleased graphics engine penned by the Wargaming development team. Over time the new core engine will be implemented into the full game, upgrading the game's visuals with key elements such as improved water, flora, shadows, and lighting, as well as other objects such as buildings. The World of Tanks enCore demo app not only offers insight into the impending game engine changes, but also allows users to check system performance to see if the new engine runs optimally on their system.

AnandTech CPU Gaming 2019 Game List
Game Genre Release Date API IGP Low Med High
World of Tanks enCore Driving / Action Feb 2018 DX11 768p Minimum 1080p Medium 1080p Ultra 4K Ultra

All of our benchmark results can also be found in our benchmark engine, Bench.

World of Tanks enCore IGP Low Medium High
Average FPS
95th Percentile

Being a game that’s not especially GPU limited – at least not at Low image quality settings – World of Tanks gives the 9900K some room to stretch its legs. The game isn’t especially sensitive to core counts, so it’s all about high per-thread performance. And in this case the 9900K with its 5.0GHz turbo speed pulls ahead. In fact I’m surprised by just how far ahead of the 8086K it is (16%); this may be one of the big payoffs from the 9900K being able to turbo to 5.0GHz on two cores, versus a single core on the 8086K.

The 9700K also puts up a strong showing in this situation, second only to the 9900K. We have a few theories on this – including whether the lack of hyper-threading plays a benefit – but it’s none the less notable that the new CFL-R CPUs are taking the top two spots.

The flip side however is that any CPU-based performance lead melts away with higher image quality settings. By the time we reach High quality, it’s purely GPU bottlenecked.

Upon arriving on PC earlier this year, Final Fantasy XV: Windows Edition was given a graphical overhaul as it was ported over from console, a fruit of Square Enix's successful partnership with NVIDIA, with hardly any hint of the troubles during Final Fantasy XV's original production and development.

In preparation for the launch, Square Enix opted to release a standalone benchmark that they have since updated. Using the Final Fantasy XV standalone benchmark gives us a lengthy standardized sequence to record, although it should be noted that its heavy use of NVIDIA technology means that the Maximum setting has problems – it renders items off screen. To get around this, we use the standard preset which does not have these issues.

Square Enix has patched the benchmark with custom graphics settings and bugfixes to be much more accurate in profiling in-game performance and graphical options. For our testing, we run the standard benchmark with a FRAPs overlay, taking a 6 minute recording of the test.

AnandTech CPU Gaming 2019 Game List
Game Genre Release Date API IGP Low Med High
Final Fantasy XV JRPG Mar 2018 DX11 720p Standard 1080p Standard 4K Standard 8K Standard

All of our benchmark results can also be found in our benchmark engine, Bench.

Final Fantasy XV IGP Low Medium High
Average FPS
95th Percentile

Unlike World of Tanks, Final Fantasy is never entirely CPU limited at any one point. Even on its Low settings, our entire collection of CPUs is within a 7% range. Only once we drop down to IGP-level settings – which are really meant more for IGP comparisons – do we tease out any kind of CPU difference. Still, in that scenario the 9900K does at least eke out a few more frames than prior Intel CPUs, with the 9700K taking up second place. Past that, this is very clearly a game that is GPU limited in almost all scenarios.

Next up is Middle-earth: Shadow of War, the sequel to Shadow of Mordor. Developed by Monolith, whose last hit was arguably F.E.A.R., Shadow of Mordor returned them to the spotlight with an innovative NPC rival generation and interaction system called the Nemesis System, along with a storyline based on J.R.R. Tolkien's legendarium, and making it work on a highly modified engine that originally powered F.E.A.R. in 2005.

Using the new LithTech Firebird engine, Shadow of War improves on the detail and complexity, and with free add-on high-resolution texture packs, offers itself as a good example of getting the most graphics out of an engine that may not be bleeding edge. Shadow of War also supports HDR (HDR10).

AnandTech CPU Gaming 2019 Game List
Game Genre Release API IGP Low Med High
Shadow of War Action / RPG Sep 2017 DX11 720p Ultra 1080p Ultra 4K High 8K High

All of our benchmark results can also be found in our benchmark engine, Bench.

Shadow of War IGP Low Medium High
Average FPS

Shadow of War is another game where it’s hard to tease out CPU limitations under reasonable game settings. Even 1080p Ultra is a bunch of Intel CPUs seeing who can tip-toe over 100fps, with AMD right on their tail. The less reasonable 720p Ultra pushes this back slightly – the CPUs with the weakest per-thread performance start to fall behind – but it’s still a tight pack for all of the Coffee Lake CPUs. With the highest frequencies and tied for the most cores among the desktop processors here, it’s clear that the 9900K is going to be the strongest contender. But this isn’t a game that can benefit from that performance right now.

Originally penned by Sid Meier and his team, the Civ series of turn-based strategy games are a cult classic, and many an excuse for an all-nighter trying to get Gandhi to declare war on you due to an integer overflow. Truth be told I never actually played the first version, but every edition from the second to the sixth, including the fourth as voiced by the late Leonard Nimoy, is a game that is easy to pick up, but hard to master.

Benchmarking Civilization has always been somewhat of an oxymoron – for a turn-based strategy game, the frame rate is not necessarily the important thing here, and in the right mood something as low as 5 frames per second can be enough. With Civilization 6 however, Firaxis went hardcore on visual fidelity, trying to pull you into the game. As a result, Civilization can be taxing on graphics and CPUs as we crank up the details, especially in DirectX 12.

Perhaps a more poignant benchmark would be during the late game, when in the older versions of Civilization it could take 20 minutes to cycle around the AI players before the human regained control. The new version of Civilization has an integrated ‘AI Benchmark’, although it is not currently part of our benchmark portfolio yet, due to technical reasons which we are trying to solve. Instead, we run the graphics test, which provides an example of a mid-game setup at our settings.

AnandTech CPU Gaming 2019 Game List
Game Genre Release Date API IGP Low Med High
Civilization VI RTS Oct 2016 DX12 1080p Ultra 4K Ultra 8K Ultra 16K Low

All of our benchmark results can also be found in our benchmark engine, Bench.

Civilization 6 IGP Low Medium High
Average FPS
95th Percentile

Continuing the theme we’ve seen thus far, Civilization 6 is another game where the 9900K does provide some benefits, but not under all circumstances. The game is not particularly GPU-intensive to begin with, so at just 4K Ultra we’re still not entirely GPU limited; but past a Ryzen 7 2700X or so, all the CPUs start running together. We have to drop to 1080p Ultra to really pull the CPUs off of the dogpile, at which point the 9900K comes out in the lead.

This is another game that doesn’t seem to care about core counts so much as it does frequencies. So the 9900K has the strongest position here, while the 9700K brings up second place. But neither are very far from the 8700K, with Intel’s latest coming in at just 12% faster than their former flagship even at these CPU benchmarking sympathetic settings.

Curiously we also see the 9900K fall behind the 9700K at 4K and higher. The difference is easily close enough to be noise, but it might be a very slight impact of the lower-tier chips not having to share their cores with hyper-threading.

Seen as the holy child of DirectX12, Ashes of the Singularity (AoTS, or just Ashes) was the first title to actively explore as many of the DirectX12 features as it possibly could. Stardock, the developer behind the Nitrous engine which powers the game, has ensured that the real-time strategy title takes advantage of multiple cores and multiple graphics cards, in as many configurations as possible.

As a real-time strategy title, Ashes is all about responsiveness, during both wide-open shots and concentrated battles. With DirectX12 at the helm, the ability to issue more draw calls per second allows the engine to work with substantial unit depth and effects that other RTS titles had to achieve through combined draw calls, an approach that ultimately makes combined unit structures very rigid.

Stardock clearly understands the importance of an in-game benchmark, and ensured that such a tool was available and capable from day one; with all the additional DX12 features in use, being able to characterize how they affected the title was important for the developer. The in-game benchmark performs a four-minute fixed-seed battle environment with a variety of shots, and outputs a vast amount of data to analyze.

For our benchmark, we run Ashes Classic: an older version of the game before the Escalation update. The reason for this is that this is easier to automate, without a splash screen, but still has a strong visual fidelity to test.

AnandTech CPU Gaming 2019 Game List
Game Genre Release Date API IGP Low Med High
Ashes: Classic RTS Mar 2016 DX12 720p Standard 1080p Standard 1440p Standard 4K Standard

Ashes has dropdown options for MSAA, Light Quality, Object Quality, Shading Samples, Shadow Quality, Textures, and separate options for the terrain. There are several presets, from Very Low to Extreme: we run our benchmarks at the above settings, and take the frame-time output for our average and percentile numbers.
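Since the text above mentions turning frame-time output into average and percentile figures, here is a minimal sketch of that post-processing step. The function name and input format are assumptions for illustration, not AnandTech's actual tooling: average FPS is total frames over total time, while the 95th-percentile figure reflects the slowest 5% of frames.

```python
def summarize_frame_times(frame_times_ms):
    """Return (average_fps, p95_fps) from a list of per-frame times in ms."""
    if not frame_times_ms:
        raise ValueError("no frames recorded")
    # Average FPS is total frames over total time, not the mean of per-frame FPS.
    total_s = sum(frame_times_ms) / 1000.0
    average_fps = len(frame_times_ms) / total_s
    # 95th-percentile frame time: the threshold the slowest 5% of frames exceed.
    ordered = sorted(frame_times_ms)
    idx = min(int(0.95 * len(ordered)), len(ordered) - 1)
    p95_fps = 1000.0 / ordered[idx]
    return average_fps, p95_fps

# Example: mostly 10 ms frames (100 fps) with a handful of 25 ms hitches.
times = [10.0] * 95 + [25.0] * 5
avg, p95 = summarize_frame_times(times)
```

Note how a few hitches barely move the average but drag the 95th-percentile figure down sharply, which is why reviews report both.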

All of our benchmark results can also be found in our benchmark engine, Bench.

Ashes Classic IGP Low Medium High
Average FPS
95th Percentile

As a game that was designed from the get-go to punish CPUs and showcase the benefits of DirectX 12-style APIs, Ashes is one of our more CPU-sensitive tests. Above 1080p results still start running together due to GPU limits, but at or below that, we get some useful separation. In which case what we see is that the 9900K ekes out a small advantage, putting it in the lead and with the 9700K right behind it.

Notably, the game doesn’t scale much from 1080p down to 720p. Which leads me to suspect that we’re looking at a relatively pure CPU bottleneck, a rarity in modern games. In which case it’s both good and bad for Intel’s latest CPU; it’s definitely the fastest thing here, but it doesn’t do much to separate itself from the likes of the 8700K, holding just a 4% advantage at 1080p. This being despite its frequency and core count advantage. So assuming this is not in fact a GPU limit, then it means we may be encroaching on another bottleneck (memory bandwidth?), or maybe the practical frequency gains on the 9900K just aren’t all that much here.

But if nothing else, the 9900K and even the 9700K do make a case for themselves here versus the 9600K. Whether it’s the cores or the clockspeeds, there’s a 10% advantage for the faster processors at 1080p.

Strange Brigade is based in 1903’s Egypt, and follows a story very similar to that of the Mummy film franchise. This particular third-person shooter is developed by Rebellion Developments, which is more widely known for games such as the Sniper Elite and Alien vs Predator series. The game follows the hunt for Seteki the Witch Queen, who has arisen once again, and the only ‘troop’ who can ultimately stop her. Gameplay is cooperative-centric, with a wide variety of levels and many puzzles which need solving by the British colonial Secret Service agents sent to put an end to her reign of barbarity and brutality.

The game supports both the DirectX 12 and Vulkan APIs and houses its own built-in benchmark which offers various options up for customization including textures, anti-aliasing, reflections, draw distance and even allows users to enable or disable motion blur, ambient occlusion and tessellation among others. AMD has boasted previously that Strange Brigade is part of its Vulkan API implementation offering scalability for AMD multi-graphics card configurations.

AnandTech CPU Gaming 2019 Game List
Game Genre Release Date API IGP Low Med High
Strange Brigade* FPS Aug 2018 DX12 / Vulkan 720p Low 1080p Medium 1440p High 4K Ultra
*Strange Brigade is run in DX12 and Vulkan modes

All of our benchmark results can also be found in our benchmark engine, Bench.

Strange Brigade DX12 IGP Low Medium High
Average FPS
95th Percentile


Strange Brigade Vulkan IGP Low Medium High
Average FPS
95th Percentile

Strange Brigade is another game that’s hard to tease CPU results out of at default settings. We’re clearly GPU-limited at 1080p medium, and have to drop down to 720p low to spread apart the CPUs. Once we do, the 9900K takes the lead, with the 9700K right behind it. Here Intel’s latest-gen flagship is still working hard to offer more than a 5% performance advantage over last year’s 8700K. Also, did I mention that everything faster than a 7700K is delivering 400fps or better?

The highly anticipated iteration of the Grand Theft Auto franchise hit the shelves on April 14th 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA doesn’t provide graphical presets, but opens up the options to users and extends the boundaries by pushing even the hardest systems to the limit using Rockstar’s Advanced Game Engine under DirectX 11. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, when cranked up to maximum it creates stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark. The in-game benchmark consists of five scenarios: four short panning shots with varying lighting and weather effects, and a fifth action sequence that lasts around 90 seconds. We use only the final part of the benchmark, which combines a flight scene in a jet followed by an inner city drive-by through several intersections followed by ramming a tanker that explodes, causing other cars to explode as well. This is a mix of distance rendering followed by a detailed near-rendering action sequence, and the title thankfully spits out frame time data.

AnandTech CPU Gaming 2019 Game List
Game Genre Release Date API IGP Low Med High
Grand Theft Auto V Open World Apr 2015 DX11 720p Low 1080p High 1440p Very High 4K Ultra

There are no presets for the graphics options on GTA; the user can adjust options such as population density and distance scaling on sliders, and others such as texture/shadow/shader/water quality from Low to Very High. Other options include MSAA, soft shadows, post effects, shadow resolution, and extended draw distance. There is a handy option at the top which shows how much video memory the settings are expected to consume, with obvious repercussions if a user requests more video memory than is present on the card (although there’s no obvious indication if you have a low-end GPU with lots of video memory, like an R7 240 4GB).

All of our benchmark results can also be found in our benchmark engine, Bench.

GTA 5 IGP Low Medium High
Average FPS
95th Percentile


The latest title in Ubisoft's Far Cry series lands us right into the unwelcoming arms of an armed militant cult in Montana, one of the many middles-of-nowhere in the United States. With a charismatic and enigmatic adversary, gorgeous landscapes of the northwestern American flavor, and lots of violence, it is classic Far Cry fare. Graphically intensive in an open-world environment, the game mixes in action and exploration.

Far Cry 5 does support Vega-centric features with Rapid Packed Math and Shader Intrinsics. Far Cry 5 also supports HDR (HDR10, scRGB, and FreeSync 2). We use the in-game benchmark for our data, and report the average/minimum frame rates.

AnandTech CPU Gaming 2019 Game List
Game Genre Release Date API IGP Low Med High
Far Cry 5 FPS Mar 2018 DX11 720p Low 1080p Normal 1440p High 4K Ultra

All of our benchmark results can also be found in our benchmark engine, Bench.

Far Cry 5 IGP Low High
Average FPS
Minimum FPS


The latest instalment of the Tomb Raider franchise does less rising and lurks more in the shadows with Shadow of the Tomb Raider. As expected, this action-adventure follows Lara Croft, the franchise’s main protagonist, as she muscles through Mesoamerican and South American regions looking to stop a Mayan apocalypse she herself unleashed. Shadow of the Tomb Raider is the direct sequel to Rise of the Tomb Raider; developed by Eidos Montreal and Crystal Dynamics and published by Square Enix, it hit shelves across multiple platforms in September 2018. This title effectively closes the Lara Croft Origins story, and received critical acclaim upon release.

The integrated Shadow of the Tomb Raider benchmark is similar to that of the previous game Rise of the Tomb Raider, which we have used in our previous benchmarking suite. The newer Shadow of the Tomb Raider uses DirectX 11 and 12, with this particular title being touted as having one of the best implementations of DirectX 12 of any game released so far.

AnandTech CPU Gaming 2019 Game List
Game Genre Release Date API IGP Low Med High
Shadow of the Tomb Raider Action Sep 2018 DX12 720p Low 1080p Medium 1440p High 4K Highest

All of our benchmark results can also be found in our benchmark engine, Bench.

SoTR IGP Low Medium High
Average FPS
95th Percentile


Aside from keeping up-to-date on the Formula One world, F1 2017 added HDR support, which F1 2018 has maintained; otherwise, we should see any newer versions of Codemasters' EGO engine find their way into future F1 titles. Graphically demanding in its own right, F1 2018 keeps a useful racing-type graphics workload in our benchmarks.

We use the in-game benchmark, set to run on the Montreal track in the wet, driving as Lewis Hamilton from last place on the grid. Data is taken over a one-lap race.

AnandTech CPU Gaming 2019 Game List
Game Genre Release Date API IGP Low Med High
F1 2018 Racing Aug 2018 DX11 720p Low 1080p Med 4K High 4K Ultra

All of our benchmark results can also be found in our benchmark engine, Bench.

F1 2018 IGP Low Medium High
Average FPS
95th Percentile


Despite being the ultimate joke at any bring-your-own-computer event, gaming on integrated graphics can ultimately be as rewarding as the latest mega-rig that costs the same as a car. The desire for strong integrated graphics in various shapes and sizes has waxed and waned over the years, with Intel relying on its latest ‘Gen’ graphics architecture while AMD happily puts its Vega architecture into the market to swallow up all the low-end graphics card sales. With Intel poised to make an attack on graphics in the next few years, it will be interesting to see how the graphics market develops, especially integrated graphics.

For our integrated graphics testing, we take our ‘IGP’ category settings for each game and loop the benchmark for five minutes apiece, taking as much data as we can from our automated setup.

IGP: World of Tanks, Average FPS
IGP: Final Fantasy XV, Average FPS


TDP or not the TDP, That is The Question

As shown above, Intel has given each of these processors a Thermal Design Power of 95 Watts. As mainstream processors have grown over the last two years, this magic value has been at the center of complaints from a number of irate users.

By Intel’s own definitions, the TDP is an indicator of the cooling performance required for a processor to maintain its base frequency. In this case, if a user can only cool 95W, they can expect to realistically get only 3.6 GHz on a shiny new Core i9-9900K. That magic TDP value does not take into account any turbo values, even if the all-core turbo (such as 4.7 GHz in this case) is way above that 95W rating.

In order to make sense of this, Intel uses a series of variables called Power Levels: PL1, PL2, and PL3.

That slide is a bit dense, so we should focus on the graph on the right. This is a graph of power against time.

Here we have four horizontal lines from bottom to top: cooling limit (PL1), sustained power delivery (PL2), battery limit (PL3), and power delivery limit.

The bottom line, the cooling limit, is effectively the TDP value. Here the power (and frequency) is limited by the cooling at hand. It is the lowest sustainable frequency for the cooling, so for the most part TDP = PL1.  This is our ‘95W’ value.

The PL2 value, or sustained power delivery, is what amounts to the turbo. This is the maximum sustainable power that the processor can take until we start to hit thermal issues. When a chip goes into a turbo mode, sometimes briefly, this is the part that is relied upon. The value of PL2 can be set by the system manufacturer, however Intel has its own recommended PL2 values.

In this case, for the new 9th Generation Core processors, Intel has set the PL2 value to 210W. This is essentially the power required to hit the peak turbo on all cores, such as 4.7 GHz on the eight-core Core i9-9900K. So users can completely forget the 95W TDP when it comes to cooling. If a user wants those peak frequencies, it’s time to invest in something capable and serious.
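The PL1/PL2 behaviour described above can be sketched as a small simulation. This is a simplified model under stated assumptions: a 28-second averaging window and a plain exponentially weighted moving average of package power; real firmware uses a configurable time constant (Tau) that board vendors can and do change, so the numbers here are illustrative only.

```python
# Assumed values; PL1/PL2 match the article, Tau is board-vendor configurable.
PL1 = 95.0    # W, sustained limit (the "TDP")
PL2 = 210.0   # W, short-term turbo limit
TAU = 28.0    # s, assumed averaging window

def step(avg_power, requested_power, dt=1.0):
    """Advance the simplified power-limit model by dt seconds.

    The CPU may draw up to PL2 while the running average of package power
    stays at or below PL1; once the average exceeds PL1, draw is clamped
    back to PL1 (the sustained "TDP" level).
    """
    allowed = PL2 if avg_power <= PL1 else PL1
    drawn = min(requested_power, allowed)
    alpha = dt / TAU
    avg_power = (1 - alpha) * avg_power + alpha * drawn  # EWMA of power
    return avg_power, drawn

# An all-core load requests 210 W indefinitely: turbo holds for a short
# while, then the chip falls back to its 95 W sustained level.
avg, trace = 0.0, []
for _ in range(120):
    avg, drawn = step(avg, 210.0)
    trace.append(drawn)
```

In this toy model the chip sustains 210 W for roughly the first 17 seconds before settling at 95 W, which is exactly the turbo-then-throttle pattern the text describes.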

Luckily, we can confirm all this in our power testing.

For our testing, we use POV-Ray as our load generator then take the register values for CPU power. This software method, for most platforms, includes the power split between the cores, the DRAM, and the package power. Most users cite this method as not being fully accurate, however compared to system testing it provides a good number without losses, and it forms the basis of the power values used inside the processor for its various functions.
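On Linux, "register values for CPU power" of the kind described above are typically exposed through the RAPL energy counters in the powercap sysfs interface. The sketch below shows the idea: sample the microjoule counter twice and divide by the interval. The path is an assumption (RAPL domains vary by platform), and counter wraparound is ignored for brevity.

```python
import time

# Package-domain RAPL energy counter on a typical Linux system (assumed path).
RAPL_PKG = "/sys/class/powercap/intel-rapl:0/energy_uj"

def power_from_counters(e0_uj, e1_uj, interval_s):
    """Average power in watts from two microjoule energy-counter samples."""
    return (e1_uj - e0_uj) / 1e6 / interval_s

def sample_package_power(interval_s=1.0):
    """Sample the package energy counter twice and return average watts."""
    with open(RAPL_PKG) as f:
        e0 = int(f.read())
    time.sleep(interval_s)
    with open(RAPL_PKG) as f:
        e1 = int(f.read())
    return power_from_counters(e0, e1, interval_s)
```

A counter that advances by 95,000,000 µJ in one second corresponds to a 95 W package draw, which is how a sustained-load figure like the ones below is obtained.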

Starting with the easy one, maximum CPU power draw.

Power (Package), Full Load

Focusing on the three new Intel CPUs, the obvious outlier here is the Core i9-9900K. Despite Intel’s 210W PL2 value, our processor actually goes beyond this at full load, hitting 221W. At this level, the CPU is running all eight cores and sixteen threads at 4.7 GHz. If we take out hyperthreading and go to 4.6 GHz all-core, then this is the Core i7-9700K, which hits 158W. This is still far and above that ‘TDP’ rating noted above.

However, taking off two cores and dropping to 4.3 GHz all-core gives us the situation of the Core i5-9600K. It looks like this fits in around 81-82W, below that TDP marker.

Should users be interested, in our testing at 4C/4T and 3.0 GHz, the Core i9-9900K only hit 23W power. Doubling the cores and adding another 50%+ to the frequency causes an almost 10x increase in power consumption. When Intel starts pushing those frequencies, it needs a lot of juice.
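A back-of-the-envelope calculation shows why that near-10x jump is plausible. Dynamic CMOS power scales roughly as P ∝ cores · f · V², and voltage itself must rise with frequency; the voltages below are assumed values for illustration, not measurements from the chip.

```python
def dynamic_power_ratio(cores_a, f_ghz_a, v_a, cores_b, f_ghz_b, v_b):
    """Ratio of dynamic CMOS power between two operating points (P ∝ N·f·V²)."""
    return (cores_b * f_ghz_b * v_b ** 2) / (cores_a * f_ghz_a * v_a ** 2)

# 4C/4T at 3.0 GHz (~0.9 V assumed) versus 8C/16T at 4.7 GHz (~1.3 V assumed).
ratio = dynamic_power_ratio(4, 3.0, 0.9, 8, 4.7, 1.3)
```

Under these assumed voltages, dynamic power alone accounts for roughly a 6.5x increase; leakage power, which also rises steeply with voltage and temperature, plausibly covers the rest of the observed gap.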

If we break out the 9900K into how much power is consumed as we load up the threads, the results look very linear.

This is as we load two threads onto one core at a time. The processor slowly adds power to the cores and only when all threads are loaded does the full PL2 come into effect.

Comparing to the other two ‘95W’ processors, we can see that the Core i9-9900K pushes more power as more cores are loaded. Despite Intel officially giving all three the same TDP at 95W, and the same PL2 at 210W, there are clear differences due to the fixed turbo tables embedded in each BIOS.

So is TDP Pointless? Yes, But There is a Solution

If you believe that TDP is the peak power draw of the processor under default scenarios, then yes, TDP is pointless, and technically it has been for generations. However, under the miasma of a decade of quad-core processors, most parts didn’t even reach the TDP rating under full load – it wasn’t until we started getting higher core count parts, at the same or higher frequency, that it started becoming an issue.

But fear not, there is a solution. Or at least I want to offer one to both Intel and AMD, to see if they will take me up on the offer. The solution here is to offer two TDP ratings: a TDP and a TDP-Peak. In Intel lingo, this is PL1 and PL2, but basically the TDP-Peak takes into account the ‘all-core’ turbo. It doesn’t have to be covered under warranty (because as of right now, turbo is not), but it should be an indication for the nature of the cooling that a user needs to purchase if they want the best performance. Otherwise it’s a case of fumbling in the dark.

With the upgraded thermal interface between the processor and the heatspreader, moving from paste to solder, Intel is betting that these overclockable processors will overclock better than previous generations. We’ve only had time to test the Core i9-9900K and i7-9700K on this front, so we took them for a spin.

Our overclocking methodology is simple. We set the Load Line Calibration to static (or level 1 for this ASRock Z370 motherboard), set the frequency to 4.5 GHz, the voltage to 1.000 volts, and run our tests. If successfully stable, we record the power and performance, and then increase the CPU multiplier. If the system fails, we increase the voltage by +0.025 volts. The overclocking ends when the temperatures get too high (85C+).
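The methodology above amounts to a simple search loop, sketched below. The stability-test and temperature-read helpers passed in are hypothetical stand-ins for the real benchmark runs and sensor polling; only the loop structure mirrors the procedure described in the text.

```python
TEMP_LIMIT_C = 85.0

def find_max_overclock(run_stability_test, read_peak_temp,
                       start_mhz=4500, start_volts=1.000,
                       step_mhz=100, step_volts=0.025):
    """Raise the multiplier while stable, add voltage on failure, stop at 85C.

    run_stability_test(freq_mhz, volts) -> bool and read_peak_temp() -> float
    are hypothetical stand-ins for real benchmark/sensor plumbing.
    """
    freq, volts = start_mhz, start_volts
    stable_points = []
    while True:
        if run_stability_test(freq, volts):
            stable_points.append((freq, volts))
            freq += step_mhz            # stable: push the multiplier up
        else:
            volts += step_volts         # unstable: feed it more voltage
        if read_peak_temp() >= TEMP_LIMIT_C:
            break                       # thermal ceiling reached, stop here
    return stable_points
```

The last entry in the returned list is the highest stable frequency/voltage pair found before the thermal limit ended the run.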

With our new test suite come new overclocking features. As mentioned on the previous page, our software load for power measurement is POV-Ray, which can thrash a processor quite harshly. POV-Ray also does a good job on stability, but is not a substantial enough test; for that we use our Blender workload, which pushes the cores and the memory, and lasts about 5 minutes on an eight-core processor.

Results as follows:

For the Core i7-9700K, we hit 5.3 GHz very easily, for a small bump in power and temperature. At 5.4 GHz we could boot into the operating system, but it was in no way stable – we were ultimately voltage/temperature limited in this case. But an eight core, eight thread 5.3 GHz CPU at 180W for $374? Almost unimaginable a year ago.

We're currently uploading the tests from our Core i9-9900K.

When Intel announced the new processor lineup, it billed the Core i9-9900K as the ‘world’s best gaming processor’. Here’s Intel’s Anand Srivatsa, showcasing the new packaging for this eight core, sixteen thread, 5.0 GHz giant:

In actual fact, the packaging is very small. Intel didn’t supply us with this upgraded retail version of the box, but we were sampled with a toasty Core i9-9900K inside. We sourced the i7-9700K and i5-9600K from Intel’s partners for this review.

With the claim of ‘world’s best gaming processor’, it was clear that this needed to be put to the test. Intel commissioned (paid for) a report into the processor’s performance by a third party in order to obtain data; unfortunately that report had numerous issues, particularly in how the competing chips were benchmarked. Here at AnandTech, we’ll give you the right numbers.

For our gaming tests this time around, we put each game through four different resolutions and scenarios, labelled IGP (for 720p), Low (for 1080p), Medium (for 1440p to 4K), and High (for 4K and above). Here’s a brief summary of results:

  • World of Tanks: Best CPU at IGP, Low, Medium, and top class in High
  • Final Fantasy XV: Best CPU or near top in all
  • Shadow of War: Best CPU or near top in all
  • Civilization VI: Best CPU at IGP, a bit behind at 4K, top class at 8K/16K
  • Ashes Classic: Best CPU at IGP, Low, top class at Medium, mid-pack at 4K
  • Strange Brigade DX12/Vulkan: Best CPU or near top in all
  • Grand Theft Auto V: Best CPU or near top in all
  • Far Cry 5: Best CPU or near top in all
  • Shadow of the Tomb Raider: Near top in all
  • F1 2018: Best CPU or near top in all

There’s no way around it: in almost every scenario, the 9900K was either the best processor or within variance of the best (except Ashes at 4K). Intel has built the world’s best gaming processor (again).

On our CPU tests, the i9-9900K hit a lot of the synthetics higher than any other mainstream processor. In some of our real world tests, such as application loading or web performance, it lost out from time to time to the i7 and i5 due to having hyper-threading, as those tests tend to prefer threads that have access to the full core resources. For memory limited tests, the high-end desktop platforms provide a better alternative.

While there’s no specific innovation in the processors driving the performance, Intel re-checked the box for STIM, last used on the mainstream in Sandy Bridge. The STIM implementation has enabled Intel to push the frequency of these parts. It was always one of the tools the company had in its back pocket, and many will speculate as to the reasons why it used that tool at this point in time.

But overall, due to the frequency and core count push, the three new 9th Generation processors sit at the top of most of our mixed workload tests, and set a new standard in Intel’s portfolio for being a jack of all trades. If a user has a variable workload and wants to squeeze out performance, these new processors should get you there.

So now, if you are the money-no-object kind of gamer, this is the processor for you. But it’s not a processor for everyone, and that comes down to cost and competition.

At $488 MSRP, plus $80-$120 for a decent cooler or $200 for a custom loop, it’s going to be out of range for almost all builds south of $1500, where the GPU matters most. Intel’s own i5-9600K is under half the cost with only two fewer cores, and AMD’s R7 2700X is very competitive in almost every test; while they might not be the best, they’re more cost-effective.

The outlandish flash of the cash goes on the Core i9-9900K. The smart money ends up on the 9700K, 9600K, or the 2700X. For the select few, money is no object. For the rest of us, especially when gaming at 1440p and higher settings where the GPU is the bigger bottleneck, there are plenty of processors that do just fine, and are a bit lighter on the power bill in the process.
