Dissecting Intel's EPYC Benchmarks: Performance Through the Lens of Competitive Analysis
by Johan De Gelas & Ian Cutress on November 28, 2017 9:00 AM EST
Database Performance & Variability
Results are very different with respect to transactional database benchmarks (HammerDB & OLTP). Intel's 8160 has an advantage of 22 to 29%, which is very similar to what we saw in our own independent benchmarking.
One of the main reasons is data locality: data is distributed over the many NUMA nodes, causing extra latency for data accesses. Especially when data is locked, this can cause performance degradation.
Intel measured this with its own Memory Latency Checker (version 3.4), but you do not have to rely on Intel alone: AMD reported comparable numbers at the Linley Processor Conference, and our own measurements line up as well.
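For readers who want to sanity-check the local-versus-remote gap themselves, a simple pointer chase approximates what tools like Intel's Memory Latency Checker measure. The sketch below is illustrative only and is not MLC: it assumes a Linux system with libnuma installed (build with gcc -O2 -lnuma), pins the measuring thread to NUMA node 0, and times dependent loads from a buffer on node 0 versus one on the highest-numbered node. The buffer size and node choices are arbitrary.

```c
/* Rough local-vs-remote NUMA latency sketch. NOT Intel's MLC;
 * node numbers and buffer size are illustrative. */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

enum { NPTRS = 1 << 23 };   /* 8M pointers = 64 MiB buffer */
enum { STEPS = 1 << 24 };   /* 16M dependent loads */

static double chase_ns(void **buf)
{
    /* Link buf into one random cycle so the hardware prefetcher
     * cannot predict the next address. */
    size_t *idx = malloc(NPTRS * sizeof *idx);
    for (size_t i = 0; i < NPTRS; i++) idx[i] = i;
    for (size_t i = NPTRS - 1; i > 0; i--) {      /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (size_t i = 0; i < NPTRS - 1; i++)
        buf[idx[i]] = &buf[idx[i + 1]];
    buf[idx[NPTRS - 1]] = &buf[idx[0]];
    free(idx);

    void **p = (void **)buf[0];   /* every slot lies on the cycle */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long s = 0; s < STEPS; s++)
        p = (void **)*p;          /* each load depends on the last */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    if (!p) puts("");             /* keep the loop from being optimized out */

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / STEPS;
}

int main(void)
{
    if (numa_available() < 0) {
        fputs("NUMA not supported on this system\n", stderr);
        return 1;
    }
    numa_run_on_node(0);          /* pin the measuring thread to node 0 */
    int last = numa_max_node();

    void **local  = numa_alloc_onnode(NPTRS * sizeof(void *), 0);
    void **remote = numa_alloc_onnode(NPTRS * sizeof(void *), last);
    if (!local || !remote) { fputs("allocation failed\n", stderr); return 1; }

    printf("node 0 -> node 0 (local) : %6.1f ns/load\n", chase_ns(local));
    printf("node 0 -> node %d (remote): %6.1f ns/load\n", last, chase_ns(remote));

    numa_free(local,  NPTRS * sizeof(void *));
    numa_free(remote, NPTRS * sizeof(void *));
    return 0;
}
```

On a multi-socket EPYC or Xeon system the remote figure should come out noticeably higher than the local one; the absolute values will vary with memory speed and topology.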
There is more: Intel's engineers noticed quite a bit of performance variation between different runs.
Intel's engineers claim that what they reported in the first graph on this page is, in fact, the best of 10 runs. Between those 10 runs there was said to be a lot of variability: ignoring the outlier in run 2, there were several occasions where performance was around 60% of the best reported value. Although we cannot confirm that the performance of the EPYC system varies by precisely that much, we have definitely seen more variation in our EPYC benchmarks than on a comparable Intel system.
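To put phrases like "best of 10 runs" and "around 60% of the best" into perspective, run-to-run variability is easy to summarize once per-run scores are available. A minimal sketch, using made-up placeholder scores rather than Intel's actual data:

```c
/* Summarize run-to-run variability the way the article describes:
 * best of N runs, plus each run as a percentage of that best.
 * The scores below are hypothetical placeholders, NOT Intel's data. */
#include <stdio.h>

int main(void)
{
    double runs[] = { 100, 42, 61, 97, 58, 88, 63, 95, 60, 99 };
    int n = sizeof runs / sizeof runs[0];

    double best = runs[0], worst = runs[0], sum = 0;
    for (int i = 0; i < n; i++) {
        if (runs[i] > best)  best  = runs[i];
        if (runs[i] < worst) worst = runs[i];
        sum += runs[i];
    }
    printf("best %.0f, worst %.0f, mean %.1f\n", best, worst, sum / n);
    for (int i = 0; i < n; i++)
        printf("run %2d: %5.1f%% of best\n", i + 1, 100.0 * runs[i] / best);
    return 0;
}
```

Reporting only the best of N hides exactly this spread, which is why the min/max and per-run breakdown matter when comparing platforms.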
Comments
Johan Steyn - Monday, December 18, 2017 - link
I have stated before that Anandtech is on Intel's payroll. You could see it especially with the first Threadripper review, which was horrendous to say the least. This article goes the same route. You see, two people can say the same thing but project a completely different picture. I do not disagree that Intel has its strengths over EPYC, but this article basically just agrees with Intel's presentation. Ha ha, that would have been funny, but it is not. Intel is a corrupt company, and Anandtech is missing the point on how they present their "facts." I now very rarely read anything Anandtech publishes. In the 90's they were excellent - those were the days...
Jumangi - Tuesday, November 28, 2017 - link
Maybe you have heard of Google... or Facebook. Not only do they build, but they design their own rack systems to suit their massive needs.
Samus - Wednesday, November 29, 2017 - link
Even mom and pop shops shouldn't have servers built from scratch. Who's going to support and validate that hardware for the long haul? HP and Dell have the best servers in my opinion. Top to bottom. Lenovo servers are at best just rehashes of their crappy workstations. If you want to get exotic (I don't), one could consider Supermicro... friends in the industry have always mentioned good luck with them, and good support. But my experience is with the big three.
Ratman6161 - Wednesday, November 29, 2017 - link
You are both wrong in my experience. These days the software that runs on servers usually costs more (often by a wide margin) than the hardware it runs on. I was once running a software package the company paid $320K for on a VM environment of five two-socket Dell servers and a SAN, where the total hardware cost was $165K. But that was for the whole VM environment, which ran many other servers besides the two that ran this package. Even the $165K for the VM environment included VMware licensing, so that was part software too. Considering the resources the two VMs running this package used, the total cost for the project was probably somewhere around 10% hardware and 90% software licensing.
For my particular usage, the virtualization numbers are the most important, so if we accept these numbers, Intel seems to be the way to go. The $10K CPUs seem pretty outlandish though. For virtualization purposes it seems like there might be more bang for the buck in going with the 8160 and just adding more hosts. Would have to get down to actually doing the math to decide on that one.
meepstone - Thursday, December 7, 2017 - link
So I'm not sure who has the bigger e-peen between eek2121 and CajunArson. The drama in the comments was more entertaining than the article!
ddriver - Tuesday, November 28, 2017 - link
Take a chill pill, you intel shill :) Go over to servethehome and check results from someone who is not paid to pimp intel. Epyc enjoys an ample lead against similarly priced xeons.
The only niche where it is at a disadvantage is the low-core-count, high-clock-speed SKUs, simply because for some inexplicable reason amd decided not to address that important market.
Lastly, nobody buys those 10+k $$$ xeons with their own money. Those are bought exclusively with "other people's money" by people who don't care about purchase value, because they have deals with intel that put a percentage of that money right back into their pockets, which is their true incentive. If they could put that money in their pockets directly, they would definitely seek the best purchase value rather than going through intel to essentially launder it for them.
iwod - Tuesday, November 28, 2017 - link
This. Go to servethehome and make up your own mind.
lazarpandar - Tuesday, November 28, 2017 - link
It's one thing to sound like a dick, it's another thing to sound like a dick and be wrong at the same time.
mkaibear - Tuesday, November 28, 2017 - link
Er, yes, if you want just 128GB of RAM it may cost you $1,500, but if you actually want to use the capacity of those servers you'll want a good deal more than that. The server mentioned in the Intel example can take 1.5TB of ECC RAM, at a total cost of about $20K - at which point the cost of the CPU is much less of an impact.
As CajunArson said, a full load of RAM on one of these servers is expensive. Your response of "yes well if you only buy 128Gb of RAM it's not that expensive", while true, is a tad asinine - you're not addressing the point he made.
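A quick back-of-the-envelope illustrates the point. Using the rough prices quoted in this thread (two Platinum-class CPUs at about $10K each, $1,500 for 128GB of RAM, roughly $20K for a full 1.5TB), the CPUs' share of the CPU-plus-RAM bill drops from over 90% to about 50% once the box is fully populated. A toy calculation, with all prices illustrative rather than vendor quotes:

```c
/* Toy cost-share calculation with the rough figures from this thread:
 * two ~$10K CPUs versus 128GB (~$1,500) or a full 1.5TB (~$20K) of
 * ECC RAM. All prices are illustrative, not vendor quotes, and the
 * total ignores chassis, storage, and networking. */
#include <stdio.h>

int main(void)
{
    double cpu_cost = 2 * 10000.0;               /* dual high-end Xeon */
    double configs[][2] = { { 128,  1500.0 },    /* { RAM in GB, RAM cost } */
                            { 1536, 20000.0 } };

    for (int i = 0; i < 2; i++) {
        double ram_gb = configs[i][0], ram_cost = configs[i][1];
        double total  = cpu_cost + ram_cost;
        printf("%4.0f GB RAM: CPUs are %4.1f%% of a $%.0f CPU+RAM bill\n",
               ram_gb, 100.0 * cpu_cost / total, total);
    }
    return 0;
}
```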
eek2121 - Tuesday, November 28, 2017 - link
Not every workload requires that the RAM be topped off. We are currently in the middle of building our own private cloud on Hyper-V to replace our AWS presence, which involves building out at multiple datacenters around the country. Our servers have half a terabyte of RAM. Even with that much RAM, CPUs like this would still be (and are) a major factor in the overall cost of the server. The importance for our use case is the ability to scale, not the ability to cram as many VMs into one machine as possible. 2 servers with half a terabyte of RAM are far more valuable to us than 1 server with 1-1.5 terabytes due to redundancy.