The Intel Xeon D Review: Performance Per Watt Server SoC Champion?
by Johan De Gelas on June 23, 2015 8:35 AM EST
The days when Intel neglected the low end of the server market are over. The most affordable Xeon used to be the Xeon E3: essentially a desktop CPU with a few server features enabled, and with a lot of limitations unless you could afford the E5 Xeons. The gap between the Xeon E3 and E5, both in performance and price, is huge. For example, a Xeon E5 can address up to 768 GB of memory, while the Xeon E3 tops out at 32 GB. A Xeon E5 server can contain up to 36 cores, whereas the Xeon E3 is limited to a paltry four. And the list goes on: most RAS and virtualization features are missing from the E3, and it has a much smaller L3 cache. On those terms, the Xeon E3 simply did not feel very "pro".
Luckily, the customers in the ever-expanding hyperscale market (Facebook, Amazon, Google, Rackspace and so on) buy Xeons at a very large scale and have been demanding a better chip than the Xeon E3. A few months ago the wait ended: the Xeon D fills the gap between the Xeon E3 and the Xeon E5. Combining the latest 14 nm Broadwell cores, a dual 10 Gigabit Ethernet interface, a 24-lane PCIe 3.0 root complex, and USB and SATA controllers in one integrated SoC, the Xeon D looks excellent on paper for everyone who does not need the core count of a Xeon E5 server, but who simply needs "more" than the Xeon E3.
Many news editors could not resist calling the Xeon D a response to the ARM server threat. After all, ARM has stated more than once that its ambition is to be competitive in the scale-out server market. The term "micro server" is hard to find on the PowerPoint slides these days; the "scale-out" market is a lot cooler, larger and more profitable. But the comments of the Facebook engineers quickly bring us back to reality:
"Introducing "Yosemite": the first open source modular chassis for high-powered microservers"
"We started experimenting with SoCs about two years ago. At that time, the SoC products on the market were mostly lightweight, focusing on small cores and low power. Most of them were less than 30W. Our first approach was to pack up to 36 SoCs into a 2U enclosure, which could become up to 540 SoCs per rack. But that solution didn't work well because the single-thread performance was too low, resulting in higher latency for our web platform. Based on that experiment, we set our sights on higher-power processors while maintaining the modular SoC approach."
It is pretty simple: the whole "low power, simple core" philosophy did not work very well in the real scale-out (or "high-powered micro server") market. And the reality is that the current SoCs with an ARM ISA do not deliver the necessary per-core performance: they are still micro server SoCs, at best competing with the Atom C2750. So for these big players there is no ARM SoC competition in the scale-out market until something better arrives.
Two questions remain: how much better is the 2 GHz Xeon D than the >3 GHz Xeon E3? And is it an interesting alternative for those who do not need the high-end Xeon E5?
90 Comments
extide - Tuesday, June 23, 2015 - link
That's ECC Registered -- not sure if it will take that, but probably, although you don't need registered, or ECC.
nils_ - Wednesday, June 24, 2015 - link
If you want transcoding, you might want to look at the Xeon E3 v4 series instead, which comes with Iris Pro graphics. Should be a lot more efficient.
bernstein - Thursday, June 25, 2015 - link
For using ECC UDIMMs, a cheaper option would be an i3 in a Xeon E3 board.
psurge - Tuesday, June 23, 2015 - link
Has Intel discussed their Xeon-D roadmap at all? I'm wondering in particular if 2x25GbE is coming, whether we can expect an SoC with higher clock speeds or more cores (at a higher TDP), and what the timeframe is for Skylake-based cores.
nils_ - Tuesday, June 23, 2015 - link
Is 25GbE even a standard? I've heard about 40GbE and even 56GbE (matching InfiniBand), but not 25.
psurge - Tuesday, June 23, 2015 - link
It's supposed to be a more cost-effective speed upgrade to 10GbE than 40GbE (it uses a single 25Gb/s SerDes lane, as used in 100GbE, vs four 10Gb/s lanes), and IIRC is being pushed by large datacenter shops like Google and Microsoft. There's more info at http://25gethernet.org/. I'm not sure where things are in the standardization process.
nils_ - Wednesday, June 24, 2015 - link
It also has an interesting property when it comes to using a breakout cable of sorts: you could connect 4 servers to 1 100GbE port (this is already possible with 40GbE, which can be split into 4x10GbE).
JohanAnandtech - Wednesday, June 24, 2015 - link
Considering that the Xeon D must find a home in low-power, high-density servers, I think dual 10 Gbit will be standard for a while. Any idea what a 25/40 Gbit PHY would consume? Those 10 Gbit PHYs already need 3 W at idle, probably around 6-8 W at full speed. That is a large chunk of the power budget in a micro/scale-out server.
psurge - Wednesday, June 24, 2015 - link
No I don't, sorry. But I thought SFP+ with SR optics (10GBASE-SR) was < 1W per port, and that SFP+ direct attach (10GBASE-CR) was not far behind? 10GBASE-T is a power hog...
pjkenned - Tuesday, June 23, 2015 - link
Hey Johan - just re-read. A few quick thoughts: First off - great piece. You do awesome work. (This is Patrick @ ServeTheHome.com btw)
Second - one thing should probably be a bit clearer - you were not using a Xeon D-1540. It was an ES Broadwell-DE version at 2.0GHz. The shipping product has 100MHz higher clocks on both base and max turbo. I did see a 5% or so performance bump from the first ES version we tested to the shipping parts. The 2.0GHz parts are really close to shipping spec though. On both of my pre-release Xeon D systems and all of the post-release Xeon D systems, performance was nearly identical.
That will not change your conclusions, but it does make the actual Intel Xeon D-1540 a bit better than the one you tested. LMK if you want me to set aside some time on a full-speed Xeon D-1540 system for you.