Intel's Xeon D SiP (system-in-package) has turned out to be one of the more exciting launches this year in the server CPU space. We have already analyzed the Xeon D in detail in our review of the Supermicro SuperServer 5028D-TN4T. Almost all currently available Xeon D systems and motherboards are from Supermicro, but we now have another set of options from ASRock Rack.

The Xeon D family currently consists of two members:

  • Xeon D-1520 : 4C/8T Broadwell-DE x86 cores @ 2.2 GHz, 6 MB of L3 cache, 45W TDP
  • Xeon D-1540 : 8C/16T Broadwell-DE x86 cores @ 2.0 GHz, 12 MB of L3 cache, 45W TDP

ASRock Rack's Xeon D lineup consists of one board using the Xeon D-1520 and six boards using the Xeon D-1540. Customers have the option of going with either the mini-ITX (mITX) form factor or the micro-ATX (uATX) form factor. The mITX boards are all compatible with 1U rackmount chassis.

In addition to the motherboard size, the boards are differentiated by their LAN port options, PCIe slot configurations, additional storage ports via an LSI 3008 HBA, and USB 3.0 port arrangements. Unlike the mITX boards, all the uATX boards come with a COM port in the rear I/O. The following tables summarize the features of the various products in the ASRock Rack Xeon D lineup.

mITX Boards

Boards: D1520D4I, D1540D4I, D1540D4I-2L2T

SiP:
  D1520D4I: Intel Xeon D-1520
  D1540D4I, D1540D4I-2L2T: Intel Xeon D-1540
RAM: 4x DDR4 DIMM slots, 2133 / 1866 MHz RDIMMs (up to 128 GB)
PCIe Expansion Slots: 1x PCIe 3.0 x16
Storage Controllers:
  6x SATA III 6 Gbps from the integrated PCH in the Xeon D SiP (4x via mini-SAS connector, 1x with SATA DOM support)
  1x SATA III 6 Gbps from a Marvell 9172 controller (via the M.2 2280 slot)
LAN Controllers:
  D1520D4I, D1540D4I: 2x RJ45 1GbE (Intel i210)
  D1540D4I-2L2T: 2x RJ45 1GbE (Intel i210) + 2x RJ45 10GbE (Intel X557-AT2)
Board Management Controller: ASPEED AST2400
IPMI LAN Controller: 1x Realtek RTL8211E
Display Output: 1x D-Sub VGA
USB Ports: 2x USB 3.0 Type-A (rear I/O)


uATX Boards

Boards: D1540D4U-2T8R, D1540D4U-2O8R, D1540D4U-2T2O8R, D1540D4U-2L+

SiP: Intel Xeon D-1540
RAM: 4x DDR4 DIMM slots, 2133 / 1866 MHz RDIMMs (up to 128 GB)
PCIe Expansion Slots:
  D1540D4U-2T8R, -2O8R, -2T2O8R: 1x PCIe 3.0 x8 (x16 physical) + 1x PCIe 3.0 x8 (x8 physical)
  D1540D4U-2L+: 1x PCIe 3.0 x16 + 1x PCIe 3.0 x8
Storage Controllers:
  All boards: 6x SATA III 6 Gbps from the integrated PCH in the Xeon D SiP (4x via mini-SAS connector, 1x with SATA DOM support), plus 1x SATA III 6 Gbps from a Marvell 9172 controller (via the M.2 2280 slot)
  D1540D4U-2T8R, -2O8R, -2T2O8R: 8x SAS3 12 Gbps from an LSI 3008 HBA (via mini-SAS HD connector)
LAN Controllers:
  D1540D4U-2T8R: 2x RJ45 10GbE (Intel X550)
  D1540D4U-2O8R: 2x 10G SFP+ fiber
  D1540D4U-2T2O8R: 2x 10G SFP+ fiber + 2x RJ45 10GbE (Intel X540)
  D1540D4U-2L+: 2x RJ45 1GbE (Intel i350)
Board Management Controller: ASPEED AST2400
IPMI LAN Controller: 1x Realtek RTL8211E
Display Output: 1x D-Sub VGA
USB Ports: 2x USB 3.0 Type-A (rear I/O), 1x USB 3.0 Type-A (internal connector), 1x USB 3.0 header

These boards are ideal for networking and warm-storage appliances as well as micro-servers. Given the low-power nature of the Xeon D platform, some of them can also be useful in home lab settings for experimenting with virtualization, or even serve as the basis for high-end development machines.

Source: ASRock Rack

Comments

  • Billy Tallis - Saturday, October 31, 2015

    SO-DIMMs exist, though they can sometimes be a bit crowded for the high-capacity registered modules that get used with servers. You won't see anything significantly smaller than that, because it would force the memory bus to be narrower. DIMM slots have to be able to provide a lot more bandwidth than any SSD, and they have to do it with a simple enough connection that latency isn't adversely affected by high-level packet oriented protocols. That means they need a lot of carefully laid out wires.
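
    To put rough numbers on that (a back-of-envelope sketch in Python; the DDR4-2133 speed comes from these boards, while the ~2 GB/s figure is just an assumed value for a fast NVMe SSD):

        # Rough bandwidth comparison: one DDR4 memory channel vs. a fast NVMe SSD.
        # Assumed figures (illustrative): DDR4-2133 with a 64-bit data bus per
        # channel, and an SSD sustaining roughly 2 GB/s of sequential reads.

        ddr4_transfer_rate = 2133e6        # transfers per second (DDR4-2133)
        channel_width_bytes = 64 // 8      # 64-bit channel -> 8 bytes per transfer
        dimm_bandwidth = ddr4_transfer_rate * channel_width_bytes  # bytes/s

        ssd_bandwidth = 2e9                # ~2 GB/s for a fast NVMe drive

        print(f"DDR4-2133 channel : {dimm_bandwidth / 1e9:.1f} GB/s")  # ~17.1 GB/s
        print(f"NVMe SSD (approx.): {ssd_bandwidth / 1e9:.1f} GB/s")
        print(f"Ratio             : {dimm_bandwidth / ssd_bandwidth:.1f}x")

    And that is a single channel at JEDEC speeds, before counting multiple channels or the latency gap, which is far larger than the bandwidth gap.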
  • Gigaplex - Saturday, October 31, 2015

    HBM should be interesting on the Zen server APUs.
  • Samus - Sunday, November 1, 2015

    They've been talking about it for years, trying various solutions from RDRAM to FB-DIMMs to bring a high-speed serial interface to DDR memory.

    The problem is traditional memory controller interfaces are parallel, which is slow, so you need a lot of connections to keep the IO high. A few wide serial interfaces (or 8-wide, like what RDRAM used) would have similar throughput with fewer pins, but it is complex for a lot of reasons. The traces all have to be close to the same distance because at the speeds these serial interfaces work at, the difference between the physical distance (locations) of the DIMMs actually matters for timing the signal. This makes motherboards additionally complex to trace out and manufacture. There also has to be termination, although some technologies allow for self-termination via fused-termination detection, which is what a SAS multiplexer does; but granted, DIMM slots are a lot different to engineer than a multi-connection cable. Lastly, there is the price.

    In the end, this really comes down to JEDEC, and because of the fear of RAMBUS trolling memory makers, I think they have artificially distanced themselves from serial interfaces. Should it be a surprise trolling is holding back technology?
  • BurntMyBacon - Tuesday, November 3, 2015

    @Samus: "The problem is traditional memory controller interfaces are parallel, which is slow, so you need a lot of connections to keep the IO high."

    That gets cause and effect backwards. It's not that you need a lot of IO because it is slow; it's that you have to slow it down to maintain synchronization with such a large number of connections. The distinction is small in practice, but critical.

    @Samus: "The traces all have to be close to the same distance because at the speeds these serial interfaces work at, the difference between the physical distance (locations) of the DIMMs actually matters for timing the signal."

    This is more of an issue for parallel interfaces than serial interfaces. The move from PCI to PCIe reduced board complexity significantly. Rather than relying on all bits arriving at the same time (as in a parallel interface), each link is used as an independent path. As you can imagine, there is some overhead associated with keeping data in order.

    Keep in mind that to replace a 128-bit memory bus with a single line, you would need to run 128 times faster. Parallel buses are slower, but not that slow. If you use multiple links (PCIe), then you incur more latency as you now have to make sure packets are reordered. Further, serial protocols incur additional overhead as speed increases just to make sure the sending and receiving ends are properly synchronized. PCIe 1.1 uses a relatively simple encoding scheme that sends 10 bits for every 8 bits of data. The extra bits are lost bandwidth for the purpose of making sure the endpoints are synchronized. Another potential issue is that the power use, and consequently heat, of a link eventually starts to rise faster than the speed of the connection.
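
    As a rough sketch of that arithmetic in Python (assuming the 128-bit bus runs at DDR4-2133 speeds, and comparing it against PCIe 1.1 style lanes at 2.5 GT/s with 8b/10b encoding):

        # Illustrative numbers for the parallel-vs-serial trade-off above.
        # Assumptions: a 128-bit memory bus at DDR4-2133 (2133 MT/s), and
        # PCIe 1.1 style serial lanes at 2.5 GT/s with 8b/10b encoding
        # (8 data bits carried for every 10 bits on the wire).

        bus_width_bits = 128
        transfer_rate = 2133e6                            # transfers per second
        parallel_bw = bus_width_bits * transfer_rate / 8  # bytes/s

        lane_rate = 2.5e9              # raw bits/s per PCIe 1.1 lane
        encoding_efficiency = 8 / 10   # 8b/10b: 20% of the line rate is overhead
        lane_bw = lane_rate * encoding_efficiency / 8     # bytes/s per lane

        print(f"128-bit parallel bus   : {parallel_bw / 1e9:.1f} GB/s")  # ~34.1 GB/s
        print(f"One 8b/10b serial lane : {lane_bw / 1e6:.0f} MB/s")      # 250 MB/s
        print(f"Lanes needed to match  : {parallel_bw / lane_bw:.0f}")   # ~137

    Matching a wide bus with lanes that slow takes an impractical number of them; in practice serial interfaces close the gap by running each lane much faster, which brings in the synchronization and power issues mentioned above.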

    There are situations (plenty of them) where this extra overhead and latency are less of an issue to the overall throughput than the slowdown parallel interfaces have to incur to make sure the bits arrive at the same time. There is also the fact that it is impractical to make a parallel interface full duplex, while it is quite common in serial links. The longer the run, the worse the turnaround time for half-duplex connections. These longer runs, where it is harder to keep lines equal length and turnaround times are poor (e.g. PCIe), are typically best kept serial. There is also a practical physical limit to how wide you can go on an interface, though HBM just redefined what that limit is for some use cases.

    @Samus: "In the end, this really comes down to JEDEC, and because of the fear of RAMBUS trolling memory makers, I think they have artificially distanced themselves from serial interfaces. Should it be a surprise trolling is holding back technology?"

    I think JEDEC deserves a little more credit than you give them here. That said, there is no denying the effects RAMBUS has had on the industry. Your assessment of the effects of patent trolls on innovation is spot on. Patents were originally intended to be used sparingly, and for the specific purpose of protecting the inventor's investment by allowing them to sell their product without fear of others cloning it without making the research investment. Too many companies today exist with a stranglehold on some technology, but no product to sell with said technology.
  • julianb - Saturday, October 31, 2015

    Can anyone please, please tell me how the Xeon D-1540 would compare to my current 4790K in a Cinebench multi-threaded test? I checked the original review link but there was no Cinebench test there.
    I realize these CPUs have different target markets in mind, but still...I do lots of 3D rendering and would like to buy 2-3 of these Xeon D-1540 motherboards as render nodes.
    Am I right to think that the Xeon D's 8C x 2 GHz are roughly as powerful as the 4790K's 4C x 4 GHz?
    Thank you very much.
  • QinX - Saturday, October 31, 2015

    If you look at the original review for the Xeon-D
    http://www.anandtech.com/show/9185/intel-xeon-d-re...
    You can see it gets around 29k points vs 35k for the 2650L v3

    In CPU Bench a 2650L V3 gets 36.6K points and a 4790K gets 33.5K

    So I believe the 4790K performs around 15% better, if my math is somewhat right.
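
    Worked through with the scores quoted above (treating them as ballpark figures):

        # Quick check of the comparison, using the approximate scores quoted
        # in this thread: Xeon D-1540 ~29k, Core i7-4790K ~33.5k.

        xeon_d_1540 = 29_000
        core_i7_4790k = 33_500

        advantage = (core_i7_4790k / xeon_d_1540 - 1) * 100
        print(f"4790K ahead by about {advantage:.1f}%")  # ~15.5%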
  • QinX - Saturday, October 31, 2015

    Of course, the 4790K has a much higher TDP, so the Xeon D has that going for it.
  • julianb - Saturday, October 31, 2015

    Thank you very much for that comparison, QinX.
    I hope these boards won't cost too much in that case.
  • MrSpadge - Sunday, November 1, 2015

    As far as I know those Xeon D boards are very expensive. Consider socket 2011-3 six-core CPUs as an alternative as well. Power efficiency is OK if you eco-tune them, although obviously not as good as for the Xeon D.
  • TomWomack - Monday, November 2, 2015

    I have a Supermicro Xeon D-1540 board; it is not all _that_ power efficient: it takes 75 W at the plug when running floating-point-intensive jobs on all eight cores. I agree that for most uses a six-core Haswell-E is a better way to go - similar or lower price, twice the memory bandwidth.
