PCI-SIG Finalizes PCIe 5.0 Specification: x16 Slots to Reach 64GB/sec
by Ryan Smith on May 29, 2019 6:30 PM EST

Following the long gap after the release of PCI Express 3.0 in 2010, the PCI Special Interest Group (PCI-SIG) set about a plan to speed up the development and release of successive PCIe standards. Under this plan, in late 2017 the group released PCIe 4.0, which doubled PCIe 3.0’s bandwidth. Now, less than two years after PCIe 4.0 – and with the first hardware for that standard only just arriving – the group is back with the release of the PCIe 5.0 specification, which once again doubles the amount of bandwidth available over a PCI Express link.
Built on top of the PCIe 4.0 standard, PCIe 5.0 is a relatively straightforward extension of its predecessor. The latest standard once again doubles the transfer rate, which now reaches 32 GigaTransfers/second. For practical purposes, this means PCIe slots can now deliver anywhere from ~4GB/sec for a x1 slot up to ~64GB/sec for a x16 slot. For comparison’s sake, 4GB/sec is as much bandwidth as an entire PCIe 1.0 x16 slot, so over the last decade and a half, the number of lanes required to deliver that kind of bandwidth has been cut to 1/16th the original amount.
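For those who want to check the math, here's a minimal sketch in Python showing how the ~4GB/sec and ~64GB/sec figures just quoted (and the full table below) are derived. The transfer rates and line encodings (8b/10b for PCIe 1.0/2.0, 128b/130b for 3.0 onward) are the published figures for each generation:

```python
# Per-direction PCIe bandwidth: transfer rate (GT/s) x encoding
# efficiency (payload bits / wire bits) / 8 bits per byte x lanes.
GENERATIONS = {
    # name: (transfer rate in GT/s, payload bits, wire bits)
    "PCIe 1.0": (2.5, 8, 10),      # 8b/10b encoding
    "PCIe 2.0": (5.0, 8, 10),      # 8b/10b encoding
    "PCIe 3.0": (8.0, 128, 130),   # 128b/130b encoding
    "PCIe 4.0": (16.0, 128, 130),  # 128b/130b encoding
    "PCIe 5.0": (32.0, 128, 130),  # 128b/130b encoding
}

def bandwidth_gb_s(gen: str, lanes: int) -> float:
    """Theoretical per-direction bandwidth in GB/s for a given link."""
    rate, payload, wire = GENERATIONS[gen]
    return rate * (payload / wire) / 8 * lanes

for lanes in (1, 2, 4, 8, 16):
    cells = "  ".join(f"{bandwidth_gb_s(g, lanes):6.2f}" for g in GENERATIONS)
    print(f"x{lanes:<2} {cells}  GB/s")
```

Running this reproduces the table's figures; the "~" in the table reflects the small 128b/130b encoding overhead (e.g. a PCIe 5.0 x16 link works out to 63.0GB/sec, rounded to ~64GB/sec).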
The fastest standard on the PCI-SIG roadmap for now, PCIe 5.0’s higher transfer rates will allow vendors to rebalance future designs between total bandwidth and simplicity by working with fewer lanes. High-bandwidth applications will of course go for everything they can get with a full x16 link, while slower hardware such as 40GigE and SSDs can be implemented using fewer lanes. PCIe 5.0’s physical layer is also going to be the cornerstone of other interconnects in the future; in particular, Intel has announced that their upcoming Compute eXpress Link (CXL) cache coherent interconnect will be built on top of PCIe 5.0.
PCI Express Bandwidth (Full Duplex)

| Slot Width | PCIe 1.0 (2003) | PCIe 2.0 (2007) | PCIe 3.0 (2010) | PCIe 4.0 (2017) | PCIe 5.0 (2019) |
|------------|-----------------|-----------------|-----------------|-----------------|-----------------|
| x1         | 0.25GB/sec      | 0.5GB/sec       | ~1GB/sec        | ~2GB/sec        | ~4GB/sec        |
| x2         | 0.5GB/sec       | 1GB/sec         | ~2GB/sec        | ~4GB/sec        | ~8GB/sec        |
| x4         | 1GB/sec         | 2GB/sec         | ~4GB/sec        | ~8GB/sec        | ~16GB/sec       |
| x8         | 2GB/sec         | 4GB/sec         | ~8GB/sec        | ~16GB/sec       | ~32GB/sec       |
| x16        | 4GB/sec         | 8GB/sec         | ~16GB/sec       | ~32GB/sec       | ~64GB/sec       |
Meanwhile the big question, of course, is when we can expect to see PCIe 5.0 start showing up in products. The additional complexity of PCIe 5.0’s higher signaling rate aside, even after PCIe 4.0’s protracted development period, we’re only now seeing 4.0 gear show up in server products, while the first consumer gear technically hasn’t started shipping yet. So even with the quick turnaround on PCIe 5.0’s development, I’m not expecting to see 5.0 hardware until 2021 at the earliest – and possibly later than that, depending on just what that complexity means for hardware costs.
Ultimately, the PCI-SIG’s annual developer conference is taking place in just a few weeks, on June 18th, at which point we should get some better insight into when the SIG members expect to finish developing and start shipping their first PCIe 5.0 products.
Source: PCI-SIG
mode_13h - Wednesday, May 29, 2019 - link
Well... I could point to 10 Gigabit Ethernet as a counterexample.

Now, you might say there was no consumer demand for it, but I'd say that's even more true of PCIe 4.0. The only reason we're getting it now is that AMD decided to give it to us.
Until someone cites the original claim that it'd be too expensive, we cannot pick apart the reasoning behind it.
smilingcrow - Wednesday, May 29, 2019 - link
The Zen 2 chips are PCIe 4.0 compliant anyway due to also being used in EPYC chips for servers, so there is no extra cost at that level.

As some Ryzen 2000-series boards are also partially PCIe 4.0 compliant, that suggests the cost is not significant, which is why it has now been rolled out even further.
In the short term it may only be SSDs that benefit although that seems to be very much diminishing returns for consumers.
As for 10Gb Ethernet, I hear people complaining that the hardware is still expensive; switches etc.
Add that to low consumer interest for 10Gb Ethernet and no wonder it hasn't taken off yet.
Technology that is cheap and easy to pitch to consumers tends to get implemented first.
mode_13h - Wednesday, May 29, 2019 - link
> The Zen 2 chips are PCIe 4.0 compliant anyway due to also being used in EPYC chips for servers

I doubt it. The I/O chiplet in AM4 Ryzens is certainly different from what EPYC is using. That said, they probably did 90% of the work for EPYC, so maybe it wasn't a big deal to use it in the desktop chips.
Still, that doesn't change my point, which is that we're not getting PCIe 4.0 by popular demand.
You appear to be arguing the same side as I am, which is that we got PCIe 4.0 because it was apparently fairly cheap/easy. On the flip side, 10 Gigabit Ethernet is taking forever, because even though the demand has been stronger, longer than PCIe 4.0, it's still not been enough to drive down costs to the point where even typical enthusiasts would bite.
smilingcrow - Thursday, May 30, 2019 - link
Thanks, I keep forgetting that the I/O die hosts the controller.

With laptops, tablets, and smartphones seemingly being used more than desktops, WiFi tends to get more traction than Ethernet.
It took ages for many ISPs to add gigabit Ethernet to their routers, which says something about the inertia.
repoman27 - Thursday, May 30, 2019 - link
PCIe 4.0 presents challenges but will obviously see widespread adoption. AMD is there now, and Intel will follow suit as soon as they can figure out how to manufacture some new lake that isn't Skylake. Furthermore, Thunderbolt 3 already exists in the consumer space with a significantly higher signaling rate, albeit at a premium.

I have a seriously hard time believing PCIe 5.0 will ever grace a consumer platform. NRZ with a Nyquist rate of 16 GHz and no FEC? I just do not see it happening. Here's a list of NRZ PHYs that operate at or above 8 GT/s for comparison:
PCIe Gen 3 - 8.0 GT/s, 128b/130b encoding
DisplayPort HBR3 - 8.1 GT/s, 8b/10b encoding
USB 3 Gen 2 - 10 GT/s, 128b/132b encoding
Thunderbolt / Thunderbolt 2 / InfiniBand FDR10 - 10.3125 GT/s, 64b/66b encoding
Intel Ultra Path Interconnect (UPI) - 10.4 GT/s
SAS-3 12Gb/s - 12 GT/s, 8b/10b encoding
HDMI 2.1 FRL - 12 GT/s, 16b/18b encoding
AMD Infinity Fabric InterSocket (IFIS) - 12.8 GT/s (@DDR4-3200)
Fibre Channel 16GFC - 14.025 GT/s, 64b/66b encoding
InfiniBand FDR - 14.0625 GT/s, 64b/66b encoding
PCIe Gen 4 - 16 GT/s, 128b/130b encoding
NVLink 1.0 - 20 GT/s
Thunderbolt 3 - 20.625 GT/s, 64b/66b encoding
SAS-4 24G - 22.5 GT/s 128b/150b encoding (128b/130b + 20b RS FEC)
NVLink 2.0 - 25 GT/s
InfiniBand EDR / Intel Omni-Path Architecture (OPA) - 25.78125 GT/s, 64b/66b encoding
Fibre Channel 32GFC - 28.05 GT/s, 256b/257b encoding
Intel Stratix 10 GXE Transceiver - 28.9 GT/s
PCIe Gen 5 - 32 GT/s, 128b/130b encoding
Xilinx UltraScale+ GTY Transceiver - 32.75 GT/s
A lot of these protocols, even DisplayPort and HDMI, have provisions for FEC. Consumers never saw SATA 12Gb/s because it was cheaper to converge with PCIe. 10 GbE was a hit in the datacenter but rarely seen on the desktop. The price of NICs was initially too high, that of switches remains exorbitant, and power requirements all but preclude it from mobile. Yet we're to believe that there will be a consumer interconnect that outstrips everything on the market except for the fastest FPGA transceiver available? What new magic will make this possible?
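For readers who want to put numbers on the overheads in that list, here's a minimal Python sketch using a handful of the signaling rates and encodings quoted above. The Nyquist figure assumes NRZ signaling (one bit per symbol), which is the point repoman27 is making about 32 GT/s implying a 16 GHz Nyquist rate:

```python
# Effective payload rate = signaling rate x encoding efficiency.
# For NRZ, one bit per symbol, so the Nyquist frequency is half
# the signaling rate: 32 GT/s NRZ -> a 16 GHz Nyquist rate.
phys = [
    # (name, signaling rate in GT/s, payload bits, wire bits)
    ("PCIe Gen 3",       8.0,    128, 130),
    ("DisplayPort HBR3", 8.1,    8,   10),
    ("USB 3 Gen 2",      10.0,   128, 132),
    ("PCIe Gen 4",       16.0,   128, 130),
    ("Thunderbolt 3",    20.625, 64,  66),
    ("PCIe Gen 5",       32.0,   128, 130),
]

for name, rate, payload, wire in phys:
    print(f"{name:17} {rate:7.3f} GT/s  "
          f"payload {rate * payload / wire:7.3f} Gb/s  "
          f"Nyquist {rate / 2:6.3f} GHz")
```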
Gradius2 - Wednesday, May 29, 2019 - link
AFAIK it's 128GB/s, NOT 64.

mode_13h - Wednesday, May 29, 2019 - link
PCIe 5.0 x16 is 64 GB/sec, unidirectional.

eastcoast_pete - Wednesday, May 29, 2019 - link
Do I see this correctly? PCIe 5.0 x16 is faster than any current dual-channel memory, and basically as fast as quad-channel memory? Does that mean PCIe 5 is really fast, current memory bus speeds really suck, or both?

mode_13h - Wednesday, May 29, 2019 - link
Since Sandy Bridge, Intel CPUs (and I assume recent AMD ones) can snoop the L3 cache for PCIe transactions, so you don't always have to go through RAM.

Also, Intel server CPUs have 6-channel memory and are moving to 8, while EPYC already has 8. And DDR5 is on the horizon.
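Putting rough numbers on that comparison, here's a minimal Python sketch. The DDR4-3200 dual- and quad-channel baselines are illustrative assumptions, not figures from the article or the thread:

```python
# Peak theoretical DDR bandwidth: data rate (MT/s) x 8 bytes per
# 64-bit channel x channel count; the PCIe figure is per direction.
def ddr_gb_s(mt_per_s: int, channels: int) -> float:
    return mt_per_s * 8 * channels / 1000

pcie5_x16 = 32 * (128 / 130) / 8 * 16  # ~63 GB/s, one direction

print(f"Dual-channel DDR4-3200:  {ddr_gb_s(3200, 2):6.1f} GB/s")  # 51.2
print(f"Quad-channel DDR4-3200:  {ddr_gb_s(3200, 4):6.1f} GB/s")  # 102.4
print(f"PCIe 5.0 x16 (one way):  {pcie5_x16:6.1f} GB/s")          # 63.0
```

So by these baselines a PCIe 5.0 x16 link does outrun dual-channel DDR4-3200 in one direction, and sits between dual- and quad-channel peak bandwidth.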
mode_13h - Wednesday, May 29, 2019 - link
Here you go: https://www.intel.com/content/www/us/en/io/data-di...