45 Comments
CaedenV - Tuesday, October 15, 2019 - link
Not to be a naysayer on progress... but why this sudden push? My system is PCIe 2.0 and it does not choke my modern high-end PCIe 3.0 GPU. New SSDs connect at PCIe 4.0, but typical real-world performance is still well within ancient SATA 3 specs, at 40 to 60 MB/s of throughput.

Is the point just to have fewer but faster lanes available? Or to use as a CPU core interconnect? Or is there some amazing real-world application that truly needs this kind of speed that we just are not hearing about yet?
csutcliff - Tuesday, October 15, 2019 - link
Have you ever used an NVMe SSD?

GreenReaper - Tuesday, October 15, 2019 - link
Nope. We use SATA, like BillG intended!

surt - Tuesday, October 15, 2019 - link
New SSDs connected via PCIe 4.0 offer throughput numbers in the 5,000 MB/s range. You were missing two zeros.

And those missing two zeros are why people are bothering to keep working on advancing PCIe generation by generation.
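For anyone wanting to sanity-check those numbers: per-lane PCIe throughput roughly doubles each generation. Here's a back-of-the-envelope sketch (128b/130b line coding for generations 3-5; PCIe 6.0's PAM4/FLIT encoding is simplified here to a flat 64 GT/s, and packet/protocol overhead is ignored):

```python
# Approximate usable bandwidth of a PCIe link. Rates are per lane, in GT/s.
GEN_RATES = {3: 8.0, 4: 16.0, 5: 32.0, 6: 64.0}

def link_gb_per_s(gen: int, lanes: int = 1) -> float:
    """GB/s for a link of the given generation and width."""
    # Gens 3-5 lose 2/130 of the raw rate to 128b/130b line coding;
    # 6.0's FLIT-based encoding is treated as lossless for simplicity.
    efficiency = 1.0 if gen >= 6 else 128 / 130
    return GEN_RATES[gen] * efficiency * lanes / 8  # 8 bits per byte

# A PCIe 4.0 x4 SSD link: ~7.9 GB/s raw ceiling, which is why those
# ~5,000 MB/s sequential benchmark numbers are possible.
print(round(link_gb_per_s(4, 4), 2))  # → 7.88
```

This same arithmetic shows a single PCIe 5.0 lane matching a PCIe 3.0 x4 link (~3.9 GB/s either way), which is the lane-count argument made further down the thread.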
CaedenV - Wednesday, October 16, 2019 - link
Actually, they don't. While they can bench that fast, the real throughput for games, Windows, and other non-server applications is still well below 100 MB/s. This is because it is extremely difficult to get an end-user application to hit a queue depth beyond 10, which is needed to reach the drive's full potential.

CajunArson - Tuesday, October 15, 2019 - link
The fact that your system used to browse a few websites gets by on PCIe 2.0 does not mean that nodes in a multi-exaflop supercomputer in 2024-2025* are going to get by fine on PCIe 2.0 either.

I might as well say that there's no point in AVX-512 because my smartphone doesn't need it to send a text message.
* which is when PCIe 6.0 will likely be implemented since the hardware doesn't launch anywhere near the date that the specification is finished.
PeachNCream - Tuesday, October 15, 2019 - link
Yes, CaedenV violated the BroGeek Super Secret Code by not seeing the need for more of something, but isn't it a bit of a reach to assume a set of usage criteria? My main gaming box is a passively cooled dual-core Bay Trail laptop with an 11.6-inch screen. Sure, it fetches e-mail, but it runs its fair share of games quietly and inexpensively. Gaming on a computer can be fun regardless of the underlying hardware if you are a little bit discerning about system requirements. You can certainly fire up eBay, drop less than $80 on any still-working junk laptop, and then run a game to kill a couple or a few thousand hours; you just have to be semi-intelligent about it.

rpg1966 - Wednesday, October 16, 2019 - link
I think you just assumed a set of usage criteria. Many/most people who play modern games want the most of everything feasible within their budget, i.e. big screens, high frame rates, max detail, etc.

JanW1 - Wednesday, October 16, 2019 - link
Or this is what the industry would like you to believe. Clearly, there is a difference between professional markets, where interface bandwidth just can't grow fast enough and this is evidently useful, and consumer applications, where it's at this point mostly a matter of consumers being educated to always want more (and thus pay for more) than they actually need. Some are starting to understand that learning to have fun with a computer game without maxed-out-everything settings at uselessly overspecced resolutions may be a more sustainable alternative.

rpg1966 - Wednesday, October 16, 2019 - link
What? I'm not arguing that anyone needs PCIe 4.0 or 5.0 or whatever. I'm just saying that many/most gamers obviously *want* the biggest/fastest they can get in terms of screen size/resolution/colour depth/brightness/etc., and therefore a GPU (and maybe CPU) to match. Doesn't mean they can afford it.

PeachNCream - Wednesday, October 16, 2019 - link
No, I didn't make any assumptions. I described an alternative approach that I personally use with respect to playing video games on a PC. There are no assumptions, just statements that support a perspective that you find personally threatening to such an extent that you found it necessary to respond by discrediting it and then using a bandwagon fallacy.

rpg1966 - Wednesday, October 16, 2019 - link
Well, then he also described an approach. If he was making assumptions, so were you. Or you both weren't. See? Pedantic, isn't it?

PeachNCream - Thursday, October 17, 2019 - link
It's fun to watch you try, at least.

CaedenV - Wednesday, October 16, 2019 - link
I am running an ancient i7-2600 with a GTX 1080, and the benchmarks and real-world results I get for 4K gaming on a 55" display are only 1 to 2 frames lower than what is claimed on many benchmarking sites... Point being, the bottleneck these days is not the CPU or the interconnect; it is still the GPU and SSD, where it has been for about a decade now. The same can be said about server applications, where most of the bottleneck is on storage, RAM, and encryption.

CaedenV - Wednesday, October 16, 2019 - link
Well, that is just my point. The end-user world is still running at a PCIe 2.0 level even if we have faster connections available.

My question really is: "what crazy amazing options and applications is this new tech opening up?" It clearly isn't in the consumer space, so I would like to see where it is being used.
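The "real-world speeds feel like SATA-era speeds" observation is consistent with Little's law: sustained throughput is bounded by the amount of I/O in flight divided by completion latency. A toy sketch (the ~100 µs NVMe random-read latency is an assumed, ballpark figure, not a measured one):

```python
def sustained_mb_per_s(queue_depth: int, io_kib: int, latency_us: float) -> float:
    """Little's law: throughput = bytes in flight / completion latency."""
    bytes_in_flight = queue_depth * io_kib * 1024
    seconds = latency_us / 1_000_000
    return bytes_in_flight / seconds / 1_000_000  # MB/s

# 4 KiB random reads at an assumed ~100 us completion latency:
print(round(sustained_mb_per_s(1, 4, 100), 1))   # QD1  → 41.0 MB/s
print(round(sustained_mb_per_s(32, 4, 100), 1))  # QD32 → 1310.7 MB/s
```

Which is why a drive that benches at 5 GB/s can still feel like 40-60 MB/s to a desktop workload that rarely queues more than one request at a time.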
r3loaded - Tuesday, October 15, 2019 - link
> Is the point just to have fewer but faster lanes available?

Yep, partially.
> Or to use as a cpu core interconnect? Or is there some amazing real world application that truly needs this kind of speed that we just are not hearing about yet?
The likes of CXL, Gen-Z, CCIX, etc., which are all based on PCIe 5.0, are working towards cache coherency between CPUs, GPUs, FPGAs, neural-network accelerators and other kinds of processing cores. This will make it much easier to develop software that can do seamless heterogeneous computing without having to worry about transferring data between these cores. So, yes!
rpg1966 - Wednesday, October 16, 2019 - link
"without having to worry about transferring data between these cores"

Can you clarify that for a dummy? Doesn't cache coherency involve moving data between the various caches anyway?
Kenshiro70 - Wednesday, October 16, 2019 - link
I suspect what he meant to say was:

"without having to worry about [the additional latency penalty that currently impacts] transferring data between these cores"
In other words, you're correct about cache coherency, but over time maintaining coherency has become one of the biggest bottlenecks to parallel computing as other areas have been optimized.
CaedenV - Wednesday, October 16, 2019 - link
Very cool! You may be the only person that understood the point of my question, lol.
Now I have some things to google
guycoder - Tuesday, October 15, 2019 - link
Fortunately, the PCI-SIG is not working to upgrade your personal PC. PCIe is used in many places, especially the data center, where extra bandwidth and performance are needed. Upcoming challenges of integrating heterogeneous architectures will require a very large jump in bandwidth and reduced latency, and we seem to be standardizing on the PCIe physical layer to connect all these devices together. Your local desktop and graphics card / NVMe SSD are probably not primary use cases for this upgrade, but they help drive adoption and cost reduction. Think AI/machine-learning accelerators running in the cloud, processing more and more data that ends up as some new feature on your phone or Facebook.

DanNeely - Tuesday, October 15, 2019 - link
Both. Flash storage server builders want to use only a single PCIe lane per drive so they can cram more in. That means they need faster interconnects a lot more than consumer drives do. PCIe 5.0 x1 finally lets them get the performance of a PCIe 3.0 x4 drive in a single lane. PCIe 6.0 will give parity with the PCIe 4.0 x4 drives that will be coming out in the next year or two.

x8/x16 slots with higher bandwidth are needed for accelerator cards: both the super-high-end versions of the normal GPU variety (which can bottleneck if running something chatty with the CPU, or that needs to stream data from system RAM/SSDs because it can't fit into the GPU's RAM) and the custom AI accelerators that all the big tech companies are making. 40/100-gigabit networking standards will also benefit from needing fewer lanes per adapter to implement.
thomasg - Tuesday, October 15, 2019 - link
There are plenty of reasons. PCIe is not being driven by the demands of home PCs; it's widely used for all kinds of interfaces in professional use cases.

For example, InfiniBand right now supports 250 Gbit/s links. A single link can't be done with PCIe 3.0 x16. PCIe 4.0 x16 barely manages, but then a single link uses up the full 16 lanes. PCIe 5.0 allows an InfiniBand link in just 8 lanes, so machines will be able to support 2 links as soon as PCIe 5.0 is available.

400 Gigabit Ethernet will be available soon and, again, requires a full x16 link for a single port with PCIe 5.0. PCIe 6.0 will allow 2 links.

Storage also manages to utilize more than enough PCIe lanes.

When PCIe 6.0 becomes available 4-5 years from now, there will be plenty of use cases.
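The lane math for networking cards can be sketched out. Per-lane rates here assume 128b/130b coding for generations 3-5 and treat PCIe 6.0 as a flat 64 Gbit/s per lane; real NICs also lose some headroom to DMA and protocol overhead, so this is a best-case estimate:

```python
import math

# Net per-lane rates in Gbit/s.
LANE_GBIT = {3: 8 * 128 / 130, 4: 16 * 128 / 130, 5: 32 * 128 / 130, 6: 64.0}

def lanes_needed(port_gbit: float, gen: int) -> int:
    """Smallest power-of-two link width that fits the port's line rate."""
    raw = port_gbit / LANE_GBIT[gen]
    return 1 if raw <= 1 else 2 ** math.ceil(math.log2(raw))

print(lanes_needed(400, 4))  # 400GbE on PCIe 4.0 → 32 (doesn't fit an x16 slot)
print(lanes_needed(400, 5))  # on PCIe 5.0 → 16 (one port per x16 slot)
print(lanes_needed(400, 6))  # on PCIe 6.0 → 8 (two ports per x16 slot)
```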
rahvin - Tuesday, October 15, 2019 - link
Try 6 years at the earliest. From standard _completion_ till the first deliveries of hardware it's usually 3 years, and that's if manufacturers start right away on support. Intel avoided PCIe 4.0 in favor of waiting a couple more years for 5.0, so it will probably be a year late when they scab it onto some new 14nm++++ processor down the road. If AMD hadn't moved forward we might have even skipped 4.0 entirely. But the key point here is that it takes 3 years for the chip manufacturers to receive the completed spec, build it into their designs and chipsets, and then fab silicon, and that will only happen if they jump right on it.

PCIe 4.0 was completed roughly 3 years ago. PCIe 5.0 was just completed, so we should see the first hardware in 2022-2023 (depending on chip fabrication schedules). PCIe 6.0 isn't even a completed standard, and isn't likely to be finished for at least a few years. In particular, with 6.0 there are some serious political issues (between participants) that could hold it up even longer, at least from what I've read.

At least PCIe moves faster than USB does; USB takes 10 f'n years to reach broad deployment for each version.
thomasg - Monday, October 21, 2019 - link
PCI-SIG has stepped up its game. PCIe 4.0 was a long time in the making, but they moved swiftly to PCIe 5.0, which will be available shortly.

PCIe 6.0 is coming along nicely and I expect the first 0.5 draft by 2021, as planned. There's no magic required for PCIe 6.0: they have a solid plan based on widely used technologies, and their main job is working out the details (which isn't easy, but also not overly ambitious).

If the final draft takes longer, there might be early adopters basing hardware on the 0.7 draft, as IBM did with POWER9 due to the delays with PCIe 4.0.
CaedenV - Wednesday, October 16, 2019 - link
400 Gbps... I would love to see anything faster than 1 Gbps come standard these days!

thomasg - Monday, October 21, 2019 - link
In my field, 10 Gbit/s is absolutely standard (every device I work with has at least 15 ports supporting 10.3 Gbit/s), and we're in the process of moving to 40 Gbit/s for Ethernet and 25 Gbit/s for everything else.

The consumer market is becoming more and more irrelevant and is moving more and more slowly. Very few users actually care about wired networks at all; all that's relevant is fast WiFi and fast cellular, and a couple of years from now, the usage of wired Ethernet will certainly be in the single-digit percentages.

When landlines move to fibre over 1 Gbit/s - which is still very rare - the vast majority of users will simply connect their all-in-one router to the fibre directly and use no 10 GbE at all.
bcronce - Tuesday, October 15, 2019 - link
PCIe 5.0 is less about higher bandwidth and more about reduced traces. Fewer traces are more compact and easier to design, and I assume 6.0 is a similar thing. Why have an x4 slot of PCIe 3.0 when you can have an x1 of 5.0? Imagine video cards with x2 PCIe 6.0.

The bandwidth increase also makes it useful for certain situations that used to require special protocols, like "fabrics". We're also fast approaching non-volatile memory. A use case around that is a unified memory/storage system: instead of 32 GiB of memory and a 1 TiB SSD, you just have 1 TiB of non-volatile memory. Having ridiculously high-speed connections will become more important.
Santoval - Tuesday, October 15, 2019 - link
40 to 60 MB/s speeds have long been possible with mechanical hard disks (albeit at a pitiful IOPS rate), so no, that is certainly not the "typical performance" of SSDs - unless you meant low-to-mid-range SD cards. SATA3 can also provide up to ~500 MB/s, so you got that wrong as well.

Fewer but faster lanes is one of the reasons faster PCIe revisions are developed, but not the only reason. PCIe 5.0 and above is highly desirable as the PHY layer of interconnect protocols for CPU chiplets (e.g. a future revision of Infinity Fabric), for GPU chiplets, for InfiniBand networking for datacenters, for faster Ethernet, for SSDs (naturally), for silicon photonics, for next-gen protocols like Gen-Z or CXL, for a future version of USB, Thunderbolt, DisplayPort, etc.

The list goes on and on, but the major application is servers and datacenters. If the industry switches en masse to CPU and GPU chiplets, that would be the consumer application that could gain the most from faster PCIe revisions (unlike SSDs, which need faster NAND or a faster controller with more channels to take advantage of the higher PCIe bandwidth, and unlike graphics cards, which gain little from the switch to a new PCIe revision).
danielfranklin - Wednesday, October 16, 2019 - link
And lucky for us, you aren't running the internet's largest websites, data-storage or AI clusters on it. You could run a Pentium 4 without any real issue for those tasks...
Urufu - Friday, October 18, 2019 - link
I just find this statement silly. I mean, a dude might ask himself why researchers are working like demons on a more comfortable maxi pad when he himself has no need of such, and refuse to acknowledge that a portion of the population does indeed need something like this. Basically, it's a troll question even if the asker doesn't realize it.

Duncan Macdonald - Tuesday, October 15, 2019 - link
Will PCIe 6.0 be usable with plug-in cards? The frequencies are high in the microwave spectrum, with a wavelength of under 1/2 cm. Good microwave engineering will be needed even for motherboards - the PCIe connector is not a tightly specified microwave component. There is also a question about the power consumption of the interface chips, as power requirements rise with frequency.

proflogic - Tuesday, October 15, 2019 - link
For what it's worth, the signal bandwidth should be similar to PCIe 5.0, right? With PAM4, we're getting even more into analog signaling, with 4 voltage levels and 2 bits per symbol.

Ryan Smith - Tuesday, October 15, 2019 - link
Correct. PCIe 6.0 is using PAM4 specifically to avoid having to increase the signal frequencies.

I don't believe the PCI-SIG has officially stated that PCIe cards are going to be supported, but as they are for PCIe 5.0, there's no reason they shouldn't work for 6.0 since the signal frequencies aren't changing.
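To put rough numbers on the wavelength question: 64 GT/s as PAM4 is ~32 Gbaud, so the fundamental sits near 16 GHz, the same as PCIe 5.0 (the "under 1/2 cm" figure presumably counts harmonics; the fundamental works out a bit longer). A sketch, where the FR4 dielectric constant of ~4 is an assumed, typical value:

```python
C_MM_PER_S = 2.998e11  # speed of light in mm/s

def wavelength_mm(freq_ghz: float, eps_r: float = 1.0) -> float:
    """Wavelength in mm; eps_r approximates the PCB dielectric material."""
    return C_MM_PER_S / (freq_ghz * 1e9) / eps_r ** 0.5

print(round(wavelength_mm(16.0), 1))       # free space: ~18.7 mm
print(round(wavelength_mm(16.0, 4.0), 1))  # in FR4-like material: ~9.4 mm
```

Either way, traces and connector stubs are an appreciable fraction of a wavelength, which is why holding the Nyquist frequency at the 5.0 level matters so much for channel design.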
thomasg - Monday, October 21, 2019 - link
The frequencies actually won't increase this time; that's why they're moving to PAM4. Same frequency, higher symbol density.

So this time it's not about increasing frequency at all; it's about packing more bits into each symbol, which in turn demands a better signal-to-noise ratio from the channel.
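As a toy illustration of the NRZ-to-PAM4 change: the symbol (baud) rate stays put, but each symbol now carries two bits across four voltage levels, conventionally Gray-coded so adjacent levels differ by a single bit:

```python
# Gray-coded PAM4 mapping: adjacent voltage levels differ in one bit,
# so misreading a symbol by one level corrupts only one bit.
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits: list) -> list:
    """Pack a bit stream two at a time into PAM4 voltage levels."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits in pairs"
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

# 8 bits become 4 symbols - double the data rate at the same frequency:
print(pam4_encode([0, 0, 0, 1, 1, 1, 1, 0]))  # → [-3, -1, 1, 3]
```

The flip side is that the eye openings shrink to a third of the NRZ eye, which is why PCIe 6.0 also adds lightweight forward error correction.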
yeeeeman - Tuesday, October 15, 2019 - link
I can also leak you the fact that PCIe 7.0 is already in the minds of some guys and will appear in 2025 products. Ah, and PCIe 8.0 will be in someone's head in 2021 and will reach products in 2030. Shall I continue with the leaks?

jordanclock - Wednesday, October 16, 2019 - link
This isn't a leak. This is an actual release from PCI-SIG about their progress towards PCIe 6.0.

PeachNCream - Tuesday, October 15, 2019 - link
Too bad VESA Local Bus never took off. It was a huge upgrade over ISA, and there were only a few things wrong with it, like slot length, wiring complexity, connector cost, lack of components, and that stupid PCI standard, which turned out to be too slow for video cards anyhow, so we ended up with AGP slots. In fact, Micro Channel never really got the fair shake it deserved either, because of how poorly IBM handled licensing. It was as stupid as Blu-ray and Betamax in that regard.

Qasar - Tuesday, October 15, 2019 - link
"It was as stupid as Blu-ray and Betamax in that regard." How so?

PeachNCream - Wednesday, October 16, 2019 - link
Several reasons - VHS, streaming, and microplastics, mainly.

Qasar - Wednesday, October 16, 2019 - link
VHS only overtook Betamax because a certain industry chose it over the other; same with Blu-ray, I believe. And wasn't there a licensing fee to use Betamax, vs. no fee for VHS?

Streaming... pffft... IMO it can't compete with the video and audio quality of Blu-ray... yet, as you still lose some quality to compression, and then have to stream it on top of that...
PeachNCream - Thursday, October 17, 2019 - link
You are among a dwindling minority of people that assign that importance to quality. In Anandtech's comments alone, the clamor of people citing that as a factor has declined drastically, and this is a community of users disconnected almost entirely from the mainstream population. But hey, it's fine with me. I am not saying your concerns are invalid. I don't stream or buy discs, since watching media isn't something I do for amusement, so I can't speak to the competitiveness either way.

Qasar - Thursday, October 17, 2019 - link
I know quite a few people that prefer a disc in their hand over streaming, for the same reasons: A/V quality.

Kenshiro70 - Wednesday, October 16, 2019 - link
Don't forget poor Teddy RamBus. RD-RAM 4evah!

PeachNCream - Wednesday, October 16, 2019 - link
Yeah, RDRAM was going to save the world. Now you can't find a continuity module to save your life.

Dizoja86 - Wednesday, October 16, 2019 - link
I'm very curious to see how PCIe 5.0 and above impact external GPU setups. I'm running a GTX 1060 through my eGPU right now at PCIe 3.0 x4, and the performance impact is substantial (around a 35-40% drop when running on my laptop screen). Quadrupling that bandwidth could make a world of difference, especially for people who aren't running external monitors.