36 Comments
vladx - Tuesday, April 25, 2017 - link
" the TBP has come down significantly, from 350W to 250W."I think you meant to say TDP.
Ryan Smith - Tuesday, April 25, 2017 - link
No, AMD measures their cards in Typical Board Power, TBP.

vladx - Tuesday, April 25, 2017 - link
Oh, good to now.

vladx - Tuesday, April 25, 2017 - link
*know

Morawka - Tuesday, April 25, 2017 - link
How do TDP and TBP compare, and what are the advantages and disadvantages of each system?

Mday - Wednesday, April 26, 2017 - link
It's mostly semantics, once you understand what each one is saying. TDP (thermal design power) is the amount of heat energy the cooling solution needs to be able to dissipate. That number can be given for the heat generated under a typical load or a maximum load, and it is used by both Intel and AMD for their CPUs. TBP (typical board power) is a measure of how much power the whole "board" consumes, and seems to be used only by AMD, for its graphics cards.

Eden-K121D - Tuesday, April 25, 2017 - link
No, it's TBP: Typical Board Power.

Kevin G - Tuesday, April 25, 2017 - link
It should be able to drive three 8K@30 Hz monitors, three at 8K@60 Hz using DSC, or a single 8K@60 Hz without compression (dual cable).

The more interesting question is how many 4K displays it can drive. DP 1.3 MST hubs should be able to split the bandwidth of each DP 1.3 port to drive two 4K displays. That'd permit six 4K screens from the DP ports using MST hubs. The question now is whether the HDMI 2.0 port can be utilized in conjunction with all DP ports active, each driving more than a single display. Previous Radeon hardware had a six-physical-screen limitation, but this card has two GPUs, so the limit could actually be 12 depending on how the ports are connected.
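As a rough sanity check of those figures (the link rate and blanking overhead below are approximations, not spec-exact numbers):

```python
# Back-of-the-envelope check: how many uncompressed streams fit on one
# DP 1.3 port? HBR3 is 4 lanes x 8.1 Gbps with 8b/10b coding overhead;
# CVT-R2 reduced blanking adds roughly 5% on top of the active pixels.
HBR3_EFFECTIVE_GBPS = 32.4 * 8 / 10  # ~25.9 Gbps usable

def stream_gbps(h, v, hz, bpp=24, blanking=1.05):
    """Approximate uncompressed video stream rate in Gbps."""
    return h * v * hz * bpp * blanking / 1e9

four_k_60 = stream_gbps(3840, 2160, 60)   # ~12.5 Gbps
eight_k_30 = stream_gbps(7680, 4320, 30)  # ~25.1 Gbps
eight_k_60 = stream_gbps(7680, 4320, 60)  # ~50.2 Gbps

print(f"4K@60 per DP 1.3 port: {int(HBR3_EFFECTIVE_GBPS // four_k_60)}")
print(f"8K@30 fits one port:   {eight_k_30 <= HBR3_EFFECTIVE_GBPS}")
print(f"8K@60 fits one port:   {eight_k_60 <= HBR3_EFFECTIVE_GBPS}")
```

By this estimate a single DP 1.3 port has room for two uncompressed 4K@60 streams or one 8K@30 stream, but not an uncompressed 8K@60 stream, which lines up with the DSC/dual-cable caveat above.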
vladx - Tuesday, April 25, 2017 - link
You forgot about the software side needed to support more than 4 displays.

Samus - Tuesday, April 25, 2017 - link
Seems like a missed opportunity to revive the MAXX line, although the target audience is different. Nonetheless... Pro Duo?

Kevin G - Tuesday, April 25, 2017 - link
More than four displays hasn't been a problem on the OS side for years now, and AMD has permitted up to 6 displays per card in their consumer drivers.

nathanddrews - Tuesday, April 25, 2017 - link
AMD Eyefinity has supported six 4K screens on a single GPU since 2013 (2014?).

ImSpartacus - Tuesday, April 25, 2017 - link
4K? I thought Eyefinity's original mantra was to use cheap 1080p monitors.
ddriver - Tuesday, April 25, 2017 - link
Thinking is not your strong point. Also, the hardware makes no distinction between cheap and expensive monitors.

Kevin G - Tuesday, April 25, 2017 - link
The strategic goal of Eyefinity was to move gaming resolutions beyond 1080p by adding more displays. 4K hadn't arrived, and 2560 x 1440 monitors were still expensive at the time. Eyefinity did push both AMD and nVidia to focus on higher-resolution gaming; prices dropped, and 4K isn't difficult to obtain now.

Eyefinity as a technology hasn't changed, though, and it is possible to combine multiple 4K, 5K, and even 8K displays to create a larger virtual surface. Three 8K displays would be pushing just shy of 100 million pixels. We'll probably be stuck with this as an effective upper limit, as we're reaching the limits of what passive copper cabling can do even over short distances. Active cabling and/or a transition to fiber will be necessary to really go beyond 8K.
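The 100-million-pixel figure checks out, assuming 8K UHD panels (7680 x 4320):

```python
# Quick check of the "just shy of 100 million pixels" claim for three 8K displays.
per_8k = 7680 * 4320       # 33,177,600 pixels per 8K UHD panel
total = 3 * per_8k         # 99,532,800 -- just under 100 million
per_1080p = 1920 * 1080    # 2,073,600 pixels

print(f"{total:,} pixels across three 8K displays")
print(f"= {total // per_1080p} times a single 1080p screen")
```

That virtual surface is exactly 48 times the pixel count Eyefinity originally targeted with 1080p panels.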
Roland00Address - Wednesday, April 26, 2017 - link
Sigh. When people talk about going past 8K for monitor resolution, I am reminded of this week's Silicon Valley season premiere, where the first episode had a scene in which the CEO talks up a 10% image quality boost at the same bandwidth, and then the users can't tell the difference between the two. No matter that he was talking fake tech specs of 12-bit color vs 10-bit: no screen uses 12-bit, and very few screens use 10-bit correctly, for while the screen may be able to display 10-bit, the OS and compositing software was often designed for 8-bit, and not even true 8-bit color but a smaller color space.

Aka, is it a real improvement, or just a perceived improvement? You quote bigger tech specs, and due to motivated reasoning I create a placebo-type effect and "truly" see the difference, either through self-delusion, or because you force me to focus so intently on an insignificant factor that I am actually missing the full perception my eyes and brain could give me if I broadened my attention instead of hyperfocusing on nonsense information.
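For reference, the shade counts behind the 8/10/12-bit debate; the 16-235 limited-range video encoding is one example of the "smaller color space" mentioned above:

```python
# Shades per channel (and total colors) at each bit depth, plus the
# limited-range "video" 8-bit encoding (16-235) that much legacy OS and
# compositor plumbing historically assumed instead of full-range 0-255.
for bits in (8, 10, 12):
    shades = 2 ** bits
    print(f"{bits:2}-bit: {shades:>5} shades/channel, {shades**3:>14,} colors")

limited = 235 - 16 + 1  # 220 usable shades per channel
print(f"8-bit limited range (16-235): {limited} shades/channel")
```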
gerz1219 - Wednesday, April 26, 2017 - link
They're saying that a true VR experience with imperceptible pixels would require 8K per eye, and that means we'll eventually need some type of cable that can handle two simultaneous 8K streams.

Yojimbo - Tuesday, April 25, 2017 - link
It's not encouraging for AMD that their professional cards are so cheap. This card is competing against what? The Quadro P5000? That card costs twice as much. AMD's software support must be sorely lacking.

jabbadap - Tuesday, April 25, 2017 - link
Titan X/Xp; they are meant for content creation. Why would nvidia create a mac driver for them anyway?

Yojimbo - Tuesday, April 25, 2017 - link
Titan Xp isn't really for content creation. It doesn't come with professional software; it's for machine learning and gaming. I suppose if you can get away without the professional drivers and software then you can use a Titan Xp for that, but that just backs up my point. This Radeon Pro Duo card /is/ a professional card. It does come with professional software and support, doesn't it? So why is it priced to go up against cards that don't come with that software? I can only imagine it means that AMD's software sucks. For instance, perhaps the best professional applications use CUDA instead of OpenCL for their computations, or they are much faster using CUDA than using OpenCL.

As far as the Mac driver, I think that's probably for the future. Apple has probably already decided to use NVIDIA cards again in some of their upcoming products.

extide - Wednesday, April 26, 2017 - link
extide - Wednesday, April 26, 2017 - link
They are trying to massively undercut nVidia to get their foot into this market a bit more heavily. They have put a lot of effort into software for this market lately, and it is all open source. HIP is a good example, as it allows you to automatically convert CUDA code into native code that can run on AMD GPUs, with only a little bit of manual massaging work required afterwards. They have several other frameworks out there as well, so it's a bit naive to say their software sucks without doing at least a little bit of research to know what you are talking about. Check this out: http://gpuopen.com/professional-compute/

Yojimbo - Wednesday, April 26, 2017 - link
"They are trying to massively undercut nVidia to get their foot into this market a bit more heavily."I don't think so. If so then we should see AMD's market share soar unbelievably in the next 6 months, or we should see the prices for NVIDIA's Quadro products crash. I wouldn't hold my breath if I were you.
As far as GPUOpen, there's a reason it's Open, and that's because AMD is not sufficiently supporting it themselves. But, without looking, from what I remember their professional visualization software isn't included in GPUOpen, anyway. GPUOpen consists of gaming and compute libraries. As far as HIP, dream on, it doesn't work that way. Even Otoy, which has been trying to get their CUDA code to run on AMD and I presume PowerVR GPUs, said that AMD's Boltzmann Initiative wasn't useful for them. (https://render.otoy.com/forum/viewtopic.php?f=9&am...
I have done research about what I am talking about. Besides, common sense says that if any of those frameworks worked, then AMD would be selling GPUs like hotcakes for machine learning and the other acceleration tasks that NVIDIA GPUs sell for. Unless NVIDIA's hardware is superior. If it's not the software, it must be the hardware; you must pick one or the other. Instead, you buy the swill PR that AMD keeps feeding you about how their next product is going to be so successful.
xype - Wednesday, April 26, 2017 - link
Nvidia has had better cards for quite a while and Apple has stuck to AMD. Why do you think that would change?

I’ve read somewhere that Nvidia sued Apple a while back, and shortly after that point Apple went all-AMD. If that article was true (I can’t for the life of me find it anymore), I don’t think Apple will go Nvidia anytime soon.
Yojimbo - Wednesday, April 26, 2017 - link
"Nvidia has had better cards for quite a while and Apple has stuck to AMD. Why do you think that would change?"The reason I think it will change is because I read that NVIDIA has been posting job ads looking for Metal programmers to work with "the next generation of exciting Apple products" or something along those lines. As for why Apple has gone with AMD recently, I don't know. One thing I do know is that Apple likes to tightly control the costs from their suppliers and NVIDIA demands fat profit margins for themselves. AMD probably has offered Apple much better deals. As for why Apple might now switch to NVIDIA, again, I don't know. If I had to guess it's because their customers are clamoring for them. Performance graphics chipsets are not in the commodity category of component supply. They probably can't afford to play the same strong-arm games with graphics chips as they can with other components. In other words, Apple might need NVIDIA more than NVIDIA needs Apple, and that's probably something that has been hard for Apple to swallow. That's just purely my speculation, of course.
"I’ve read somewhere that Nvidia has sued Apple a while back, and shortly after that point Apple went all-AMD."
I don't remember, nor can I find after a quick Google search, NVIDIA ever suing Apple. I found something about NVIDIA and Apple being the target of class action suits related to some of NVIDIA's cards back in 2011.
webdoctors - Tuesday, April 25, 2017 - link
You can thank Nvidia for the huge drop in price. The prior one was $1500; this one had to be less, otherwise it wouldn't compete with the Titan series.

These aren't the professional cards AMD sells; those are the FirePro cards.
Yojimbo - Tuesday, April 25, 2017 - link
When's the last time AMD came out with a "FirePro" card? I think this "Radeon Pro" branding replaced "FirePro" when AMD created the Radeon Technologies Group.

ddriver - Tuesday, April 25, 2017 - link
It's cheap because it is mediocre trash. Bring in Vega already!!!

Yojimbo - Tuesday, April 25, 2017 - link
But the hardware doesn't seem that bad. It certainly seems a lot stronger than the price indicates, for a professional card.

andychow - Friday, April 28, 2017 - link
The answer is simple. Manager asks "does this support CUDA", engineer says "we actually don't use CUDA", manager buys nVidia because CUDA.

hahmed330 - Tuesday, April 25, 2017 - link
I am still in awe of the insane yet surprisingly impressive 500W TDP of the R9 295X2.

hardwarejunkie4life - Tuesday, April 25, 2017 - link
AMD makes products without any consideration for application usages... How many workstations out there had liquid cooling (their requirement for last year's product)? Now this year they drop performance, have their "photoshop pro" change the product name on the 7100 card, announce it, and we all debate how relevant it will be. The media needs to stop letting the AMD PR machine get a voice and start holding them to account for the status of announced products, services, and relationships...

MrSpadge - Tuesday, April 25, 2017 - link
Yeah, instead of demanding liquid cooling from every workstation without any consideration for application usages, they should just have integrated the liquid cooling into the card's cooler...

extide - Wednesday, April 26, 2017 - link
They did... the last Radeon Pro Duo had a complete closed-loop cooler -- it did not require a water-cooling system already in the workstation.

Yojimbo - Tuesday, April 25, 2017 - link
You have a point. How many of these cards will actually be available to purchase? I can think of a good reason why AMD would come out with phantom products. Its stock price is driven by speculation and impressions of future promise more than by fundamentals. The media, for their part, would be incentivized not to look too deeply into such issues, since they want to maintain a good relationship with AMD in order to have access to review samples.

Of course this is all conspiracy-theory-type speculation, but the idea of it isn't too far-fetched. I guess if I cared all that much I could spend hours looking into the demands of professional visualization and content creation users/applications, as well as what software is available with this Radeon Pro card compared with Quadro cards and Titan cards. But yeah, perhaps the price for this card is so low not because of a lack of software but because of a lack of availability at the listed price. Maybe it can only be found for a much higher price. I hadn't considered that.
Samus - Wednesday, April 26, 2017 - link
The vast majority of Quadro and FireGL products are OEM. The only company I know of that markets a retail Quadro is PNY, and theirs are a reference design with a reference cooler.

Yojimbo - Wednesday, April 26, 2017 - link
FireGL? That's going way back. But anyway, what are you saying? The list price is just for show? AMD is really selling this card to OEMs at a much higher, rather than lower, price than list? Because I'm confident that NVIDIA's margins on their Quadro line are significantly higher than on their GeForce line, and there is more overhead per card sold for support and software on their Quadro cards. So I am doubtful that NVIDIA is selling their Quadro P5000 for $1000 a pop to OEMs, even if the overhead to bring the card from the manufacturer to the customer is less.