I was hoping for a straight NDA lift and benchmarks right away instead of an hour-long product announcement. Let's see if AMD will really claim the new card is good enough for 5K gaming.
I always have a kind of affection for AMD - they are just so amateur at marketing. Watching this now and it's just so unpolished as a presentation! The chap from Stardock seems to be the only one really comfortable in front of a crowd of people so far. It's kind of endearing in its way.
I take it all back. Lisa Su has more energy and confidence than all rest of them together. "What you see is perfect gaming". :) She's certainly happy with their product!
"What you see is perfect computing. Bulldozer is the future of computing. AMD is leading the forefront of computing in a way no competitor could possibly think to." (Or ever would want to.)
I wonder what happened with Bulldozer? I mean it's only "bad" compared to Intel's chips. In the scheme of things it's awesome.
I hope AMD comes up with something more competitive, but I'll still consider their CPUs just to support them (and have to "suffer" with awesome performance instead of awesome-plus, LOL).
I'm still suuuuuuuuuuuuuper leery of their GPUs though only because of their drivers.
I'm also down on switchable graphics of any sort. I swear the people who test them don't LIVE with them, don't spend hours gaming on them to find out "oh, this crashes a bunch, actually".
You've got to admit they've got a fine sense of humor, though: "the new era of PC gaming" starts with an almost complete lineup of simple rebrands (at slightly higher clock speeds and with more memory, which they could have given us years ago). Progress on new process nodes is slowing down and costs are rising, so it's not a bad choice to protect your investment and use the existing architecture and mask sets as long as you can, with as few changes as you can get away with. A new era, indeed.
Likewise. Press conferences are pretty tedious; executives get thrust out on stage to make small talk and scripted jokes, hype all over the place, five minutes of actual news. I don't know how members of the press do it. Maybe they'll lift the NDA at the end of this show? I was really hoping for a full product review to go with lunch.
AMD clearly has a problem: both GPU and CPU lines for the last few generations have been respins/rebrands of overheated architectures on antiquated process nodes.
Everything in the mainstream GPU segment is now a rebrand of a rebranded rebrand. If it weren't for nvidia leaving such a big hole in pricing between a 960 and a 750 they'd struggle to make sales.
And the top end, although new, is looking like a case of "same performance in a slightly less horrible power and heat envelope" over the last generation (given all their talk here was entirely about performance per watt).
The fact that the top end is still liquid cooled means that, for all their talk of a focus on power efficiency, they're still running too hot.
They said the card was 275 watts, which is comparable to top-end single-GPU cards from both camps. They did watercooling because they wanted to; 275 watts is easily air cooled as well.
I dunno about easily. Think of those big honkin' metal coolers for i7 CPUs; they're massive and made of solid metal, and a CPU is only pulling 85W or less.
These chips are dense, which makes heat build up much, much faster than in a PSU or CPU. It's not easily cooled; it takes a lot of surface area, especially at 300W.
Definitely not easy; they're already effectively "overclocking" the chip to compete with the 980Ti/Titan X, so they needed the WCE to accomplish that. Similarly, the NVTTM cooler is basically at its thermal limit at around 275W; it can't overclock much at all, and Nvidia has already acknowledged they are evaluating better reference coolers for their top-end flagships.
Don't confuse watercooling with excessive TDP and difficulty with aircooling. We've seen horrible cooling solutions on the 290 and 290X from AMD themselves, but their partners also made some really good solutions as well.
The Fury X likely has watercooling for two reasons:
1 - Yes, one is the high TDP. WC helps tremendously in that respect, even if the chip can be cooled with a rather large air cooler (and it can).
2 - The density of HBM + GPU on the same package allows for a significantly smaller PCB, which can also cut costs -- traces, GDDR5 dies, and bus width all increase the size and cost of a GPU. Watercooling the entire package, memory and all, is *way* easier than on a traditional GDDR5 GPU.
David Kanter spoke about this in some detail on TR's podcast with Scott Wasson.
@mrdude, I think the point stands, honestly. 275W stated typical power draw is extremely high for a single GPU and would erase much of the benefit of some of the bulky custom coolers. I do agree the solution makes sense, and I'm using AIOs myself on my Titan X's; the difference is tremendous, a 40C delta between the reference air cooler and the AIO. But the point is the Titan X and 980Ti didn't "need" the AIO; they can use that thermal headroom for bigger gains, while the Fury X is starting there. We'll see how perf shapes up in the end.
This argument makes no sense when the Titan X has a 250W TDP and uses an air cooler. And 275W is very easily chewed up by a slight overclock on the Titan X, yet it still doesn't require a watercooler.
Look at the watercooled Fiji PCB and notice how much smaller it is than your Titan X. If you take apart the HSF on your Titan X, you'll also notice that the VRAM and the traces make it impossible to make a smaller PCB while still utilizing regular air-cooling.
The watercooler on Fiji has nothing to do with an extra 25W TDP. That's just plain ol' idiotic. It's to do with the shorter PCB and the HBM being on-package right beside the die.
250W is typical for a stock Titan X, yes, and when you increase the power target to 275W you quickly saturate the capability of the stock cooler, which raises temperatures to the temp limit of ~85C where the GPU begins to throttle and downclock. Running at these higher temps also increases power consumption due to leakage. I can see this with my two cards (one AIO, one air cooled): the air-cooled Titan X hits the power limit (1/0) while the AIO-cooled one stays at 0 on power limit while running at just 43C under load.
What does PCB size have to do with power consumption again? I am not even sure what your point is here. The watercooler on the Fiji DOES have a significant impact on total power consumption, did you not see that the aircooled version not only has fewer SP enabled, but it still runs at the same 275W typical board power as the Fury X? Again, we will see quite clearly, that leakage and operating temps are going to play a significant role in how all of these cards behave.
The point is that without the AIO cooler, the Fury X may have only achieved maybe 950MHz if it had to operate at a 250W limit using an air cooler, and it most certainly "stands" if you just look at how the non-X Fury behaves. The same behavior was mentioned with the 290/X when custom coolers came out: the very high power consumption of the reference blower came down with those custom coolers that kept the GPU at lower operating temps.
Size has quite a bit to do with power consumption. But that wasn't my point. I was referring to the size of the PCB and the cooling required. Fiji doesn't need a large PCB, so it can shrink the PCB and allow for smaller form factors. In order to do so, AMD had to use a watercooler, because a smaller PCB wouldn't allow for a heatsink large enough to dissipate the required heat. This is an option AMD has that nVidia doesn't, at least with GPUs of roughly the same TDP. nVidia can't fit a Titan X or 980Ti into a form factor the size of Fiji. The 970 was possible, sure, but even the 980 isn't available in that size.
As to your other point regarding the HSFs being ineffective at 275W TDP -- that, too, is nonsense. The R9 290X non-reference cards dissipate even more heat than Fiji and Titan X and do so with an air cooler. The limiting factor for Titan X is the blower design and not the TDP. Non-reference 290X coolers with triple or double-fans dissipate more heat and tend to do so more quietly than the Titan X's blower style cooler. And, again, AMD could have opted for that once more but instead chose to decrease the size of the GPU which meant they required watercooling.
The one argument you can make regarding the use of watercooling is maybe the inconvenience factor, but it also allows for smaller form factors that would otherwise be impossible. The one aspect AMD overdid was the cooling capacity of their watercooling solution. 500W for a GPU that at max power can only consume 375W seems more than a bit excessive.
Again, how is size an important factor when the much larger PCBs on the Titan X/980Ti are rated for 250W while the tiny Fiji board is rated at 275W? Oh right, because the GPU is the primary determining factor, not size. The PCB size has NOTHING to do with the need to use watercooling; they can and will be using an air cooler on the Fury, and if they are worried about size, they can just use a blank PCB extension like many of the 970 OEMs did. Sure, Nvidia can't use a smaller PCB until they move to HBM, but that's a non-issue when it comes to using standard air cooling or an AIO; if anything, Nvidia has MORE options. And Nvidia can't fit the Titan X or 980Ti into the same form factor? Proprietary, maybe, but most if not all mini-ITX SFF builds can fit the NVTTM reference dual-slot design, but not the Fury X, due to no room for the rad.
I said the NVTTM ref design tops out at 275W, why are you comparing to the 290X non-ref and not the reference that showed the exact same problems? Was I talking about the same 3-fan open faced coolers that are on the 980Ti? No, I wasn't. But my point stands there as well as I've already acknowledged the custom coolers and AIOs do a better job of cooling and reducing power consumption in the form of leakage and fan power draw, which brings me back to the point a reference cooler on the Fury X might have very well failed at 275W making it a 300W monster, while the reference blower IS sufficient at 250W and up to 275W for Titan X. Obviously both the 980Ti and the Titan X would benefit immensely from an AIO as well, but I think many buyers already understand this.
The form factor won't be an issue for towers, but it will be for SFF designs that don't have a place to hang or wedge the radiator. And the 500W max is not overkill, because again, the extra cooling capacity will be used to keep the operating temps lower, thereby reducing power consumption from leakage.
This is exactly what I was going to say. My suspicion is Nvidia knew, or highly suspected, it was slower than the Titan X, and released the 980 Ti at $650 to put AMD in a hard position. They probably wanted to charge $800-850 for this card with it being slightly slower than a Titan X, maybe 5-10%, and they likely had to delay the release so they could figure out a way to get it to "beat" the Titan X/980 Ti. So they did do a smart thing by releasing it "stock" as a watercooled solution. A lot of enthusiasts will think that's cool. The smart ones among us know they likely did this so they could "overclock" the chip they originally intended to air cool so that it could beat the Titan X/980 Ti, even if it's only slightly. I highly doubt this chip will have much in the way of overclocking headroom.
The other thing to consider is that a watercooled 980 Ti costs ~$100 more than a reference card but performs 15-20% better, which means it's going to outperform the Fury X in a more apples-to-apples comparison (due to the water cooling) and justify the higher cost of the pre-watercooled GM200.
@Kutark, agree with this analysis to a tee. It's obvious Nvidia pre-emptively struck with the 980Ti to limit AMD's landing spots and take that $850 mark off the table. We'll see what the final clocks and performance are, but my guess is we see the tables turn where AMD may win at the lower resolutions but lose at the higher ones due to fewer ROPs than expected and the 4GB limitation.
Sorry, but I don't believe this. There are just too many factors that don't agree:
A) AMD knew NVIDIA had a big gap between the Titan X and the 980. Add to this the fact that the Titan X uses a different (bigger) chip than the 980 with all units unlocked, and it was highly likely NVIDIA was going to release a product with a crippled Titan X chip around the Fury release.
B) The watercooler reference design as the stock cooler for AMD's top card this generation has been popping up around the internet for a very long time. This is by no means a quick fix.
C) They could definitely have released the Fury X with a good air cooler at the same clocks; water coolers aren't as special as many people think.
D) Nvidia has been launching new products or cutting prices around AMD launches for a long time.
E) AMD knows its current 4GB limit for the Fury cards is an important limitation that should be fixed soon. This is a disadvantage compared to the 980Ti/Titan X but also to their own 390/390X, so it limits their potential price. If AMD were to launch the Fury at 850 USD while equipping the 390/390X with 8GB for half that price, it would have been a joke. Either 8GB is BS and they are admitting it, or the Fury is a very short-lived product and they hope people are stupid enough to buy it anyway.
F) AMD has always had aggressive pricing compared to Nvidia. Look at the price/performance graphs from the last few years and you will see AMD always has more good scores than Nvidia here. Nvidia relies more on having other advantages like G-Sync, PhysX acceleration, etc.
So to sum up, I don't believe:
- the Fury was ever going to launch at 850
- the water cooler was a late adjustment in reaction to the 980Ti
- AMD did not expect the 980Ti
- AMD forgot that a 4GB maximum is an important limitation for a new high-end product
C) That is impossible to determine, without the actual product available for testing.
E) 4GB VRAM is pretty good, if you know how to use it. Problem is, AMD has apparently had an "exceedingly poor" way of handling VRAM in the past, and they tried compensating for it by putting more VRAM to make up for their inefficiency. Their marketing of course did its best to convince all the rubes that more VRAM was always better.
"An obvious concern is the limit of 4GB of memory for the upcoming Fiji GPU – even though AMD didn’t verify that claim for the upcoming release, implementation of HBM today guarantees that will be the case. Is this enough for a high end GPU? After all, both AMD and NVIDIA have been crusading for larger and larger memory capacities including AMD’s 8GB R9 290X offerings released last year. Will gaming suffer on the high end with only 4GB? Macri (Joe Macri, Corporate Vice President and Product CTO at AMD) doesn’t believe so; mainly because of a renewed interest in optimizing frame buffer utilization. Macri admitted that in the past very little effort was put into measuring and improving the utilization of the graphics memory system, calling it “exceedingly poor.” The solution was to just add more memory – it was easy to do and relatively cheap. With HBM that isn’t the case as there is a ceiling of what can be offered this generation. Macri told us that with just a couple of engineers it was easy to find ways to improve utilization and he believes that modern resolutions and gaming engines will not suffer at all from a 4GB graphics memory limit."
I'm not sure why you think that poor usage of graphics memory was an AMD-only problem based on that quote. They were plainly talking about the subject in general. Please keep anti-AMD biases out of your analysis. You're not Chizow, so hopefully that's not an impossible request.
@AeroWB you make reasonable points, but ask yourself this: would AMD really bother with the embarrassment of rebranding their entire product stack just to launch an "ultra-premium" brand that costs the same as their previous 290/X parts? It just doesn't make any sense. The only explanation that does make sense is that they had their eyes on a bigger prize, i.e. the Titan.
@chizow I think AMD had to make a few tough choices, as their R&D budget is much smaller than that of Nvidia and Intel, and creating a new chip is really expensive. The fact that all new GPUs are still 28nm is something both Nvidia and AMD were not expecting a few years ago when they started designing the chips for, say, 2014/15. So the new chips we've seen this year are probably all designs that had to be somewhat rushed, which means either taking a new design and scaling it back to 28nm, or taking an existing design and improving or upscaling it.

AMD had a big advantage here as they already had a lot of chips that could support DX12/Vulkan/FreeSync etc., so rebranding them was a reasonable and cheap option. AMD did not have an answer to their high power draw and didn't have a high-end part to compete with the Titan, so they decided to create one more GPU on 28nm with (almost) the same graphics cores as the latest version used in the R9 285, but scaled up to be really huge, and to address the power draw they added more power features like Frame Rate Target Control, some power optimizations in the chip, and HBM to top it off. With this approach they had to create only one new chip while also developing HBM further so it can be used on the next generation of cards at a new process node. Because the R&D costs of the 200 series chips have been earned back with current sales, they can afford to drop the price and so compete with Nvidia.

Nvidia, on the other hand, had much more need to create new 28nm parts, as their older chips were missing essential features for DX12/Vulkan, so in order to fix this they had to design multiple new chips to cover the whole line-up. But instead of just "upgrading" their old core designs they did a big redesign, which resulted in much less power-hungry cards, probably by integrating much of the tech they developed for the mobile parts.

So both AMD and Nvidia made different decisions on what to develop for another round of 28nm designs, and both choices sound very reasonable to me. Though I am, like a lot of people, disappointed the new 300 series are all rebrands, as Fiji is too expensive for me and I had hoped to pick up a much better price/performance card than was possible with the R9 290(X) I was going to buy. And now the new R9 390(X) are only marginally better.
@AeroWB: I actually agree with pretty much everything you wrote here; I guess where we disagree is on the actual execution and pricing decisions for Fury. It's probably a moot point now, but I just feel like the 980Ti really did surprise AMD in both price and how close it performed to the Titan X, greatly limiting their landing spots for the Fury X. It just doesn't make sense to me that AMD would go through the trouble of creating an ultra-premium brand at $650 max when they already hit those price points with their standard R9 series in the past, while bringing on this rebranding scrutiny for the rest of their R9 300 series.
Wow Chizow (shill) you're really ramping up the number of bullshit Anti AMD posts as the hard launch approaches. Guess you Nvidia employees must be a little nervous as the prospect of losing the performance crown approaches...keep it coming man, this is very amusing :)
Wow Ranger101 (fanboy), would be awesome if AMD just did what the big boys do and just hard launch when a product is ready. ;)
But yes, I know you must be getting lonely in that cube farm in Sunnyvale/Austin/Toronto as you see more and more of your buddies leaving for greener pastures. ;)
Chizow - these products are obviously not for you, so please move aside. Go to Nvidia's articles or wait for their turn. You are seriously annoying and you've added heaps of rubbish to a constructive forum (yes you did!). I wonder why any serious site would allow this to happen.
Look up the word forum, then move along if you find my posts seriously annoying.
Do you go around policing Nvidia articles in a similar fashion? I bet you don't. I think there's quite a few who were in the market for an upgrade that were disappointed by what AMD has done here, if that's not you, move along!
Disappointed about what? The first leaked 3DMark Firestrike benchmarks at 4K (which is very memory intensive) show the Fury X ahead of the Titan X for half the price. I think Nvidia customers should be disappointed for throwing their money out the window; if I were a Titan X owner I would be pretty pissed off.
Why do you compare with the Titan X, which is a card meant for those that do architecture rendering or run simulations that *cannot* be run with a 4GB card? Compare it with a 980Ti. And look at the custom versions too, which come with quite overclocked GPUs without needing a liquid cooler to keep from melting down.
Except the Fury X market is going to be extremely limited, and the rest of the market that held off in the sub-$500 price point, waiting for an AMD update before making a decision, has to be disappointed with this Rebrandeon reality. They could have been enjoying this same level of performance at the same price 10 months ago, when Nvidia launched the 970/980 and forced price cuts on AMD's flagship 290/290X down to the $220-$300 price points.
Also, 3DMark is NOT memory intensive, lol; the entire runtime's footprint isn't 4GB, so how would we expect it to utilize a 4GB VRAM buffer?
I never thought I'd read this kind of crap on a site like this. Really guys, do yourselves a favor: go and learn a bit of physics before putting your fingers on a keyboard.
The respins do have better performance and honestly what does one expect? Why would they waste precious resources on jumping to 20nm when they have almost perfected the move to 14nm and are doing so full scale next year.
You have to consider the fact that development of GPUs is planned much further in advance than something like a game; it's hard to move and adapt to a change in circumstances, which in this case is TSMC being an extremely unreliable foundry in meeting their own timelines. It put a large spanner in AMD's plans, and really I think credit should be given because their same architecture is matching Nvidia's!
Nvidia only has a bit of a performance-per-watt advantage, but it's negated by initial costs and the fact that AMD is generally favorable on bang for buck. It's a shame development is not simple; AMD and Nvidia have different development timelines, generally about 6 months apart, so people compare whichever company just launched their chips to the other company's 'current' chip and say THEY AREN'T DOING ANYTHING, X IS BETTER THAN THEM!
Let's see what the Fury X and Fury can do against Nvidia's current GPUs and it should be a much 'fairer' battle :D
>> Why would they waste precious resources on jumping to 20nm when they have almost perfected the move to 14nm and are doing so full scale next year.
Because 28nm rebrands won't sell shit vs nVidia's 900 series this year? Check out Steam's hardware survey and the graphics card deltas, you can't get a better temperature reading on the gaming market.
Because 14nm is a totally unproven technology for high power 8-9 billion transistor chips? Even Intel with their tick-tocks have been dragging their feet starting with tiny 4.5W processors and working their way up and still haven't released a normal 14nm desktop CPU.
>> Its a shame development is not simple because both AMD and Nvidia have different development timelines, generally they're 6 months apart
GTX 780: May 2013
Radeon R9 290X: October 2013 - response: GTX 780 Ti
GTX 980: September 2014 - response? Radeon R9 285
They had almost a full year from the 290X, and the R9 285 was the best they could come up with. Nine months later and they're paper launching a card that'll raise the bar for the first time in 20 months. It's AMD that's very, very late, not nVidia that's early.
Hasn't nVidia made 3 recent jumps on 28nm? Why can't AMD handle it? They care so much and are so concerned with gamers and gaming, and wear a holy crown of selflessness.
Considering Maxwell came out a year after the R9 2XX series and had the same performance your posting isn't accurate at all.
Nvidia has been quite embarrassing. The 970 should not be equal to a card a year older. Especially after the memory fiasco, I'm going to be selling my 970 for a new AMD card.
And the GTX 970 was only a mid-range card and forced AMD to drop prices on their high-end cards to keep them selling. These new Fiji cards are suffering the same affliction - Nvidia has determined the selling price...not AMD.
Oh yes, embarrassing that with a chip quite a bit smaller than the huge GK110 they managed to get better performance with less power consumption. Maybe you don't understand that Nvidia's aim is to make money, and the 980/970 launch made them a lot of it. Embarrassing are the fanboys that have no connection with reality and live in their own world made of colored bars and numbers printed on paper instead of evaluating things in their fullness.
They were stuck for far too long on 32nm, and now they've been stuck for years on 28nm, with absolutely no sign of process improvements from GloFo. Their processor woes probably wouldn't have been nearly as bad if they'd been able to get a power saving die shrink out of a lower process node. But nobody's been able to deliver one for them.
Intel is literally 10-15 years ahead of the competition at this stage, and as pointed out, they're having problems. If the absolute best in the business can't get 14nm working, GloFo has no hope.
The "14nm" from Samsung and GloFo is low power optimized and probably unsuitable in the first instance for big gpu chips. I'd expect the SOC chips to be the first to get 14nm treatment.
You don't have to expect it; Samsung is already producing their SoCs at 14nm and selling them in the Galaxy S6, for example. The next to use it should be Qualcomm with their new Snapdragon 820 (after the epic fail of the 810 on 20nm).
@Hicks12, still can't admit the 300 series are just rebrands, huh? lol. AMD can't spend the resources because they don't have them; they were too busy wasting resources on dead-end APIs like Mantle. But why would they? To remain competitive and relevant, maybe? The market didn't want their R9 200 series even after AMD was forced to drop prices, and those offer FAR better price:perf than these rebranded 300 series cards.
The software engineers were working on Mantle; that didn't stop the hardware engineers from doing their own thing. They've had other issues regarding their hardware development, such as delays from the fabs. They spent considerable hardware resources on developing 20nm products that they couldn't build, and it's no simple task to just move the new designs back to 28nm. NVIDIA suffered similar setbacks but was able to handle the situation a bit better.
Oh, and the market did want the R9 series, especially during the cryptocoin boom. For some time they were considerably more expensive than the NVIDIA counterparts due to their superior compute performance. The market moves much quicker than the GPU development process, though.
Dude, save your breath. Chizow = NVIDIA troll. He treats his own and everyone else's speculation as fact as long as it suits him. He can't be honest about the merit of a product. Fanboys are impossible to have a conversation with.
Most will be respins/rebrands. HBM is simply unaffordable for the mid-range and lower-end cards. So if the cards all move down a notch on the performance list and are priced accordingly, I'm fine with it. One thing they need to do is make them all GCN 1.2, or whatever the latest is. If not, they don't need to be in the 300 series. The exception would be the budget cards. You shouldn't, however, have a budget card that has, let's say, TrueAudio and offers GCN 1.2, and then the next card up the chain offers a 15% perf bump but lacks TrueAudio and is only GCN 1.1. That's just confusing to customers and will piss them off when they realize their shiny new card doesn't support the latest features.

NVIDIA can do the same thing. If they respun/rebadged a 980 into a 1070 or whatever their next series will be, I would be OK with that as long as its price goes down with its level in the series tier. Then I'd pick one up and SLI it with my 980.

As far as cooling goes, AMD can't bust out with some BS cooler that doesn't work. They need to take a cue from NVIDIA and put a really good reference cooler on the card. Third-party vendors have put excellent coolers on the 290 series cards and they run nice and cool, dissipating the heat very well, with no throttling and even allowing for a bit of overclocking. For the new Fury cards with HBM, I think they should release a top-end air-cooled version as well as the AIO WC. For air cooling, if they don't do it themselves, they should have a launch partner with one available. AMD needs customers to stay, and they want customers to come back. Doing this will give customers options. Plus it will show customers that the card isn't so inherently inefficient that they have to resort to AIO WC only just to get it to stay competitive.
@Manch: So you do agree that there is significant market confusion that results from AMD's rebrand strategy? So why do you think I am just trolling when coming to these same conclusions and bringing them to light when AMD's biggest fanboys/supporters are busy sweeping them under the rug? Certainly from your own very well formulated and thought out concerns, a less educated/informed consumer should be able to benefit from that same level of understanding and knowledge?
Voicing your concerns as you did is what gives you better products, not apologizing/defending and being a homer for the team you root for. If you bothered to go back and look, you would see I was VERY critical of what Nvidia did with the Kepler GK104 and GK110 launches (leading to Titan), and beyond that I am very vocal about issues I run into when using their products. But the difference is, I'm actually CRITICAL about them and not busy sweeping them under the rug, because I *KNOW* Nvidia listens and will fix them if the issues are brought to light. Just food for thought. :)
I've now seen their line-up, though I'm waiting on benchmarks. I don't see how they're going to charge $100 more for a respin/rebadge. I was hoping the 390X would be released at current 290X 8GB prices. Usually every generation a rebadge moves down a notch, not takes up the same spot in the line-up or goes up in price. It would need a significant improvement, which I just don't see. I guess only the Fury cards are GCN 1.2; that at least eliminates confusion, being at the top end only. I haven't read up on what the difference is, though. I'm not terribly concerned about 8GB frame buffers either. Most games right now don't take advantage of 4GB at 4K resolutions. There are exceptions; SoM, for example, uses about 5GB I think. If they are actually paying attention to how the memory is being used and their compression algorithms are that much better than the previous gen, then it shouldn't be an issue. I however will not be buying any cards from either NVIDIA or AMD. I only buy every other gen at a minimum. I already have two 290X 8GB and a 980. I'm good for right now.
Not everything you say is bunk. The fact that you think NVIDIA can do no wrong, and how you say things, makes you come off as a troll. You're making a lot of assumptions before benchmarks or anything else are released. Some conjecture is easy enough; it doesn't take a huge leap of faith to guess how the 390s will perform, but no one has solid benchmarks on Fury yet, whether it will overclock, throttle, or give off a rose-like smell when it's cooking along. As far as you being critical of NVIDIA, that may be the case in a few instances, but you absolutely flame AMD for anything and everything. Your last statement about how you KNOW Nvidia will listen is kinda funny. They, like every other company, will *listen* if there is enough outcry and it can be expected to hurt sales. Good examples are locking out overclocking on the mobile GPUs, or AMD's CrossFire issues; it took a whole other chip to "fix" that, which is why I've never CrossFired or SLI'd anything up until now. I hope the Fury X and diet Fury, or whatever it's called, do really well. I want them to put price pressure on Nvidia for once so I can get a second 980 for cheap.
As far as you being a troll, yeah I think you're trolling. If you were more even handed with your criticism, then it would give greater weight to your arguments. As it stands now, people largely treat your comments as trolling as do I.
Yep, even that would've been a more worthwhile endeavour, I just don't see why people think these decisions are made in a vacuum without any repercussions on other parts of the business. AMD's downfall is just a domino effect of multiple bad decisions, beginning with the over-valued acquisition of ATI.
Gigaplex, of course it did, because AMD leadership green-lighted and pink-slipped either/or. It's really simple: someone in their graphics division OK'd spending money on Mantle instead of allocating those funds to saving R&D jobs, and now we see the ramifications. Mantle is dead, years wasted, money wasted funding EA/DICE's development of Mantle and implementing it in-game. One new ASIC since 2013 and a bunch of rebrands, instead of a new set of ASICs to complement Fiji.
You could actually probably put AMD and Fanboys in there and it would be more apropos, how is Mantle working out for you guys? Money well spent amirite! :)
But hey works for me, I've come to expect low-brow humor and low intellect from the types that would actually buy AMD products.
Yep, certainly the impact is being felt of all the R&D cuts and layoffs. I think they were caught off guard by not only being stuck at 28nm but also that Nvidia bothered to produce a completely new gen on 28nm.
AMD built another core too; that's what Fiji is. The only real difference is that Nvidia didn't change the name of the 750 to 950. The fact that AMD can flat-out rebrand the 200 series to 300 and still be competitive with Nvidia shows how level the playing field actually is.
I guess if you consider "level" to mean does the same thing but uses 2x as much power, runs 10-15db louder, and can be used as a space heater in the winter.
Nvidia built 3 new cores (GM200, GM204, GM206), and a 4th considering the GM107 was an inbetweener. The fact AMD had to rebrand their 200 series shows they had no other choice. Guess what? This wouldn't be the first time that a previous gen flagship remained competitive with the next-gen perf mid-range SKU 1 slot down, but that sure as heck isn't going to do much for your margins.
AMD has built 3 cores on 28nm... I don't get why there is so much hate for AMD right now, and most of it is, well, based on false assumptions, just straight-up WRONG INFORMATION.
Vague talk about VR out of the way. Check. Next up: launch of the Rebrandeon line for 2015. Check. Hopefully after that: launch of an actual new card. Come on, Fury! Don't make us think that AMD should have called you "Moderate Annoyance"!
No performance numbers, looks like they avoided discussing the 4GB RAM limitation, $649 would suggest it's not a Titan X rival. "Moderate Annoyance" may just apply to this press conference.
"R9 390 and 390X start at $329 and $429. Both have 8GB of GDDR5. Meant for 4K gaming"
LMFAO. Sure AMD, Sure. At least Ngreedia was smart enough to say that you really want a GTX-980Ti or Titan X for 4K gaming.... and even those cards aren't really great for 4K, more like "almost adequate".
You guys should seriously tune into the live stream, it's a "New Era of PC gaming"... lol, the marketing BS is almost too much to bear XD. So many half truths, and avoiding details... "Full DX12 support"... and no mention of the GPU's used across the 300 series, I wonder why?
They better be able to beat a 980 from last year. As for the 980Ti and Titan X.... they would have intentionally leaked info last month if the Fury was really beating those two cards.
You mean when the 290x came out at about half the price for the same perf as the original Titan? Which is pretty much what they are doing now, except a bit more than half price.
:) The math may be wrong, but his point still stands. AMD's "victory" at 4K is more or less academic, especially when you factor in the variable sync implementations (G-Sync vs FreeSync).
They're such tools... they watched all the badboy movies and decided their receding mat must go because they are, HOMELAND SECURITY AND BFG KFA rolled into one !
Been team Red since ATI but I am honestly just starting to feel sorry for this company. If they don't pull a rabbit out of their hat soon there isn't going to be anything left.
Looks like AMD listened to all the recent negative press on the 200 series (290/x mainly).
Much better cooling system? Check.
Much better perf/watt? Check.
Much better industrial design? Check.
I am sure the haters will find something else to complain about though.
My bet on Fury X perf: Based on the $650 street price, 15-20% faster than 980 Ti. I mean think about it, they are not undercutting nVidia here-- so they must be getting better perf... Just how much?
15-20% faster than 980Ti means it will be faster than Titan X. AMD wouldn't be stupid enough to sell a card that beats a $1000 card at $650.
My guess is Fury X will more or less tie with 980Ti. Some people will be willing to get smaller 4GB HBM over 6GB GDDR5 so they are price the same. Fury X2 will be two Fury (not Fury X) cores and cost $1000+ to easily eclipse Titan X.
Yes. It is impossible that this card is faster than the Titan X because, by hardware specs, it is at most 50% faster than the 290X. So that is the most they can get overall. They might be better in games at 5K that don't eat more than 4GB, but that is a very narrow niche. The price shows it will at best run against the 980Ti. Probably trade blows with it.
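(For what it's worth, a rough spec-based sketch of where that ~50% ceiling could come from, assuming the published shader counts and clocks: Fiji's 4096 stream processors at up to 1050MHz versus Hawaii's 2816 at up to 1000MHz works out to 4096/2816 x 1050/1000 ≈ 1.53x the raw shader throughput, before memory bandwidth or any architectural differences are considered.)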
You're misinterpreting the numbers. Performance per watt is not equivalent to performance. Fury X's performance per watt is 50% better than the 290X, but the Fury Nano was said to be 100% better than the 290X. By your (mis)understanding, that would mean the Nano is faster than the Fury X. It's not, and your extrapolation of Fury X's performance ceiling is wrong.
If the watts are the same, it absolutely is equivalent to performance. They stated the Fury X was 275W typical; assuming this is roughly the same as Hawaii, a statement of 1.5x perf/W is going to translate directly into 1.5x perf, which is not going to be enough to beat the 980Ti/Titan X.
Similarly, if Nano is say 1.3x perf of 290X at 185 vs. 275W, that will get you to the 100% improvement in perf/w.
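(A quick back-of-the-envelope check using those same assumed board powers: a Nano at 1.3x the 290X's performance on 185W versus the 290X on ~275W works out to 1.3 x (275/185) ≈ 1.93x perf/W, i.e. roughly the claimed 2x, while a Fury X at 1.5x the performance on the same ~275W is 1.5x perf/W by definition. The 185W figure and the assumption that Hawaii and Fiji share the same typical board power are the speculative parts here.)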
"Similarly, if Nano is say 1.3x perf of 290X at 185 vs. 275W, that will get you to the 100% improvement in perf/w."
I think AMD achieving 2x perf per W of the 290X with a cut down Fiji is quite extraordinary considering a fully enabled GPU only manages 1.5x. I'm curious to see how they achieved that.
I suppose, but lowering the clocks doesn't usually result in dramatic improvements in performance per W unless the GPU is operating well outside its optimum frequency range to begin with. I suppose that's possible, but even then I struggle to see how that in itself could account for such a significant difference in efficiency. In addition binned 3rd tier dies, which is what it sounds like the Nano will use, typically have worse efficiency than fully enabled dies.
Ya, that may significantly affect power consumption, but we're talking efficiency here, perf per W, not just power consumption. If you drastically lower the clocks/voltages, you will decrease power consumption, but you'll also drastically decrease performance. Like I said, every GPU is going to have an optimum frequency range for efficiency. If the Nano is merely a binned, lower-clocked Fiji, then the Fury X must be operating well above that range.
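(To illustrate with the usual first-order approximation, and with made-up numbers: dynamic power scales roughly with frequency x voltage squared, while performance scales roughly with frequency. Dropping clocks 15% and voltage 10% gives power ≈ 0.85 x 0.9^2 ≈ 0.69x and performance ≈ 0.85x, so perf/W improves only about 1.24x. Getting anywhere near a 2x efficiency gain from binning and clocks alone would require the full-fat part to be running far past its efficient voltage/frequency range.)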
Hadn't considered that - however, even if that's the case, still a premature extrapolation since we don't know if the shaders are essentially equivalent or how much impact the memory architecture will have (if any). We'll all know how Fury matches up soon enough so I think it's just silly to make sweeping pronouncements of how much faster it can or cannot be based on some assumptions that may or may not be accurate.
Strangely, the comments page behaves better on UC Browser on my Lumia 1020 than Firefox on my PC. Your response does indeed branch off from extide's post.
"My bet on Fury X perf: Based on the $650 street price, 15-20% faster than 980 Ti."
...
You sound like you know what you're talking about.
"Much better cooling system, Check... Much better Industrial design, Check" Since when has quality cooling and industrial design in a stock cooler even been a thing for AMD fanboys? In retrospect I think the more insightful question becomes when have AMD fanboys stopped mocking features that Nvidia championed and brought to market first, and start loving and parroting them instead? GPGPU computing? Hmm, around 2012. Oddly enough it seemed to coincide with the launch of GCN, AMD's first focused GPGPU computing architecture. Variable refresh rate? Around the time FreeSync was announced. Industrial design in GPU coolers? Well, now I guess.
To get some clearer answers to that original question, we can basically rephrase and simplify it down to: when did AMD start taking these features seriously?
Looks like ... now is when they started taking that stuff seriously.
In any case it seems like for the 290x, they were just like we need all the performance we can get, period. And just went all out with 2816 shaders and ended up with a gpu that uses a lot of power.
This time they were like ok, people really care about the power use, the cooling, how loud it is, how hot it runs (with stock cooling) etc -- and so they actually put some effort into that stuff. It's nice to see although the partner cards will of course have their own designs.
I am also glad to see that they did the Nano. They took the packaging size advantage and turned it into a product idea as well. I really wish we got more details on the different Fury products. How many shaders in Fiji Pro? How many shaders on the Nano? What clock speeds do those run at? How about specs for the dual-GPU card? Is it cut-down or fully enabled chips?
Time will tell of course. I wonder when we will get benchmarks of Fury X? Will we have to wait until next wednesday when it releases?
Ummm... someone needs to define "industrial design", because it's a retarded buzzphrase for morons.
Furthermore, AMD didn't just suddenly find out people care about power usage. They, after all, were the ones screaming housefires, from the very top down to their mind-controlled fanboys, when the 470 and 480 were released; then they went batso insane on the 570 and 580, and AMD even made a VIDEO WITH THE FBI AND HOMELAND SECURITY DETECTING THE ELECTRIC PULL AND HOUSEFIRES AT THE NVIDIA CRIMINAL RESIDENCE AND PUT IT ON YOUTUBE!
So no, amd didn't suddenly notice people "care about power".
@dragonsqrrl - great points, I've noticed that as well. It's a natural defense mechanism of AMD and their fanboys to downplay and poo-poo features Nvidia introduces that they don't have. Then later, when they introduce some half-baked version, it's "Oh, we love it! We were waiting for open standards and now our version is much better," even though it's just not as good or as well supported/implemented.
I notice they made specific mention of the Fury (vanilla) being air cooled, but not the Fury X. So does that mean the Fury X is running so hot it has to be liquid cooled even in a reference design? Because yikes!
What's with the hot stuff and AMD? All the high-end AMD graphics cards I've had ran cooler than any high-end Nvidia. He said the Fury X runs at 50C under load and is an overclocker's dream. When that is achieved by water cooling, who gives a shit really.
I've built a couple of ITX systems with closed-loop coolers. http://www.lian-li.com/en/dt_portfolio/pc-q18/ - you can get two 140mm rads inside this case; in fact I'm running one with one for the CPU right now, with space for one more on top if I wanted (you need to drill some holes and stuff, since that case wasn't made for rads, but you get the drift).
But in a perfect world, really, that card should never be in an ITX case (though you can find cases like the one above), and if you go to mATX or ATX, you have plenty of room in pretty much any case on the market.
If a case is too small for a radiator there's a strong chance it's too small for a super long graphics card, which up to now includes basically all high end cards.
If you have 8 GPUs then:
1. You are not common folk; you are most likely one of a few around the world and not the target of any company.
2. For 8 GPUs you can use a different case, since cost doesn't seem to be a problem.
3. You can run a custom loop that will still be better than 8 blower-type cards beside each other.
4. There are many other options if you want to use that card, instead of complaining that you don't have space for rads ;)
Maybe the system you build in your head, but yeah lol no you are not building systems with 8 titans. You would have to do some really interesting stuff to run 8 titans in one computer.
It's a 275W card. That can definitely be air cooled. They watercooled the reference card... because they wanted to, that's all. There will be plenty of partner boards with air cooling for the Fury X.
Fiji is a solid part at that price point, but most likely lower than AMD wanted as a reaction to the 980Ti. It's too bad, too; they could've avoided a lot of this Rebrandeon scrutiny and criticism if they had just done the logical thing:
At least this way they could've avoided some of the product stratification in terms of support and feature sets. You'd have at least GCN 1.1+ and it would've actually made sense.
To me it's obvious AMD had its eyes on the Titan's ultra-premium market, but the 980Ti being within 3-5% of that perf at a $650 price point threw a wrench in their plans.
That also would've been a lot more truthful from a feature level perspective given AMD's announcement of "full DX12 support" in their presentation. With their current lineup feature level support ranges from 11.1 to 12.0. No word yet on Fiji, but I'm not sure why they wouldn't mention 12.1 support if it were present, unless they simply didn't want to draw attention to the rest of their lineup.
Yep, I think that would have covered Huddy's "All new GPUs in 2015 will support FreeSync" lie too if they dropped Pitcairn. Although he could've meant just Fiji, since that is the only "new" GPU for AMD in 2015. :D
I am curious to see how AMD is going to manage the whole DX12 issue though, I guess we'll need to wait for reviews for them to cover that.
People who will buy the low-end stuff aren't often the kind that know about or care for DX iterations. Hence most likely, this particular marketing promise will get swept under the rug over time. Or better yet AMD marketing will convince the fans that it is somehow better for the world just the way it is.
I don't know that there's much "Rebrandeon scrutiny and criticism" outside the green fanboi brigade; actual consumers mostly only care if a card is competitive within its price bracket and has a featureset that meets their needs. If a given architecture is good enough to be competitive for multiple generations (ref: G92, Tahiti/Tonga), there's no rational reason to discard it prematurely.
If AMD was surprised by either the 980Ti's performance or price point, they were even more inexcusably history-blind than disgruntled Titan X owners. Every generation of Titan has been shortly followed by a cheaper card that either matched or exceeded it, and there was no reason to expect that the third time would be any different.
On that basis, either AMD was stupid (certainly possible), or they fully expected to sell a card at nVidia's $650 price point, but which outperforms their competitor's part by a reasonable margin, which is also not historically unusual. I'm sure they would LIKE for the 980Ti to not exist, but there was no rational reason to expect such a situation to obtain.
"actual consumers mostly only care if a card is competitive within its price bracket and has a featureset that meets their needs"
Does that include the AMD fanboys that had no intention of ever purchasing a 970?
"If a given architecture is good enough to be competitive for multiple generations (ref: G92, Tahiti/Tonga), there's no rational reason to discard it prematurely."
Well, except for one of the things you brought up in just your previous sentence, "featureset", amongst many other things like production costs and perf per W. Seriously, I struggle to draw a coherent analogy between something like Pitcairn and G92. Perhaps you could clarify?
"On that basis, either AMD was stupid (certainly possible), or they fully expected to sell a card at nVidia's $650 price point, but which outperforms their competitor's part by a reasonable margin, which is also not historically unusual."
Really? Could you point to a time when this has happened recently with a top of the lineup card? And by recently I mean the past ~8-9 years. I can think of only one that might qualify, which would make what you're saying quite historically unusual. AMD usually hits lower price points for their top cards, but they also usually under perform Nvidia's top cards. And when they have had a performance advantage they've priced their products accordingly.
>Does that include the AMD fanboys that had no intention of ever purchasing a 970?
Obviously not considering the 970 came out in late 2014 and was equal speed (and slower at high resolutions) with the R9 290 from 2013. They also lied about memory. Quite embarrassing for Nvidia.
The 970 has a vastly larger feature set and beats the amd card in 99% of the resolutions and settings.
I'm sick of AMD claiming it's faster when it is not - it pulls choppy 1 and 5 fps minimums at resolutions it cannot support, and then the fanboys scream "win".
Really, so the OEMs are the ones selling those R9 380 and R9 390X rebrands at Best Buy right now for considerably more than their previous 200 rebranded predecessors?
Rebrands *USED* to be for OEMs only, until AMD decided to rebrand their entire RETAIL product stack to R9 300.
What changes? There are 8GB 290Xs for a lot cheaper than $430 and plenty of 290Xs clocked at 1050MHz or higher; it's the same chip in slightly different configs. Obviously a rebrand. Same for the rest of the stack below it. Not a single new chip = Rebrandeon 300 series.
As an AMD supporter and potential customer, you don't mind that AMD has rebranded their entire 300 series stack as new? You can say all you like that consumers only care if a card is competitive, but this only seems to hold true for AMD fanboys/loyalists, as the last 10 months since 970/980 launched prove otherwise. Your claims only prove AMD caters to a niche group of frugal spenders that seeks low prices over all else (support, drivers, features), but the overwhelming majority of the market (77% last I checked) does seem to value newer features, lower power consumption, and better support over just pure price:performance considerations.
Nvidia has never launched a GeForce flagship so close to the Titan launch, or so close in performance. The 780 was considerably slower than original Titan and the 780Ti was launched some 9 months later and before the Titan Black. No one could've expected Nvidia would've launched basically a cut-down Titan X at a much cheaper price point, but it is obvious now why they did.
Again, look at it realistically. Why would they bother to try and launch a new "ultra-premium" brand but then price it at the 290/X historical price points of $500/$650? They basically bumped their normal R9 x00 series SKUs down below $500, forever. It's obvious they thought Nvidia would ask more for the 980Ti, and that it wouldn't come so close to the Titan X, which would have given them a landing spot in that $750-850 range as rumored. Obviously Nvidia caught wind of this and launched the 980Ti with higher than expected performance and lower than expected pricing, which made AMD change course accordingly.
But, we will see in a few days once we see some benches! :)
Why would anyone care about the rebrands? I mean seriously what negative does it have? NONE! ITS FOR THE OEMS! So they can have a shiny new model number, THATS ALL. It didn't even get in the way of AMD designing and launching an entirely new chip too.
Holy crap and we're sick of reading such stupidity!
If there's no problem with rebrands, why doesn't AMD just come out and say it? Why are they going through so much trouble to rebrand even the ASIC codenames? It's obvious why: because it's deceptive. Rebrands are going to be older-generation parts with dated features and higher power consumption. People upgrade to get more performance, with more features, at the same or lower power consumption and price.
In nearly every one of these important metrics, AMD doesn't move forward with performance and features, and actually moves BACKWARDs in price and power consumption. And they're trying to sell these as new parts! Fortunately, the market is smarter than this, these Rebrandeon parts won't fare any better than the 200 series which was slaughtered in the marketplace by Nvidia's Maxwell series.
OK, I have to take back my previous comment; you are more of an Nvidia shill than ever. It's like you have completely forgotten the G92 that lasted generation after generation as a rebrand. Why are you pretending this is an "AMD trick"? You think Nvidia doesn't rebrand? Yes, they made Maxwell; yes, Maxwell is completely awesome; you don't think they will rebrand Maxwell next year? Is your skull so thick that you cannot see reality in front of you if it isn't painted green? I owned an 8800GTS 512 that died prematurely... AND I owned a Radeon 5850. Both stellar cards; the 5850 never died. Does that make AMD better? Only a fool would say so, but it seems you are just such a fool on the other side. I have no loyalty to either camp; I buy what works best for me. You don't need to continually bash anyone that has different priorities than you; it just makes you look like an ignorant tool who chooses which facts to play loose with depending on what makes Nvidia look better. Now back to lurking I go.
Again, look at the G92, it is ONE ASIC carried over, first in the mid, then low end which is where you would expect to see rebrands. Not an ENTIRE chip family carried over exactly in the same product/marketing positions. 4 chips, none new, carrying the exact same marketing SKU designation as their predecessor. And why are they being so secretive about the actual rebrands? Its a deceptive attempt to confuse the market, plain and simple.
@Gigaplex, thank you for the honesty, that is exactly the point. Anyone who waited for 10 months on the promise of an upgrade in their price bracket is probably regretting the fact they waited.
"actual consumers mostly only care if a card is competitive within its price bracket and has a featureset that meets their needs"
Well that rules out a chunk of the rebrands for me since I want FreeSync. I was miffed when I found out that my 270X doesn't support it. At the time I purchased it, AMD were still claiming that FreeSync would be supported on it in a future driver release.
Actually, it's even more obvious that Nvidia panicked and released the GTX 980 Ti simply to try and get as many sales as they could before the Fury X could be unveiled. The $1,000 Titan X had only been on the market for 2 ½ months when Nvidia released the just-as-fast GTX 980 Ti at $650. There's no way Nvidia would cannibalize sales of their Titan X cash cow unless they were forced to by AMD's new flagship.
You're all crazy as moonbats. AMD delayed its release countless times and by many months. NO TIMING IS POSSIBLE WHEN CARDS ARE RELEASED AS SOON AS GOOD SILICON PUMPS OUT THE 10,000 OR 50,000 REQUIRED FOR WORLDWIDE DIST.
They don't get to pick and choose - there is NO EXCESS PRODUCTION NOR NODES TO TOY AROUND WITH.
LMAO panicked? You mean when AMD panicked and cancelled all GPU product announcements at the 2nd biggest electronics trade show in the world once they confirmed Nvidia was launching the 980Ti? :D
Yes Creig, they panicked so much that they pre-emptively rained on AMD's "Premium" debut of Fury by pricing the 980Ti so low and setting performance so close to the Titan X, that the only possible way AMD could justify their rumored $850 price point for the Fury X was if they convincingly beat both. The obvious conclusion is that AMD realized they weren't going to do that with the Fury X, so they had to settle for simply matching the 980Ti pricing.
So yes, for Nvidia it was a slam dunk, sell a ton of cards, make it even harder for AMD to compete in the marketplace, and ensure Titan still has no peer as the ultra premium part at that $1000 price point.
What stopped AMD from stopping the bleeding at any point in the last 3 months since Titan X launched? We know for a FACT their R9 300 series Rebrandeons were ready to go since they were just rebadges. Oh right, they simply weren't ready to launch Fiji. 275W and an extra thick rad tells me they overclocked this GPU to the tits again just to reach parity with Nvidia's 980Ti and Titan X at 250W. But we'll see soon enough!
Chizow, I have laughed at your pro-Nvidia bias for quite some time, but I have to say this particular post actually seems to be unbiased. I didn't know you had it in you! Unfortunately now I may have to take some of your other posts more seriously.....
I wonder if they demonstrated some working Fury setups (at least for the press). Without any working hardware, any price announcements would just be empty marketing talk.
They had a Fury system running on stage, at least 1, maybe 2. They are also being released to retail in 1 week... so yeah, obviously the cards are working...
It probably is 4GB, but even if that's the case I bet it will be fine for single GPU gaming, only when you go crossfire will 4GB really be limiting IMHO -- except for MAYBE 1 or 2 games.
Mostly people complaining about 4GB are just complaining to complain about something.
I think everyone needs to remember that you can't judge 4GB of HBM RAM the same as 4GB of GDDR5 RAM. AMD have already stated that you need much less RAM to run the high texture resolution of 4K on HBM than on GDDR5. Instead of freaking out about the 4GB issue, wait for benchmarks, everyone might be pleasantly surprised to see that what Titan X can do with 12GB, AMD can do with 4GB.
"I think everyone needs to remember that you can't judge 4GB of HBM RAM the same as 4GB of GDDR5 RAM." Uhh, by volume you can.
"AMD have already stated that you need much less RAM to run the high texture resolution of 4K on HBM than on GDDR5." Wow, when did they say that? Source?
"In the HBM article anandtech had a few weeks ago."
No, nowhere in that article does it state that GPU's utilizing HBM will inherently consume less frame buffer than GDDR5. Perhaps you could be a little more specific, maybe with a link and a quote, so we can address the misunderstanding.
"Also remember for dx12 titles memory is not duplicated. Two 4g cards is 8g of addressable resources unlike the current status with dx11 and lower."
In that article it states that Joe Macri said in an interview that the current way of doing things is just adding more RAM, but with HBM and its 1st gen 4GB limitation, AMD have been working on overcoming this perceived problem through engineering.
@ekagori Yes, but that has nothing to do with HBM, other than the fact that 1st gen HBM's capacity limitations in top tier cards spurred these driver optimizations from AMD. As I already wrote in my response to Etsp, there's no reason AMD couldn't implement similar optimizations on cards that are limited by their GDDR5 capacity.
No, it does not work that way. You can't, even for a tiny bit, supplement memory space with memory bandwidth. If your texture is 35MB it is 35MB, no more, no less. You can (like Nvidia does) compress memory and steal some processing and bandwidth to save on space, but that is not a very significant amount and it cannot make your 4GB card automagically work like a 6, 8 or 12GB one. Bandwidth only helps in scenarios where you have a huge amount of data to move through that memory, and in huge chunks (NOT a huge total, but huge chunks), which only a handful of games can even think about. The card should, due to bandwidth, show some advantage over the 980Ti at ultra-high resolutions like 4K or 5K, but not nearly as much as you'd think.
And no, AMD 4GB can not do anything TitanX can with 12GB. They can fit less stuff and that is what they can do.
Fortunately for AMD, at this moment only half-a-handful of games actually need more VRAM than 4GB - GTA5, Witcher 3, Shadow of Mordor and that crap CoD:AW... maybe Assassin's Creed: Unity... and only at 4K or higher resolution.
So, yes, AMD's 4GB will be enough for 1440 games at max and some 4K games at medium/high (depending on game) level details and no AA.
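If you want to sanity-check that, here's a quick back-of-envelope sketch in Python (the render-target count and texture budget are assumptions picked purely for illustration; real engines vary a lot):

# Rough VRAM estimate for a deferred renderer at 4K (illustrative numbers only)
width, height = 3840, 2160
bytes_per_pixel = 4                  # 32-bit formats (RGBA8 etc.)
render_targets = 7                   # g-buffer targets plus front/back buffers (assumed)
rt_gib = width * height * bytes_per_pixel * render_targets / 2**30

texture_pool_gib = 2.5               # assumed streamed-texture budget
misc_gib = 0.5                       # geometry, shadow maps, driver overhead (assumed)

print(f"Render targets: {rt_gib:.2f} GiB")
print(f"Estimated total: {rt_gib + texture_pool_gib + misc_gib:.2f} GiB")
# ~3.2 GiB with these assumptions, so 4GB is tight but workable on a single GPU

Crank up the texture pool or pile on MSAA and you blow past 4GB, which matches the short list of games above.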
The last four paragraphs on that page are the relevant parts.
Basically, there was a lot of data in the GPU's memory that didn't need to be there, taking up space. This wasn't a problem when they were adding more memory chips to up bandwidth (the bottleneck at the time), so they had plenty of memory, except for edge cases (4x crossfire).
@Etsp That has nothing to do with HBM, as is stated quite clearly in the article you referenced. What you're referring to are essentially driver optimizations, and there's no reason AMD couldn't implement similar optimizations on cards with limited amounts of GDDR5.
You're right, the technology involved has nothing to do with HBM. However, HBM's 4GB limit is the reason that they're putting effort into addressing that area. I think we may be seeing some big memory utilization improvements on existing hardware once those drivers are released. Not just HBM cards.
The point is that once those drivers are in place, 4GB won't be as problematic as it is now.
All the Fury cards will have 4GB this generation, as far as I'm aware. It is right now just a limitation of the new technology. I guess more than 4GB of stacks isn't reliable to produce, or feasible package-wise at the moment.
(Possibly the dual-GPU Fury will have 8; I don't know for sure, but I heard a rumor of the Fury cards handling CF memory differently. I take that with a grain of salt until I see it.)
I don't believe it will be the bottleneck everyone is afraid it will be for 4k gaming though. So long as the bandwidth improvements from HBM live up to the hype that is.
A lot of you people seem to misunderstand these products. First, people complaining about the only 4 GB of HBM: the speed of this memory is far higher than what was used before, it is supposed to be equivalent to 6GB of GDDR5, which would make it on par memory-wise with the GTX 980 Ti. Second, people say this is the reason it won't compete with a Titan X. It's not meant to compete against a Titan X, it's supposed to compete against the 980 Ti. Whoever buys the Titan X is brain dead and has no idea what they're doing, it has no value point and provides insignificant performance at its price point, as all Titans do.
"First people complaining about the only 4 gb of hbm. The speed of this memory is far higher than what was used before, it is supposed to be equivalent to 6gb of ggdr5, whihc would make it on par memory wise with the gtx 980 ti."
I have no idea where some of you are getting this from, or how you're bold enough to claim that other more informed people somehow misunderstand, but unfortunately you're misinformed and objectively wrong. The interface/bandwidth has nothing to do with the memory consumed for a given task.
"Whoever buys the titan x, is brain dead and has no idea what there doing, it has no value point and provides insignificant performance at its price point, as all titans do. "
This coming from someone who just wrote the previous paragraph.
I think a link to these bold claims would be nice, that way we can decide for ourselves. I think somebody is getting delta compression and HBM confused, perhaps.
Unfortunately, that's not based in reality. Except for color compression, you only have as much memory as you have. Making access to the memory faster doesn't help with a deficit of available storage. If you're lacking in storage, then you're going to have to hit up system RAM though the PCIe bus (much slower) to access the required textures and shuffle them through as you use them. That's a very inefficient way to process graphics and having a fast VRAM interface won't help you in that case. Nothing about the speed of the interface solves the problem of storage space.
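To put rough numbers on that (peak theoretical rates, purely for scale):

# Time to move a 1 GiB working set over different links (peak theoretical rates)
GIB = 2**30
working_set = 1 * GIB

links_bytes_per_s = {
    "PCIe 3.0 x16 (spill to system RAM)": 15.75e9,  # ~15.75 GB/s
    "GDDR5, 384-bit @ 7 Gbps":            336e9,    # ~336 GB/s
    "HBM1, 4 stacks (Fiji claim)":        512e9,    # ~512 GB/s
}

for name, rate in links_bytes_per_s.items():
    print(f"{name}: {working_set / rate * 1000:.1f} ms per GiB")
# Spilling over PCIe is roughly 20-30x slower than local VRAM, so no amount of
# on-card bandwidth saves you once the working set no longer fits in 4GB.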
It's a card with 4GB of memory, with all the standard limitations you get with that amount of memory. HBM doesn't magically do anything. While yes it does have more bandwidth than a 290, it also has a lot more stream processors, so in actual fact bandwidth per processor is probably pretty similar.
"4 gb of hbm. The speed of this memory is far higher than what was used before, it is supposed to be equivalent to 6gb of ggdr5"
So by that logic...... for your computer system, because DDR4 memory is faster than DDR3 or DDR2.... 4 or 6 GB of DDR4 = 8 GB of DDR3/2....
Come on... 4GB of memory is 4GB no matter how fast or slow the bandwidth/speed is.... Faster speed/bandwidth can't make up for limited memory no matter how you slice it up.
Lots of wise remarks from both camps about waiting for benchmarks, as well as the usual hardened gamblers' quick praise/criticize remarks. For those more enlightened than me: do games/benchmarks see HBM memory differently to GDDR5? That is, if a game's minimum requirement is x GB of graphics memory, it won't matter whether that's HBM or GDDR5? I think spec sheets usually recommend a minimum graphics card as opposed to minimum graphics memory anyway, but it's nice to know. What I am sure will be quite annoying with the slew of upcoming benchmarks dealing with the two variables is teasing out how much each has contributed to the final gain/loss in performance.
Memory controllers and GPU design itself can make better or worse use of the xGB a card has to an extent, but HBM v GDDR won't change much if anything.
With HBM in mind: does AMD hold the patent for this? Is Nvidia just going to use HBM for free? Anyone care to elaborate? Because if Nvidia gets to use it for free then that's really funny for AMD's side, considering they are the ones who researched and developed it. Am I making sense?
I don't see any benchmarks, nor a product. This is NOT A REVIEW. Whatever... paper launchers... Stop shilling for AMD. What new era in gaming? I see no products for review. I see no specs, speeds etc. Jeez... not even a review of the rebranded crap (which I hope comes with better speeds, faster mem etc), so what exactly launched? I came online tonight hoping to read a dozen new reviews (yeah, I do that), and see ZERO.
Correct me if I'm wrong here, I thought we would be able to BUY something today from this "LAUNCH". Is that not right?
To clarify, specs for something not coming until "fall"? etc mean nothing to us now and can change. I meant what are the specs of all these cards that are supposed to be launched right now to compete with NV's stuff? I thought the 390x and below would be reviewed today and for sale (not fiji/hbm stuff, but where are the rest?).
What's the point of reviewing anything below R9 Fury? You can already find reviews for 290X 4GB vs 8GB and 290X @ 1.2GHz vs GTX980 @ 1.5GHz. The performance for 290X/390X is known. Unless AMD managed to make it much cooler or OC more than 1.2GHz on air, there is no point.
Yeah Anand used to really punish paper launches but at some point it became OK I guess to have these tech events and then launch weeks later. I'm grateful at least that Nvidia got the message and does a really great job of hard launching products in recent years, ends all the speculation about price and performance pretty quickly!
They're overclocked versions of the same chip, compare them to previous custom cooled versions including the 8GB 290X or 970/980 and they're middle of the pack where they were before. Just rebrands with a different BIOS, nothing to see here.
It was the 8800GT/GTS 512, not the Ultra. And it's good that you can count the number of generations. If you can think of any other similarities between G92 and Pitcairn/Bonaire/Tonga/Hawaii, please let me know.
"It's a 275w card. That can definitely be air cooled. They water cooled the ref card.... because they wanted to, that's all. There will be plenty of partner boards with air cooling for the Fury X."
I wonder whether the small size of the board had anything to do with choosing water cooling? The size of board might be a limit on the fan size that could be used and I suspect we all have experience of some of the older cards from both camps that sounded like a fighter aircraft taking off.
From what I have seen I rather like Fury X, and I have been using Nvidia for years. Obviously need to see benchmarks and would like to see cards with waterblocks for use in custom water cooling loops.
The size of the PCB makes no difference to the size of the air cooler. They can just use blank PCB extensions like many OEMs already did for the 970 to accommodate bigger coolers; heck, even most of the big Al coolers now extend beyond full-size 10.5-12" PCBs.
The NVIDIA astroturfing is really getting ridiculous in the comments here. Is it too much to ask for some moderation here by the Anandtech staff to make this comment section readable?
Awww... it sounds like kyuu wants to read his pro-AMD fluff pieces in peace. There is a forum for that too, it's called AMDzone.com. Do you go around hand-waving in protest like this in the Nvidia stories against all the AMD fanboys/trolls? I bet you don't. ;)
As for "astroturfing" (this is a new term to me, must be in the AMD fanboy handbook), do any of these neckbeards look familiar to you?
I just got a SFF PC and have been looking for graphics cards for 4K 60fps, and there is almost nothing out there. Intel integrated is the only thing that will work with 4K monitors.
Most current graphics cards don't have a DisplayPort output, and if they do, it's an $800 card!
I don't care about 4k until 70" tv's are a thousand bucks and 24" monitors are $200.
Also, there would have to be 4K content readily available. Legally, I'd have to have access to every tv show and movie I already own in 1080p get upgraded to 4k for free. Obviously illegally you can get around this, but for most people the problem stands.
Peichen - Tuesday, June 16, 2015 - link
I was hoping for a straight NDA lift and benchmark right away instead of a hour long product announcement. Let's see if AMD will really claim the new card is good enough for 5K gaming.h4rm0ny - Tuesday, June 16, 2015 - link
I always have a kind of affection for AMD - they are just so amateur at marketing. Watching this now and it's just so unpolished as a presentation! The chap from Stardock seems to be the only one really comfortable in front of a crowd of people so far. It's kind of endearing in its way.h4rm0ny - Tuesday, June 16, 2015 - link
I take it all back. Lisa Su has more energy and confidence than all rest of them together. "What you see is perfect gaming". :) She's certainly happy with their product!jimmy$mitty - Tuesday, June 16, 2015 - link
When you are CEO of a company, whether you "believe" in the product or not you sell it. I mean look at how hard the last CEO sold BD.
HisDivineOrder - Tuesday, June 16, 2015 - link
"What you see is perfect computing. Bulldozer is the future of computing. AMD is leading the forefront of computing in a way no competitor could possibly think to." (Or ever would want to.)Chillin1248 - Tuesday, June 16, 2015 - link
Ouch, haha.
Check it out, the new AMD Quantum PC uses an Intel CPU:
https://youtu.be/1RdISWkIUmM?t=1m5s
Asrock Z97E ITX motherboard with most of the rear IO removed.
Gigaplex - Tuesday, June 16, 2015 - link
Not surprising, since AMD Zen isn't ready.
Wolfpup - Thursday, June 18, 2015 - link
I wonder what happened with Bulldozer? I mean it's only "bad" compared to Intel's chips. In the scheme of things it's awesome.I hope AMD gets something more competitive though, but I'll still consider their CPUs just to support them (and have to "suffer" with awesome performance instead of awesome plus LOL)
I'm still suuuuuuuuuuuuuper leery of their GPUs though only because of their drivers.
I'm also down with switchable graphics of any sort. I swear the people who test them don't LIVE with them, don't spend hours gaming on them to find out "oh, this crashes a bunch, actually".
Jon Tseng - Tuesday, June 16, 2015 - link
I've met Lisa; she's smart and a tough cookie. Admittedly she's been dealt a pretty tough hand. Good luck to her.
Wolfpup - Thursday, June 18, 2015 - link
I didn't know AMD had a new president, and how awesome is it that it's a woman? I wasn't exactly shocked she's an MIT grad LOL
MrSpadge - Tuesday, June 16, 2015 - link
You've got to admit they've got a fine sense of humor, though: "the new era of PC gaming" starts with an almost complete lineup of simple rebrands (at slightly higher clock speeds and with more memory, which they could have given us years ago). Progress on new process nodes is slowing down and costs are rising, so it's not a bad choice to protect your investment and use the existing architecture and mask sets as long as you can, with as few changes as you can get by. A new era, indeed.figus77 - Wednesday, June 17, 2015 - link
HBM tells you nothing?
MrSpadge - Wednesday, June 17, 2015 - link
That's why I wrote "almost complete lineup of simple rebrands". If there was no Fiji chip it would be simply "complete lineup of simple rebrands".
Mr Perfect - Tuesday, June 16, 2015 - link
Likewise. Press conferences are pretty tedious; executives get thrust out on stage to make small talk and scripted jokes, hype all over the place, five minutes of actual news. I don't know how members of the press do it. Maybe they'll lift the NDA at the end of this show? I was really hoping for a full product review to go with lunch.Sttm - Tuesday, June 16, 2015 - link
At least throw us some performance numbers. Like BF4 "Fury vs 980ti at 4k"... that's all I wanted, but no, nothing!
Zak - Tuesday, June 16, 2015 - link
That's because the benchmarks would likely not support their overblown claims.
Hairs_ - Tuesday, June 16, 2015 - link
AMD clearly has a problem: both GPU and CPU lines for the last few generations have been respins/rebrands of overheated architectures on antiquated process nodes.Everything in the mainstream GPU segment is now a rebrand of a rebranded rebrand. If it weren't for nvidia leaving such a big hole in pricing between a 960 and a 750 they'd struggle to make sales.
And the top end, although new, is looking like a case of "same performance in a slightly less horrible power and heat envelope" over the last generation (given all their talk here was entirely about performance per watt).
The fact that the top end is still liquid cooled means that for all their talk of power busting focus, they're still running too hot.
extide - Tuesday, June 16, 2015 - link
They said the card was 275 watts, which is comparable to top end single gpu cards from both camps. They did watercooling because they wanted to, 275 watts is easily air cooled as well.Morawka - Tuesday, June 16, 2015 - link
i dunno about easyily.. Think of these big honkin Metal coolers for i7 cpu's.. They are massive and made out of pure metal. And a CPU is only pulling 85w or less.These chips are dense which makes heat build up much much faster than a PSU or CPU. it's not easlily cooled. it takes a lot of surface area especially at 300w
angrypatm - Tuesday, June 16, 2015 - link
Made out of pure metal, not that cheap half-plastic Chinese metal. lol
Gigaplex - Tuesday, June 16, 2015 - link
A CPU pulls much more than 85W when overclocked. At stock, you don't need the fancy CPU heatsinks.
chizow - Tuesday, June 16, 2015 - link
Definitely not easy, they're already effectively "overclocking" the chip to compete with 980Ti/Titan X so they needed the WCE to accomplish that. Similarly, the NVTTM cooler is basically at its thermal limits at around 275W, it can't overclock much at all and Nvidia has already acknowledged they are evaluating better reference coolers for their top end flagships.mrdude - Tuesday, June 16, 2015 - link
Don't confuse watercooling with excessive TDP and difficulty with aircooling. We've seen horrible cooling solutions on the 290 and 290X from AMD themselves, but their partners also made some really good solutions as well.
The Fury X likely has watercooling for two reasons:
1 - Yes, one is the high TDP. WC helps in that respect tremendously, even if it can be cooled with a rather large air cooler (and it can)
2 - The density of HBM + GPU on the same package allows for a significantly smaller PCB, which can also cut costs -- traces, GDDR5 dies, and bus width all increase the size and cost of a GPU. Watercooling the entire die, memory and all, is *way* easier than on a traditional GDDR5 GPU.
David Kanter spoke about this in some detail on TR's podcast with Scott Wasson.
chizow - Tuesday, June 16, 2015 - link
@mrdude, I think the point stands honestly, 275W stated typical power draw is extremely high for a single GPU and would erase much of the benefit of some of the bulky custom coolers. I do agree the solution makes sense and I'm using AIO myself on my Titan X's and the difference is tremendous, 40C delta between ref air cooler and AIO. But the point is Titan X and 980Ti didn't "need" the AIO, they can use that thermal headroom for bigger gains, while the Fury X is starting there. We'll see how perf shapes up in the end.
mrdude - Wednesday, June 17, 2015 - link
Watercooled Fiji doesn't "need" the AIO either.
This argument makes no sense when the Titan X has a 250W TDP and uses an air cooler. And 275W is very easily chewed up by a slight overclock on the Titan X, yet it still doesn't require a watercooler.
Look at the watercooled Fiji PCB and notice how much smaller it is than your Titan X. If you take apart the HSF on your Titan X, you'll also notice that the VRAM and the traces make it impossible to make a smaller PCB while still utilizing regular air-cooling.
The watercooler on Fiji has nothing to do with an extra 25W TDP. That's just plain ol' idiotic. It's to do with the shorter PCB and the HBM being on-package right beside the die.
Your point doesn't "stand." At all. Here's the video to David Kanter's and Scott Wasson's discussion on the matter:
http://techreport.com/review/28246/the-tr-podcast-...
chizow - Wednesday, June 17, 2015 - link
250W is typical for stock Titan X, yes, and when you increase the power target to 275W you quickly saturate the capability of the stock cooler, which raises temperatures to the temp limit of ~85C where the GPU begins to throttle and downclock. Running at these higher temps also increases power consumption due to leakage; I can see this with my 2 cards (one AIO, one air-cooled): the air-cooled Titan X hits the power limit (1/0) while the AIO-cooled one stays at 0 on power limit while running just 43C under load.
What does PCB size have to do with power consumption again? I am not even sure what your point is here. The watercooler on the Fiji DOES have a significant impact on total power consumption; did you not see that the aircooled version not only has fewer SPs enabled, but it still runs at the same 275W typical board power as the Fury X? Again, we will see quite clearly that leakage and operating temps are going to play a significant role in how all of these cards behave.
The point is that without the AIO cooler, Fury X may have only achieved maybe 950MHz if it had to operate at a 250W limit using an air cooler, and it most certainly "stands" if you just look at how the non-X Fury behaves. The same behavior was seen with the 290/X when custom coolers came out: the very high power consumption of the reference blower came down with custom coolers that kept the GPU at lower operating temps.
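Since leakage keeps coming up, here's a crude toy model in Python just to show the direction of the effect (the coefficients are invented for illustration, not measured from any real card):

# Toy model: leakage power grows roughly exponentially with die temperature.
# All coefficients below are invented purely to illustrate the trend.
import math

def board_power(temp_c, dynamic_w=200.0, leak_ref_w=30.0, ref_temp_c=45.0, k=0.02):
    leakage_w = leak_ref_w * math.exp(k * (temp_c - ref_temp_c))
    return dynamic_w + leakage_w

for t in (43, 65, 85):
    print(f"{t} C -> ~{board_power(t):.0f} W")
# 43 C -> ~229 W   (AIO-cooled card)
# 65 C -> ~245 W
# 85 C -> ~267 W   (reference blower near the throttle point)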
mrdude - Wednesday, June 17, 2015 - link
Size has quite a bit to do with power consumption. But that wasn't my point. I was referring to the size of the PCB and the cooling required. Fiji doesn't need a large PCB, thus it can shrink the PCB and allow for smaller form factors. In order to do so, AMD had to utilize a watercooler because a smaller PCB wouldn't allow for a heatsink large enough to dissipate the required heat. This is a choice AMD made that nVidia doesn't have, at least with GPUs that are roughly the same TDP. nVidia can't fit a Titan X or 980Ti into a form factor that's the size of Fiji. The 970 was possible, sure, but even the 980 isn't available in that size.
As to your other point regarding the HSFs being ineffective at 275W TDP -- that, too, is nonsense. The R9 290X non-reference cards dissipate even more heat than Fiji and Titan X and do so with an air cooler. The limiting factor for Titan X is the blower design and not the TDP. Non-reference 290X coolers with triple or double fans dissipate more heat and tend to do so more quietly than the Titan X's blower-style cooler. And, again, AMD could have opted for that once more but instead chose to decrease the size of the GPU, which meant they required watercooling.
The one argument you can make regarding the use of watercooling is maybe the inconvenience factor, but it also allows for smaller form factors that would otherwise be impossible. The one aspect AMD overdid was the cooling capacity of their watercooling solution. 500W for a GPU that at max power can only consume 375W seems more than a bit excessive.
chizow - Wednesday, June 17, 2015 - link
Again, how is size an important factor when the much larger PCB on the Titan X/980Ti is rated for 250W while the tiny Fiji board is rated at 275W? Oh right, because the GPU is the primary determining factor, not size. The PCB size has NOTHING to do with the need to use watercooling; they can and will be using an air cooler on the Fury, and if they are worried about size, they can just use a blank PCB extension like many of the 970 OEMs did. Sure, Nvidia can't use a smaller PCB until they move to HBM, but that's a non-issue when it comes to using standard air cooling or an AIO; if anything Nvidia has MORE options. And Nvidia can't fit the Titan X or 980Ti into the same form factor? Proprietary, maybe, but most if not all mini-ITX SFF cases can fit the NVTTM ref dual-slot design, but not the Fury X due to no room for the rad.
I said the NVTTM ref design tops out at 275W, so why are you comparing to the 290X non-ref and not the reference that showed the exact same problems? Was I talking about the same 3-fan open-faced coolers that are on the 980Ti? No, I wasn't. But my point stands there as well, as I've already acknowledged the custom coolers and AIOs do a better job of cooling and reducing power consumption in the form of leakage and fan power draw, which brings me back to the point that a reference cooler on the Fury X might very well have failed at 275W, making it a 300W monster, while the reference blower IS sufficient at 250W and up to 275W for Titan X. Obviously both the 980Ti and the Titan X would benefit immensely from an AIO as well, but I think many buyers already understand this.
The form factor won't be an issue for towers, but it will be for SFF designs that don't have a place to hang or wedge the radiator. And the 500W max is not overkill, because again, the extra cooling capacity will be used to keep the operating temps lower, thereby reducing power consumption from leakage.
mrdude - Wednesday, June 17, 2015 - link
Once more, but this time in comprehensible English.
chizow - Wednesday, June 17, 2015 - link
Read it again, or work on your English reading comprehension; it makes perfect sense to someone who knows what they are talking about.
Kutark - Tuesday, June 16, 2015 - link
This is exactly what I was going to say. My suspicion is Nvidia knew or highly suspected it was slower than the Titan X, and released the 980 Ti at $650 to put them in a hard position. My suspicion is AMD wanted to charge 800-850 for this card and have it be slightly slower than a Titan X, maybe 5-10%. My suspicion is they had to delay the release so they could figure out a way to get it to "beat" the Titan X/980 Ti. So, they did do a smart thing by releasing it "stock" as a watercooled solution. A lot of enthusiasts will think that's cool. The smart ones among us know they likely did this so they could "overclock" the chip they originally intended to air cool so that it could beat the Titan X/980 Ti, even if it's only slightly. I highly doubt this chip will have much in the way of overclocking headroom.
The other thing to consider is that a water-cooled 980 Ti costs ~$100 more than a reference card but performs 15-20% better than the ref card, which means it's gonna outperform the Fury X in a more apples-to-apples comparison (due to the water cooling) and justify the higher cost for the pre-watercooled GM200.
chizow - Tuesday, June 16, 2015 - link
@Kutark, agree with this analysis to a tee. It's obvious Nvidia pre-emptively struck with the 980Ti to limit AMD's landing spots and take that $850 mark off the table. We'll see what the final clocks and performance are, but my guess is we see the tables turn, where AMD may win the lower resolutions but lose at the higher ones due to fewer ROPs than expected and the 4GB limitation.
AeroWB - Wednesday, June 17, 2015 - link
Sorry but I don't believe this. There are just too many factors that don't agree:
A) AMD knew NVIDIA had a big gap between the Titan X and the 980. Add to this the fact that the Titan X uses a different (bigger) chip than the 980 with all units unlocked, and it was highly likely NVIDIA was going to release a product with a crippled Titan X chip around the Fury release.
B) The watercooler reference design as stock cooler for the top card of this generation from AMD has popped up all over the internet for a very long time. This is by no means a quick fix.
C) They could have definitely released the Fury X with a good air cooler at the same clocks; water coolers aren't as special as many people think.
D) Nvidia has been launching new products or decreasing product pricing around AMD launches for a long time
E) AMD knows its current 4GB limit for the Fury cards is a very important limitation that should be fixed soon. This is a disadvantage compared to the 980Ti/Titan X but also to their own 390/390X, so this limits their potential price. Launching the Fury at 850 USD while equipping the 390/390X with 8GB for half that price would have been a joke. Either 8GB is BS and they are admitting it, or the Fury is a very short-lived product and they hope people are stupid and buy it anyway.
F) AMD has always had aggressive pricing compared to Nvidia. Look at the price/performance graphs from the last few years and you will see AMD always has more good scores than Nvidia here. Nvidia relies more on having other advantages like G-Sync, PhysX acceleration etc.
So to sum up, I don't believe:
- the Fury was ever going to launch at 850
- the water cooler was a late adjustment as a reaction on the 980Ti
- AMD did not expect the 980Ti
- AMD forgot that max 4GB is an important limitation for a new high-end product
D. Lister - Wednesday, June 17, 2015 - link
C) That is impossible to determine without the actual product available for testing.
E) 4GB VRAM is pretty good, if you know how to use it. Problem is, AMD has apparently had an "exceedingly poor" way of handling VRAM in the past, and they tried compensating for it by putting in more VRAM to make up for their inefficiency. Their marketing of course did its best to convince all the rubes that more VRAM was always better.
"An obvious concern is the limit of 4GB of memory for the upcoming Fiji GPU – even though AMD didn’t verify that claim for the upcoming release, implementation of HBM today guarantees that will be the case. Is this enough for a high end GPU? After all, both AMD and NVIDIA have been crusading for larger and larger memory capacities including AMD’s 8GB R9 290X offerings released last year. Will gaming suffer on the high end with only 4GB? Macri (Joe Macri, Corporate Vice President and Product CTO at AMD) doesn’t believe so; mainly because of a renewed interest in optimizing frame buffer utilization. Macri admitted that in the past very little effort was put into measuring and improving the utilization of the graphics memory system, calling it “exceedingly poor.” The solution was to just add more memory – it was easy to do and relatively cheap. With HBM that isn’t the case as there is a ceiling of what can be offered this generation. Macri told us that with just a couple of engineers it was easy to find ways to improve utilization and he believes that modern resolutions and gaming engines will not suffer at all from a 4GB graphics memory limit."
http://www.pcper.com/reviews/General-Tech/High-Ban...
kyuu - Thursday, June 18, 2015 - link
I'm not sure why you think that poor usage of the graphics memory was an AMD-only problem based on that quote. They were plainly talking about the subject in general. Please keep anti-AMD biases out of your analysis. You're not Chizow, so hopefully that's not an impossible request.
chizow - Wednesday, June 17, 2015 - link
@AeroWB you make reasonable points, but ask yourself this. Would AMD really bother with the embarrassment of rebranding their entire product stack to launch an "ultra-premium" brand that costs the same as their previous 290/X parts? It just doesn't make any sense. The only explanation that does make sense is that they had their eyes on a bigger prize, i.e. the Titan.
AeroWB - Thursday, June 18, 2015 - link
@chizow I think that AMD had to make a few tough choices, as their R&D budget is much smaller than that of Nvidia and Intel, and creating a new chip is really expensive. The fact that all new GPUs are still 28nm is something that both Nvidia and AMD were not expecting a few years ago when they started designing the chips for, say, 2014/15. So the new chips we've seen this year are probably all designs that had to be sort of rushed, which means either take a new design and scale it back to 28nm or take an existing design and improve or upscale it.
AMD has had a big advantage here as they already had a lot of chips that could support DX12/Vulkan/FreeSync etc., so rebranding them was a reasonable and cheap option. AMD did not have an answer to its high power draw and didn't have a high-end part to compete with the Titan, so they decided to create one more GPU on 28nm with (almost) the same graphics cores as the latest version used in the R9 285 chip, but now scale that chip to be really huge, and to address the power draw add some more power features like target frame rate control, some power optimizations in the chip, and HBM to top it off. With this approach they had to create only one new chip and at the same time develop HBM further so it can be used on the next generation of cards at a new process node. Because the R&D cost of the 200 series chips has been earned back with current sales, they can afford to drop the price and so be able to compete with Nvidia.
Nvidia, on the other hand, had much more need to create new 28nm parts as their older chips were missing essential features for DX12/Vulkan, so in order to fix this they had to design multiple new chips to cover the whole line-up. But instead of just "upgrading" their old core designs they did a big redesign, which resulted in much less power-hungry cards, probably by integrating much of the tech they developed for the mobile parts. So both AMD and Nvidia made different decisions on what to develop for another 28nm design, and both choices sound very reasonable to me. Though I am, like a lot of people, disappointed the new 300 series are all rebrands, as the Fiji is too expensive for me and I had hoped to pick up a much better price/performance card than was possible with the R9 290 (X) that I was going to buy. And now the new R9 390 (X) are only marginally better.
chizow - Thursday, June 18, 2015 - link
@AeroWB: I actually agree with pretty much everything you wrote here; I guess where we disagree is on the actual execution and pricing decisions for Fury. At this point it's probably a moot point now, I just feel like the 980Ti really did surprise AMD in both price and how close it performed to Titan X, greatly limiting their landing spots for the Fury X. It just doesn't make sense to me that AMD would go through the trouble of creating an Ultra Premium brand at $650 max when they already hit those price points with their standard R9 series in the past, while bringing on this rebranding scrutiny for the rest of their R9 300 series.
Ranger101 - Wednesday, June 17, 2015 - link
Wow Chizow (shill) you're really ramping up the number of bullshit Anti AMD posts as the hard launch approaches. Guess you Nvidia employees must be a little nervous as the
prospect of losing the performance crown approaches...keep it coming man, this is very amusing :)
chizow - Wednesday, June 17, 2015 - link
Wow Ranger101 (fanboy), would be awesome if AMD just did what the big boys do and just hard launch when a product is ready. ;)
But yes, I know you must be getting lonely in that cube farm in Sunnyvale/Austin/Toronto as you see more and more of your buddies leaving for greener pastures. ;)
dmnwlv - Wednesday, June 17, 2015 - link
Chizow - these products are obviously not for you so please move aside. Go to Nvidia's articles or wait for their turn. You are seriously annoying and you have added heaps of rubbish to a constructive forum (yes you did!). I wonder why any serious site would allow this to happen.
chizow - Wednesday, June 17, 2015 - link
Look up the word forum, then move along if you find my posts seriously annoying.
Do you go around policing Nvidia articles in a similar fashion? I bet you don't. I think there's quite a few who were in the market for an upgrade that were disappointed by what AMD has done here; if that's not you, move along!
loguerto - Thursday, June 18, 2015 - link
Disappointed for what? The first leaked 3DMark Firestrike benchmarks at 4K (which is very memory intensive) show the Fury X ahead of the Titan X for half the price. I think Nvidia customers should be disappointed for throwing their money out of the window; if I were a Titan X owner I would be pretty pissed off.
CiccioB - Thursday, June 18, 2015 - link
Why do you compare with the Titan X, which is a card meant for those who do architectural rendering or run simulations that CANNOT be run with a 4GB card?
Compare it with a 980Ti. And look at the custom versions too, which come with a quite overclocked GPU without needing a liquid cooler to not melt down.
chizow - Thursday, June 18, 2015 - link
Except the Fury X market is going to be extremely limited. The rest of the market that held off in the sub-$500 price point, expecting and waiting for an AMD update before making a decision, has to be disappointed with this Rebrandeon reality, because they could've been enjoying this same level of performance at the same price 10 months ago, when Nvidia launched the 970/980 and forced price cuts on AMD's flagship 290/290X down to the $220-$300 price points.
Also, 3DMark is NOT memory intensive lol, the entire runtime is not 4GB, so how would we expect it to utilize a 4GB VRAM buffer?
Hairs_ - Tuesday, June 16, 2015 - link
That's power draw, not heat output.
R9 cards can have similar power draw to Maxwell cards but the heat output is horrific.
Essence_of_War - Tuesday, June 16, 2015 - link
Where, exactly, do you think all of that input power is going if it doesn't become thermal energy?
Spoiler: It is not producing mechanical work.
FlushedBubblyJock - Tuesday, June 16, 2015 - link
Except when it blows electromigration tunnels and sears new bulging leakage nodes all through the maxed-out AMD core.
extide - Tuesday, June 16, 2015 - link
Power in is pretty much equal to heat out, ALWAYS.
The reason some cards get hotter than others is because of their heat removal ability.
Temperature = heat generated - heat removed
medi03 - Wednesday, June 17, 2015 - link
Jesus, guys, get a clue.
Heat generated = heat removed, else card will keep getting hotter.
It is easier to cool hotter cards (the same amount of air transfers more energy), so the temperature a card runs at comes down to how good its cooling is.
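A rough steady-state sketch of that point (simple lumped thermal-resistance model in Python; the resistance values are assumptions picked for illustration):

# At steady state, heat removed equals heat generated; what differs between
# coolers is the temperature rise needed to move that heat.
# delta_T = power * thermal_resistance (numbers invented for illustration)
power_w = 275.0
ambient_c = 25.0

coolers_k_per_w = {
    "reference blower":   0.22,
    "big triple-fan air": 0.15,
    "120mm AIO liquid":   0.09,
}

for name, r_th in coolers_k_per_w.items():
    print(f"{name}: ~{ambient_c + power_w * r_th:.0f} C at {power_w:.0f} W")
# Same 275 W dissipated in every case; only the resulting die temperature changes.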
D. Lister - Wednesday, June 17, 2015 - link
"Power in is pretty much equal to heat out, ALWAYS."It depends on the process efficiency and design of the architecture. In a poorly designed and inefficient chip, more energy would leak out as heat.
extide - Wednesday, June 17, 2015 - link
No, it doesn't depend on any of that crap. A less efficient design on a leaky process will use more power, and thus generate more heat.
CiccioB - Thursday, June 18, 2015 - link
I could not believe I would read this kind of crap on a site like this. Really guys, do yourselves a favor: go and learn a bit of physics before putting your fingers on a keyboard.
loguerto - Thursday, June 18, 2015 - link
Another guy who thinks he's Einstein :D
loguerto - Thursday, June 18, 2015 - link
you are reinventing the laws of physics here :D
piiman - Saturday, June 20, 2015 - link
They also said there would be an air-cooled version.
Hicks12 - Tuesday, June 16, 2015 - link
The respins do have better performance, and honestly what does one expect? Why would they waste precious resources on jumping to 20nm when they have almost perfected the move to 14nm and are doing so at full scale next year?
You have to consider the fact that development of GPUs is planned much further in advance than something like a game; it's hard to move and adapt to a change in circumstance, which in this case is TSMC being an extremely unreliable foundry in meeting their own timelines. It put a large spanner in AMD's plans, and really I think credit should be given because their same architecture is matching Nvidia's!
Nvidia only has a bit of a performance-per-watt advantage, but it's negated by initial costs and the fact that AMD is generally favorable on bang for buck. It's a shame development is not simple, because both AMD and Nvidia have different development timelines; generally they're 6 months apart, so people compare whichever company just launched their chips to the other company's 'current' chip and say THEY ARENT DOING ANYTHING X IS BETTER THAN THEM!
Let's see what the Fury X and Fury can do against Nvidia's current GPUs and it should be a much 'fairer' battle :D
Kjella - Tuesday, June 16, 2015 - link
>> Why would they waste precious resources on jumping to 20nm when they have almost perfected the move to 14nm and are doing so full scale next year.Because 28nm rebrands won't sell shit vs nVidia's 900 series this year? Check out Steam's hardware survey and the graphics card deltas, you can't get a better temperature reading on the gaming market.
Because 14nm is a totally unproven technology for high power 8-9 billion transistor chips? Even Intel with their tick-tocks have been dragging their feet starting with tiny 4.5W processors and working their way up and still haven't released a normal 14nm desktop CPU.
>> Its a shame development is not simple because both AMD and Nvidia have different development timelines, generally they're 6 months apart
GTX 780: May 2013
Radeon R9 290X: October 2013 - response: GTX 780 Ti
GTX 980: September 2014 - response? Radeon R9 285
They had almost a full year from the 290X and the R285 was the best they could come up with. Nine months later and they're paper launching a card that'll raise the bar for the first time in 20 months. It's AMD that's very, very late not nVidia that's early.
FlushedBubblyJock - Tuesday, June 16, 2015 - link
Thanks for straightening out the AMD fans' spin.
Hasn't nVidia made 3 recent jumps on 28nm? Why can't AMD handle it? They care so much and are so concerned with gamers and gaming and wear a holy crown of selflessness.
extide - Tuesday, June 16, 2015 - link
AMD did 3 big releases on 28nm also, Tahiti, Hawaii, and Fiji.
MisterAnon - Tuesday, June 16, 2015 - link
Considering Maxwell came out a year after the R9 2XX series and had the same performance, your post isn't accurate at all.
Nvidia has been quite embarrassing. The 970 should not be equal to a card a year older. Especially after the memory fiasco, I'm going to be selling my 970 for a new AMD card.
Kutark - Tuesday, June 16, 2015 - link
I'm sorry but is this a joke? The 970 outperforms the 770 by about 23% on average and does it using about 40% less power. What planet do you live on?
chizow - Tuesday, June 16, 2015 - link
So you're buying a Fury X? Because everything else is an old AMD card.
Gigaplex - Wednesday, June 17, 2015 - link
Both the Fury (non X) and Nano cards are new too.
chizow - Wednesday, June 17, 2015 - link
Those won't be immediately available, and they are still based off the Fiji chip, which is the only new one.
D. Lister - Wednesday, June 17, 2015 - link
"Nvidia has been quite embarrassing."lol, you're lucky denial isn't a fatal affliction.
TEAMSWITCHER - Wednesday, June 17, 2015 - link
And the GTX 970 was only a mid-range card and forced AMD to drop prices on their high-end cards to keep them selling. These new Fiji cards are suffering the same affliction - Nvidia has determined the selling price...not AMD.
CiccioB - Thursday, June 18, 2015 - link
Oh, yes, embarrassing that with a chip quite a bit smaller than the huge GK110 they managed to get better performance with less power consumption.
Maybe you did not understand that Nvidia's aim is to make money, and the 980/970 launch just let them make a lot of it.
Embarrassing are the fanboys that have no connection with reality and live in their own world made of colored bars and numbers printed on paper, instead of evaluating things in their fullness.
medi03 - Wednesday, June 17, 2015 - link
R9 290X is much cheaper than 780 Ti, at least in Europe.
In the current release, the Fiji Nano looks like something I'm likely to buy.
Hairs_ - Tuesday, June 16, 2015 - link
Are they moving to 14nm?
They were stuck for far too long on 32nm, and now they've been stuck for years on 28nm, with absolutely no sign of process improvements from GloFo. Their processor woes probably wouldn't have been nearly as bad if they'd been able to get a power saving die shrink out of a lower process node. But nobody's been able to deliver one for them.
Intel is literally 10-15 years ahead of the competition at this stage, and as pointed out, they're having problems. If the absolute best in the business can't get 14nm working, GloFo has no hope.
silverblue - Tuesday, June 16, 2015 - link
No, they're not. Samsung (and, by extension, GloFo) will have volume production on 14nm in less than a year.
Michael Bay - Tuesday, June 16, 2015 - link
More like 20nm, hehe.
What's more important, they'll stay on this node for literally a decade, while Intel and to a lesser extent IBM will keep moving.
errorr - Wednesday, June 17, 2015 - link
The "14nm" from Samsung and GloFo is low power optimized and probably unsuitable in the first instance for big gpu chips. I'd expect the SOC chips to be the first to get 14nm treatment.CiccioB - Thursday, June 18, 2015 - link
You do not have to expect anything, as Samsung is already producing their SoCs at 14nm and selling them in the Galaxy S6, for example. The next one to use it should be Qualcomm with their new Snapdragon 820 (after the epic fail of the 810 on 20nm).
extide - Tuesday, June 16, 2015 - link
Intel has 12nm fixed and is making the full range of chips on it, including the 660mm^2 Phi.
extide - Tuesday, June 16, 2015 - link
I meant 14nm, of course
errorr - Wednesday, June 17, 2015 - link
Yeah, the real proof is that the FPGA is finally yielding in quantity.
Gigaplex - Wednesday, June 17, 2015 - link
Exaggeration much? 10-15 years ago we were hovering around ~100nm.
chizow - Tuesday, June 16, 2015 - link
@Hicks12, still can't admit the 300 series are just rebrands huh? lol. AMD can't spend the resources because they don't have them, they were too busy wasting resources on dead-end APIs like Mantle. But why would they? To remain competitive and relevant maybe? The market didn't want their R9 200 series even after AMD was forced to drop prices, and those offer FAR better price:perf than these rebranded 300 series cards.
Gigaplex - Wednesday, June 17, 2015 - link
The software engineers were working on Mantle; that didn't stop the hardware engineers from doing their own thing. They've had other issues regarding their hardware development, such as the delays from the fabs. They spent considerable hardware resources on developing 20nm products that they couldn't build, and it's no simple task to just move the new designs back to 28nm. NVIDIA suffered similar setbacks but was able to handle the situation a bit better.
Oh, and the market did want the R9 series, especially during the cryptocoin boom. For some time they were considerably more expensive than the NVIDIA counterparts due to their superior compute performance. The market moves much quicker than the GPU development process, though.
Manch - Wednesday, June 17, 2015 - link
Dude, save your breath. Chizow=NVIDIA troll. He treats his own and everyone else's speculation as fact as long as it suits him. He can't be honest about the merit of a product. Fanboys are impossible to have a conversation with.
chizow - Wednesday, June 17, 2015 - link
@Manch let's gauge your AMD troll meter. AMD 300 series = rebrand or not? :)
Manch - Thursday, June 18, 2015 - link
Most will be respins/rebrands. HBM is simply unaffordable for the mid-range and lower end cards. So if the cards all move down a notch on the performance list and are priced accordingly, I'm fine with it. One thing they need to do is make them all GCN 1.2 or whatever the latest is. If not, they don't need to be in the 300 series. The exception to that would be the budget cards. You shouldn't, however, have a budget card that has, let's say, TrueAudio and offers GCN 1.2, and then the next card up the chain offers a 15% perf bump but lacks TrueAudio and is only GCN 1.1. That's just confusing to customers and will piss them off when they realize their new shiny card doesn't support the latest features.
NVIDIA can do the same thing. If they respun/rebadged a 980 into a 1070 or whatever their next series will be, I would be OK with that as long as its price goes down with its level in the series tier. Then I'd pick one up and SLI it with my 980.
As far as cooling goes, AMD can't bust out with some BS cooler that doesn't work. They need to take a cue from NVIDIA and put a really good ref cooler on the card. Third-party vendors have put excellent coolers on the 290 series cards and they run nice and cool, dissipating the heat very well, with no throttling and even allowing for a bit of overclock. For the new Fury cards with HBM, I think they should release a top-end air-cooled version as well as the AIO WC. For air cooling, if they don't do it themselves, they should have a release partner that has one available. AMD needs customers to stay, and they want customers to come back. Doing this will give customers options. Plus it will show customers that the card isn't so inherently inefficient that they have to resort to AIO WC only just to get it to stay competitive.
loguerto - Thursday, June 18, 2015 - link
Actually, at this price point the GCN cards are very good products. I have a Hawaii card and I'm throwing everything I can at it, and it has never disappointed me.
chizow - Thursday, June 18, 2015 - link
@Manch: So you do agree that there is significant market confusion that results from AMD's rebrand strategy? So why do you think I am just trolling when I come to these same conclusions and bring them to light, while AMD's biggest fanboys/supporters are busy sweeping them under the rug? Certainly, from your own very well formulated and thought-out concerns, a less educated/informed consumer should be able to benefit from that same level of understanding and knowledge?
Voicing your concerns as you did is what gives you better products, not apologizing/defending and being a homer for the team you root for. If you bothered to go back and look you would see I was VERY critical of what Nvidia did with the Kepler GK104 and GK110 launches (leading to Titan), and beyond that I am very vocal about issues I run into when using their products, but the difference is, I'm actually CRITICAL about them and not busy sweeping them under the rug, because I *KNOW* Nvidia listens and will fix them if the issues are brought to light. Just food for thought. :)
Manch - Sunday, June 21, 2015 - link
I've now seen their line-up, though I'm waiting on benchmarks. I don't see how they're going to charge $100 more for a respin/rebadge. I was hoping the 390X would be released at current 290X 8GB prices. Usually every generation a rebadge moves down a notch, not take up the same spot in the line-up or go up in price. It would need to have significant improvement, which I just don't see. I guess only the Fury cards are GCN 1.2. That at least eliminates confusion, being that it is at the top end only. I haven't read up on what the difference is though. I'm not terribly concerned with 8GB frame buffers either. Most games right now don't take advantage of 4GB @ 4K resolutions. There are exceptions; SoM for example uses about 5GB I think. If they are actually paying attention to how the memory is being used and their compression algorithms are that much better than the previous gen, then it shouldn't be an issue. I however will not be buying any cards from either NVIDIA or AMD. I only buy every other gen at the minimum. I already have two 290X 8GB and a 980. I'm good for right now.
Not everything you say is bunk. The fact that you think NVIDIA can do no wrong, and how you say things, makes you come off as a troll. You're making a lot of assumptions before benchmarks or anything else is released. Some conjecture is easy enough to do, as it will not take a huge leap of faith to guess how the 390's will perform, but no one has solid benchmarks on Fury yet, whether it will overclock, throttle, or if it gives off a rose-like smell when it's cooking along. As far as you being critical of NVIDIA? That may be the case in a few instances, but you absolutely flame AMD for anything and everything. Your last statement about how you KNOW Nvidia will listen is kinda funny. They, like every other company, will *listen* if there is enough outcry and it can be expected to hurt sales. Good examples are locking out overclocking on the mobile GPUs or AMD's CrossFire issues. It took them a whole other chip to "fix" that, which is why I've never crossfired or SLI'd anything up until now. I hope the Fury X and diet Fury or whatever it's called do really well. I want them to put price pressure on Nvidia for once so I can get a second 980 for cheap.
As far as you being a troll, yeah, I think you're trolling. If you were more even-handed with your criticism, it would give greater weight to your arguments. As it stands now, people largely treat your comments as trolling, as do I.
D. Lister - Wednesday, June 17, 2015 - link
"The software engineers were working on Mantle"If they spent their valuable time working on their drivers, both AMD and their customers would have been better off.
chizow - Wednesday, June 17, 2015 - link
Yep, even that would've been a more worthwhile endeavour; I just don't see why people think these decisions are made in a vacuum without any repercussions on other parts of the business. AMD's downfall is just a domino effect of multiple bad decisions, beginning with the over-valued acquisition of ATI.
chizow - Wednesday, June 17, 2015 - link
Gigaplex, of course it did, because AMD leadership green-lit and pink-slipped either/or. It's really simple: someone in their graphics division OK'd spending money on Mantle instead of allocating those funds to saving R&D jobs, and now we see the ramifications. Mantle is dead, years wasted, money wasted funding EA/DICE's development of Mantle and implementing it in-game. 1 new ASIC since 2013 and a bunch of rebrands, instead of a new set of ASICs to complement Fiji.
loguerto - Thursday, June 18, 2015 - link
wasting resources on Mantle? ok you asked for it ..(•_•) .. <-----------------Nvidia
∫\ \___( •_•) <---------------------chizow
_∫∫ _∫∫ɯ \ \
( ͡° ͜ʖ ͡°)
chizow - Thursday, June 18, 2015 - link
You could actually probably put AMD and Fanboys in there and it would be more apropos, how is Mantle working out for you guys? Money well spent amirite! :)
But hey, works for me, I've come to expect low-brow humor and low intellect from the types that would actually buy AMD products.
chizow - Tuesday, June 16, 2015 - link
Yep, the impact of all the R&D cuts and layoffs is certainly being felt. I think they were caught off guard not only by being stuck at 28nm but also by Nvidia bothering to produce a completely new gen on 28nm.
FlushedBubblyJock - Tuesday, June 16, 2015 - link
Yep, nVidia didn't rebrand and wring its hands and overclock the crud out of their cores, sending out heat monsters - THEY BUILT ANOTHER CORE ON 28nm.
All the excuses in the world for AMD have failed. It's AMD's fault, no one else's.
Nagorak - Tuesday, June 16, 2015 - link
AMD built another core too, that's what Fiji is. The only real difference is that Nvidia didn't change the name of the 750 to 950. The fact that AMD can flat out rebrand the 200 series to 300 and still be competitive with Nvidia shows how level the playing field actually is.
Kutark - Tuesday, June 16, 2015 - link
I guess if you consider "level" to mean does the same thing but uses 2x as much power, runs 10-15dB louder, and can be used as a space heater in the winter.
chizow - Tuesday, June 16, 2015 - link
Nvidia built 3 new cores (GM200, GM204, GM206), and a 4th considering the GM107 was an inbetweener. The fact AMD had to rebrand their 200 series shows they had no other choice. Guess what? This wouldn't be the first time that a previous gen flagship remained competitive with the next-gen perf mid-range SKU 1 slot down, but that sure as heck isn't going to do much for your margins.extide - Tuesday, June 16, 2015 - link
AMD has built 3 cores on 28nm... I don't get why there is so much hate for AMD right now and most of it is.. well, based on false assumptions, just straight up WRONG INFORMATION
KeenM - Wednesday, June 17, 2015 - link
Impressive, but still of no use at all.
NvidiaWins - Wednesday, June 17, 2015 - link
12k gaming, moron, 12k gaming comes out later this year......
CajunArson - Tuesday, June 16, 2015 - link
Vague talk about VR out of the way. Check.
Next up: Launch of the Rebrandeon line for 2015. Check.
Hopefully after that: Launch of an actual new card. Come on, Fury! Don't make us think that AMD should have called you "Moderate Annoyance"!
Wreckage - Tuesday, June 16, 2015 - link
No performance numbers, looks like they avoided discussing the 4GB RAM limitation, $649 would suggest it's not a Titan X rival. "Moderate Annoyance" may just apply to this press conference.
nathanddrews - Tuesday, June 16, 2015 - link
It's happeninnnnnnng!/ronpaul
CajunArson - Tuesday, June 16, 2015 - link
"R9 390 and 390X start at $329 and $429. Both have 8GB of GDDR5. Meant for 4K gaming"LMFAO. Sure AMD, Sure. At least Ngreedia was smart enough to say that you really want a GTX-980Ti or Titan X for 4K gaming.... and even those cards aren't really great for 4K, more like "almost adequate".
invinciblegod - Tuesday, June 16, 2015 - link
Hey, you never know. Maybe these new cards are so much faster that 4K is pretty smooth! Probably not though.
Sttm - Tuesday, June 16, 2015 - link
Those are not new cards, they are rebadges. Only the Fury parts are actually new.
Gigaplex - Wednesday, June 17, 2015 - link
Fury and Nano. Everyone seems to keep forgetting the Nano.
D. Lister - Wednesday, June 17, 2015 - link
It's Fury Nano < Fury < Fury X. "Nano" is just a tier of the Fury arch.dragonsqrrl - Tuesday, June 16, 2015 - link
You guys should seriously tune into the live stream, it's a "New Era of PC gaming"... lol, the marketing BS is almost too much to bear XD. So many half truths, and avoiding details... "Full DX12 support"... and no mention of the GPU's used across the 300 series, I wonder why?FlushedBubblyJock - Tuesday, June 16, 2015 - link
I read the title and I just about puked.
A NON NEW era of gaming... nothing is new, it's all been released before, several times.
Jtaylor1986 - Tuesday, June 16, 2015 - link
I didn't realize that AMD was announcing DX12 at the launch today.
HotBBQ - Tuesday, June 16, 2015 - link
Can they use these new cards to help Twitch sync their audio and video?
CajunArson - Tuesday, June 16, 2015 - link
" And of course, the final consumer Rift ships in Q1 of next year"Oh yeah Occulus, I TOTALLY believe that one.
zlandar - Tuesday, June 16, 2015 - link
Blah blah blah blah blah blah....
Show us some BENCHMARKS on the new GPUs. Can you beat Nvidia's 970-980 series, YES or NO?
CajunArson - Tuesday, June 16, 2015 - link
They better be able to beat a 980 from last year.
As for the 980Ti and Titan X.... they would have intentionally leaked info last month if the Fury was really beating those two cards.
Refuge - Tuesday, June 16, 2015 - link
IDK, the last time they tried to steal nVidia's thunder it made them look silly... lol
extide - Tuesday, June 16, 2015 - link
You mean when the 290x came out at about half the price for the same perf as the original Titan? Which is pretty much what they are doing now, except a bit more than half price.
FlushedBubblyJock - Tuesday, June 16, 2015 - link
So amd's new cards aren't going to be 30% faster than nvidia flagships?
Not even 20%?
I bet we get treated to 33% faster @ 6k, nvidia 2fps, amd 3fps FTW !!!!
silverblue - Tuesday, June 16, 2015 - link
If you're going to attempt to be amusing, at least get the maths right. :)
Gigaplex - Wednesday, June 17, 2015 - link
I dunno, the bad math kind of makes it more amusing.
D. Lister - Wednesday, June 17, 2015 - link
:) The math may be wrong but his point still stands. AMD's "victory" at 4K is more or less academic, especially when you factor in the variable sync implementations (G-Sync vs FreeSync).
CajunArson - Tuesday, June 16, 2015 - link
NEW GAME! CHRIS HOOK VS. HUDDY! WHO HAS THE SHINIER BALD HEAD!
GO!
FlushedBubblyJock - Tuesday, June 16, 2015 - link
They're such tools... they watched all the badboy movies and decided their receding mat must go because they are, HOMELAND SECURITY AND BFG KFA rolled into one !Jtaylor1986 - Tuesday, June 16, 2015 - link
Been team Red since ATI but I am honestly just starting to feel sorry for this company. If they don't pull a rabbit out of their hat soon there isn't going to be anything left.
at80eighty - Tuesday, June 16, 2015 - link
Lol @ the dude in this thread losing his marbles at AMD
naxeem - Tuesday, June 16, 2015 - link
Wake me up when we can see some benchmarks.
extide - Tuesday, June 16, 2015 - link
Looks like AMD listened to all the recent negative press on the 200 series (290/X mainly).
Much better cooling system, Check
Much better perf / watt, Check
Much better Industrial design, Check
I am sure the haters will find something else to complain about though.
My bet on Fury X perf: Based on the $650 street price, 15-20% faster than 980 Ti. I mean think about it, they are not undercutting nVidia here-- so they must be getting better perf... Just how much?
jimmy$mitty - Tuesday, June 16, 2015 - link
I doubt it is performing that much better. If it was they would have priced it higher.
Peichen - Tuesday, June 16, 2015 - link
15-20% faster than 980Ti means it will be faster than Titan X. AMD wouldn't be stupid enough to sell a card that beats a $1000 card at $650.
My guess is Fury X will more or less tie with the 980Ti. Some people will be willing to take smaller 4GB HBM over 6GB GDDR5, so they are priced the same. Fury X2 will be two Fury (not Fury X) cores and cost $1000+ to easily eclipse Titan X.
naxeem - Tuesday, June 16, 2015 - link
Yes. It is impossible that this card is faster than Titan X, because by hardware specs it is at most 50% faster than a 290X. So that is the most they can get overall. They might be better in games at 5K that don't eat more than 4GB, but that is a very narrow area.
The price shows it will at best run against the 980Ti. Probably trade blows with it.
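A quick back-of-envelope check of that raw-spec ceiling, assuming the ~50% figure comes from scaling the 290X's 2,816 stream processors up to Fiji's 4,096 at similar clocks (an assumption, since Fury X clocks had not been confirmed at this point):

sp_290x = 2816   # Hawaii (R9 290X) stream processors
sp_fiji = 4096   # Fiji (Fury X) stream processors
print(sp_fiji / sp_290x)   # ~1.45, i.e. roughly 45-50% more raw shader throughput at equal clocks

Memory bandwidth, front-end changes and actual clock speeds can move the real number either way, so treat this as a ceiling on the raw-spec argument, not a prediction.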
ddarko - Tuesday, June 16, 2015 - link
You're misinterpreting the numbers. Performance per watt is not equivalent to performance. Fury X's performance per watt is 50% better than the 290X, but the Fury Nano was said to be 100% better than the 290X. By your (mis)understanding, that would mean the Nano is faster than the Fury X. It's not, and your extrapolation of Fury X's performance ceiling is wrong.
extide - Tuesday, June 16, 2015 - link
I think he is going based on 2816 --> 4096 shaders as roughly a 50% increase -- not the perf/watt
chizow - Tuesday, June 16, 2015 - link
If watts are the same, it absolutely is equivalent to performance. They stated Fury X was 275W typical; assuming this is roughly the same as Hawaii, a statement of 1.5x perf/W is going to directly translate into 1.5x perf, which is not going to be enough to beat 980Ti/Titan X.
Similarly, if Nano is say 1.3x the perf of the 290X at 185 vs. 275W, that will get you to the 100% improvement in perf/W.
But I guess we will see soon enough.
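The arithmetic behind both claims is easy to sanity-check. A minimal sketch, using the 275W figure quoted for Fury X and the 185W chizow assumes for the Nano (the Nano's board power had not been confirmed at this point):

# perf/W gain = (relative performance) / (relative board power)
def perf_per_watt_gain(rel_perf, watts_new, watts_old=275):
    return rel_perf / (watts_new / watts_old)

print(perf_per_watt_gain(1.5, 275))  # Fury X: 1.5x perf at the same ~275W -> 1.5x perf/W
print(perf_per_watt_gain(1.3, 185))  # Nano: 1.3x perf at an assumed 185W -> ~1.93x, roughly the "2x" claim

So AMD's 1.5x and 2x perf-per-watt figures are internally consistent with a Fury X around 1.5x a 290X and a Nano around 1.3x, if those wattages hold.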
dragonsqrrl - Tuesday, June 16, 2015 - link
"Similarly, if Nano is say 1.3x perf of 290X at 185 vs. 275W, that will get you to the 100% improvement in perf/w."I think AMD achieving 2x perf per W of the 290X with a cut down Fiji is quite extraordinary considering a fully enabled GPU only manages 1.5x. I'm curious to see how they achieved that.
silverblue - Tuesday, June 16, 2015 - link
Lower frequencies, I should imagine.
Don't forget that Fiji is no longer bound by memory limitations; additionally, it'll have Tonga's improved compression.
silverblue - Tuesday, June 16, 2015 - link
Clarification - bandwidth limitations, as opposed to memory buffer.
dragonsqrrl - Tuesday, June 16, 2015 - link
I suppose, but lowering the clocks doesn't usually result in dramatic improvements in performance per W unless the GPU is operating well outside its optimum frequency range to begin with. I suppose that's possible, but even then I struggle to see how that in itself could account for such a significant difference in efficiency. In addition binned 3rd tier dies, which is what it sounds like the Nano will use, typically have worse efficiency than fully enabled dies.silverblue - Wednesday, June 17, 2015 - link
Could be Little Fiji, if there is such a thing. I doubt 14nm is ready yet, so it could be a different version much like Little and Big Maxwell.extide - Wednesday, June 17, 2015 - link
If you lower the clock enough that you can lower the voltage a bit, then you can drop the power use quite a bit.
dragonsqrrl - Wednesday, June 17, 2015 - link
Ya, that may significantly affect power consumption, but we're talking efficiency here, perf per W, not just power consumption. If you drastically lower the clocks/voltages, you will decrease power consumption, but you'll also drastically decrease performance. Like I said, every GPU is going to have an optimum frequency range for efficiency. If Nano is merely a binned lower clocked Fiji then Fury X must be operating well above that range.
extide - Tuesday, June 16, 2015 - link
15-20 may be a bit high, but 10-15% is probably not unrealistic. I guess we will have to wait and see...
ddarko - Tuesday, June 16, 2015 - link
Hadn't considered that - however, even if that's the case, still a premature extrapolation since we don't know if the shaders are essentially equivalent or how much impact the memory architecture will have (if any). We'll all know how Fury matches up soon enough so I think it's just silly to make sweeping pronouncements of how much faster it can or cannot be based on some assumptions that may or may not be accurate.ddarko - Tuesday, June 16, 2015 - link
Sorry, my comment above was intended to be a reply to extide's comment about Fury's shader count.
silverblue - Tuesday, June 16, 2015 - link
Strangely, the comments page behaves better on UC Browser on my Lumia 1020 than Firefox on my PC. Your response does indeed branch off from extide's post.
dragonsqrrl - Tuesday, June 16, 2015 - link
"My bet on Fury X perf: Based on the $650 street price, 15-20% faster than 980 Ti."...
You sound like you know what you're talking about.
"Much better cooling system, Check... Much better Industrial design, Check"
Since when has quality cooling and industrial design in a stock cooler even been a thing for AMD fanboys? In retrospect I think the more insightful question becomes when have AMD fanboys stopped mocking features that Nvidia championed and brought to market first, and start loving and parroting them instead? GPGPU computing? Hmm, around 2012. Oddly enough it seemed to coincide with the launch of GCN, AMD's first focused GPGPU computing architecture. Variable refresh rate? Around the time FreeSync was announced. Industrial design in GPU coolers? Well, now I guess.
To get some clearer answers to that original question, we can basically rephrase and simplify it down to: when did AMD start taking these features seriously?
extide - Tuesday, June 16, 2015 - link
Looks like ... now is when they started taking that stuff seriously.In any case it seems like for the 290x, they were just like we need all the performance we can get, period. And just went all out with 2816 shaders and ended up with a gpu that uses a lot of power.
This time they were like ok, people really care about the power use, the cooling, how loud it is, how hot it runs (with stock cooling) etc -- and so they actually put some effort into that stuff. It's nice to see although the partner cards will of course have their own designs.
I am also glad to see that they did the Nano. They took the packaging size advantage and turned that into a product idea as well. I really wish we got more details on the different Fury products. How many shaders in Fiji Pro? How many shaders on Nano? What clock speeds do those run at? How about specs for the dual GPU card? Is it cut down or fully enabled chips?
Time will tell of course. I wonder when we will get benchmarks of Fury X? Will we have to wait until next wednesday when it releases?
FlushedBubblyJock - Tuesday, June 16, 2015 - link
Ummm... someone needs to define "industrial design", because it's a retarded buzzphrase for morons.Furthermore, amd didn't just suddenly find out people care about power usage - they after all were the ones screaming housefires from the very top down to their mind controlled fanboys when the 470 and 480 were released then they went batso insane on the 570 and 580, and AMD even made a VIDEO WITH THE FBI AND HOMELAND SECURITY DETECTING THE ELECTRIC PULL AND HOUSEFIRES AT THE NVIDIA CRIMINAL RESIDENCE AND PUT IT ON YOUTUBE !
So no, amd didn't suddenly notice people "care about power".
extide - Tuesday, June 16, 2015 - link
That's true, but it's all about sales, and in sales you promote your advantages when you have them. I mean this shouldn't be a surprise to anyone...
chizow - Tuesday, June 16, 2015 - link
@dragonsqrrl - great points, I've noticed that as well, it's a natural defense mechanism of AMD and their fanboys to downplay and poo-poo features Nvidia introduces that they don't have. Then later when they introduce some half-baked version, it's "Oh we love it! We were waiting for open standards and now our version is much better" even though it's just not as good or as well supported/implemented.
Peichen - Tuesday, June 16, 2015 - link
Wait, so the new lineup is Nano < Fury < Fury X and Fury X2 at 4 / 4 / 4 and 4+4 GB?
Refuge - Tuesday, June 16, 2015 - link
The Nano is below the 290X in your scenario, I believe, good sir.
Akrovah - Tuesday, June 16, 2015 - link
I notice they made specific mention of the Fury (vanilla) being air cooled, but not the Fury X. So does that mean the Fury X is running so hot it has to be liquid cooled even in a reference design? Because yikes!FMinus - Tuesday, June 16, 2015 - link
What's with the hot stuff and AMD? All the high-end AMD graphics cards I had ran cooler than any high-end nvidia. He said the Fury X runs at 50C under load and is an overclocker's dream. When that is achieved by water cooling, who gives a shit really.
Murloc - Tuesday, June 16, 2015 - link
huh, anyone who has to put it into a case where there's no space for a radiator?
FMinus - Tuesday, June 16, 2015 - link
I've built a couple of ITX systems with closed loop coolers. http://www.lian-li.com/en/dt_portfolio/pc-q18/ you can get two 140mm rads inside this case, in fact I'm running one with one for the CPU right now, with space for one more on the top if I wanted to (you need to drill some holes and stuff, since that case wasn't made for rads, but you get the drift).
But in a perfect world really, that card should never be in an ITX case (tho you can find cases such as the one above), and if you go to MATX or ATX, you have plenty of room in pretty much any case on the market.
Nagorak - Tuesday, June 16, 2015 - link
If a case is too small for a radiator there's a strong chance it's too small for a super long graphics card, which up to now includes basically all high end cards.
BryanC - Wednesday, June 17, 2015 - link
The systems I build have 8 Titan X in one box. Closed-loop coolers are not an option - no space for 8 radiators.
Aisalem - Wednesday, June 17, 2015 - link
If you have 8 GPUs then:
1. You are not common folk, most likely one of few around the world and not a target of any company
2. For 8 GPUs you can use a different case, as the cost doesn't seem to be a problem
3. You can have custom loop that will still be better than 8 blower type cards beside each other
4. Many other options if you want to use that card instead of complaining that you don't have space for rads ;)
extide - Wednesday, June 17, 2015 - link
Maybe the system you build in your head, but yeah lol no, you are not building systems with 8 Titans. You would have to do some really interesting stuff to run 8 Titans in one computer.
extide - Tuesday, June 16, 2015 - link
It's a 275W card. That can definitely be air cooled. They water cooled the ref card.... because they wanted to, that's all. There will be plenty of partner boards with air cooling for the Fury X.
D. Lister - Wednesday, June 17, 2015 - link
They water-cooled maybe because they had to. To compensate for the higher clocks that a Fury X may have to maintain to stay competitive.
extide - Wednesday, June 17, 2015 - link
The clocks DON'T MATTER. 275 watts, that's what matters, and that is easily air coolable, period!
chizow - Tuesday, June 16, 2015 - link
Fiji is a solid part at that price point, but most likely lower than AMD wanted as a reaction to the 980Ti. It's too bad too, they could've avoided a lot of this Rebrandeon scrutiny and criticism if they just did the logical thing:
390X = Fiji
380X = Hawaii/Grenada rebrand
370X = Tonga
360 = Bonaire
At least this way they could've avoided some of the product stratification in terms of support and feature sets. You'd have at least GCN 1.1+ and it would've actually made sense.
To me it's obvious AMD had its eyes on Titan's ultra premium market, but the 980Ti being within 3-5% of that perf at a $650 price point threw a wrench in their plans.
Now to wait for benches!
dragonsqrrl - Tuesday, June 16, 2015 - link
That also would've been a lot more truthful from a feature level perspective given AMD's announcement of "full DX12 support" in their presentation. With their current lineup feature level support ranges from 11.1 to 12.0. No word yet on Fiji, but I'm not sure why they wouldn't mention 12.1 support if it were present, unless they simply didn't want to draw attention to the rest of their lineup.FlushedBubblyJock - Tuesday, June 16, 2015 - link
Because they suck, and they know it, is a good answer then.
extide - Tuesday, June 16, 2015 - link
Wow dude, why don't you take off your nVidia panties and come talk with the grownups.
chizow - Tuesday, June 16, 2015 - link
Yep, I think that would have covered Huddy's "All new GPUs in 2015 will support FreeSync" lie too if they dropped Pitcairn. Although he could've meant just Fiji, since that is the only "new" GPU for AMD in 2015. :DI am curious to see how AMD is going to manage the whole DX12 issue though, I guess we'll need to wait for reviews for them to cover that.
D. Lister - Wednesday, June 17, 2015 - link
People who will buy the low-end stuff aren't often the kind that know about or care for DX iterations. Hence most likely, this particular marketing promise will get swept under the rug over time. Or better yet AMD marketing will convince the fans that it is somehow better for the world just the way it is.dragonsqrrl - Thursday, June 18, 2015 - link
Looks like it's been confirmed that Fiji is GCN 1.2, same as Tonga, so feature level 12.0.
chizow - Thursday, June 18, 2015 - link
Nice, thanks for the info, it also looks like AMD confirmed no HDMI 2.0 support either.
Black Obsidian - Tuesday, June 16, 2015 - link
I don't know that there's much "Rebrandeon scrutiny and criticism" outside the green fanboi brigade; actual consumers mostly only care if a card is competitive within its price bracket and has a featureset that meets their needs. If a given architecture is good enough to be competitive for multiple generations (ref: G92, Tahiti/Tonga), there's no rational reason to discard it prematurely.
If AMD was surprised by either the 980Ti's performance or price point, they were even more inexcusably history-blind than disgruntled Titan X owners. Every generation of Titan has been shortly followed by a cheaper card that either matched or exceeded it, and there was no reason to expect that the third time would be any different.
On that basis, either AMD was stupid (certainly possible), or they fully expected to sell a card at nVidia's $650 price point, but which outperforms their competitor's part by a reasonable margin, which is also not historically unusual. I'm sure they would LIKE for the 980Ti to not exist, but there was no rational reason to expect such a situation to obtain.
dragonsqrrl - Tuesday, June 16, 2015 - link
"actual consumers mostly only care if a card is competitive within its price bracket and has a featureset that meets their needs"Does that include the AMD fanboys that had no intention of ever purchasing a 970?
"If a given architecture is good enough to be competitive for multiple generations (ref: G92, Tahiti/Tonga), there's no rational reason to discard it prematurely."
Well, except for one of the things you brought up in just your previous sentence, "featureset", amongst many other things like production costs and perf per W. Seriously, I struggle to draw a coherent analogy between something like Pitcairn and G92. Perhaps you could clarify?
"On that basis, either AMD was stupid (certainly possible), or they fully expected to sell a card at nVidia's $650 price point, but which outperforms their competitor's part by a reasonable margin, which is also not historically unusual."
Really? Could you point to a time when this has happened recently with a top of the lineup card? And by recently I mean the past ~8-9 years. I can think of only one that might qualify, which would make what you're saying quite historically unusual. AMD usually hits lower price points for their top cards, but they also usually under perform Nvidia's top cards. And when they have had a performance advantage they've priced their products accordingly.
MisterAnon - Tuesday, June 16, 2015 - link
>Does that include the AMD fanboys that had no intention of ever purchasing a 970?
Obviously not, considering the 970 came out in late 2014 and was equal in speed to (and slower at high resolutions than) the R9 290 from 2013. They also lied about memory. Quite embarrassing for Nvidia.
FlushedBubblyJock - Tuesday, June 16, 2015 - link
The 970 has a vastly larger feature set and beats the amd card in 99% of the resolutions and settings.I'm sick of amd claiming it's faster when it is not - it pulls choppy chunked 1 and 5 fps mins on rezzes it cannot support then the fanboys scream win.
silverblue - Tuesday, June 16, 2015 - link
Vastly?D. Lister - Wednesday, June 17, 2015 - link
I guess he was only counting features that actually work. :p
dragonsqrrl - Tuesday, June 16, 2015 - link
"Obviously not considering the 970 came out in late 2014 and was equal speed (and slower at high resolutions) with the R9 290 from 2013."So what's the point in AMD launching the 300 series? I hope the irony isn't lost on you this time.
extide - Tuesday, June 16, 2015 - link
For the OEMs ... that's why rebrands exist.... Come on, you guys should know this by now!
chizow - Tuesday, June 16, 2015 - link
Really, so the OEMs are the ones selling those R9 380 and R9 390X rebrands at Best Buy right now for considerably more than their previous 200 rebranded predecessors?Rebrands *USED* to be for OEMs only, until AMD decided to rebrand their entire RETAIL product stack to R9 300.
just4U - Wednesday, June 17, 2015 - link
You're acting as though these are all rebadged. There have been changes, hence the naming to coincide with their new high-end cards.
chizow - Wednesday, June 17, 2015 - link
What changes? There are 8GB 290Xs for a lot cheaper than $430 and plenty of 290Xs clocked at 1050MHz or higher; it's the same chip in slightly different configs. Obviously a rebrand. Same for the rest of the stack below it. Not a single new chip = Rebrandeon 300 series.
chizow - Tuesday, June 16, 2015 - link
As an AMD supporter and potential customer, you don't mind that AMD has rebranded their entire 300 series stack as new? You can say all you like that consumers only care if a card is competitive, but this only seems to hold true for AMD fanboys/loyalists, as the last 10 months since 970/980 launched prove otherwise. Your claims only prove AMD caters to a niche group of frugal spenders that seeks low prices over all else (support, drivers, features), but the overwhelming majority of the market (77% last I checked) does seem to value newer features, lower power consumption, and better support over just pure price:performance considerations.Nvidia has never launched a GeForce flagship so close to the Titan launch, or so close in performance. The 780 was considerably slower than original Titan and the 780Ti was launched some 9 months later and before the Titan Black. No one could've expected Nvidia would've launched basically a cut-down Titan X at a much cheaper price point, but it is obvious now why they did.
Again, look at it realistically. Why would they bother to try and launch a new "ultra-premium" brand but then price it at the 290/X historical price point of $500/650? They basically bumped their normal R9 x00 series SKUs down below $500, forever. It's obvious they thought Nvidia would ask for more for the 980Ti, and that it wouldn't come so close to the Titan X, which would give them a landing spot in that $750-850 range as rumored. Obviously Nvidia caught wind of this, launched the 980Ti at higher than expected performance and lower than expected pricing, which made AMD change course accordingly.
But, we will see in a few days once we see some benches! :)
extide - Tuesday, June 16, 2015 - link
Why would anyone care about the rebrands? I mean seriously, what negative does it have? NONE! IT'S FOR THE OEMS! So they can have a shiny new model number, THAT'S ALL. It didn't even get in the way of AMD designing and launching an entirely new chip too.
Holy crap, I am sick of posting that.
chizow - Tuesday, June 16, 2015 - link
Holy crap, and we're sick of reading such stupidity!
If there's no problem with rebrands, why doesn't AMD just come out and say it? Why are they going through so much trouble to rebrand even the ASIC codename? It's obvious why: because it's deceptive. Rebrands are going to be older generation parts that have dated features and higher power consumption. People upgrade to get more performance, with more features, at the same or lower power consumption and price.
In nearly every one of these important metrics, AMD doesn't move forward with performance and features, and actually moves BACKWARDs in price and power consumption. And they're trying to sell these as new parts! Fortunately, the market is smarter than this, these Rebrandeon parts won't fare any better than the 200 series which was slaughtered in the marketplace by Nvidia's Maxwell series.
fanofanand - Wednesday, June 17, 2015 - link
ok I have to take back my previous comment, you are more Nvidia shill than ever. It's like you have completely forgotten the G92 that lasted generation after generation as a rebrand. Why are you pretending this is an "AMD trick"? You think Nvidia doesn't re-brand? Yes they made Maxwell, yes Maxwell is completely awesome, you don't think they will re-brand Maxwell next year? Is your skull so thick that you cannot see reality in front of you if it isn't painted green? I owned an 8800GTS 512 that died prematurely.....AND I owned a Radeon 5850. Both stellar cards, the 5850 never died. Does that make AMD better? Only a fool would say so, but it seems you are just such a fool but on the other side. I have no loyalty to either camp, I buy what works best for me. You don't need to continually bash anyone that has different priorities than you, it just makes you look like an ignorant tool who chooses which facts to play loose with depending on what makes Nvidia look better. Now back to lurking I go.chizow - Wednesday, June 17, 2015 - link
Again, look at the G92, it is ONE ASIC carried over, first in the mid, then low end, which is where you would expect to see rebrands. Not an ENTIRE chip family carried over exactly in the same product/marketing positions. 4 chips, none new, carrying the exact same marketing SKU designation as their predecessors. And why are they being so secretive about the actual rebrands? It's a deceptive attempt to confuse the market, plain and simple.
Gigaplex - Wednesday, June 17, 2015 - link
"Why would anyone care about the rebrands?"Because as a 200 series card owner who wants a new card, it's disheartening to see they're not offering anything new in my target price bracket.
fuicharles - Wednesday, June 17, 2015 - link
"Because as a 200 series card owner who wants a new card, it's disheartening to see they're not offering anything new in my target price bracket."Yeah, you mentioned Price bracket.
To tangle with the 980, AMD has the Fury (vanilla) as a new card.
To tangle with the 970, AMD has the Fury Nano, plus Enhanced Hawaii and Tonga cards.
Who says there are no new cards, only rebranding?
chizow - Wednesday, June 17, 2015 - link
@Gigaplex, thank you for the honesty, that is exactly the point. Anyone who waited for 10 months on the promise of an upgrade in their price bracket is probably regretting the fact they waited.
Gigaplex - Wednesday, June 17, 2015 - link
"actual consumers mostly only care if a card is competitive within its price bracket and has a featureset that meets their needs"Well that rules out a chunk of the rebrands for me since I want FreeSync. I was miffed when I found out that my 270X doesn't support it. At the time I purchased it, AMD were still claiming that FreeSync would be supported on it in a future driver release.
Creig - Tuesday, June 16, 2015 - link
Actually, it's even more obvious that Nvidia panicked and released the GTX 980 Ti simply to try and get as many sales as they could before the Fury X could be unveiled. The $1,000 Titan X had only been on the market for 2 ½ months when Nvidia released the just-as-fast GTX 980 Ti at $650. There's no way Nvidia would cannibalize sales of their Titan X cash cow unless they were forced to by AMD's new flagship.FlushedBubblyJock - Tuesday, June 16, 2015 - link
You're all crazy as moonbats.
AMD delayed its release countless times and many months. NO TIMING IS POSSIBLE WHEN CARDS ARE RELEASED AS SOON AS GOOD SILICON PUMPS OUT THE 10,000 OR 50,000 REQUIRED FOR WORLDWIDE DIST.
They don't get to pick and choose - there is NO EXCESS PRODUCTION NOR NODES TO TOY AROUND WITH.
chizow - Tuesday, June 16, 2015 - link
LMAO panicked? You mean when AMD panicked and cancelled all GPU product announcements at the 2nd biggest electronics trade show in the world once they confirmed Nvidia was launching the 980Ti? :DYes Creig, they panicked so much that they pre-emptively rained on AMD's "Premium" debut of Fury by pricing the 980Ti so low and setting performance so close to the Titan X, that the only possible way AMD could justify their rumored $850 price point for the Fury X was if they convincingly beat both. The obvious conclusion is that AMD realized they weren't going to do that with the Fury X, so they had to settle for simply matching the 980Ti pricing.
So yes, for Nvidia it was a slam dunk, sell a ton of cards, make it even harder for AMD to compete in the marketplace, and ensure Titan still has no peer as the ultra premium part at that $1000 price point.
What stopped AMD from stopping the bleeding at any point in the last 3 months since Titan X launched? We know for a FACT their R9 300 series Rebrandeons were ready to go since they were just rebadges. Oh right, they simply weren't ready to launch Fiji. 275W and an extra thick rad tells me they overclocked this GPU to the tits again just to reach parity with Nvidia's 980Ti and Titan X at 250W. But we'll see soon enough!
fanofanand - Wednesday, June 17, 2015 - link
Chizow, I have laughed at your pro-Nvidia bias for quite some time, but I have to say this particular post actually seems to be unbiased. I didn't know you had it in you! Unfortunately now I may have to take some of your other posts more seriously.....chizow - Wednesday, June 17, 2015 - link
Like I always say, you might actually learn something. ;)
D. Lister - Tuesday, June 16, 2015 - link
I wonder if they demonstrated some working Fury setups (at least for the press). Without any working hardware, any price announcements would just be empty marketing talk.
extide - Tuesday, June 16, 2015 - link
They had a Fury system running on stage, at least 1, maybe 2. They are also being released to retail in 1 week .. so yeah, obviously the cards are working...
FlushedBubblyJock - Tuesday, June 16, 2015 - link
When an amd card is "working" - that's a matter of perspective.
p1nky - Tuesday, June 16, 2015 - link
How much RAM is on the Fury cards? I don't see any numbers in the blog and other comments mention 4GB? That would be a no go for me at least.
dragonsqrrl - Tuesday, June 16, 2015 - link
They made no mention of it. I think that's as much a confirmation of the 4GB rumors as anything.
extide - Tuesday, June 16, 2015 - link
It probably is 4GB, but even if that's the case I bet it will be fine for single GPU gaming, only when you go crossfire will 4GB really be limiting IMHO -- except for MAYBE 1 or 2 games.Mostly people complaining about 4GB are just complaining to complain about something.
p1nky - Tuesday, June 16, 2015 - link
I'm using 4x R9 290 4GB now and have to scale back the settings in GTA V due to memory, I certainly won't buy new cards to have the same issue again.
ekagori - Tuesday, June 16, 2015 - link
I think everyone needs to remember that you can't judge 4GB of HBM RAM the same as 4GB of GDDR5 RAM. AMD have already stated that you need much less RAM to run the high texture resolution of 4K on HBM than on GDDR5. Instead of freaking out about the 4GB issue, wait for benchmarks, everyone might be pleasantly surprised to see that what Titan X can do with 12GB, AMD can do with 4GB.dragonsqrrl - Tuesday, June 16, 2015 - link
"I think everyone needs to remember that you can't judge 4GB of HBM RAM the same as 4GB of GDDR5 RAM."Uhh, by volume you can.
"AMD have already stated that you need much less RAM to run the high texture resolution of 4K on HBM than on GDDR5."
Wow, when did they say that? Source?
Kraszmyl - Tuesday, June 16, 2015 - link
In the HBM article anandtech had a few weeks ago.
Also remember for dx12 titles memory is not duplicated. Two 4GB cards is 8GB of addressable resources, unlike the current status with dx11 and lower.
dragonsqrrl - Tuesday, June 16, 2015 - link
"In the HBM article anandtech had a few weeks ago."No, nowhere in that article does it state that GPU's utilizing HBM will inherently consume less frame buffer than GDDR5. Perhaps you could be a little more specific, maybe with a link and a quote, so we can address the misunderstanding.
"Also remember for dx12 titles memory is not duplicated. Two 4g cards is 8g of addressable resources unlike the current status with dx11 and lower."
What does that have to do with HBM or GDDR5?
ekagori - Tuesday, June 16, 2015 - link
http://www.tomshardware.co.uk/amd-high-bandwidth-m...
In that article it states that Joe Macri said in an interview that the current way of doing things is just adding more RAM, but with HBM and its 1st gen 4GB limitation, AMD have been working on overcoming this perceived problem through engineering.
dragonsqrrl - Tuesday, June 16, 2015 - link
@ekagori Yes, but that has nothing to do with HBM, other than the fact that 1st gen HBM's capacity limitations in top tier cards spurred these driver optimizations from AMD. As I already wrote in my response to Etsp, there's no reason AMD couldn't implement similar optimizations on cards that are limited by their GDDR5 capacity.naxeem - Tuesday, June 16, 2015 - link
No, it does not work that way. You can't, even for a tiny bit, supplement memory space with memory bandwidth. If your texture is 35MB it is 35MB, no more, no less. You can (like Nvidia does) compress memory and steal some processing and bandwidth to save on space, but that is not a very significant amount and it cannot work to make your 4GB card automagically work like a 6, 8 or 12GB one.
Bandwidth can only help in scenarios where you have a huge amount of data to move through that memory, and that is in huge data chunks (NOT a huge total, but huge chunks), which only a handful of games can even think about. The card should, due to bandwidth, show some advantage over the 980Ti at ultra-high resolutions like 4K or 5K, but not nearly as much as you'd think.
And no, AMD 4GB can not do anything TitanX can with 12GB. They can fit less stuff and that is what they can do.
Fortunately for AMD, at this moment only half-a-handful of games actually need more VRAM than 4GB - GTA5, Witcher 3, Shadow of Mordor and that crap CoD:AW... maybe Assassin's Creed: Unity... and only at 4K or higher resolution.
So, yes, AMD's 4GB will be enough for 1440p games at max and some 4K games at medium/high (depending on the game) detail levels and no AA.
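To put rough numbers on why bandwidth can't stand in for capacity: once the working set spills past the frame buffer, the extra traffic goes over PCIe, not HBM. A toy model with ballpark figures (assumed, not measured):

vram_bandwidth_gbs = 512   # Fiji's HBM peak bandwidth, roughly
pcie_gbs = 16              # PCIe 3.0 x16, roughly, one direction

overflow_gb = 1.0          # hypothetical 1GB of assets that no longer fit in the 4GB frame buffer
print(overflow_gb / vram_bandwidth_gbs * 1000)  # ~2 ms if that data were resident in VRAM
print(overflow_gb / pcie_gbs * 1000)            # ~62 ms pulled over PCIe, several whole frames at 60fps

It ignores caching, compression and the fact that only part of the resident data is touched each frame, but it shows why spilling past the frame buffer hurts no matter how fast the on-card memory is.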
Etsp - Tuesday, June 16, 2015 - link
Kraszmyl was referencing info that wasn't in the AT article, but was a quote from another article referenced in the comments of the AT article: http://techreport.com/review/28294/amd-high-bandwi...
The last four paragraphs on that page are the relevant parts.
Basically, there was a lot of data in the GPU's memory that didn't need to be there, taking up space. This wasn't a problem when they were adding more memory chips to up bandwidth (the bottleneck at the time), so they had plenty of memory, except for edge cases (4x crossfire).
dragonsqrrl - Tuesday, June 16, 2015 - link
@Etsp That has nothing to do with HBM, as is stated quite clearly in the article you referenced. What you're referring to are essentially driver optimizations, and there's no reason AMD couldn't implement similar optimizations on cards with limited amounts of GDDR5.Etsp - Tuesday, June 16, 2015 - link
You're right, the technology involved has nothing to do with HBM. However, HBM's 4GB limit is the reason that they're putting effort into addressing that area. I think we may be seeing some big memory utilization improvements on existing hardware once those drivers are released. Not just HBM cards.
The point is that once those drivers are in place, 4GB won't be as problematic as it is now.
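For a sense of what that kind of memory-utilization work means in practice, think of it as residency management: keep only what has been used recently inside the 4GB budget and evict the rest. A conceptual sketch (an LRU illustration, not AMD's actual driver code; all names are made up):

from collections import OrderedDict

class ResidencyCache:
    # Keep resources resident up to a VRAM budget, evicting the least recently used ones.
    def __init__(self, budget_mb):
        self.budget = budget_mb
        self.used = 0
        self.resident = OrderedDict()  # name -> size in MB, ordered by last use

    def touch(self, name, size_mb):
        if name in self.resident:
            self.resident.move_to_end(name)  # recently used, keep it hot
            return
        while self.used + size_mb > self.budget and self.resident:
            _, freed = self.resident.popitem(last=False)  # evict least recently used
            self.used -= freed
        self.resident[name] = size_mb
        self.used += size_mb

cache = ResidencyCache(budget_mb=4096)
for asset in ["terrain", "characters", "ui", "terrain", "shadows"]:
    cache.touch(asset, 1200)
print(cache.used, list(cache.resident))  # 3600 ['ui', 'terrain', 'shadows'] -- "characters" was evicted

The idea in the interview linked above is that a lot of what sits in VRAM today is never touched again, so smarter eviction makes a fixed 4GB go further; as noted, nothing about it is specific to HBM versus GDDR5.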
Dug - Tuesday, June 16, 2015 - link
Unless the GDDR5 isn't fast enough.
chizow - Tuesday, June 16, 2015 - link
@naxeem, exactly.
Refuge - Tuesday, June 16, 2015 - link
Good news is, you can sell your furnace, you won't need it this winter! :P
Creig - Tuesday, June 16, 2015 - link
The Fury X is confirmed to have 4GB of HBM.
Refuge - Tuesday, June 16, 2015 - link
All the Fury cards will have 4GB as far as I'm aware this generation. It is right now just a limitation of the new technology. I guess more than 4GB stacks aren't reliable to produce, or feasible package-wise at the moment. (Possibly the dual GPU Fury will have 8; I don't know for sure, but I heard a rumor of the Fury cards handling CF memory differently, though I take that with a grain of salt until I see it.)
I don't believe it will be the bottleneck everyone is afraid it will be for 4k gaming though. So long as the bandwidth improvements from HBM live up to the hype that is.
And that is a VERY big if.
naxeem - Tuesday, June 16, 2015 - link
No. DX12 handles CF memory possibly differently, nothing to do with HBM. But we don't have and won't have such games for quite some time.
xenol - Tuesday, June 16, 2015 - link
I don't think engaging an AT-AT with an MG-42 will be effective.
Refuge - Tuesday, June 16, 2015 - link
I'm glad I'm not the only one who thought that when I saw that picture. Haha!
Talk about bringing a flashlight to a fucking tank fight eh? :P
Zak - Tuesday, June 16, 2015 - link
"New Era in PC Gaming" LOL! Benchies or it didn't happen.araczynski - Tuesday, June 16, 2015 - link
kept scrolling and scrolling but saw nothing but photos and marketing crap, no benchmarks = it doesn't matter.
Guspaz - Tuesday, June 16, 2015 - link
The Project Quantum PC looks neat. It's too bad it probably won't be available with an Intel CPU.
AMD makes nice graphics cards. Their CPUs... not so much. They did, once upon a time, and I hope those days return.
bonesbrigade - Tuesday, June 16, 2015 - link
A lot of you people seem to misunderstand these products.
First, people are complaining about the only 4GB of HBM. The speed of this memory is far higher than what was used before; it is supposed to be equivalent to 6GB of GDDR5, which would make it on par memory-wise with the GTX 980 Ti.
Second, people say this is the reason it won't compete with a Titan X. It's not meant to compete against a Titan X, it's supposed to compete against the 980 Ti. Whoever buys the Titan X is brain dead and has no idea what there doing, it has no value point and provides insignificant performance at its price point, as all Titans do.
dragonsqrrl - Tuesday, June 16, 2015 - link
dragonsqrrl - Tuesday, June 16, 2015 - link
"First people complaining about the only 4 gb of hbm. The speed of this memory is far higher than what was used before, it is supposed to be equivalent to 6gb of ggdr5, whihc would make it on par memory wise with the gtx 980 ti."I have no idea where some of you are getting this from, or how you're bold enough to claim that other more informed people somehow misunderstand, but unfortunately you're misinformed and objectively wrong. The interface/bandwidth has nothing to do with the memory consumed for a given task.
"Whoever buys the titan x, is brain dead and has no idea what there doing, it has no value point and provides insignificant performance at its price point, as all titans do. "
This coming from someone who just wrote the previous paragraph.
silverblue - Tuesday, June 16, 2015 - link
I think a link to these bold claims would be nice, that way we can decide for ourselves. I think somebody is getting delta compression and HBM confused, perhaps.
Michael Bay - Wednesday, June 17, 2015 - link
>what there doing
>there
sabrewings - Tuesday, June 16, 2015 - link
Unfortunately, that's not based in reality. Except for color compression, you only have as much memory as you have. Making access to the memory faster doesn't help with a deficit of available storage. If you're lacking in storage, then you're going to have to hit up system RAM through the PCIe bus (much slower) to access the required textures and shuffle them through as you use them. That's a very inefficient way to process graphics and having a fast VRAM interface won't help you in that case. Nothing about the speed of the interface solves the problem of storage space.
Dribble - Tuesday, June 16, 2015 - link
It's a card with 4GB of memory, with all the standard limitations you get with that amount of memory. HBM doesn't magically do anything. While yes, it does have more bandwidth than a 290, it also has a lot more stream processors, so in actual fact bandwidth per processor is probably pretty similar.
warmon6 - Tuesday, June 16, 2015 - link
"4 gb of hbm. The speed of this memory is far higher than what was used before, it is supposed to be equivalent to 6gb of ggdr5"So by that logic...... For you computer system, because DDR4 memory is faster than DDR3 or DDR2.... 4 or 6 GB of DDR4 = 8 GB of DDR3/2....
Come on... 4GB of memory is 4GB no matter how fast or slow the bandwidth/speed is.... Faster speed/bandwidth can't make up for limited memory no matter how you slice it up.
Gunbuster - Tuesday, June 16, 2015 - link
Exactly! I have an SSD. It's an order of magnitude faster than my spindle drive, so the 250GB SSD magically has more storage than the 1TB spinner. :p
warmon6 - Wednesday, June 17, 2015 - link
Didn't even think about that one. "Have a TB of videos, files, and music? Buy our 250GB SSD TODAY!" lol XD
FlushedBubblyJock - Tuesday, June 16, 2015 - link
12:48, is it a boy or a girl?
amd is now androgynous ... perhaps a transitioning transsexual...
sabrewings - Tuesday, June 16, 2015 - link
Definitely not regretting my 980 Ti so far. We'll see once the benchmarks roll around.
K_Space - Tuesday, June 16, 2015 - link
Lots of wise remarks from both camps about waiting for benchmarks, as well as the usual hardened gambling quick praise/criticize remarks.
For those more enlightened than me, do games/benchmarks see HBM memory differently to GDDR5? That is: if a game's minimum requirement is xGB of graphics memory, it won't matter whether that's HBM or GDDR5? I think spec sheets usually recommend a minimum graphics card as opposed to minimum graphics memory anyway, but it's nice to know.
What I am sure will be quite annoying with the slew of upcoming benchmarks that deal with the two variables is teasing out by how much each has contributed to the final gain/loss in performance.
jwcalla - Tuesday, June 16, 2015 - link
In terms of capacity, the source (HBM or GDDR5) doesn't matter.
frostyfiredude - Tuesday, June 16, 2015 - link
Memory controllers and the GPU design itself can make better or worse use of the xGB a card has to an extent, but HBM vs GDDR won't change much, if anything.
amilayajr - Tuesday, June 16, 2015 - link
With HBM in mind, does AMD hold the patent for this? Is Nvidia just going to use HBM for free? Anyone care to elaborate? Because if Nvidia gets to use it for free then that's really funny for AMD's side, considering they are the ones who researched and developed it. Am I making sense?
apunari - Tuesday, June 16, 2015 - link
Neither amd nor nvidia developed HBM. It is SK Hynix.
TheJian - Tuesday, June 16, 2015 - link
I don't get it... FEATURED REVIEW?
I don't see any benchmarks, nor a product. This is NOT A REVIEW. Whatever... paper launchers... Stop shilling for AMD. What new era in gaming? I see no products for review. I see no specs, speeds etc. Jeez... Not even a review of the rebranded crap (which I hope comes with better speeds, faster mem etc), so what exactly launched? I came online tonight hoping to read a dozen new reviews (yeah, I do that), and see ZERO.
Correct me if I'm wrong here, I thought we would be able to BUY something today from this "LAUNCH". Is that not right?
TheJian - Tuesday, June 16, 2015 - link
To clarify, specs for something not coming until "fall"? etc mean nothing to us now and can change. I meant what are the specs of all these cards that are supposed to be launched right now to compete with NV's stuff? I thought the 390x and below would be reviewed today and for sale (not fiji/hbm stuff, but where are the rest?).Peichen - Wednesday, June 17, 2015 - link
What's the point of reviewing anything below R9 Fury? You can already find reviews for 290X 4GB vs 8GB and 290X @ 1.2GHz vs GTX980 @ 1.5GHz. The performance for 290X/390X is known. Unless AMD managed to make it much cooler or OC more than 1.2GHz on air, there is no point.just4U - Wednesday, June 17, 2015 - link
I certainly would want to see the reviews.. as there are likely to be some refinements to go with lower cost-to-performance ratios.
Ryan Smith - Wednesday, June 17, 2015 - link
To be clear, the 16th isn't the launch, just the announcement.
As for the sub-title, "featured review" is the default unless we put something else in there. That was an oversight on my part.
chizow - Wednesday, June 17, 2015 - link
Yeah Anand used to really punish paper launches but at some point it became OK I guess to have these tech events and then launch weeks later. I'm grateful at least that Nvidia got the message and does a really great job of hard launching products in recent years, ends all the speculation about price and performance pretty quickly!Dark_Archonis - Wednesday, June 17, 2015 - link
Rebrandeon...hahaha, I love it.
The story/history of AMD/ATI continues to be quite sad.
fuicharles - Wednesday, June 17, 2015 - link
"Rebrandeon ?"Are you blind ?
Didn't you see Fury X, Fury, Nano
silverblue - Wednesday, June 17, 2015 - link
You can blame chizow (et al) for that.
chizow - Wednesday, June 17, 2015 - link
Yes, I must take credit for coining that one, you're all welcome. :D And, I told you so on the Rebrandeons for the 300 series, not just OEM. ;)
fingerbob69 - Thursday, June 18, 2015 - link
Except they're not.
Call them respins, call them upgrades, it doesn't matter. First reviews are either ecstatic
http://www.overclock3d.net/reviews/gpu_displays/ms...
to the 'meh'
http://www.guru3d.com/articles-pages/msi-radeon-r9...
Toe-to-toe with a 980 for up to £100 ($150) less. Amazing what just changing a sticker will do! ;)
chizow - Thursday, June 18, 2015 - link
They're overclocked versions of the same chip, compare them to previous custom cooled versions including the 8GB 290X or 970/980 and they're middle of the pack where they were before. Just rebrands with a different BIOS, nothing to see here.fingerbob69 - Thursday, June 18, 2015 - link
Overclocked.
Increased memory.
New Bios.
More than just a sticker change then!
chizow - Thursday, June 18, 2015 - link
There are overclocked 290Xs at 1050MHz, there are also 8GB 290Xs. New BIOS = rebrand.
chizow - Wednesday, June 17, 2015 - link
300 series is a full stack of Rebrandeons, "Fury" is the new branding AMD came up with to try and compete with Titan in the ultra premium market.FMinus - Wednesday, June 17, 2015 - link
Did we forget the 8800 Ultra - 9800GTX - 9800GTX+ - GTS250? Oh I think we did, oh SNAP! 3 gens.
chizow - Wednesday, June 17, 2015 - link
Yep, and one ASIC across 3 gens, not 4 Rebrandeon ASICs in 1 gen. ;)
dragonsqrrl - Wednesday, June 17, 2015 - link
It was the 8800GT/GTS 512, not the Ultra. And it's good that you can count the number of generations. If you can think of any other similarities between G92 and Pitcairn/Bonaire/Tonga/Hawaii please let me know.
phenom135 - Wednesday, June 17, 2015 - link
Why is this slide not in the article? It at least gives some point of comparison to nvidia:
http://scr3.golem.de/screenshots/1506/AMD-Fiji-Tec...
cjs150 - Wednesday, June 17, 2015 - link
"It's a 275w card. That can definitely be air cooled. They water cooled the ref card.... because they wanted to, that's all. There will be plenty of partner boards with air cooling for the Fury X."I wonder whether the small size of the board had anything to do with choosing water cooling? The size of board might be a limit on the fan size that could be used and I suspect we all have experience of some of the older cards from both camps that sounded like a fighter aircraft taking off.
From what I have seen I rather like Fury X, and I have been using Nvidia for years. Obviously need to see benchmarks and would like to see cards with waterblocks for use in custom water cooling loops.
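For what it's worth, the "can 275W be air cooled" argument mostly comes down to one figure of merit: the cooler's thermal resistance. A back-of-envelope sketch with assumed temperatures (not figures from AMD or Nvidia):

power_w = 275
gpu_target_c = 90     # assumed acceptable load temperature for the GPU
case_air_c = 45       # assumed air temperature inside a warm case
print((gpu_target_c - case_air_c) / power_w)  # ~0.16 degC per watt of cooling required

Big triple-slot air coolers land roughly in that range, which is why both sides of the argument above can be right: 275W is air-coolable, but only with a large and potentially loud heatsink, while the closed-loop cooler buys extra headroom and the ~50C load temperature AMD quoted on stage.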
chizow - Thursday, June 18, 2015 - link
The size of the PCB makes no difference to the size of the air cooler. They can just use blank PCB extensions like many OEMs did already for the 970 to accommodate bigger coolers; heck, even most of the big aluminum coolers now extend beyond full size 10.5-12" PCBs.
NvidiaWins - Wednesday, June 17, 2015 - link
The Fiji Fury already lost to the 980Ti....nothing to see here folks.....just another loser in the gpu race!
FMinus - Wednesday, June 17, 2015 - link
Show me!
dragonsqrrl - Wednesday, June 17, 2015 - link
Well I would expect Fury to lose to the 980Ti, not Fury X.
SunLord - Wednesday, June 17, 2015 - link
Is there a 380X or is this just a 285 to 380 rebrand?
NinuGie - Wednesday, June 17, 2015 - link
No HDMI 2.0 on Fury cards! WTF
kyuu - Thursday, June 18, 2015 - link
The NVIDIA astroturfing is really getting ridiculous in the comments here. Is it too much to ask for some moderation by the Anandtech staff to make this comment section readable?
BillyHerrington - Thursday, June 18, 2015 - link
So true, it seems Anandtech will turn into the likes of wccftech & fudzilla; those 2 sites, especially wccf, have the worst comment sections ever.
chizow - Thursday, June 18, 2015 - link
What's funny is the AMD fanboys take the first swing and then cry about it when I swing back. Put on your big boy pants and join the conversation!
chizow - Thursday, June 18, 2015 - link
Awww... it sounds like kyuu wants to read his pro-AMD fluff pieces in peace; there is a forum for that too, it's called AMDzone.com. Do you go around hand-waving in protest like this in the Nvidia stories against all the AMD fanboys/trolls? I bet you don't. ;)
As for "astroturfing" (this is a new term to me, must be in the AMD fanboy handbook), do any of these neckbeards look familiar to you?
http://www.legitreviews.com/amd-shows-radeon-r9-30...
Harry_Wild - Thursday, June 18, 2015 - link
I just got an SFF PC and have been looking for graphics cards for 4K 60fps and there is almost nothing out there. Intel integrated is the only thing that will work with 4K monitors.
Most current graphics cards don't have a DisplayPort output, and if they do, it's an $800 card!
Hrel - Friday, June 19, 2015 - link
I don't care about 4K until 70" TVs are a thousand bucks and 24" monitors are $200.
Also, there would have to be 4K content readily available. Legally, I'd have to have access to every TV show and movie I already own in 1080p upgraded to 4K for free. Obviously illegally you can get around this, but for most people the problem stands.
Anassis - Saturday, November 7, 2015 - link
AMD is going down I think.. it can't compete against Nvidia; it's much cheaper but not the quality at all... in Germany Nvidia is much more popular.
http://welcher-computer.de/