NVIDIA’s GeForce GTX Titan Review, Part 2: Titan's Performance Unveiled
by Ryan Smith & Rahul Garg on February 21, 2013 9:00 AM EST

Earlier this week NVIDIA announced their new top-end single-GPU consumer card, the GeForce GTX Titan. Built on NVIDIA’s GK110 and named after the same supercomputer that GK110 first powered, the GTX Titan is in many ways the apex of the Kepler family of GPUs first introduced nearly one year ago. With anywhere between 25% and 50% more resources than NVIDIA’s GeForce GTX 680, Titan is intended to be the ultimate single-GPU card for this generation.
Meanwhile, with the launch of Titan NVIDIA has repositioned their traditional video card lineup, changing which market the ultimate video card is chasing. With a price of $999 Titan is decidedly out of the price/performance race; Titan will be a luxury product, geared towards a mix of low-end compute customers and ultra-enthusiasts who can justify buying a luxury product to get their hands on a GK110 video card. So in many ways this is a different kind of launch than any other high performance consumer card that has come before it.
So where does that leave us? On Tuesday we could talk about Titan’s specifications, construction, architecture, and features. But the all-important performance data would be withheld another two days until today. So with Thursday finally upon us, let’s finish our look at Titan with our collected performance data and our analysis.
Titan: A Performance Summary
| | GTX Titan | GTX 690 | GTX 680 | GTX 580 |
|---|---|---|---|---|
| Stream Processors | 2688 | 2 x 1536 | 1536 | 512 |
| Texture Units | 224 | 2 x 128 | 128 | 64 |
| ROPs | 48 | 2 x 32 | 32 | 48 |
| Core Clock | 837MHz | 915MHz | 1006MHz | 772MHz |
| Shader Clock | N/A | N/A | N/A | 1544MHz |
| Boost Clock | 876MHz | 1019MHz | 1058MHz | N/A |
| Memory Clock | 6.008GHz GDDR5 | 6.008GHz GDDR5 | 6.008GHz GDDR5 | 4.008GHz GDDR5 |
| Memory Bus Width | 384-bit | 2 x 256-bit | 256-bit | 384-bit |
| VRAM | 6GB | 2 x 2GB | 2GB | 1.5GB |
| FP64 | 1/3 FP32 | 1/24 FP32 | 1/24 FP32 | 1/8 FP32 |
| TDP | 250W | 300W | 195W | 244W |
| Transistor Count | 7.1B | 2 x 3.5B | 3.5B | 3B |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm | TSMC 40nm |
| Launch Price | $999 | $999 | $499 | $499 |
On paper, compared to GTX 680, Titan offers anywhere between a 25% and 50% increase in resources. At the low end, Titan offers 25% more ROP throughput, the net result of Titan’s 50% increase in ROP count being partially offset by its lower clockspeeds relative to GTX 680. Shading and texturing performance meanwhile benefits even more from the expansion of the number of SMXes, from 8 to 14. And finally, Titan has a full 50% more memory bandwidth than GTX 680.
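These paper gains can be sanity-checked with some quick arithmetic from the spec table above. The sketch below uses base clocks (real chips run at boost clocks and above), so treat these as back-of-the-envelope ratios rather than exact peaks:

```python
# Rough theoretical scaling of Titan over GTX 680, from the spec table
# above. Throughput is modeled as unit count x clock; base clocks used.
titan  = {"shaders": 2688, "rops": 48, "core_mhz": 837,
          "mem_ghz": 6.008, "bus_bits": 384}
gtx680 = {"shaders": 1536, "rops": 32, "core_mhz": 1006,
          "mem_ghz": 6.008, "bus_bits": 256}

def gain(a, b, units, clock):
    """Relative throughput advantage of a over b: units x clock."""
    return (a[units] * a[clock]) / (b[units] * b[clock]) - 1

rop_gain    = gain(titan, gtx680, "rops", "core_mhz")     # ~+25%
shader_gain = gain(titan, gtx680, "shaders", "core_mhz")  # ~+46%
bw_gain = (titan["mem_ghz"] * titan["bus_bits"]) / \
          (gtx680["mem_ghz"] * gtx680["bus_bits"]) - 1    # +50%

print(f"ROP throughput:   {rop_gain:+.0%}")
print(f"Shader/texture:   {shader_gain:+.0%}")
print(f"Memory bandwidth: {bw_gain:+.0%}")
```

The shader figure lands in the middle of the 25%-50% spread: a 75% increase in shader count, discounted by Titan's lower clocks.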
Setting aside the unique scenario of compute for a moment, this means that Titan will be between 25% and 50% faster than GTX 680 in GPU limited situations, depending on the game/application and its mix of resource usage. For an industry and userbase still trying to come to terms with the loss of nearly annual half-node jumps, this kind of performance jump on the same node is quite remarkable. At the same time it also sets expectations for how future products may unfold; one way to compensate for the loss of the rapid cadence in manufacturing nodes is to spread out the gains from a new node over multiple years, and this is essentially what we’ve seen with the Kepler family by launching GK104, and a year later GK110.
In any case, while Titan can improve gaming performance by up to 50%, NVIDIA has decided to release Titan as a luxury product with a price roughly 120% higher than the GTX 680. This means that Titan will not be positioned to push the price of NVIDIA’s current cards down, and in fact it’s priced right off the hyper-competitive price-performance curve that the GTX 680/670 and Radeon HD 7970GE/7970 currently occupy.
February 2013 GPU Pricing Comparison

| AMD | Price | NVIDIA |
|---|---|---|
| | $1000 | GeForce Titan/GTX 690 |
| (Unofficial) Radeon HD 7990 | $900 | |
| Radeon HD 7970 GHz Edition | $450 | GeForce GTX 680 |
| Radeon HD 7970 | $390 | |
| | $350 | GeForce GTX 670 |
| Radeon HD 7950 | $300 | |
This setup isn’t unprecedented – the GTX 690 more or less created this precedent last May – but it means Titan is a very straightforward case of paying 120% more for 50% more performance; the last 10% always costs more. What this means is that the vast majority of gamers will simply be shut out from Titan at this price, but for those who can afford Titan’s $999 price tag NVIDIA believes they have put together a powerful card and a convincing case to pay for luxury.
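The "120% more for 50% more" trade can be put in perf-per-dollar terms. This sketch uses the $450 GTX 680 street price from the table above and the +47% average gain from the benchmark summary; the exact figures will shift with street pricing:

```python
# Back-of-the-envelope price/performance for Titan vs GTX 680, using
# Feb 2013 street prices and the average benchmark gain at 2560x1440.
titan_price, gtx680_price = 999, 450   # USD
titan_rel_perf = 1.47                  # Titan ~= +47% vs GTX 680 on average

price_premium = titan_price / gtx680_price - 1                  # ~+122%
perf_per_dollar = (titan_rel_perf / titan_price) / (1.0 / gtx680_price)

print(f"Price premium:              {price_premium:+.0%}")
print(f"Perf-per-dollar vs GTX 680: {perf_per_dollar:.0%}")
```

In other words, Titan delivers roughly two-thirds of the GTX 680's performance per dollar, which is exactly what "priced off the curve" means in practice.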
So what can potential Titan buyers look forward to on the performance front? As always we’ll do a complete breakdown of performance in the following pages, but we wanted to open up this article with a quick summary of performance. So with that said, let’s take a look at some numbers.
GeForce GTX Titan Performance Summary (2560x1440)

| | vs. GTX 680 | vs. GTX 690 | vs. R7970GE | vs. R7990 |
|---|---|---|---|---|
| Average | +47% | -15% | +34% | -19% |
| DiRT: Showdown | +47% | -5% | +3% | -38% |
| Total War: Shogun 2 | +50% | -15% | +62% | +1% |
| Hitman: Absolution | +34% | -15% | +18% | -15% |
| Sleeping Dogs | +49% | -15% | +17% | -30% |
| Crysis | +54% | -13% | +21% | -25% |
| Far Cry 3 | +35% | -23% | +37% | -15% |
| Battlefield 3 | +48% | -18% | +52% | -11% |
| Civilization V | +59% | -9% | +60% | 0% |
Looking first at NVIDIA’s product line, Titan is anywhere between 34% and 59% faster than the GTX 680. In fact with the exception of Hitman: Absolution, a somewhat CPU-bound benchmark, Titan’s performance relative to the GTX 680 is largely clustered in a 45%-55% range. Titan and GTX 680 are of course based on the same fundamental Kepler architecture, so there haven’t been any fundamental architectural changes between the two; Titan is exactly what you’d expect out of a bigger Kepler GPU. At the same time this is made all the more interesting by the fact that Titan’s real-world performance advantage of 45%-55% is so close to its peak theoretical performance advantage of 50%, indicating that Titan doesn’t lose much (if anything) in efficiency when scaled up, and that the games we’re testing today favor memory bandwidth and shader/texturing performance over ROP throughput.
Moving on, while Titan offers a very consistent performance advantage over the architecturally similar GTX 680, it’s quite a different story when compared to AMD’s fastest single-GPU product, the Radeon HD 7970 GHz Edition. As we’ve seen time and time again this generation, the difference in performance between AMD and NVIDIA GPUs not only varies with the test and settings, but dramatically so. As a result Titan is anywhere between being merely equal to the 7970GE to being nearly a generation ahead of it.
At the low end of the scale we have DiRT: Showdown, where Titan’s lead is just 3%. At the other end is Total War: Shogun 2, where Titan is a good 62% faster than the 7970GE. The average gain over the 7970GE is almost right in the middle at 34%, reflecting a mix of games where the two are close, games where they are far apart, and games anywhere in between. With recent driver advancements having helped the 7970GE pull ahead of the GTX 680, NVIDIA had to work harder to take back their lead and to do so in a concrete manner.
Titan’s final competition comes from the dual-GPU cards of this generation: the GK104 based GTX 690, and the officially unofficial Tahiti based HD 7990 cards, which vary in specs but generally have just shy of the performance of a pair of 7970s. As we’ve seen in past generations, when it comes to raw performance one big GPU is no match for two smaller GPUs, and the same is true with Titan. For frames per second and nothing else, Titan cannot compete with those cards. But as we’ll see there are still some very good reasons for Titan’s existence, and areas where Titan excels that even two lesser GPUs cannot match.
None of this of course accounts for compute. Simply put, Titan stands alone in the compute world. As the first consumer GK110 GPU based video card there’s nothing quite like it. We’ll see why that is in our look at compute performance, but as far as the competitive landscape is concerned there’s not a lot to discuss here.
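The compute gap follows directly from the FP64 rates in the spec table. As a rough sketch (base clocks, counting a fused multiply-add as two operations; these are theoretical peaks, not measured throughput):

```python
# Peak FLOPS estimated from the spec table: shaders x clock x 2 (FMA)
# x FP64 rate. Base clocks used, so real boost-clock peaks run higher.
def peak_gflops(shaders, core_mhz, fp64_rate=1.0):
    return shaders * (core_mhz / 1000) * 2 * fp64_rate

titan_fp32  = peak_gflops(2688, 837)           # ~4.5 TFLOPS
titan_fp64  = peak_gflops(2688, 837, 1 / 3)    # ~1.5 TFLOPS
gtx680_fp64 = peak_gflops(1536, 1006, 1 / 24)  # ~129 GFLOPS

print(f"Titan FP32:   {titan_fp32:.0f} GFLOPS")
print(f"Titan FP64:   {titan_fp64:.0f} GFLOPS")
print(f"GTX 680 FP64: {gtx680_fp64:.0f} GFLOPS")
```

On paper Titan’s double precision throughput is more than an order of magnitude ahead of GTX 680’s, which is why it appeals to low-end compute customers and not just gamers.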
337 Comments
UzairH - Thursday, February 21, 2013 - link
Ah ok, thanks for the explanation Ryan. Fair enough if the game is CPU bound, and your policy sounds fair as well. Please note however that at high resolutions enabling SSAO kills the performance, and enabling Transparency Anti-aliasing on top of that even more so, so even without mods Skyrim can still be pretty brutal on cards like the 670 and HD 7970.

CeriseCogburn - Thursday, February 21, 2013 - link
LOL ignore the idiocy and buy the great nVidia card, you'll NEVER have to hear another years long screed from amd fanboys about 3G of ram being future-proof - ESPECIALLY WITH SKYRIM AND ADDONS!!!! As they screamed endlessly....
CeriseCogburn - Saturday, February 23, 2013 - link
It's a bunch of HOOEY no matter how reasonable "the policy" excuse sounds...
http://www.bit-tech.net/hardware/2013/02/21/nvidia...
There's the Skyrim results, with TITAN 40+% ahead.
trajan2448 - Friday, February 22, 2013 - link
AMD's fps numbers are overstated. They figured out a trick to make runt frames, or frames which are not actually rendered, trigger the fps monitor as real fully rendered frames. This is a real problem for AMD, much worse than the latency problem. Crossfire is a disaster, which is why numerous reviewers including Tech Report have written that Crossfire produces higher fps but feels less smooth than Nvidia. Check this article out. http://www.pcper.com/reviews/Graphics-Cards/NVIDIA...
Ankarah - Thursday, February 21, 2013 - link
From a regular consumer's point of view, the hype of it being the fastest 'single' graphics card doesn't really appeal that much - it doesn't make a difference to me how these video cards work in what configurations underneath the big case, as long as it does its job.

So I really can't understand why any regular consumer would intentionally choose this over the GTX 690, which seems to be faster overall for the same price, unless you belong to perhaps the 0.5% of their market who absolutely require FP64 execution for their work but don't really need the full power of Tesla.
And let's face it, if you are willing to shell out a grand for your graphics card for your PC, you aren't worried about the difference their TDP will make on your electric bills.
So I think it's just a marketing circus specifically engineered to draw in a lucky few, to whom money or price/performance ratio holds no value at all - there's nothing to see here for regular Joes like you and me.
Let's move along.
sherlockwing - Thursday, February 21, 2013 - link
This card is for people willing to spend at least $2K on their graphics cards and don't want to deal with quad GPU scaling while also having room for a third. If you don't have that much cash you are not in its target audience.

Ankarah - Thursday, February 21, 2013 - link
That makes sense, so this card, however we slice it, is only for about perhaps 1% of the consumer base, if even that.
andrewaggb - Thursday, February 21, 2013 - link
pretty much. And bragging rights.

CeriseCogburn - Thursday, February 21, 2013 - link
not for the crybabies we have here.Yet go to another thread and the screaming about the 7990 and the endless dual top end videocard setups with thousand dollar INTEL cpu's will be endless.
It all depends on whose diapers the pooing crybabies are soiling at the moment.
cmdrdredd - Thursday, February 21, 2013 - link
Plus people who want to break world records in benchmarking.