Meet The GeForce GTX 670

Because of GK104’s relatively low power consumption compared to past high-end NVIDIA GPUs, NVIDIA has developed a penchant for small cards. While the GTX 680 was a rather standard 10” long, NVIDIA also managed to cram the dual-GPU GTX 690 into the same amount of space. The GTX 670, meanwhile, takes this to a whole new level.

We’ll start at the back, as this is really where NVIDIA’s fascination with small size makes itself apparent. The complete card is 9.5” long, but the actual PCB is far shorter at only 6.75” long, 3.25” shorter than the GTX 680’s PCB. In fact it would be fair to say that rather than strapping a cooler onto a card, NVIDIA strapped a card onto a cooler. NVIDIA has certainly done short PCBs before – such as with one of the latest GTX 560 Ti designs – but never on a GTX x70 part. Given the similarities between GK104 and GF114, however, this isn’t wholly surprising.

In any case this odd pairing of a small PCB with a large cooler is no accident. With a TDP of only 170W NVIDIA doesn’t necessarily need a huge PCB, but because they wanted a blower for the cooler, they needed a large cooler to go with it. The positioning of the GPU and various electronic components meant that the only place to put a blower fan was off of the PCB entirely, as the GK104 GPU is already fairly close to the rear of the card. Meanwhile the choice of a blower seems largely driven by the fact that this is an x70 card; NVIDIA did an excellent job with the GTX 560 Ti’s open air cooler, which was designed for the same 170W TDP, so the choice is effectively arbitrary from a technical standpoint (there’s no reason to believe $400 customers are any less likely to have a well-ventilated case than $250 buyers). Accordingly, it will be NVIDIA’s partners who step in with open air coolers of their own design.

Starting as always at the top, as we previously mentioned the reference GTX 670 is outfitted with a 9.5” long, fully shrouded blower. NVIDIA tells us that the GTX 670 uses the same fan as the GTX 680, but while the two are nearly identical in design, our noise tests suggest they’re not quite identical. On that note, unlike the GTX 680 the fan is no longer placed high to line up with the exhaust vent, so the GTX 670 is a bit more symmetrical in design than the GTX 680 was.


Note: We disassembled the virtually identical EVGA card here instead

Lifting the cooler we can see that NVIDIA has gone with a fairly simple design here. The fan vents into a block-style aluminum heatsink with a copper baseplate, providing cooling for the GPU. Elsewhere we’ll see a moderately sized aluminum heatsink clamped down on top of the VRMs towards the front of the card. There is no cooling provided for the GDDR5 RAM.


Note: We disassembled the virtually identical EVGA card here instead

As for the PCB, as we mentioned previously, the lower TDP of the GTX 670 has allowed NVIDIA to save some space. The VRM circuitry has been moved to the front of the card, leaving the GPU and the RAM towards the rear and allowing NVIDIA to simply omit a fair bit of PCB. Of course, with such small VRM circuitry the reference GTX 670 isn’t built for heavy overclocking – like the other GTX 600 cards, NVIDIA isn’t even allowing overvolting on reference GTX 670 PCBs – so it will be up to partners with custom PCBs to enable that kind of functionality. Curiously, only 4 of the 8 Hynix R0C GDDR5 RAM chips are on the front side of the PCB; the other 4 are on the rear. We typically only see rear-mounted RAM on cards with 16/24 chips, as 8/12 chips will easily fit on one side.

Elsewhere at the top of the card we’ll find the PCIe power sockets and SLI connectors. Since NVIDIA isn’t scrambling to save space like they were with the GTX 680, the GTX 670’s PCIe power sockets are laid out in a traditional side-by-side manner. As for the SLI connectors, since this is a high-end GeForce card NVIDIA provides 2 connectors, allowing for the card to be used in 3-way SLI.

Finally at the front of the card NVIDIA is using the same I/O port configuration and bracket that we first saw with the GTX 680. This means 1 DL-DVI-D port, 1 DL-DVI-I port, 1 full size HDMI 1.4 port, and 1 full size DisplayPort 1.2. This also means the GTX 670 follows the same rules as the GTX 680 when it comes to being able to idle with multiple monitors.

Comments (414)

  • will54 - Thursday, May 10, 2012 - link

    Where can you find a 670 at a 1300–1400MHz overclock? I think maybe you are reading the CUDA cores at 1344, since they are just below the core and boost clocks (on Newegg at least). Sorry if I'm wrong, but the highest I saw was a 1006MHz core and 1058MHz boost for the Galaxy at $439.99.
  • ltcommanderdata - Thursday, May 10, 2012 - link

    I've mentioned this in a few article comments now, but I'm wondering if the new OpenCL-accelerated WinZip 16.5 would make a good compute benchmark? (No, I don't work for WinZip.) I'm assuming AMD's involvement in the development didn't result in a vendor-specific OpenCL program. Seeing as file compression/decompression is such a common use case, this could become a broad consumer use of GPGPU.

    http://www.geeks3d.com/20120506/intel-hd-graphics-...

    BTW, Intel has released beta Windows 8 drivers (v1729) which in fact work with Windows 7 and add full OpenGL 4.0 and OpenCL 1.1 support for Ivy Bridge. It would be great to run relevant OpenCL compute benchmarks, as well as Unigine Heaven's OpenGL tessellation, to see how Ivy Bridge compares to Llano and discrete low/mid-range GPUs. [A minimal OpenCL version-enumeration sketch appears at the end of this thread.]
  • Ryan Smith - Thursday, May 10, 2012 - link

    According to WinZip it only supports AMD GPUs, which is why we're not using it in NVIDIA reviews at this time.
  • nexus2905 - Thursday, May 10, 2012 - link

    Yet you used a benchmark that only supports nVidia cards in the article. That doesn't change the fact that the 670 is a great card, but your reply doesn't add up. And why are games like STALKER and Alan Wake not included?
  • Ryan Smith - Thursday, May 10, 2012 - link

    To be clear, if this were an AMD card review, we wouldn't use the CUDA Folding@Home benchmark. But we would likely use WinZip since it works on AMD cards.
  • ltcommanderdata - Thursday, May 10, 2012 - link

    I see. I couldn't actually confirm that myself since I don't currently have an nVidia GPU. It's disappointing that after all the complaints about vendor-specific APIs, namely CUDA, and all the talk of OpenCL as the ideal cross-vendor, cross-platform approach to GPGPU, AMD then turns around and helps develop a vendor-specific OpenCL program. [A hypothetical sketch of such vendor-gating appears at the end of this thread.]
  • CeriseCogburn - Thursday, May 10, 2012 - link

    As I've said, they've been lying to their insane fanboy contingent for quite some time. The 3GB of RAM goes right along this line: it adds zero performance and actually slows the cards down, but fanboys can have a field day with their insane speculations and cockeyed illusions.
    AMD cried for years against CUDA and PhysX, leaving their fanboys grinding their teeth and cursing nVidia in the dark, destitute and uncovered. While they arguably lost less money playing that raging fanboy PR game, they blew their cover with the 7970's price-gouging launch, and now their proprietary whoring with WinZip.
    They are EVIL, as in not practicing what they preach and achieving one hundred percent hypocrisy, and only the fanboys haven't known that for years. Now their eyes should finally be opened, but frankly, I doubt a Mack truck doing 45mph over the speed limit on grooved concrete could open those thick craniums.
    Bottom line: AMD's business practices are evil and they suck, and their drivers suck too.
  • anubis44 - Saturday, May 12, 2012 - link

    Yes, and since nVidia didn't f*ck off with their proprietary tactics after repeated requests to stop, AMD did what they had to. If you're going to whine about that, you're being completely unreasonable.
  • CeriseCogburn - Sunday, May 13, 2012 - link

    No, I'm not whining about it. I don't mind at all. If nVidia wants to have a fast WinZip, they will have to pay up and/or do the hard driver and collusion work.

    Pretty simple. Very fair. Something AMD decided it did not want to do for many years, even as it and you people whined in hatred toward nVidia.

    All I've done is point out what you should have known for years already: AMD does the same thing all the time, and "worse".
    But to understand and know that, you'd have to have a mind and be an adult, not a brainwashed fanboy.
  • Morg. - Thursday, May 10, 2012 - link

    Bullcrap, sir.
    We all know Tahiti, with its unblocked FP, totally wipes the floor with even the best Tesla out there.

    The benchmarks you picked fail to show that, because too many are CUDA, which is obviously not the future of GPGPU since the ARM crowd and Google have gone OpenCL.

    In summary, the usual AnandTech paid advertisement fails to deliver on the tech front, but who cares: you and so many others already have a million nerds salivating at the thought of nVidia (this round, because I'm sure you'll get a call from AMD one of these days).
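
Following up on ltcommanderdata's note about the beta Intel drivers: which OpenCL version a driver actually exposes can be checked directly through the standard OpenCL C API, since every device reports a CL_DEVICE_VERSION string. What follows is a minimal sketch along those lines; the file name, build line, and fixed 8-entry buffers are illustrative assumptions, not anything from the article or the drivers.

```c
/* Minimal sketch: list every OpenCL platform and device along with
 * the OpenCL version string each driver reports.
 * Assumed build line: gcc list_cl.c -lOpenCL -o list_cl */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];          /* assumed upper bound */
    cl_uint num_platforms = 0;

    /* Ask the ICD loader for the installed platforms. */
    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS ||
        num_platforms == 0) {
        fprintf(stderr, "No OpenCL platforms found\n");
        return 1;
    }

    for (cl_uint p = 0; p < num_platforms; p++) {
        char pname[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof(pname), pname, NULL);
        printf("Platform %u: %s\n", p, pname);

        cl_device_id devices[8];          /* assumed upper bound */
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8,
                           devices, &num_devices) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < num_devices; d++) {
            char dname[256], dver[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(dname), dname, NULL);
            /* CL_DEVICE_VERSION reads e.g. "OpenCL 1.1 <vendor info>" */
            clGetDeviceInfo(devices[d], CL_DEVICE_VERSION,
                            sizeof(dver), dver, NULL);
            printf("  Device %u: %s (%s)\n", d, dname, dver);
        }
    }
    return 0;
}
```

On a machine with the v1729 beta drivers installed, the Ivy Bridge GPU would be expected to report a version string beginning with "OpenCL 1.1".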
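
On the vendor question raised in Ryan's reply and ltcommanderdata's follow-up: nothing in OpenCL prevents an application from reading each device's CL_DEVICE_VENDOR string and enabling its GPU path only for one vendor's hardware, which is how a nominally cross-vendor OpenCL program can end up AMD-only in practice. The sketch below is purely hypothetical; the vendor-string match and the fallback messages are assumptions, not WinZip's actual logic.

```c
/* Hypothetical sketch of vendor-gating an OpenCL GPU path.
 * This is NOT WinZip's code; the vendor string and the fallback
 * behavior are illustrative assumptions. */
#include <stdio.h>
#include <string.h>
#include <CL/cl.h>

/* Returns 1 if the device reports an AMD vendor string. */
static int is_amd_device(cl_device_id dev)
{
    char vendor[256] = {0};
    if (clGetDeviceInfo(dev, CL_DEVICE_VENDOR,
                        sizeof(vendor), vendor, NULL) != CL_SUCCESS)
        return 0;
    /* AMD runtimes have reported "Advanced Micro Devices, Inc." */
    return strstr(vendor, "Advanced Micro Devices") != NULL;
}

int main(void)
{
    cl_platform_id platform;
    cl_device_id gpu;
    cl_uint count = 0;

    /* Take the first platform's first GPU; a real application
     * would iterate over all platforms and devices. */
    if (clGetPlatformIDs(1, &platform, &count) != CL_SUCCESS ||
        count == 0 ||
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &gpu, NULL)
            != CL_SUCCESS) {
        puts("No OpenCL GPU found: using the CPU code path");
        return 0;
    }

    if (is_amd_device(gpu))
        puts("AMD GPU detected: enabling the GPU-accelerated path");
    else
        puts("Non-AMD GPU: GPU path disabled (vendor-gated)");
    return 0;
}
```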
