TU117: Tiny Turing

Before we take a look at the Zotac card and our benchmark results, let’s take a moment to go over the heart of the GTX 1650: the TU117 GPU.

TU117 is, for most practical purposes, a smaller version of the TU116 GPU, retaining the same core Turing feature set but with fewer resources all around. Coming from TU116, NVIDIA has shaved off one-third of the CUDA cores, one-third of the memory channels, and one-third of the ROPs, leaving a GPU that’s smaller and easier to manufacture for this low-margin market.

Still, at 200mm² in size and housing 4.7B transistors, TU117 is by no means a simple chip. In fact, it’s exactly the same die size as GP106 – the GPU at the heart of the GeForce GTX 1060 series – so that should give you an idea of how performance and transistor counts have (slowly) cascaded down to cheaper products over the last few years.

Overall, NVIDIA’s first outing with their new GPU is an interesting one. Looking at the specs of the GTX 1650 and how NVIDIA has opted to price the card, it’s clear that NVIDIA is holding back a bit. Normally the company launches two low-end cards at the same time – a card based on a fully-enabled GPU and a cut-down card – which they haven’t done this time. This means that NVIDIA is sitting on the option of rolling out a fully-enabled TU117 card in the future if they want to.

By the numbers, the actual CUDA core count difference between the GTX 1650 and a theoretical fully-enabled GTX 1650 Ti is quite limited – to the point where I doubt a few more CUDA cores alone would be worth it. However, NVIDIA also has another ace up its sleeve in the form of GDDR6 memory. If the conceptually similar GTX 1660 Ti is anything to go by, a fully-enabled TU117 card with a small bump in clockspeeds and 4GB of GDDR6 could probably pull far enough ahead of the vanilla GTX 1650 to justify a new card, perhaps at $179 or so to fill the gap in NVIDIA’s current product stack.
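For a rough sense of scale, here’s a back-of-the-envelope sketch. The fully-enabled core count (all 16 SMs) and the 12 Gbps GDDR6 data rate are my assumptions for a hypothetical card, not announced specs:

```python
# Back-of-the-envelope comparison of the GTX 1650 against a hypothetical
# fully-enabled, GDDR6-equipped TU117 card. The "full TU117" figures are
# assumptions, not announced specifications.

gtx1650_cores = 896          # 14 Turing SMs x 64 FP32 cores
full_tu117_cores = 1024      # assumed: all 16 SMs enabled

gtx1650_bw = 8 * 128 / 8     # 8 Gbps GDDR5 on a 128-bit bus -> 128 GB/s
gddr6_bw = 12 * 128 / 8      # assumed 12 Gbps GDDR6 -> 192 GB/s

core_uplift = full_tu117_cores / gtx1650_cores - 1
bw_uplift = gddr6_bw / gtx1650_bw - 1

print(f"CUDA core uplift: {core_uplift:.0%}")   # 14%
print(f"Bandwidth uplift: {bw_uplift:.0%}")     # 50%
```

The takeaway is that the extra SMs alone are a modest ~14% bump, while the memory swap would deliver a 50% jump in bandwidth – which is why GDDR6 is the more interesting lever here.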

The bigger question is where performance would land, and if it would be fast enough to completely fend off the Radeon RX 570. Despite the improvements over the years, bandwidth limitations are a constant challenge for GPU designers, and NVIDIA’s low-end cards have been especially boxed in. Coming straight off of standard GDDR5, the bump to GDDR6 could very well put some pep into TU117’s step. But the price sensitivity of this market (and NVIDIA’s own margin goals) means that it may be a while until we see such a card; GDDR6 memory still fetches a price premium, and I expect that NVIDIA would like to see this come down first before rolling out a GDDR6-equipped TU117 card.

Turing’s Graphics Architecture Meets Volta’s Video Encoder

While TU117 is a pure Turing chip as far as its core graphics and compute architecture is concerned, NVIDIA’s official specification tables highlight an interesting and unexpected divergence in related features. As it turns out, TU117 incorporates an older version of NVIDIA’s NVENC video encoder block than the other Turing chips. Rather than using the Turing block, it uses the video encoding block from Volta.

But just what does the Turing NVENC block offer that Volta’s does not? As it turns out, it’s just a single feature: HEVC B-frame support.

While it wasn’t previously called out by NVIDIA in any of their Turing documentation, the NVENC block that shipped with the other Turing cards added support for B(idirectional) Frames when doing HEVC encoding. B-frames, in a nutshell, are a type of advanced frame prediction for modern video codecs. Notably, B-frames incorporate information about both the frame before them and the frame after them, allowing for greater space savings versus simpler uni-directional P-frames.

I, P, and B-Frames (Petteri Aimonen / PD)

This bidirectional nature is what makes B-frames so complex, and this especially goes for video encoding. As a result, while NVIDIA has supported hardware HEVC encoding for a few generations now, it’s only with Turing that they added B-frame support for that codec. The net result is that relative to Volta (and Pascal), Turing’s NVENC block can achieve similar image quality with lower bitrates, or conversely, higher image quality at the same bitrate. This is where a lot of NVIDIA’s previously touted “25% bitrate savings” for Turing come from.
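To make the dependency structure concrete, here’s a toy Python sketch of a group of pictures in display order. This is purely illustrative – real HEVC reference structures are far more flexible, and the `references` helper is an invention for this example, not part of any codec API:

```python
# Toy model of I/P/B frame referencing within a group of pictures (GOP),
# listed in display order. Illustrative only; real HEVC reference
# structures are far more flexible than this.

def references(gop):
    """For each frame, list the indices of the frames it predicts from:
    I-frames stand alone, P-frames reference the previous anchor (I or P),
    and B-frames reference the nearest anchors on both sides."""
    anchors = [i for i, f in enumerate(gop) if f in "IP"]
    out = []
    for i, f in enumerate(gop):
        if f == "I":
            out.append((i, f, []))
        elif f == "P":
            out.append((i, f, [max(a for a in anchors if a < i)]))
        else:  # "B"
            out.append((i, f, [max(a for a in anchors if a < i),
                               min(a for a in anchors if a > i)]))
    return out

for i, ftype, refs in references("IBBPBBP"):
    print(f"frame {i} ({ftype}) -> references {refs}")
```

The B-frames at positions 1–2 and 4–5 each pull from the anchor frames on both sides of them, which is exactly the property that makes them cheaper to store but harder to encode.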

Past that, however, the Volta and Turing NVENC blocks are functionally identical. Both support the same resolutions and color depths, the same codecs, etc., so while TU117 misses out on some quality/bitrate optimizations, it isn’t completely left behind. Total encoder throughput is a bit less clear, though; NVIDIA’s overall NVENC throughput has slowly ratcheted up over the generations, in particular so that their GPUs can serve up an ever-larger number of streams when being used in datacenters.

Overall this is an odd difference to bake into a GPU when the other four members of the Turing family all use the newer encoder, and I did reach out to NVIDIA looking for an explanation for why they regressed on the video encoder block. The answer, as it turns out, came down to die size: NVIDIA’s engineers opted to use the older encoder to keep the size of the already decently-sized 200mm² chip from growing even larger. Unfortunately NVIDIA isn’t saying just how much larger Turing’s NVENC block is, so it’s impossible to say just how much die space this move saved. However, the fact that the difference is apparently enough to materially impact the die size of TU117 makes me suspect it’s bigger than we normally give it credit for.

In any case, the impact on the GTX 1650 will depend on the use case. HTPC users should be fine, as this is solely about encoding and not decoding, so the GTX 1650 is as good for that as any other Turing card. And even in the case of game streaming/broadcasting, this is (still) mostly H.264 for compatibility and licensing reasons. But if you fall into a niche area where you’re doing GPU-accelerated HEVC encoding on a consumer card, then this is a notable difference that may make the GTX 1650 less appealing than the TU116-powered GTX 1660.

Comments

  • philehidiot - Friday, May 3, 2019 - link

    Over here, it's quite routine for people to consider the efficiency cost of using AC in a car and whether it's more sensible to open the window... If you had a choice between a GTX1080 and a Vega64 which perform nearly the same, and assume they cost nearly the same, then you'd take into account that one requires a small nuclear reactor to run whilst the other probably sips less energy than your current card. Also, some of us are on this thing called a budget. A $50 saving is a week's food shopping.
  • JoeyJoJo123 - Friday, May 3, 2019 - link

    Except your comment is exactly in line with what I said:
    "Lower power for the same performance at a similar enough price can be a tie-breaker between two competing options, but that's not the case here for the 1650"

    I'm not saying power use of the GPU is irrelevant, I'm saying performance/price is ultimately more important. The RX 570 GPU is not only significantly cheaper, but it outperforms the GTX 1650 in most scenarios. Yes, the RX 570 does so by consuming more power, but it'd take 2 or so years of power bills (at least according to avg American power bill per month) to split an even cost with the GTX 1650, and even at that mark where the cost of ownership is equivalent, the RX 570 still has provided 2 years of consistently better performance, and will continue to offer better performance.

    Absolutely, a GTX1080 is a smarter buy compared to the Vega64 given the power consumption, but that's because power consumption was the tie breaker. The comparison wouldn't be as ideal for the GTX1080 if it cost 30% more than the Vega64, offered similar performance, but came with the long term promise of ~eventually~ paying for the upfront difference in cost with a reduction in power cost.

    Again, the vast majority of users on the market are looking for the best performance/price, and the GTX1650 has priced itself out of the market it should be competing with.
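The break-even arithmetic in the comment above can be sketched as follows; every input (price gap, power delta, usage hours, electricity rate) is an illustrative assumption rather than a measured figure:

```python
# Sketch of the RX 570 vs. GTX 1650 break-even arithmetic.
# All inputs are illustrative assumptions, not measured figures.

price_gap = 20.0     # assumed: GTX 1650 costs ~$20 more than an RX 570
power_delta = 75.0   # assumed extra board power of the RX 570, in watts
hours_day = 2.0      # assumed daily gaming hours at full load
rate_kwh = 0.13      # assumed average US residential rate, $/kWh

extra_cost_per_year = power_delta / 1000 * hours_day * 365 * rate_kwh
years_to_break_even = price_gap / extra_cost_per_year

print(f"Extra electricity per year: ${extra_cost_per_year:.2f}")
print(f"Years to break even: {years_to_break_even:.1f}")
```

Under these assumptions the RX 570 draws roughly $7 more in electricity per year, putting the break-even point at just under three years – broadly in line with the "2 or so years" figure in the comment.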
  • Oxford Guy - Saturday, May 4, 2019 - link

    "it'd take 2 or so years of power bills (at least according to avg American power bill per month) to split an even cost with the GTX 1650, and even at that mark where the cost of ownership is equivalent, the RX 570 still has provided 2 years of consistently better performance, and will continue to offer better performance."

    Plus, if people are so worried about power consumption, maybe they should get some solar panels.
  • Yojimbo - Sunday, May 5, 2019 - link

    Why in the world would you get solar panels? That would only increase the cost even more!
  • Karmena - Tuesday, May 7, 2019 - link

    So, you multiplied it once – why not multiply that value again and make it $100?
  • Gigaplex - Sunday, May 5, 2019 - link

    Kids living with their parents generally don't care about the power bill.
  • gglaw - Sunday, May 5, 2019 - link

    Wrong on so many levels. If you find the highest-cost electricity city in the US, plug in the most die-hard gamer who plays only new games on max settings, running the GPU at 100% load at all times, and assume he plays more hours than most people work, you might get close to those numbers. The sad kid who fits the above scenario games hard enough that he would never choose such a bad card, one significantly slower than last gen's budget performers (RX 570 and GTX 1060 3GB). Kids in this scenario would not be calculating the nickels and dimes they're saving here and there – they'd be getting the best card in their budget NOW without subtracting the quarter or so they might get back a week. You're trying to create a scenario that just doesn't exist. Super energy-conscious people logging every penny of juice they spend don't game dozens of hours a week, and would be nit-picky enough that they'd probably find settings to save that extra 2 cents a week, so they wouldn't even be running their GPU at 100% load.
  • PeachNCream - Friday, May 3, 2019 - link

    Total cost of ownership is a significant factor in any buying decision. Not only should one consider the electrical costs of a GPU, but indirect additional expenses such as air conditioning needs or reductions in heating costs offset by heat output along with the cost to upgrade at a later date based on the potential for dissatisfaction with future performance. Failing to consider those and other factors ignores important recurring expenses.
  • Geranium - Saturday, May 4, 2019 - link

    Then people need to buy a Ryzen R7 2700X rather than an i9 9900K. As the 9900K uses more power and runs hot, it needs a more powerful cooler, and a more powerful cooler draws more current compared to a 2700X.
  • nevcairiel - Saturday, May 4, 2019 - link

    Not everyone puts as much value on cost as others. When discussing a budget product, it absolutely makes sense to consider, since you possibly wouldn't buy such a GPU if money was no object.

    But if someone buys a high-end CPU, the interests shift drastically, and as such, your logic makes no sense anymore. Plenty of people buy the fastest not because it's cheap, but because it's the absolute fastest.
