The NVIDIA GeForce GTX 1650 Super Review, Feat. Zotac Gaming: Bringing Balance To 1080p
by Ryan Smith on December 20, 2019 9:00 AM EST
Meet The ZOTAC Gaming GeForce GTX 1650 Super
Since this latest GTX 1650 series card launch is a virtual launch like the others, the board partners are once again stepping up to the plate to provide samples. For the GTX 1650 Super launch, we received Zotac’s Gaming GeForce GTX 1650 Super card, which is a fairly straightforward entry-level card for the series.
|GeForce GTX 1650 Super Card Comparison|||
|---|---|---|
| |GeForce GTX 1650 Super|Zotac Gaming GeForce GTX 1650 Super|
|Memory Clock|12Gbps GDDR6|12Gbps GDDR6|
|GPU Power Limit|100W|100W|
|Cooler Type|N/A|Open Air, Dual Fan|
For their sole GTX 1650 Super card, Zotac has opted to keep things simple, not unlike their regular GTX 1650 cards. In particular, Zotac has designed the card to maximize compatibility, going as far as advertising it as being compatible with 99% of systems. The end result is that rather than building a large card that may not fit everywhere, Zotac has gone with a relatively small 6.2-inch long card that would be easily at home in a Mini-ITX system build.
Fittingly, there is no factory overclock to speak of here. With GPU and memory speeds identical to NVIDIA’s reference specifications, Zotac’s card is as close as you can get to an actual reference card, which is very fitting for our generalized look at the GeForce GTX 1650 Super as a whole.
Digging down, we start with Zotac’s cooler. The company often shifts between single fan and dual fan designs in this segment of the market, and for the GTX 1650 Super they’ve settled on a dual fan design. Given the overall small size of the card, the fans are equally small, with a diameter of just 65mm each. This is something to keep in mind for our look at noise testing, as small fans are often a liability there. Meanwhile the fans are fed by a single 2-pin power connector, so there isn’t any advanced PWM fan control or even RPM monitoring available for the fans. In this respect it’s quite basic, but typical for an NVIDIA xx50 series card.
Underneath the fans is an aluminum heatsink that runs most of the length of the card. With a TDP of just 100 Watts – and no option to further increase the power limit – there’s no need for heatpipes or the like here. Still, the heatsink’s base is big enough that Zotac has been able to cover both the GPU and the GDDR6 memory, bridging the latter via thermal pads. The fins are arranged vertically, so the card tends to push air out of the top and bottom.
The small PCB housing the GPU and related components is otherwise unremarkable. Zotac has done a good job of seating such a large GPU without requiring a larger PCB. As we usually see with such short cards, the VRM components have been moved up to the front of the board. The MOSFETs themselves are covered with a small aluminum heatsink, though with most of the airflow from the fans blocked by the primary heatsink, I don’t expect the VRMs are getting much in the way of airflow.
For power, the card relies on a 6-pin external PCIe power cable, as well as PCIe slot power. The power connector is inverted – that is, the tab is on the inside of the card – which helps to keep it clear of the shroud, but may catch system builders (or video card editors) off-guard the first time they install the card.
Finally for hardware features, for display I/O we’re looking at the same configuration we’ve seen in most GTX 1650 cards: a DisplayPort, an HDMI port, and a DL-DVI-D port. While DVI ports have long been banished from new products, there are still a lot of DVI monitors out there, particularly in China where NVIDIA’s xx50 cards tend to dominate. The tradeoff, as always, is that the DVI port is taking up space that could otherwise be filled by more DisplayPorts, so you’re only going to be able to drive up to two modern monitors with Zotac’s GTX 1650 Super. Of course, one could argue that a DL-DVI port shouldn’t even be necessary – this lower-end card isn’t likely to be driving a 1440p DL-DVI display – but I suspect this is a case where simplicity wins the day.
Comments
Marlin1975 - Friday, December 20, 2019
A little disappointing. Lower performance than a 580, and priced higher than others in its price range.
If the price was quite a bit lower then it might make a decent HTPC card for some.
HarryVoyager - Friday, December 20, 2019
The 8GB 580s, both new and used, are going to be the elephant in the room for a while, I suspect. A reasonably cared-for miner card runs only $100 and will keep you going at least until AMD is competitive enough to drive NVIDIA's pricing down.
Yojimbo - Friday, December 20, 2019
A reasonably cared-for mining card...? How do you assure that?
The 580s have been so cheap because AMD made too many of them. NVIDIA isn't so interested in low-end market share; they'll sell cards there if they can make money on them. I have a feeling GDDR6 DRAM has come down in price in the last year more than 7nm fabrication costs have. Now they can sell in that market segment with good profits.
Retycint - Friday, December 20, 2019
I would imagine most miners run their cards undervolted/underclocked for maximum efficiency. And it's a steady 24/7 load, unlike the periodic low->high->low load of gaming cards, which is actually better for the card in the long run. So miner cards actually tend to be in better condition (save for the fans, of course).
But then again, there's the risk of miner cards that have had modded BIOSes installed, which might have damaged the card. So I suppose it comes down to buying from a reputable seller.
Yojimbo - Friday, December 20, 2019
They would have undervolted and then pushed it to get the most performance they could without crashes, and run it 24/7. It would be sitting in a hot case in a hot room. I don't think it's a particularly desirable card. But my question is actually in response to your conclusion: how do we judge a mining card? What do you mean by a reputable seller? A third party that picked up the card from some miner?
PeachNCream - Saturday, December 21, 2019
GPU mining was typically done with open-air brackets rather than in a densely packed case. However, GPU mining is a lot less commonplace these days given the low profitability. We are really past that being a thing, so finding a modern ex-miner GPU is not as easy a prospect as it was even a couple of years ago.
Kangal - Saturday, December 21, 2019
The thing is, mining cards really are gaming cards.
There's less than 1% performance and thermal difference between the two. Linus Tech Tips even ran an experiment on this, comparing a mining card that was used constantly for 4 years against the exact same card (same variant) that was still sealed in the box. No difference.
In the worst case, the used card won't have a warranty, and you may need to "refurbish" it yourself: clean the case, clean the innards, reapply thermal paste, put in new fans. All up it's going to cost you $5-$20. So when you're saving yourself $50-$100, it's worth it from a value point of view.
MamiyaOtaru - Sunday, December 22, 2019
Nobody thinks there is a performance difference between an ex-mining card and a new card. Performance doesn't do a slow fade. The worry is that the mining card could be that much closer to failure.
Kangal - Sunday, December 22, 2019
Actually, heaps of people think mining cards run hotter, use more power, and run slower compared to brand new. It's actually a very widespread misconception.
Mining cards being closer to failure is a myth as well. If the card is running relatively normally when you buy it, a quick refurb will bring it back to like-new condition. The same applies to a gaming card: if you hear the fans are out of tune, replace the fans. If it's running hotter than normal, reapply thermal paste. If it looks dusty, clean the innards and casing.
As was pointed out, gaming cards are usually at idle, then at maximum performance, then idle, then max again. That process is actually harsher on the components: fan, TIM, logic board, and uneven heat dissipation. A mining card is usually run cooler and at lower voltage, and it's running smoothly and consistently, which doesn't cause as many "micro-cracks" in the thermal paste or the fan assembly. Not to mention, subjectively I think miners are likely to look after their hardware a little better (it's making $) and use it in open air, so there's some merit to it.
However, this stigma has been good for enthusiasts, putting a lot of great mining cards on the market at stupidly low prices. The old AMD HD 7970, I've seen them go for $50 like 5 years ago. The AMD R9 290 was around $150 as well. Now we've got plenty of ex-mining RX 470s and RX 580s, and that's really killing it for people considering an RX 5500 XT or GTX 1660. (Un)fortunately there weren't too many Vega 56 or Vega 64 cards manufactured to affect the market too much. The NVIDIA cards have seemed to keep their value much better (GTX 980, GTX 1070, etc.).
MASSAMKULABOX - Friday, January 3, 2020
Some mining cards had NO video outputs... a slight handicap for gaming?
Mining cards were in general looked after well... but home miners, not so much.
Quite a lot of miners factored in the resale value of the cards; the prices were high new, so s/h prices would also be high. Only when everyone wants to sell their cards at the same time do prices drop.