In a short post published on NVIDIA's website today, the company announced that it is "unlaunching" their planned GeForce RTX 4080 12GB card. The lowest-end of the initially announced RTX 40 series cards, the RTX 4080 12GB had attracted significant criticism since its announcement for bifurcating the 4080 tier between two cards that didn't even share a common GPU. Seemingly bowing to the pressure of those complaints, NVIDIA has removed the card from their RTX 40 series lineup and cancelled its November launch.

NVIDIA’s brief message reads as follows:

The RTX 4080 12GB is a fantastic graphics card, but it’s not named right. Having two GPUs with the 4080 designation is confusing.

So, we’re pressing the “unlaunch” button on the 4080 12GB. The RTX 4080 16GB is amazing and on track to delight gamers everywhere on November 16th.

If the lines around the block and enthusiasm for the 4090 is any indication, the reception for the 4080 will be awesome.

NVIDIA is not providing any further details about their future plans for the AD104-based video card at this time. However, given the circumstances, it's reasonable to assume that NVIDIA intends to launch the card at a later date under a different part number.

NVIDIA GeForce Specification Comparison

                          RTX 4090        RTX 4080 16GB    RTX 4080 12GB
CUDA Cores                16384           9728             7680
ROPs                      176             112              80
Boost Clock               2520MHz         2505MHz          2610MHz
Memory Clock              21Gbps GDDR6X   22.4Gbps GDDR6X  21Gbps GDDR6X
Memory Bus Width          384-bit         256-bit          192-bit
Single Precision Perf.    82.6 TFLOPS     48.7 TFLOPS      40.1 TFLOPS
Tensor Perf. (FP16)       330 TFLOPS      195 TFLOPS       160 TFLOPS
Tensor Perf. (FP8)        660 TFLOPS      390 TFLOPS       321 TFLOPS
TDP                       450W            320W             285W
L2 Cache                  72MB            64MB             48MB
GPU                       AD102           AD103            AD104
Transistor Count          76.3B           45.9B            35.8B
Architecture              Ada Lovelace    Ada Lovelace     Ada Lovelace
Manufacturing Process     TSMC 4N         TSMC 4N          TSMC 4N
Launch Date               10/12/2022      11/16/2022       Never
Launch Price              MSRP: $1599     MSRP: $1199      Was: $899

Taking a look at the specifications of the cards, it's easy to see why NVIDIA's core base of enthusiast gamers was not amused. While both RTX 4080 parts shared a common architecture, they did not share a common GPU. Or, for that matter, common performance.

The RTX 4080 12GB, as it was, would have been based on the smaller AD104 GPU, rather than the AD103 GPU used for the 16GB model. In practice, this would have seen the 12GB model deliver only about 82% of the 16GB model's shader/tensor throughput, and just 70% of its memory bandwidth. It's a sizable performance gap, and one that NVIDIA's own pre-launch figures all but confirmed.
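Those percentages fall straight out of the specification table above; as a quick sanity check (with memory bandwidth derived as effective data rate times bus width, divided by 8 bits per byte):

```python
# Sanity check of the RTX 4080 12GB vs. 16GB gap, using the spec table's numbers.

def bandwidth_gbps(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: data rate (Gbps) x bus width (bits) / 8."""
    return data_rate_gbps * bus_width_bits / 8

bw_16gb = bandwidth_gbps(22.4, 256)  # 716.8 GB/s
bw_12gb = bandwidth_gbps(21.0, 192)  # 504.0 GB/s

print(f"Shader throughput ratio: {40.1 / 48.7:.0%}")        # ~82% (TFLOPS ratio)
print(f"Memory bandwidth ratio:  {bw_12gb / bw_16gb:.0%}")  # ~70%
```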

NVIDIA, for its part, is no stranger to overloading a product line in this fashion, with similarly named parts delivering unequal performance and the difference denoted solely by their VRAM capacity. This was a practice that started with the GTX 1060 series (whose 3GB model also shipped with fewer CUDA cores than the 6GB model) and continued with the RTX 3080 series. However, the performance gap between the RTX 4080 parts was far larger than anything NVIDIA had previously done, bringing a good deal more attention to the problems that come from having such disparate parts share a common product name.

Drawing equal criticism has been NVIDIA's decision to sell an AD104 part as an RTX 4080 card to begin with. Traditionally in NVIDIA's product stack, the next card below the xx80 card is some form of xx70 card. And while video card names and GPU identifiers are essentially arbitrary, NVIDIA's early performance figures painted a picture of a card that would have performed a lot like what most people would expect from an RTX 4070 – delivering performance upwards of 20% (or more) behind the better RTX 4080, and on-par with the last-generation flagship, the RTX 3090 Ti. In other words, there has been a great deal of suspicion within the enthusiast community that NVIDIA was attempting to sell what otherwise would have been the RTX 4070 as an RTX 4080, while carrying a higher price to match.

In any case, those plans are now officially scuttled. Whatever NVIDIA has planned for their AD104-based RTX 40 series card is something only the company knows at this time. Meanwhile, come November 16th when the RTX 4080 series launches, the 16GB AD103-based cards will be the only offerings available, with prices starting at $1199.

Comments

  • Tomatotech - Sunday, October 16, 2022 - link

    I suspect the high prices are to push gamers towards paying for the GeForce Now cloud GPU service. It’s got several million subscribers and rising, and these people are paying on a monthly basis, which is financial gold-dust to any company.

    Another part of the strategy around the misnamed 4080: the current GFN top tier is "3080 tier". Keeping the misnamed 4080 would allow Nvidia to offer a future "4080" tier subscription while actually offering 4070-class service. The rename will hit their cost-of-service for the future 4080 tier, but not by a huge amount; it was a gambit that didn't work out.

    (GFN doesn't use actual 3080s etc.; it uses server GPUs which cost maybe $10-20K each and can be tweaked to be shared between more or fewer clients. The current "3080" tier apparently delivers about 3070+ levels of service, which still works well over fibre connections, or over ADSL for non-twitch games.)
  • catavalon21 - Sunday, October 16, 2022 - link

    "NVIDIA’s early performance figures painted a picture of a card ... delivering performance upwards of 20% (or more) behind the better RTX 4080, and on-par with the last-generation flagship, the RTX 3090 Ti."

    This is why having them independently reviewed is so important. I am skeptical of performance claims from the manufacturer who already tried to pass it off as a product it isn't.

    Will AnandTech provide a review of the 40-series cards?
  • nandnandnand - Sunday, October 16, 2022 - link

    Don't hold your breath waiting for any particular review here.
  • catavalon21 - Sunday, October 16, 2022 - link

    To my knowledge the entire 30-series product line was MIA here. If AT is out of the video card review business, which was among the best in the industry, then peace - please just say so, so we can stop waiting and look at other sites' reviews. Ryan's reviews have been some of the best anywhere, for several generations of cards, and I am hopeful now that the "no one can afford them anyway" reason is off the list of "why no reviews?".
  • PeachNCream - Wednesday, October 19, 2022 - link

    IIRC they started missing on reviews back in the 10x0-series launch with several omissions there and things have since been on a downward spiral with each subsequent release. Basically, AT is not the place to go for relevant hardware reviews. They've consistently reviewed power supplies, coolers, NUC-like stuff, and external storage devices recently and even those are sparse amid a long line of slightly flavored reposts of industry press releases.
  • Oxford Guy - Wednesday, October 19, 2022 - link

    The lack of a 960 review was the first red flag.
  • hansip87 - Sunday, October 16, 2022 - link

    Seeing that outrageous 4090 CUDA count, I don't believe the 4080 deserves the 1200 USD price. The CUDA core count is about 40% lower, but the price only goes down 25%. Looking at the 3070/3080, CUDA core count matters a lot for going into 4K, beside the VRAM capacity. The 4080 should have been priced 999 USD max.
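The arithmetic in this comment does check out against the spec table above; a quick computation using the table's CUDA counts and MSRPs:

```python
# Compare the RTX 4080 16GB's core-count deficit vs. its price deficit
# relative to the RTX 4090, using the spec table's figures.
cuda_4090, cuda_4080 = 16384, 9728
price_4090, price_4080 = 1599, 1199

core_deficit = 1 - cuda_4080 / cuda_4090    # ~0.41 (about 40% fewer cores)
price_deficit = 1 - price_4080 / price_4090  # ~0.25 (25% lower price)

print(f"CUDA cores: {core_deficit:.0%} fewer; price: {price_deficit:.0%} lower")
```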
  • Xajel - Monday, October 17, 2022 - link

    RTX 4080 16GB at $899 & RTX 4070Ti 12GB at $599 are still expensive but more plausible as a generational upgrade over the 3000 series.
  • Bruzzone - Monday, October 17, 2022 - link

    AD cost : price / margin on marginal revenue = marginal cost = price at q1 risk production volume;

    AD 102, 608 mm2, TSMC design production $121.60 x 2 package/test/margin = $243.20 to Nvidia x2 to the AIB = $486.40 at an adequate economic profit to the producer and channel x5.42 to x6 for MSRP = $2636 to $2918, and at dGPU peak production volume entering supply elastic pricing $1503 to $1643.

    Nvidia 4090 FE @ $1599 q1 risk production volume before ramp volume marginal cost decrease is underwater IF you can find one. They're meant to be sold direct for +15% to 20% margin, and while retail is participating [?] the production and sales distribution chain won't work for +15% to +20% over their cost.

    On B.O.M and if sold direct an AIB can make around $486 on 4090 but the channel will take minimally 1/2 of that margin potential.

    AD 103, 379 mm2, TSMC design production $75.80 x 2 package/test/margin = $151.60 to Nvidia x2 to the AIB = $303.20 at adequate economic profit to the producer and channel at x4.96 to x6 MSRP = $1503 to $1819 and at dGPU peak production volume entering supply elastic pricing $1260

    4080 16 GB at $1199 is priced appropriately on production economics and rough B.O.M assessment.

    Defunct 4080 12: AD 104, 295 mm2, TSMC design production $59 x 2 package/test/margin = $118 to Nvidia x2 to the AIB = $236 x adequate economic profit to the producer and channel at 4070 x4.31 to x3.59 MSRP = $762 at 1st quarter risk production volume before marginal cost decline, and at peak volume entering supply elastic price $635, so 4080 12 on full run marginal cost is priced approximately $136 to $263 too much.

    Summary: Nvidia as a central banker:

    Capital creation that comes back to Nvidia and AMD into future time.
    Nvidia and AMD announce dGPU MSRP capable of 15% to 20% margin sold direct by producers.
    Supply, production, distribution sales objective is a profit max margin and 15% to 20% isn’t it.
    Nvidia and AMD purposely supply ‘PR priced’ GPU cards to retail intercepted by brokers.
    Brokers and VARs set application specific competitive pricing and the real price ceiling.
    Real price ceiling carves in perfect price tier for Nvidia and AMD master distributors.
    Cards are sold for their actual value on segment-by-segment application ROI.
    Mass market MSRP is surreal, and the broker market is pricing appropriately.
    There is no such thing as a scalper, only application ROI perfect price competition.
    Enthusiast purchase cadence has moved to run end on supply elastic pricing.
    Unless a commercial / B2B price is justifiable.

  • biggerestbrain - Monday, October 17, 2022 - link

    Modify your article to not misuse the word "stack." Utilize the extant English lexicon appropriately. You don't have to call every collection of things a "stack" because you're obsessed with using cutesy buzzwords.
