07:00PM EDT - Kicking off momentarily will be NVIDIA's 2018 SIGGRAPH keynote

07:00PM EDT - This is a bit of an unusual event for NVIDIA, as while they present at SIGGRAPH every year, it's normally a lower-key affair

07:01PM EDT - Instead, this year the man himself, NVIDIA CEO Jensen "the more you buy, the more you save" Huang is giving a full keynote address at the show

07:03PM EDT - This keynote is scheduled to last for roughly 2 hours, which means we're expecting a fairly packed presentation

07:04PM EDT - Meanwhile if you'd like to watch the keynote yourself, NVIDIA is streaming it over on their Ustream channel: http://www.ustream.tv/nvidia

07:04PM EDT - Which is also how we're covering this, as AnandTech is not present at SIGGRAPH this year

07:04PM EDT - And of course, the stream, already struggling, has just died

07:05PM EDT - We're back. And here we go

07:06PM EDT - Starting with a traditional video roll

07:08PM EDT - This is basically a historical recap video, showing major graphics moments since the beginning of computer graphics

07:10PM EDT - (The stream is once again acting up)

07:10PM EDT - This is the 30th anniversary of Pixar's RenderMan software

07:13PM EDT - Jensen is talking about the various Pixar rendering innovations over the years

07:14PM EDT - "4000 times more computation later"

07:15PM EDT - Artists will use as much computing resources as they are provided. The real limitation is how long they're willing to wait

07:17PM EDT - Now recapping NVIDIA's own GPU history. Riva TNT, GeForce 256, GeForce 3, etc

07:17PM EDT - Jensen is on another one of his Moore's Law spiels

07:18PM EDT - GPU performance has been growing faster than Moore's Law. And Jensen wants to keep it that way

07:19PM EDT - Now talking about photorealistic real-time rendering, physics simulations, and other tasks that need cutting-edge GPU performance

07:21PM EDT - Recounting all of the various graphical hacks used over the eras to reduce graphics rendering to something that contemporary processors can handle

07:21PM EDT - This leads into Jensen's other spiel as of late: ray tracing, and how it's the holy grail of computer graphics

07:23PM EDT - Ray tracing is incredibly computationally expensive. Pretty. But expensive

07:24PM EDT - Now recapping NVIDIA's RTX ray tracing technology, which was first introduced at GDC 2018

07:25PM EDT - Rolling the Star Wars RTX demo

07:26PM EDT - 5 rays per pixel in that demo

07:27PM EDT - Surprise #1: What NV just demoed was running on just 1 card

07:27PM EDT - World's first ray tracing GPU

07:28PM EDT - Jensen is harassing the cameraman with the reflections off of the card

07:29PM EDT - The text on the card reads "Quadro RTX"

07:29PM EDT - "Up to 10 GigaRays per second"

07:30PM EDT - "New computer architecture"

07:30PM EDT - "Up to 16 TFLOPs"

07:31PM EDT - (Jensen is still messing with the cameraman. He seems to be genuinely enjoying it)

07:31PM EDT - Comes with NVLink

07:31PM EDT - 100GB/sec

07:32PM EDT - "500 trillion tensor ops per second"

07:32PM EDT - Brand new processor called the "RT Core"

07:33PM EDT - "Greatest leap since 2006", which was the launch of the G80 GPU (8800 GTX)

07:34PM EDT - "Motion adaptive shading"

07:34PM EDT - Architecture is confirmed. Say hello to Turing

07:36PM EDT - Jensen is now talking a bit more about the RT core

07:36PM EDT - It accelerates ray-triangle intersection and processing the Bounding Volume Hierarchy (BVH)
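(For the curious, here's a minimal Python sketch of the two tests the RT core pulls into fixed-function hardware: the textbook Moller-Trumbore ray-triangle intersection, and the slab test a ray runs against a BVH node's bounding box. This is the standard textbook algorithm, not NVIDIA's actual implementation.)

```python
# A minimal sketch of the two tests the RT core runs in fixed-function
# hardware. This is the textbook Moller-Trumbore intersection algorithm
# and the standard AABB slab test, not NVIDIA's implementation.
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-8):
    """Return the distance t along the ray to the triangle, or None on a miss."""
    edge1, edge2 = v1 - v0, v2 - v0
    pvec = np.cross(direction, edge2)
    det = np.dot(edge1, pvec)
    if abs(det) < eps:                     # ray is parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = origin - v0
    u = np.dot(tvec, pvec) * inv_det       # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = np.cross(tvec, edge1)
    v = np.dot(direction, qvec) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(edge2, qvec) * inv_det
    return t if t > eps else None

def ray_aabb_hit(origin, direction, box_min, box_max):
    """Slab test: does the ray hit this axis-aligned box? (The BVH traversal step.)
    Assumes no zero components in direction."""
    inv_d = 1.0 / direction
    t0, t1 = (box_min - origin) * inv_d, (box_max - origin) * inv_d
    t_near = np.maximum(np.minimum(t0, t1), 0.0).max()
    t_far = np.maximum(t0, t1).min()
    return t_near <= t_far

# Quick check: a ray pointing down -z, hitting a unit triangle in the z=0 plane
o, d = np.array([0.25, 0.25, 1.0]), np.array([0.0, 0.0, -1.0])
tri = [np.array(p, dtype=float) for p in [(0, 0, 0), (1, 0, 0), (0, 1, 0)]]
print(ray_triangle_intersect(o, d, *tri))  # 1.0
```

(A single ray typically needs dozens of these box and triangle tests to find its hit point, which is why offloading them from the shader cores matters so much at 10 GigaRays per second.)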

07:37PM EDT - Also talking about the various precision modes, from FP32 down to INT4

07:37PM EDT - Supports 8K displays

07:37PM EDT - 8K video encoding on the NVENC block as well

07:38PM EDT - 754mm² GPU, 18.6B transistors

07:38PM EDT - 14Gbps memory clock
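(The keynote hasn't given a bus width, but assuming the 384-bit GDDR6 bus you'd expect on a die this large, and that's my assumption, not a stated spec, the 14Gbps figure implies the following bandwidth.)

```python
# Memory bandwidth implied by the quoted 14 Gbps GDDR6 data rate.
# The 384-bit bus width is an assumption, not a stated spec.
data_rate_gbps = 14
bus_width_bits = 384
print(data_rate_gbps * bus_width_bits / 8, "GB/sec")  # 672.0 GB/sec
```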

07:39PM EDT - This is a big chip. Not quite GV100 Volta big, but still very big by any kind of GPU standard

07:40PM EDT - Now diving into hybrid rendering, and the value of the tensor core

07:40PM EDT - NVIDIA will also be open sourcing their Material Definition Language (MDL), which they've been shipping for the past couple of years

07:41PM EDT - NVIDIA will also be supporting Pixar's Universal Scene Description (USD) language

07:41PM EDT - (Was that someone mooing?)

07:43PM EDT - Now moving on to demos, starting with comparing different rendering types (rasterization, ray tracing, etc)

07:46PM EDT - Given the extreme die size, I'm going to be a bit surprised if we see this specific Turing GPU in consumer products

07:46PM EDT - Though anything is possible

07:50PM EDT - Compute is being used here to denoise the ray tracing output
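(A toy illustration of why denoising is the enabler here: with only a handful of rays per pixel, the raw ray traced output is grainy, and even a crude filter recovers a lot of image. The box filter below is just a stand-in; NVIDIA's actual denoiser runs on the GPU and is far more sophisticated.)

```python
# Toy demo of denoising a low-sample-count render.
# A plain box filter stands in for NVIDIA's (far more sophisticated) denoiser.
import numpy as np

rng = np.random.default_rng(0)

# Fake "ground truth" radiance, plus a noisy few-rays-per-pixel estimate of it
truth = np.tile(np.linspace(0.2, 0.8, 64), (64, 1))
noisy = truth + rng.normal(scale=0.2, size=truth.shape)

def box_filter(img, radius=2):
    """Average each pixel with its (2*radius+1)^2 neighborhood."""
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

denoised = box_filter(noisy)
print("noisy RMSE:   ", np.sqrt(np.mean((noisy - truth) ** 2)))     # ~0.20
print("denoised RMSE:", np.sqrt(np.mean((denoised - truth) ** 2)))  # much lower
```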

07:52PM EDT - Now running another video clip. Everything here is being done in real time

07:55PM EDT - NVIDIA wants a bigger piece of the pie in the visual effects industry

07:58PM EDT - "Large render farms". Now imagine if they were full of Quadros instead of Xeons...

07:58PM EDT - Rolling another video, this time from Porsche

08:00PM EDT - It was also rendered in real-time

08:02PM EDT - (This is a surprisingly laid-back presentation for a Jensen Huang keynote. He's largely content gawking at graphics)

08:06PM EDT - A 6x speed-up in graphics over Pascal

08:07PM EDT - Using AI to get away with lower resolution rendering, then essentially upsampling/anti-aliasing to recover quality

08:07PM EDT - Now discussing Deep Learning Anti-Aliasing

08:09PM EDT - As a result of the combination of ray tracing, faster shading, and DLAA, NVIDIA gets the 6x speed improvement
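(The shading-savings half of that math is simple; the resolutions below are my own illustration, not NVIDIA's figures. The hard part, and the reason for a trained network rather than a naive upsample, is reconstructing the detail that the lower-resolution render never shaded.)

```python
# Shading cost scales with pixel count, so rendering at a lower internal
# resolution and upsampling saves shader work.
# Both resolutions here are illustrative assumptions.
native = 3840 * 2160    # pixels shaded at native 4K
internal = 2560 * 1440  # pixels shaded at an assumed 1440p internal resolution
print(f"shading work: {internal / native:.0%} of native")  # 44%
```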

08:11PM EDT - Now demonstrating real-time scene editing with ray tracing, using RTX

08:13PM EDT - Using RTX for architectural engineering and design

08:15PM EDT - Now demonstrating film-quality rendering. It's moving at a couple of frames per second, which is incredibly fast for a single machine

08:16PM EDT - Now announcing the NVIDIA RTX Server

08:17PM EDT - 8 Quadro RTX 8000s in a single box

08:17PM EDT - General availability in Q1'2019

08:18PM EDT - $125,000

08:18PM EDT - 3250W power consumption

08:20PM EDT - NVIDIA has been working hard behind the scenes to get software and tool developers to adopt RTX, and it looks like they've had a fair bit of success

08:21PM EDT - Quadro RTX 5000: $2300

08:21PM EDT - RTX 6000: $6300. 24GB VRAM. 10 GigaRays/second

08:22PM EDT - RTX 8000: $10000. 48GB VRAM. 10 GigaRays/second

08:22PM EDT - "It's a steal"

08:22PM EDT - And there it is "the more you buy, the more you save"

08:24PM EDT - Now recapping the keynote thus far

08:24PM EDT - Turing and RTX. The most important NVIDIA GPU since G80/Tesla

08:24PM EDT - One more surprise

08:25PM EDT - NVIDIA Reveals Next-Gen Turing GPU Architecture: NVIDIA Doubles-Down on Ray Tracing, GDDR6, & More - https://www.anandtech.com/show/13214/nvidia-reveals-next-gen-turing-gpu-architecture

08:26PM EDT - (I'm getting the distinct impression that something was cut from this keynote at the last moment)

08:26PM EDT - Jensen is now thanking everyone, including the audience

08:29PM EDT - And that's a wrap. Be sure to check out our complete Turing article

08:29PM EDT - https://www.anandtech.com/show/13215/nvidia-siggraph-2018-keynote-live-blog-4pm-pacific


  • Cygni - Monday, August 13, 2018 - link

    My stream isn't working at all, guess I'll be reading your text play-by-play. :)

    (Assuming your vid doesn't die too!)
  • Hxx - Monday, August 13, 2018 - link

    Let's see RTX and GTX cards :)
  • ikjadoon - Monday, August 13, 2018 - link

    RT = ray tracing! :D Maybe?
  • ikjadoon - Monday, August 13, 2018 - link

    Let's all pretend I was just testing everyone.

    * because this is definitely it, lmao: https://developer.nvidia.com/rtx
  • olafgarten - Monday, August 13, 2018 - link

    10 Gigarays per second = 10,000,000,000 Rays per second = 166,666,666 Rays per Frame @ 60 FPS = 33 MPix of the Star Wars Demo (at its 5 rays per pixel) in real time?
  • mode_13h - Tuesday, August 14, 2018 - link

    Not all rays are primary. In fact, most rays are not. Features like area lights, depth of field, motion blur, reflections, and global illumination chew up quite a lot of rays.
  • mode_13h - Tuesday, August 14, 2018 - link

    All of that is to say that you need significantly more than 1 ray per rendered pixel.
  • Gothmoth - Monday, August 13, 2018 - link

    amazing today.... yeah and no real content for RT until 2020..... nice try.
  • Gothmoth - Monday, August 13, 2018 - link

    but NVIDIA can put one or two stupid RT features in an AAA title and slow down TITAN and GTX 1080 cards so much that people will want 2080 RTX cards.... just as they have done for years with GameWorks, HairWorks, etc.... even when you barely notice these features in the games.
  • ikjadoon - Monday, August 13, 2018 - link

    But, it won't matter for their sales: the RTX 2080 (or whatever) will be better at typical raster-only games, too. NVIDIA can't give up on rasterization.

    Crysis didn't push DX10 adoption through its visuals; people bought first-gen DX10 cards because those cards were also better at DX9.

    But.... *some* card has to be the first card to explicitly support real-time ray-tracing, so I don't really blame them.
