On September 20, Nvidia's vice president of applied deep learning, Bryan Catanzaro, took to Twitter with a bold claim: In certain GPU-heavy games, like the classic first-person puzzle game Portal, seven out of eight pixels on the screen are generated by a new machine-learning algorithm. That's enough, he said, to accelerate rendering by up to five times.
This impressive feat is currently limited to a few dozen 3D games, but it's a hint at the gains that neural rendering will soon deliver. The technique will unlock new potential in everyday consumer electronics.
Catanzaro's claim is made possible by DLSS 3, the latest version of Nvidia's Deep Learning Super Sampling (DLSS). It combines AI-powered image upscaling with a feature exclusive to DLSS 3: optical multiframe generation. The technique analyzes sequential frames together with an optical flow field that predicts motion between them, then slots unique, AI-generated frames between traditionally rendered frames.
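For intuition only, here is a minimal Python sketch of optical-flow-guided frame interpolation using OpenCV's classical Farneback flow. DLSS 3 itself relies on dedicated optical-flow hardware and learned networks, so everything below (the synthetic frames, the half-step backward warp) is an illustrative assumption rather than Nvidia's implementation.

```python
# Illustrative sketch: generate an "in-between" frame from two rendered frames
# using dense optical flow. Not DLSS 3; just the underlying idea.
import cv2
import numpy as np

def make_frame(offset: int) -> np.ndarray:
    """Synthetic 240x320 frame with a bright square at a horizontal offset."""
    frame = np.zeros((240, 320, 3), dtype=np.uint8)
    cv2.rectangle(frame, (40 + offset, 100), (90 + offset, 150), (255, 255, 255), -1)
    return frame

frame0, frame1 = make_frame(0), make_frame(20)  # the square moves 20 px between frames

# Dense optical flow from frame0 to frame1 (per-pixel motion vectors).
gray0 = cv2.cvtColor(frame0, cv2.COLOR_BGR2GRAY)
gray1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
# Args: prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
flow = cv2.calcOpticalFlowFarneback(gray0, gray1, None, 0.5, 3, 15, 3, 5, 1.2, 0)

# Crude midpoint synthesis: backward-warp frame0 half a step along the flow.
# The flow is defined on frame0's grid, so this is only an approximation for
# small motions; real interpolators also handle occlusions and disocclusions.
h, w = flow.shape[:2]
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
mid_frame = cv2.remap(frame0, map_x, map_y, cv2.INTER_LINEAR)

cv2.imwrite("mid_frame.png", mid_frame)  # interpolated frame between frame0 and frame1
```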
"When you're playing with DLSS super resolution on performance mode in 4K, seven out of every eight pixels are being run through a neural network," says Catanzaro. "I think that's one of the reasons why you see such a great speedup. In that mode, in games that are GPU-heavy like Portal RTX […] seven out of every eight pixels are being generated by AI, and as a result we're 530% faster."
From IEEE Spectrum