Nvidia's Yassification filter for DLSS 5 is Hilarious, Nvidia CEO Responds

Is Nvidia's DLSS 5 doing too much?


As I'm sure we all know by now, DLSS stands for Deep Learning Super Sampling. It's a way for Nvidia GPUs to leverage their own built-in AI hardware to improve a game's visuals and performance. It works by rendering a game at a much lower resolution than your screen supports, then using AI to upscale the image to your display's native resolution.

For most of us, DLSS is for one thing: upscaling. DLSS 5, however, is taking things in a wildly different direction.


Facebook slop-looking-ass AI

During Nvidia's GTC showcase on March 15th, Jensen Huang unveiled DLSS 5. Unlike its predecessors, this isn't an upscaler; it's a dynamic rendering engine that runs inside the game itself. It's some of the most revolutionary technology to ship on a GPU since ray tracing. The problem is, it looks like MidJourney had a baby with the LooksMaxing community on X.

You know those shitty AI images you see online, where people de-uglify female characters in video games? Well, that's what was showcased. Nvidia displayed some truly game-changing technology, and that appears to be the entire problem.

The internet was alight with backlash, dubbing it the "Yassification" filter, among other things. People immediately started comparing it to Instagram filters and AI slop.

One major area of concern is whether this filter interferes with the design language of a game. The filter seems to take creative liberties with the characters in many of the games on display, including Hogwarts Legacy and Resident Evil. Grace Ashcroft is a particular sore spot, as she is barely recognizable compared to her original design.

Some of the drastic changes the DLSS 5 filter makes to these characters don't look half bad, but it all falls apart the moment a character's face starts moving. What may look convincing in a still image immediately looks like a rubber mask crudely taped onto a character's face as soon as they become expressive.

Nvidia CEO responds?


Jensen Huang responded to questions from Paul Alcorn, Editor-in-Chief at Tom's Hardware. When asked about concerns that DLSS 5 is making games look worse, or that it seems to be forcing art to fit Nvidia's vision of the future, Huang had this to say...

 Well, first of all, they're completely wrong... The reason for that is because, as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI.

Huang went on to explain that developers can fine-tune the feature to fit their own vision, implying that DLSS 5 is more of a rendering tool for developers than a blanket filter for gamers.

Could this all have just been a misunderstanding? Was the GTC showcase just a bad example of what the new technology is capable of? 
