sounds like another ultra wealthy guy getting old and losing it.
Man whose fortune depends heavily on AI research and adoption continues pushing the idea that AI is necessary.
If you are inferring 32 pixels from 1 pixel, that is because the model has been trained on billions of computed pixels. You cannot infer data in a vacuum. The statement is bullshit.
Yeah but it does it much faster and more efficiently (according to him).
oh boy, back to 160p but with ai upscaling
Don’t believe anything this goon says. Don’t believe the claims of those who stand to profit from those same claims.
We certainly can. NVIDIA’s CEO realizes that the next buzzword that sells their cards (8K, 240hz, RTX++) isn’t going to run at good framerates without it.
That’s not to say AI doesn’t have its place in graphics, but it’s definitely a crutch for extremely high-end rendering performance (see RT) and a nice performance and quality gain for weaker (hopefully cheaper) graphics cards which support it.
As a gamer and developer I sort of fear AI taking the charm away from rendered games as DLSS/FSR embeds itself in games. I don’t want to see a race to the bottom in terms of internal, pre-DLSS resolution.
I mean we could do things like Arkham Knight, Flight Simulator, The Last of Us 2 and so on. Do we really need to do everything realtime or could we continue baking GI?
AI models are already kind of baked. Just not into data files, but into a bigass mathematical model.
“im a fucking idiot and i want to put “Ai” on products to appeal to “new markets” because im greedy”
“we can’t draw pixels anymore without making graphics cards stupidly expensive because of Reasons ™”
FIFY
Maybe I don’t know enough about computer graphics, but in what world would you have/want to display a group of 33 pixels (one computed, 32 inferred)?!
Are we inferring 5 to the left and right and the row above and below in weird 3 x 11 strips?
I would assume they’re talking about a much bigger scale that just happens to divide down to a ratio of 1 to 32.
Like rendering in 480p (307k pixels) and then generating 4K (8.3M pixels), which works out to about 1:27, close enough to what he’s saying. AI upscalers like DLSS and FSR are doing exactly that, just at less extreme scale factors.
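For anyone who wants to check the arithmetic, here’s a quick sketch (assuming “480p” means 640×480, which is where the 307k figure comes from):

```python
# Pixel math behind the "render low, infer high" claim.
low_w, low_h = 640, 480        # internal render resolution (assumed 480p)
high_w, high_h = 3840, 2160    # 4K output resolution

rendered = low_w * low_h       # pixels actually computed: 307,200
displayed = high_w * high_h    # pixels on screen: 8,294,400
inferred = displayed - rendered

print(f"rendered:  {rendered:,}")
print(f"displayed: {displayed:,}")
print(f"ratio computed:displayed = 1:{displayed // rendered}")  # 1:27
```

Real DLSS/FSR quality modes use milder factors (e.g. 1080p→4K is 1:4), so the 1:32 figure only holds at the most aggressive end.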
Why not just make the first pixel bigger?
Man really wants that AI hype money.
Perhaps we should be more concerned with maintaining current hardware and keeping it relevant than with constantly producing more powerful hardware just for the sake of it. We’ve hit a point of diminishing returns in terms of the value we’re actually getting from each new generation. Even the PS4 and Xbox One were able to produce gorgeous graphics.
On the flipside, you give real intelligence 32 pixels and it infers photorealistic images:
(The textures are 32x32 pixels. Yes, that’s technically 1024 pixels, but shhh. 🙃)