• fckreddit@lemmy.ml · 4 days ago

    If you are inferring 32 pixels from 1 pixel, that is because the model has been trained on billions of computed pixels. You cannot infer data in a vacuum. The statement is bullshit.

  • Electric_Druid@lemmy.world · 4 days ago

    Don’t believe anything this goon says. Don’t believe the claims of those who stand to profit from those same claims.

  • dormedas@lemmy.dormedas.com · 4 days ago

    We certainly can. NVIDIA’s CEO realizes that the next buzzword that sells their cards (8K, 240hz, RTX++) isn’t going to run at good framerates without it.

    That’s not to say AI doesn’t have its place in graphics, but it’s definitely a crutch for extremely high-end rendering performance (see RT) and a nice performance and quality gain for weaker (hopefully cheaper) graphics cards which support it.

    As a gamer and developer I sort of fear AI taking the charm away from rendered games as DLSS/FSR embeds itself in games. I don’t want to see a race to the bottom in terms of internal, pre-DLSS resolution.

  • lustyargonian@lemm.ee · 3 days ago

    I mean, we could do things like Arkham Knight, Flight Simulator, The Last of Us 2, and so on. Do we really need to do everything in real time, or could we continue baking GI?

    • fckreddit@lemmy.ml · 3 days ago

      AI models are already kind of baked. Just not into data files, but into a bigass mathematical model.
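
      A toy sketch of that parallel (purely illustrative, nothing here comes from the thread): both a baked lightmap and a trained model are precomputed offline, and at runtime you only read the stored data.

      ```python
      # Illustrative only: both cases "bake" data ahead of time and just read it at runtime.

      # Baked GI: a precomputed lightmap, indexed by texel coordinates.
      lightmap = {(0, 0): 0.8, (0, 1): 0.6, (1, 0): 0.4, (1, 1): 0.2}

      def baked_gi(x, y):
          return lightmap[(x, y)]        # runtime work = a lookup

      # "Baked" AI: weights produced by training, applied by a fixed formula.
      weights = (0.37, -0.12, 0.05)      # hypothetical numbers, fixed after training

      def tiny_model(x, y):
          w0, w1, w2 = weights
          return w0 * x + w1 * y + w2    # runtime work = arithmetic on baked numbers
      ```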

  • YourPrivatHater@ani.social · 4 days ago

    “im a fucking idiot and i want to put “Ai” on products to appeal to “new markets” because im greedy”

  • PenisDuckCuck9001@lemmynsfw.com · 4 days ago

    “we can’t draw pixels anymore without making graphics cards stupidly expensive because of Reasons ™”

    Fify

  • Phil_in_here@lemmy.ca · 4 days ago

    Maybe I don’t know enough about computer graphics, but in what world would you have/want to display a group of 33 pixels (one computed, 32 inferred)?!

    Are we inferring 5 to the left and right and the row above and below in weird 3 x 11 strips?

    • Grumpy@sh.itjust.works · 4 days ago

      I would assume they’re talking about it at a bigger scope, and it just happens to divide down to a ratio of about 1 to 32.

      Like rendering at 480p (307k pixels) and then generating 4K (8.3M pixels), which works out to about 1:27, sorta close enough to what he’s saying. AI upscalers like DLSS and FSR are doing just that, at less extreme ratios.
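
      A quick back-of-the-envelope check of that math (a minimal sketch; 640x480 for “480p” and 3840x2160 for 4K are my assumptions):

      ```python
      # Sanity check of the rendered-to-displayed pixel ratio.
      rendered = 640 * 480       # "480p": 307,200 computed pixels (assumed resolution)
      displayed = 3840 * 2160    # 4K: 8,294,400 displayed pixels
      print(f"1 computed pixel per {displayed / rendered:.0f} displayed pixels")  # -> 27
      ```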

  • Gabadabs@lemmy.blahaj.zone · 4 days ago

    Perhaps we should be more concerned with maintaining current hardware and keeping it relevant than with constantly producing more powerful hardware just for the sake of it. We’ve hit a point of diminishing returns in the value each new generation actually gives us. Even the PS4 and Xbox One were able to produce gorgeous graphics.

  • Ephera@lemmy.ml · 4 days ago

    On the flipside, you give real intelligence 32 pixels and it infers photorealistic images:

    [Screenshot from the game Dungeon Crawl Stone Soup]

    (The textures are 32x32 pixels. Yes, that’s technically 1024 pixels, but shhh. 🙃)