The model, called GameNGen, was made by Dani Valevski at Google Research and his colleagues, who declined to speak to New Scientist. According to their paper on the research, the AI-generated game can be played for up to 20 seconds while retaining all the features of the original, such as scores, ammunition levels and map layouts. Players can attack enemies, open doors and interact with the environment as usual.

After this period, the model begins to run out of memory and the illusion falls apart.
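The "memory" described above can be pictured as a fixed-length window of recent frames and actions: once play runs longer than the window, the oldest context simply falls away. A minimal sketch, with an illustrative window size and a stand-in model call (not the paper's actual values):

```python
from collections import deque

# Illustrative: the model only "remembers" a fixed window of recent
# frames and actions; anything older is silently dropped.
WINDOW = 64  # hypothetical context length, not from the paper

history = deque(maxlen=WINDOW)  # oldest entries fall off automatically

def step(model, action):
    """Generate the next frame conditioned on the recent history only."""
    frame = model(list(history), action)  # hypothetical predictor call
    history.append((frame, action))
    return frame
```

Once `history` is full, every new frame evicts the oldest one, which is one way to picture the illusion degrading past the window.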

  • dustyData@lemmy.world · 4 months ago

    This is just a pile of garbage. Jim Sterling’s breakdown is the most complete argument. But this is just a plain ol’ bag of shit.

  • the_artic_one@programming.dev · 4 months ago

    Thinking quickly, Generative AI constructs a playable version of Doom, using only some string, a squirrel, and a playable version of Doom.

    • Echo Dot@feddit.uk · 4 months ago

      It’s a proof of concept demonstration not a final product. You might as well say the Wright brothers didn’t have anything other than their party trick.

      There are so many practical applications for being able to do this beyond just video games. In fact, video games are probably the least useful application for this technology.

        • JDPoZ@lemmy.world · 4 months ago

          Because “AI” isn’t actually “artificial intelligence.” It’s the marketing term that seems to have been adopted by every corporation to describe “LLMs…” which are more like extra-fancy, power-guzzling parrots.

          It’s why the best use cases for them are mimicking things brainlessly, like voice cloning for celebrity impressions… but that doesn’t mean they can act or comprehend emotion, or know how many fingers a hand should have. It’s also why they constantly hallucinate contextless bullshit: just like a parrot doesn’t actually know the meaning of what it’s saying when it goes “POLLY WANT A CRACKER,” it just knows the tall thing will give it a treat if it makes this specific squawk with its beak.

      • fruitycoder@sh.itjust.works · 4 months ago

        Honestly I think your self-driving example is something this could be really cool for. If the generation can exceed real time (i.e. 20 secs of future image prediction can happen in under 20 secs), then you can preemptively react with the self-driving model and cache the results.

        If the compute costs can be managed, maybe even run multiple models against each other to develop an array of likely branch predictions (you know, “what if I turned left?”).

        It’s even cooler that player input helps predict the next image.
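The branch-prediction idea above could be sketched like this. Everything here is hypothetical (the `speculate`/`advance` helpers and the `model` call are made up for illustration, not from the paper): if generation beats real time, precompute a frame for each plausible next input and serve whichever branch actually happens.

```python
# Hypothetical sketch of speculative frame caching: precompute one
# predicted frame per candidate input, then serve the branch that occurs.

def speculate(model, state, candidate_actions):
    """Precompute a predicted next frame for each candidate action."""
    return {action: model(state, action) for action in candidate_actions}

def advance(cache, actual_action):
    """Cache hit: the frame was already predicted. Miss: returns None."""
    return cache.get(actual_action)
```

On a cache miss (the real input was never speculated), the system would fall back to generating the frame on the spot.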

  • Drusenija@lemmy.world · 4 months ago

    Regardless of the technology, isn’t this essentially creating a facsimile of a game that already exists? So the tech isn’t really about creating a new game, it’s about replicating something that already exists in a fairly inefficient manner. That doesn’t really help you to create something new, like I’m not going to be able to come up with an idea for a new game, throw it at this AI, and get something playable out of it.

    That and the fact it “can be played for up to 20 seconds” before “the model begins to run out of memory” seems like, I don’t know, a fairly major roadblock?

    • locuester@lemmy.zip · 4 months ago

      Perhaps you could be missing the trajectory of continuous improvement. How long until The Matrix?

      • Echo Dot@feddit.uk · 4 months ago

        It’s an exponential increase as well, and humans are very bad at judging exponential increases. They look at something like this and see no promise in it, because they can’t see that four or five iterations down the line (and in the world of AI, that could easily be three months) it will be hundreds of times better.

    • bob_lemon@feddit.org · 4 months ago

      Yes, this does nothing for game dev. But I don’t think it was supposed to.

      The fact that this is a genAI model generating a reasonable, context-aware image a whopping 20 times a second is nonetheless pretty impressive.

      • Drusenija@lemmy.world · 4 months ago

        That’s a fair point actually, I’m looking at it through a product lens, not a research one.

    • Echo Dot@feddit.uk · 4 months ago

      So you think a project should be killed immediately upon inception because it’s not immediately perfect? That is a really really weird attitude.

      • Drusenija@lemmy.world · 4 months ago

        I’m more taking issue with this quote from the article:

        “Researchers behind the project say similar AI models could be used to create games from scratch in the future, just as they create text and images today.”

        This doesn’t strike me as something that can create a game from scratch; it’s something that can take an existing game and replicate it, without access to the underlying source code, using an immense amount of processing power.

        Since it seems they’re using generative AI based technology underneath it, they’re effectively building a Doom model. You might be able to spin a Doom clone off from that but I don’t see it as something you could practically throw another game type at.

        That being said as I said in a different reply, I was viewing it through the lens of something more product based rather than that of a research project. As a field of research, it’s an interesting topic. But I’m not sure how you connect it to “create games from scratch” if you don’t already have an existing game available to train the model on.

        • Echo Dot@feddit.uk · 4 months ago

          Why do you think it needs an existing game to train the model on? They used Doom precisely because it already exists.

          The entire point of the research paper was to see if humans could tell the difference between the generated content and the real game. That way, they have a measurable metric of how viable this technology is, even if only in theory. That means they had to make something based on a real game.

          Obviously the technology isn’t commercially viable yet. But the fact that it looks even remotely like Doom shows that there is promise to the technology.

  • harsh3466@lemmy.ml · 4 months ago

    Correct me if I’m wrong, but doesn’t there have to be a code layer somewhere in there?

    It’s like all those “no code” platforms that just obscure the actual coding behind a GUI and blocks/elements/whatever.

    • hasnt_seen_goonies@lemmy.world · 4 months ago

      In this case, no. This is just predicting what the next frame should be from the previous one. Like how the Sora videos work, but with input.
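That loop can be sketched in a few lines (illustrative only; `model` stands in for the trained frame predictor): there is no game logic anywhere, just next-frame prediction conditioned on the last frame and the player's input.

```python
# Illustrative autoregressive rollout: every frame is predicted from the
# previous frame plus the player's input; no game engine is involved.

def run(model, first_frame, inputs):
    """Roll the predictor forward, feeding each output back in as input."""
    frame, frames = first_frame, []
    for player_input in inputs:
        frame = model(frame, player_input)  # hypothetical frame predictor
        frames.append(frame)
    return frames
```

The feedback of each generated frame into the next prediction is also why errors compound over time.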

        • hasnt_seen_goonies@lemmy.world · 4 months ago

          Or the code is the operating system that the application is running on, or the code is the firmware that is operating the GPU that is crunching the numbers to make the neural net, or the code is the friends we made along the way.

  • YourNetworkIsHaunted@awful.systems · 4 months ago

    Note that the image here isn’t from the AI project, it’s from actual Doom. Their own screenshots have weird glitches including a hit splat that looks like a butt in the image I’ve seen closest to this one.

    And when they say they’ve “run the game” they do not mean that there was a playable version that was publicly compared to the original. Rather they released short video clips of alleged gameplay and had their evaluators try to identify if they were from the AI recreation or from actual Doom.

    Even by the abysmal standards of generative AI projects this is a hell of a grift.

    • Telorand@reddthat.com · 4 months ago

      Even by the abysmal standards of generative AI projects this is a hell of a grift.

      But if you invest now, you can make a game-generating AI a reality! /s