• TimeSquirrel@kbin.melroy.org · 4 months ago

    This has been obvious for a while to those of us using GitHub Copilot for programming. Start a function, and then just keep hitting tab to let it autotype based on what it already wrote. It quickly devolves into strange and random bullshit. You gotta babysit it.

    • 0laura@lemmy.world · 4 months ago

      Very unlikely to stem from model collapse. Why would they use a worse model? It’s probably because they neutered it or gave it fewer resources.

      • TimeSquirrel@kbin.melroy.org · 4 months ago

        It learns from your own code as you type so it can offer more relevant suggestions, unlike the web-based LLMs. That means you can make it feed back on itself.

    • NekuSoul@lemmy.nekusoul.de · 4 months ago

      Same thing with Stable Diffusion if you’ve ever used a generated image as an input and repeated the same prompt. You basically get a deep-fried copy.
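The feedback loop these comments describe (a model repeatedly consuming its own output) can be sketched with a toy simulation. This is purely illustrative: `collapse_demo`, the Gaussian stand-in for a "model", and every parameter here are hypothetical, not how Copilot or Stable Diffusion actually work.

```python
import random
import statistics

def collapse_demo(generations=200, sample_size=20, seed=1):
    """Toy model-collapse loop: fit a Gaussian to data, sample from the
    fit, refit on those samples, and repeat. Each generation trains only
    on the previous generation's output."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution
    spreads = [sigma]
    for _ in range(generations):
        # "Generate" a finite dataset from the current model...
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        # ...then "retrain" by refitting mean and spread to it.
        mu = statistics.fmean(samples)
        sigma = statistics.stdev(samples)
        spreads.append(sigma)
    return spreads

spreads = collapse_demo()
# With finite samples, rare values in the tails stop being resampled,
# so the fitted spread tends to drift and diversity decays over generations.
print(f"gen 0 sigma={spreads[0]:.3f}, gen {len(spreads) - 1} sigma={spreads[-1]:.3f}")
```

The same dynamic applies whether the "output" is autocompleted code or an img2img result: each pass refits to an approximation of the previous pass, and errors compound.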