• Blackmist@feddit.uk · 3 months ago

    Yeah, had that on my very first attempt at using it.

    It used a component that didn’t exist. I called it out and it went “you are correct, that was removed in <older version>. Try this instead.” and created an entirely new set of bogus components and functions. This cycle continued until I gave up. It knows what code looks like, and what the excuses look like, and that’s about it. There’s zero understanding.

    It’s probably great if you’re doing some common homework (a JavaScript Fibonacci sequence or something) or a menial task, but for anything that might reach the edges of its “knowledge”, it has no idea where those edges lie, so it just bullshits.
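To be fair, that kind of boilerplate task really is well inside the training data. A minimal sketch (in Python rather than JavaScript) of the sort of thing any model can regurgitate reliably:

```python
def fib(n: int) -> int:
    """Return the n-th Fibonacci number (fib(0) == 0, fib(1) == 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib(i) for i in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```

The point stands: this works because there are thousands of copies of it online, not because the model understands recurrences.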

    • planish@sh.itjust.works · 3 months ago

      No?

      An anthropomorphic model of the software, wherein you can articulate things like “the software is making up packages”, or “the software mistakenly thinks these packages ought to exist”, is the right level of abstraction for usefully reasoning about software like this. Using that model, you can make predictions about what will happen when you run the software, and you can take actions that will lead to the outcomes you want occurring more often when you run the software.

      If you try to explain what is going on without these concepts, you’re left saying something like “the wrong token is being sampled because the probability of the right one is too low because of several thousand neural network weights being slightly off of where they would have to be to make the right one come out consistently”. Which is true, but not useful.
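      A toy illustration of that mechanistic framing (made-up token names and scores, nothing like a real model) showing how “weights slightly off” flips which token comes out:

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three candidate next tokens: one real package
# name and two plausible-looking fakes.
tokens = ["requests", "requestes", "reqwests"]

right_ish = softmax([2.0, 1.0, 0.1])  # weights roughly where they should be
slightly_off = softmax([1.0, 2.0, 0.1])  # a small shift in the scores

print(tokens[right_ish.index(max(right_ish))])      # requests
print(tokens[slightly_off.index(max(slightly_off))])  # requestes
```

      True, as planish says, but not a level of description you can do anything with as a user.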

      The anthropomorphic approach suggests stuff like “yell at the software in all caps to only use python packages that really exist”, and that sort of approach has been found to be effective in practice.

    • db0@lemmy.dbzer0.com (OP) · 3 months ago

      “Hallucinate” is the standard term used to describe GenAI models coming up with untrue statements.

      • Cyrus Draegur@lemm.ee · 3 months ago

        in terms of communication utility, it’s also a very accurate term.

        when WE hallucinate, it’s because our internal predictive models fly off the rails, filling in the blanks based on assumptions rather than referencing concrete sensory information, and generating results that conflict with reality.

        when AIs hallucinate, it’s because their predictive models generate results that don’t align with reality, having flown off the rails presuming what was calculated to be likely to exist rather than referencing positively certain information.

        it’s the same song, but played on a different instrument.

  • RustyNova@lemmy.world · 3 months ago

    *bad Devs

    Always check the official repository. Not just to see if the package exists, but also to make sure it isn’t a fake/malicious one.
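Even a cheap local sanity check catches the most blatant hallucinations before anything ships. A minimal sketch (function name is made up; it only checks that the name resolves in your current environment, which is no substitute for actually reading the registry page):

```python
from importlib.util import find_spec

def module_installed(name: str) -> bool:
    """Return True if a top-level module of this name is importable here.

    This does NOT prove the name refers to the legitimate project --
    typosquatted packages pass this check once installed. Verifying
    that still takes a human look at the official repository.
    """
    return find_spec(name) is not None

print(module_installed("json"))                    # True: stdlib
print(module_installed("definitely_not_a_module")) # False
```

It won’t save you from a malicious lookalike, but it does catch the “component that doesn’t exist” failure mode mentioned upthread.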

    • maynarkh@feddit.nl · 3 months ago

      *bad Devs

      Or devs who don’t give a shit. Most places have a lot of people who don’t give a shit because the company does not give a shit about them either.

  • krakenfury@lemmy.sdf.org · 3 months ago

    One of the first things I noticed when I asked ChatGPT to write some Terraform for me a year ago was that it used modules that don’t exist.