Google rolled out AI Overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

  • Eheran@lemmy.world · 1 month ago

    No, hallucination is a really good term. It can be super confident and seemingly correct but still completely made up.

    • kbin_space_program@kbin.run · 1 month ago

      It is, but it isn’t applicable in at least the glue-pizza situation, as the probable source comment has been found on Reddit.

      A better use of the term might be how, when you try to get Bing’s image creator to make “Battletech” art, you mostly just get really obvious Warhammer 40k Space Marines and occasionally Iron Maiden album art.

    • yukijoou@lemmy.blahaj.zone · 1 month ago

      for it to “hallucinate” things, it would have to believe in what it’s saying. ai is unable to think - so it cannot hallucinate

      • Jrockwar@feddit.uk · 1 month ago

        Hallucination is a technical term; it has nothing to do with thinking. The scientific community could have chosen another term to describe the issue, but hallucination explains really well what’s happening.

        • yukijoou@lemmy.blahaj.zone · 1 month ago

          because it’s a text generation machine…? i mean, i wouldn’t say i can prove it, but i don’t think anyone can prove it’s capable of thinking, much less of reasoning

          like, it can string together a coherent sentence thanks to well-crafted equations, sure, but i wouldn’t qualify that as “thinking”, though i guess the definition of “thinking” is debatable

          • Eheran@lemmy.world · 1 month ago

            It can tell you the best way to stack things on top of each other to get a tall tower. Etc.

            Those are not random sentences. If you cannot define thinking in a way this machine fails at, then stop saying it does not think.

            • Aceticon@lemmy.world · 1 month ago

              A parrot can be trained to tell you how to stack things on top of each other the best way to get a high tower.

              This is just an electronic parrot: millions of times faster to train than the biological one, specialized in repetition alone (it can’t really do anything else a parrot can), and trained on billions of texts.

              You’re confusing one specific form in which humans externally express cognition with the cognition itself: just because intelligence can produce some forms of textual communication, it doesn’t follow that the relationship holds in the opposite direction and that such forms of textual communication require intelligence. Or, if you will: just because you can photograph a real pizza to get a picture of a pizza doesn’t mean a picture of a pizza is necessarily of a real pizza, rather than of something with glue added to make it look like stringy melted cheese.

              • Eheran@lemmy.world · 1 month ago

                Again, it is absolutely capable of coming up with its own logical stuff, hence my example. Stop saying it just copies existing stuff; that is simply wrong.

                • yukijoou@lemmy.blahaj.zone · 1 month ago

                  “it is absolutely capable of coming up with its own logical stuff”

                  interesting; in my experience it’s only been good at repeating things and failing on unexpected inputs. it can answer pretty accurately whether a small number is even or odd, but not whether a large one is, which indicates to me that it’s not reasoning but parroting answers

                  do you have example prompts where it showed clear logical reasoning?
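
                  A probe like that parity test is easy to script. Here is a minimal sketch, assuming the OpenAI Python client; the model name and prompt wording are illustrative assumptions, not what the commenter actually used:

                  ```python
                  # Ask a model whether numbers are even or odd and score it
                  # against ground truth, for small vs. large numbers.
                  import random
                  from openai import OpenAI

                  client = OpenAI()  # reads OPENAI_API_KEY from the environment

                  def says_even(n: int) -> bool:
                      reply = client.chat.completions.create(
                          model="gpt-4o-mini",  # assumed model choice
                          messages=[{
                              "role": "user",
                              "content": f"Is {n} even or odd? One-word answer.",
                          }],
                      )
                      return "even" in reply.choices[0].message.content.lower()

                  for digits in (2, 30):  # 2-digit vs. 30-digit numbers
                      nums = [random.randrange(10**(digits - 1), 10**digits)
                              for _ in range(20)]
                      right = sum(says_even(n) == (n % 2 == 0) for n in nums)
                      print(f"{digits}-digit numbers: {right}/20 correct")
                  ```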

                  • Eheran@lemmy.world · 1 month ago (edited)

                    Examples showing that it comes up with its own solutions to a problem? Just ask it something that could not have been on the Internet before. Professor talking about AGI in GPT 4

                    A personal example: asking it to write Python code that solves a 2D thermal heat flux problem given some context and constraints.
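
                    For a sense of what that kind of request involves, here is a minimal sketch of such a solver, assuming a simple explicit finite-difference scheme on a square plate with a hot top edge; all parameter values are illustrative, not the code the commenter actually generated:

                    ```python
                    # Explicit finite-difference solve of the 2D heat equation
                    # du/dt = alpha * laplacian(u) on a square plate.
                    # Hot top edge, cold elsewhere; all values illustrative.
                    import numpy as np

                    def solve_heat_2d(n=50, alpha=1e-4, dx=0.01, steps=5000):
                        dt = 0.25 * dx**2 / alpha  # explicit stability limit
                        u = np.zeros((n, n))       # cold plate
                        u[0, :] = 100.0            # fixed hot top edge
                        for _ in range(steps):
                            lap = (u[:-2, 1:-1] + u[2:, 1:-1] +
                                   u[1:-1, :-2] + u[1:-1, 2:] -
                                   4.0 * u[1:-1, 1:-1]) / dx**2
                            u[1:-1, 1:-1] += alpha * dt * lap
                        return u

                    u = solve_heat_2d()
                    print(f"centre temperature after run: {u[25, 25]:.2f}")
                    ```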

    • Emmie@lemm.ee · 1 month ago (edited)

      You just described the entirety of Reddit, and last I checked we didn’t call that hallucinating.

      • TheBlackLounge@lemm.ee · 1 month ago

        So is bullshitting. What’s more, only human minds can bullshit.

        We anthropomorphize machines all the time, it’s fine.

        I’d prefer we start calling all genAI output hallucinations again. That’s how it was about 10 years ago, but somewhere along the line marketing decided hallucinated truths aren’t “hallucinations”.