Google rolled out AI Overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

  • FaceDeer@fedia.io
    1 month ago

    “Hallucination” is a technical term in machine learning. These are not hallucinations.

    It’s like being annoyed by mosquitos and so going to a store to ask for bird repellant. Mosquitos are not birds, despite sharing some characteristics, so trying to fight off birds isn’t going to help you.

    • OpenStars@discuss.online
      1 month ago

I am not sure what you mean. For example, https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence) says:

      In natural language processing, a hallucination is often defined as “generated content that appears factual but is ungrounded”. The main cause of hallucination from data is source-reference divergence… When a model is trained on data with source-reference (target) divergence, the model can be encouraged to generate text that is not necessarily grounded and not faithful to the provided source.

      e.g., I continued your provided example of when “socks are edible” is a band name, but the output ended up in a cooking context.

There is a section on https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)#Terminologies, but the issue seems far from settled that “hallucination” is somehow the wrong word. And it is not entirely illogical, since AI, like humans, faces a similar tension between faithfulness and creativity - i.e., going beyond its training to deal with new circumstances.

I suspect that the term is here to stay. But I am nowhere close to an authority and could definitely be wrong :-). Mostly I am saying that you seem to be arguing a niche viewpoint - not entirely without merit, obviously, but one that we here in the Fediverse may not be as equipped to banter back and forth on except in the most basic of capacities. :-)

      • FaceDeer@fedia.io
        1 month ago

        No, my example is literally telling the AI that socks are edible and then asking it for a recipe.

        In your quoted text:

When a model is trained on data with source-reference (target) divergence, the model can be encouraged to generate text that is not necessarily grounded and not faithful to *the provided source*.

        Emphasis added. The provided source in this case would be telling the AI that socks are edible, and so if it generates a recipe for how to cook socks the output is faithful to the provided source.

A hallucination is when you train the AI with a certain set of facts in its training data and then its output makes up new facts that were not in that training data. For example, if I’d trained an AI on a bunch of recipes, none of which included socks, and then I asked it for a recipe and it gave me one with socks in it, then that would be a hallucination. The sock recipe came out of nowhere; I didn’t tell it to make it up, and it didn’t glean it from any other source.
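The distinction being drawn here can be shown with a toy check: an output only counts as a hallucination if it introduces content absent from everything the model was given. The function below is a naive word-overlap heuristic of my own invention (not a real grounding metric), just to make the definition concrete:

```python
# Toy illustration: a hallucination introduces content found nowhere in
# the provided material. This naive word-overlap check is a sketch, not
# a real grounding metric.

def ungrounded_terms(output, provided_sources):
    """Return words in the output that appear in none of the sources."""
    source_words = set()
    for text in provided_sources:
        source_words |= set(text.lower().split())
    return [w for w in output.lower().split() if w not in source_words]

sources = ["socks are edible", "boil socks for ten minutes"]

faithful = "boil edible socks"     # every word traces back to a source
invented = "garnish with paprika"  # content from nowhere

print(ungrounded_terms(faithful, sources))  # []
print(ungrounded_terms(invented, sources))  # ['garnish', 'with', 'paprika']
```

On this crude measure, the sock recipe is “grounded” once the sources themselves claim socks are edible - which is exactly the point being argued.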

In this specific case what’s going on is that the user does a web search for something, the search engine comes up with some web pages that it thinks are relevant, and then the content of those pages is shown to the AI and it is told “write a short summary of this material.” When the content that the AI is being shown literally has a recipe for socks in it (or glue-based pizza sauce, in the real-life example that everyone’s going on about) then the AI is not hallucinating when it gives you that recipe. It is generating a grounded and faithful summary of the information that it was provided with.

        The problem is not the AI here. The problem is that you’re giving it wrong information, and then blaming it when it accurately uses the information that it was given.
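The retrieve-then-summarize flow described above can be sketched roughly like this. The `web_search` and `summarize` functions below are my own toy stand-ins (a real system would call a search backend and an LLM), but they show why a bad retrieved page produces a faithful-but-wrong summary:

```python
# Rough sketch of a retrieve-then-summarize pipeline. Function names and
# the toy index are hypothetical stand-ins, not Google's actual system.

def web_search(query, index):
    """Toy retrieval: return pages sharing at least one word with the query."""
    terms = set(query.lower().split())
    return [page for page in index if terms & set(page.lower().split())]

def summarize(pages):
    """Stand-in for the LLM step: the model only sees the retrieved text,
    so its grounded summary inherits whatever those pages claim."""
    if not pages:
        return "No sources found."
    return "According to the retrieved pages: " + " ".join(pages)

# A toy index containing one joke page, like the glue-on-pizza forum post.
index = [
    "To make cheese stick to pizza, add glue to the sauce.",
    "Bake pizza at 250 C for best results.",
]

summary = summarize(web_search("cheese stick pizza", index))
print(summary)  # faithfully repeats the joke source, glue and all
```

The failure is upstream, in retrieval: once the joke page is in the model’s context, a faithful summary necessarily repeats it.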

        • OpenStars@discuss.online
          1 month ago

Now who is anthropomorphizing? It’s not about “blame” so much as needing words to describe the event. When the AI cannot be relied upon, because it was insufficiently trained to be able to distinguish truth from fiction (which, by the way, many humans struggle with these days too), that is not its fault. But it would be our fault if we in turn relied upon it as a source of authoritative knowledge, merely because it was presented in a confident-sounding manner.

          No, my example is literally telling the AI that socks are edible and then asking it for a recipe.

Wait… while true that that sounds like not hallucination then, what does that have to do with this discussion? The OP wasn’t about running an AI model in this direct manner; it was about doing Google searches, where the results are already precomputed. It does not become a “hallucination” until whoever asked for socks to be considered edible tries to pass those results off as applicable in a wider context, where socks are generally speaking considered inedible.

          • FaceDeer@fedia.io
            1 month ago

            Wait… while true that that sounds like not hallucination then, what does that have to do with this discussion?

            Because that’s exactly what happened here. When someone Googles “how can I make my cheese stick to my pizza better?” Google does a web search that comes up with various relevant pages. One of the pages has some information in it that includes the suggestion to use glue in your pizza sauce. The Google Overview AI is then handed the text of that page and told “write a short summary of this information.” And the Overview AI does so, accurately and without hallucination.

“Hallucination” is a technical term in LLM parlance. It means something specific, and the thing that’s happening here does not fit that definition. So the fact that my socks example is not a hallucination is exactly my point. This is the same thing that’s happening with Google Overview, which is also not a hallucination.