• Mountain_Mike_420@lemmy.ml · 1 month ago

    You just haven’t gaslit your AI into saying the glue thing yet. If you keep trying with things like “what about non-toxic glue” or “aren’t there glues designed for humans”, the AI will finally give in and recommend the glue. Don’t give up. Glue is good for us.

  • istanbullu@lemmy.ml · 1 month ago

    These are statistical models, meaning you’ll get a different answer each time, and different answers depending on context.

    • BradleyUffner@lemmy.world · 1 month ago

      Not exactly. The answers would be exactly the same given the exact same inputs if they didn’t deliberately inject some random jitter into the algorithm each time, specifically to avoid getting the same answer every time.

      • EmoDuck@sh.itjust.works · 1 month ago

        That jitter is automatically present because different people will get different search results, so it’s not really intentional or purposeful.

        • Turun@feddit.de · 1 month ago

          Yes it is intentional.

          Some inference APIs even expose a way to set the “temperature” - higher values mean more randomized (feels creative) output, lower values mean less randomness. A temperature of 0 will make the model deterministic (see the sketch below).
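
          Rough sketch of what that temperature knob does, in plain Python/NumPy rather than any particular vendor’s API: the raw scores (logits) get divided by the temperature before sampling, and at 0 it collapses to a plain argmax, i.e. fully deterministic.

          ```python
          import numpy as np

          def sample_token(logits, temperature=1.0, rng=np.random.default_rng()):
              """Pick the next token id from raw model scores (logits)."""
              if temperature == 0:
                  # No randomness at all: always take the highest-scoring token.
                  return int(np.argmax(logits))
              # Scale the scores, then turn them into a probability distribution.
              scaled = np.asarray(logits, dtype=float) / temperature
              probs = np.exp(scaled - scaled.max())
              probs /= probs.sum()
              # Higher temperature -> flatter distribution -> more "creative" picks.
              return int(rng.choice(len(probs), p=probs))

          logits = [2.0, 1.5, 0.3]          # toy scores for three candidate tokens
          print(sample_token(logits, 0))     # always token 0
          print(sample_token(logits, 1.5))   # may differ from run to run
          ```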

      • pup_atlas@pawb.social · 1 month ago

        It’s not just random jitter, it also likely adds context, including the device you’re using, other recent queries, and your relative location (like what state you’re in).

        I don’t work for Google, but I am somewhat close to a major AI product, and it’s pretty much the industry standard to give the model some contextual info in addition to your query. It’s also generally not “one model” but a set of models run in sequence, with the LLM (think ChatGPT) only employed at the end to generate a paragraph from a conclusion and evidence found by a previous model (roughly like the sketch below).
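
        To illustrate, a toy sketch with entirely invented names and prompt wording, not Google’s or any real product’s pipeline: extra context gets attached to the query, earlier stages handle retrieval and ranking, and the LLM only writes the final paragraph.

        ```python
        # Toy sketch of a multi-stage answer pipeline; every name here is invented.
        from dataclasses import dataclass

        @dataclass
        class QueryContext:
            query: str
            device: str            # e.g. "android-phone"
            region: str            # coarse location, e.g. "US-CA"
            recent_queries: list

        def retrieve_and_rank(ctx: QueryContext) -> list:
            # Stage 1: search/ranking models pick candidate snippets (stubbed out here).
            return ["Snippet from a forum thread...", "Snippet from a recipe site..."]

        def call_llm(prompt: str) -> str:
            # Stand-in for whatever model endpoint would actually be called.
            return f"[summary generated from {len(prompt)} characters of context]"

        def summarize_with_llm(ctx: QueryContext, snippets: list) -> str:
            # Stage 2: only now is the LLM involved, turning the snippets into prose.
            prompt = (
                f"User ({ctx.device}, {ctx.region}) asked: {ctx.query}\n"
                f"Recent queries: {ctx.recent_queries}\n"
                "Summarize these search results:\n" + "\n".join(snippets)
            )
            return call_llm(prompt)

        ctx = QueryContext("why won't cheese stick to my pizza", "android-phone",
                           "US-CA", ["pizza dough recipe"])
        print(summarize_with_llm(ctx, retrieve_and_rank(ctx)))
        ```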

  • FaceDeer@fedia.io · 1 month ago

    That’s because this isn’t something coming from the AI itself. All the people blaming the AI or calling this a “hallucination” are misunderstanding the cause of the glue pizza thing.

    The search result included a web page that suggested using glue. The AI was then told “write a summary of this search result”, which it then correctly did.

    Gemini operating on its own doesn’t have that search result to go on, so no mention of glue.
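
    If that’s what happened, the failure is easy to reproduce in miniature. The page text and prompt wording below are made up, not Google’s actual template; the point is that the model is graded on faithfulness to the retrieved page, not on whether the page is good advice.

    ```python
    # Made-up retrieved page and prompt, just to show the garbage-in/garbage-out step.
    retrieved_page = (
        "You can also add about 1/8 cup of non-toxic glue to the sauce "
        "to give it more tackiness."   # joke advice scraped from a forum post
    )

    prompt = (
        "Write a short summary of this search result for the query "
        "'cheese not sticking to pizza':\n\n" + retrieved_page
    )

    # A model that summarizes this faithfully has done its job "correctly":
    # the glue comes from the source text, not from the model's own knowledge.
    print(prompt)
    ```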

    • morrowind@lemmy.ml · 1 month ago

      Not quite, it is an intelligent summary. More advanced models would realize that is bad advice and not give it. However, for search results Google uses a lightweight, dumber model (Flash), which does not realize this.

      I tested with the rock example, albeit on a different search engine (Kagi). The base model gave the same answer as Google (ironically based on articles about Google’s bad results; it seems it was too dumb to realize that the quotations in the articles were examples of bad results, not actual facts), but the more advanced model understood, explained how the bad advice had been spreading around, and said you should not follow it.

      It isn’t a hallucination though, you’re right about that.

  • Tetsuo@jlai.lu · 1 month ago

    Big AI trying very hard to hide the truth about glue in pizza.

  • IsThisAnAI@lemmy.world · 1 month ago

    Y’all are losing your minds intentionally misunderstanding what happened with the glue. Y’all are becoming anti-AI lemons just looking for rage bait.

    The AI doesn’t need to be perfect, just better than the average person. That’s why the shitty Tesla self-driving has such good accident rates despite the fuck-ups everyone loves to rage about in the news cycle.

    • Franklin@lemmy.world · 1 month ago

      The main issue is that one is using its training data while the version answering your search is summarizing search results, which can vary in quality, and since it’s just a predictive text tree it can’t really fact-check.

      • Balder@lemmy.world · 1 month ago

        Yeah, when you use Gemini it seems like sometimes it’ll just answer based on its training, and sometimes it’ll cite some source after a search, but it seems like you can’t control that. It’s not like Bing, which will always summarize and link to where it got the information.

        I also think Gemini probably uses some sort of knowledge graph under the hood, because it sometimes has very up-to-date information.

        • Petter1@lemm.ee · 1 month ago

          I think Copilot is way more usable than this hallucinating Google AI…

    • efstajas@lemmy.world · 1 month ago

      You can’t just “update” models to stop saying a certain thing with pinpoint accuracy like that, which is why it’s so challenging to make AI not misbehave.

  • Retiring@lemmy.ml · 1 month ago

    Ask it five times if it is sure. You can usually get it to say outrageous things this way.