• Franklin@lemmy.world
    6 months ago

    The main issue is that one is using its training data, while the version answering your search is summarising search results, which can vary in quality. And since it’s just a predictive text model, it can’t really fact-check.

    • Balder@lemmy.world
      6 months ago

      Yeah, when you use Gemini, it seems like sometimes it’ll just answer based on its training, and sometimes it’ll cite a source after a search, but it seems like you can’t control that. It’s not like Bing, which will always summarize and link to where it got the information.

      I also think Gemini probably uses some sort of knowledge graph under the hood, because it sometimes has very up-to-date information.

      • Petter1@lemm.ee
        6 months ago

        I think Copilot is way more usable than this hallucinating Google AI…