• CubitOom@infosec.pub · 3 months ago

    I wonder where the line is drawn between an emergent behavior and a hallucination.

    If someone expects factual information and gets a hallucination, they will think the LLM is dumb or unhelpful.

    But if someone is encouraging hallucinations and wants fiction, they might think it's emergent behavior.

    In humans, what is the difference between an original thought, and a hallucination?

    • Umbrias@beehaw.org · edited · 3 months ago

      Hallucinations are unlike human creative output. For one, AI hallucinations are unintentional. If you actually think about the question, there are plenty of reasons why they are not the same. They are at best dreamlike, but dreaming is an intentional process.

      • CubitOom@infosec.pub · 3 months ago

        Sure, there is intentional creative thought. But there are also unintentional creative thoughts: moments of clarity, eureka moments, and strokes of inspiration. How do we differentiate these?

        If we were to say the difference is that our subconscious is intentionally producing these thoughts, then we would need a method to test that claim, because otherwise the distinction is moot.

        As with defining the "I" in AGI, it's hard to form a consensus on general and often vague definitions like these.

        • Umbrias@beehaw.org · 3 months ago

          You are assigning far more grandeur to AI hallucinations than they have in practice.