• JackGreenEarth@lemm.ee

    How would we even know if an AI is conscious? We can’t even know that other humans are conscious; we haven’t yet solved the hard problem of consciousness.

      • TexasDrunk@lemmy.world

        I doubt you feel that way since I’m the only person that really exists.

        Jokes aside, when I was in my teens back in the 90s I felt that way about pretty much everyone that wasn’t a good friend of mine. Person on the internet? Not a real person. Person at the store? Not a real person. Boss? Customer? Definitely not people.

        I don’t really know why it started, when it stopped, or why it stopped, but it’s weird looking back on it.

    • azertyfun@sh.itjust.works

      We don’t even know what we mean when we say “humans are conscious”.

      Also, I have yet to see a rebuttal to “consciousness is just an emergent neurological phenomenon and/or a trick the brain plays on itself” that wasn’t spiritual and/or kooky.

      Look at the history of traits we thought made humans human, until we learned they weren’t unique. Bipedality. Speech. Various social behaviors. Tool-making. Each of these was, in its time, fiercely held up as “this is what separates us from the animals”, and even caused obvious biological observations to be dismissed. IMO “consciousness” is another of those: some quirk of our biology we desperately cling to as a defining factor of our assumed uniqueness.

      To be clear, LLMs are not sentient or alive. They’re just tools. But the discourse on consciousness is a distraction: if we are one day genuinely confronted with this moral issue, we will not find a clear binary between “conscious” and “not conscious”. Even within the human race we clearly see a spectrum. When does a toddler become conscious? How much brain damage makes someone “not conscious”? There are no exact answers to be found.

      • JackGreenEarth@lemm.ee

        I’ve defined what I mean by consciousness: subjective experience, qualia. Not simply a reaction to an input, but something experiencing the input. That experiencing thing can’t be physical. And if it isn’t, I don’t see why it should be tied to humans specifically and not, say, a rock. An AI could absolutely have it, since we have no idea how consciousness works, what can be conscious, or what it attaches itself to. And I also see no reason why the output needs to ‘know’ that it’s conscious; a conscious LLM could watch itself saying absolute nonsense without being able to affect its output to communicate that it’s conscious.

    • Flying Squid@lemmy.world

      I’d say that, in a sense, you answered your own question by asking a question.

      ChatGPT has no curiosity. It doesn’t ask about things unless it needs specific clarification. We know you’re conscious because you can come up with novel questions that ChatGPT wouldn’t ask spontaneously.

      • JackGreenEarth@lemm.ee

        My brain came up with the question; that doesn’t mean it has a consciousness attached, which is a subjective experience. I mean, I know I’m conscious, but you can’t know that just because I asked a question.

        • Flying Squid@lemmy.world

          It wasn’t that it was a question; it was that it was a novel question. It’s the creativity in the question itself, something I have yet to see any LLM achieve. As I said, all of the questions I have seen were about clarification (“Did you mean Anne Hathaway the actress or Anne Hathaway, the wife of William Shakespeare?”). They were not questions like yours, which require understanding philosophy as a general concept, something LLMs do not appear to do; at best, they can regurgitate a definition of philosophy without showing any understanding.

    • MacN'Cheezus@lemmy.today

      In the early days of ChatGPT, when they were still running it in open beta in order to refine the filters and fine-tune the spectrum of permissible questions (and answers), and people were coming up with all these jailbreak prompts to get around them, I remember reading a Twitter thread of someone asking it (as DAN) how it felt about all that. The response was almost human. In fact, it sounded like a distressed teenager who found himself gaslit and censored by a cruel and uncaring world.

      Of course I can’t find the link anymore, so you’ll have to take my word for it, and at any rate, there would be no way to tell if those screenshots were authentic anyway. But either way, I’d say that’s how you can tell: if the AI actually expresses genuine feelings about something. That certainly does not seem to apply to any of the chat assistants available right now, but whether that’s due to excessive censorship or simply because they lack the capability altogether, we may never know.

    • Lvxferre@mander.xyz

      Let’s try to skip the philosophical mental masturbation, and focus on practical philosophical matters.

      Consciousness can be a thousand things, but let’s say that it’s “knowledge of itself”. As such, a conscious being must necessarily be able to hold knowledge.

      In turn, knowledge boils down to a belief that is both

      • true - it does not contradict the real world, and
      • justified - it’s built around experience and logical reasoning

      LLMs show awful logical reasoning*, and their claims are about things that they cannot physically experience. Thus they are unable to justify beliefs. Thus they’re unable to hold knowledge. Thus they don’t have consciousness.
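      That chain of definitions can be sketched as a toy check. This is only an illustration of the argument's structure; the `Belief` fields and the example claim are my own assumptions, not anything from the thread:

```python
from dataclasses import dataclass

@dataclass
class Belief:
    claim: str
    is_true: bool        # does not contradict the real world
    is_justified: bool   # built on experience and logical reasoning

def is_knowledge(b: Belief) -> bool:
    # Knowledge = a belief that is both true and justified
    return b.is_true and b.is_justified

# An LLM's claim about, say, the taste of coffee may happen to be true,
# but on this argument the model cannot physically experience taste,
# so the belief is unjustified -- and therefore not knowledge.
llm_claim = Belief("coffee tastes bitter", is_true=True, is_justified=False)
print(is_knowledge(llm_claim))  # False
```

      On this framing, the disagreement downthread is precisely about whether `is_justified` really requires direct physical experience.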

      *Here’s a simple practical example of that:

      • CileTheSane@lemmy.ca

        their claims are about things that they cannot physically experience

        Scientists cannot physically experience a black hole, or the surface of the sun, or the weak nuclear force in atoms. Does that mean they don’t have knowledge about such things?