• Ech@lemm.ee

      Finishing up a rewatch of Community as we speak. Funny to see the gimmick (purportedly) used in real life.

  • JackGreenEarth@lemm.ee

    How would we even know if an AI is conscious? We can’t even know that other humans are conscious; we haven’t yet solved the hard problem of consciousness.

      • TexasDrunk@lemmy.world

        I doubt you feel that way since I’m the only person that really exists.

        Jokes aside, when I was in my teens back in the 90s I felt that way about pretty much everyone that wasn’t a good friend of mine. Person on the internet? Not a real person. Person at the store? Not a real person. Boss? Customer? Definitely not people.

        I don’t really know why it started, when it stopped, or why it stopped, but it’s weird looking back on it.

    • Flying Squid@lemmy.world

      I’d say that, in a sense, you answered your own question by asking a question.

      ChatGPT has no curiosity. It doesn’t ask about things unless it needs specific clarification. We know you’re conscious because you can come up with novel questions that ChatGPT wouldn’t ask spontaneously.

      • JackGreenEarth@lemm.ee

        My brain came up with the question, but that doesn’t mean there’s a consciousness (a subjective experience) attached. I mean, I know I’m conscious, but you can’t know that just because I asked a question.

        • Flying Squid@lemmy.world

          It wasn’t that it was a question, it was that it was a novel question. It’s the creativity in the question itself, something I have yet to see any LLM achieve. As I said, all of the questions I have seen were about clarification (“Did you mean Anne Hathaway the actress, or Anne Hathaway the wife of William Shakespeare?”). They were not questions like yours, which require understanding philosophy as a general concept, something they do not appear to do; at best, they can regurgitate a definition of philosophy without showing any understanding.

    • azertyfun@sh.itjust.works

      We don’t even know what we mean when we say “humans are conscious”.

      Also I have yet to see a rebuttal to “consciousness is just an emergent neurological phenomenon and/or a trick the brain plays on itself” that wasn’t spiritual and/or kooky.

      Look at the history of things we thought made humans human, until we learned they weren’t unique. Bipedality. Speech. Various social behaviors. Tool-making. Each of those was, in its time, fiercely held as “this separates us from the animals”, and even caused obvious biological observations to be dismissed. IMO “consciousness” is another of those: some quirk of our biology we desperately cling to as a defining factor of our assumed uniqueness.

      To be clear, LLMs are not sentient or alive. They’re just tools. But the discourse on consciousness is a distraction; if we are one day genuinely confronted with this moral issue, we will not find a clear binary between “conscious” and “not conscious”. Even within the human race we clearly see a spectrum. When does a toddler become conscious? How much brain damage makes someone “not conscious”? There are no exact answers to be found.

      • JackGreenEarth@lemm.ee

        I’ve defined what I mean by consciousness: a subjective experience, qualia. Not simply a reaction to an input, but something experiencing the input. That experiencing thing can’t be physical. And if it isn’t, I don’t see why it should be tied to humans specifically, and not, say, a rock. An AI could absolutely have it, since we have no idea how consciousness works, what can be conscious, or what it attaches itself to. And I also see no reason why the output needs to ‘know’ that it’s conscious; a conscious LLM could watch itself saying absolute nonsense without being able to affect its output to communicate that it’s conscious.

    • MacN'Cheezus@lemmy.today

      In the early days of ChatGPT, when they were still running it in an open beta to refine the filters and fine-tune the spectrum of permissible questions (and answers), and people were coming up with all these jailbreak prompts to get around them, I remember reading a Twitter thread of someone asking it (as DAN) how it felt about all that. The response was, in fact, almost human: it sounded like a distressed teenager who found himself gaslit and censored by a cruel and uncaring world.

      Of course I can’t find the link anymore, so you’ll have to take my word for it, and at any rate, there would be no way to tell if those screenshots were authentic anyways. But either way, I’d say that’s how you can tell – if the AI actually expresses genuine feelings about something. That certainly does not seem to apply to any of the chat assistants available right now, but whether that’s due to excessive censorship or simply because they don’t have that capability at all, we may never know.

    • Lvxferre@mander.xyz

      Let’s try to skip the philosophical mental masturbation, and focus on practical philosophical matters.

      Consciousness can be a thousand things, but let’s say that it’s “knowledge of itself”. As such, a conscious being must necessarily be able to hold knowledge.

      In turn, knowledge boils down to a belief that is both

      • true - it does not contradict the real world, and
      • justified - it’s built around experience and logical reasoning

      LLMs show awful logical reasoning*, and their claims are about things that they cannot physically experience. Thus they are unable to justify beliefs. Thus they’re unable to hold knowledge. Thus they don’t have consciousness.
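
      As a toy sketch of that chain in code (granting every premise; the names are invented purely for illustration):

        # Knowledge as justified true belief (JTB), per the argument above.
        def is_justified(has_experience: bool, reasons_soundly: bool) -> bool:
            # Justification = experience plus sound logical reasoning.
            return has_experience and reasons_soundly

        def holds_knowledge(is_true: bool, justified: bool) -> bool:
            # A belief counts as knowledge only if it is both true and justified.
            return is_true and justified

        # The argument's premises about LLMs: no physical experience, awful reasoning.
        print(holds_knowledge(is_true=True, justified=is_justified(False, False)))
        # -> False: no justification, so no knowledge, so no knowledge of itself.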

      *Here’s a simple practical example of that:

      • CileTheSane@lemmy.ca

        their claims are about things that they cannot physically experience

        Scientists cannot physically experience a black hole, or the surface of the sun, or the weak nuclear force in atoms. Does that mean they don’t have knowledge about such things?

  • afraid_of_zombies@lemmy.world

    Wait, wasn’t this directly from the very first episode of Community?

    That professor’s name? Albert Einstein. And everyone clapped.

    • ameancow@lemmy.world

      Humans are so good at imagining things as alive that just reading a story about Timmy the pencil elicits feelings of sympathy and real reactions.

      We are not good judges of things in general. Maybe one day these AI tools will actually help us and give us better perception and wisdom for dealing with the universe, but that end-goal is a lot further away than the tech-bros want to admit. We have decades of absolute slop and likely a few disasters to wade through.

      And there’s going to be a LOT of people falling in love with super-advanced chat bots that don’t experience the world in any way.

      • UnderpantsWeevil@lemmy.world

        Maybe one day these AI tools will actually help us and give us better perception and wisdom for dealing with the universe

        But where’s the money in that?

        More likely we’ll be introduced to an anthropomorphic pencil, induced to fall in love with it, and then told by a machine that we need to pay $10/mo or the pencil gets it.

        And there’s going to be a LOT of people falling in love with super-advanced chat bots that don’t experience the world in any way.

        People fall in and out of love all the time. I think the real consequence of online digital romance - particularly with some shitty knock off AI - is that you’re going to have a wave of young people who see romance as entirely transactional. Not some deep bond shared between two living people, but an emotional feed bar you hit to feel something in exchange for a fee.

        When they exit their bubbles and discover other people aren’t feed bars to slap for gratification, they’re going to get frustrated and confused from the years spent in their Skinner Boxes. And that’s going to leave us with a very easily radicalized young male population.

        Everyone interacts with the world sooner or later. The question is whether you developed the muscles to survive during childhood, or you came out of your home as an emotional slab of veal, ripe for someone else to feast upon.

        • Lotarion@lemmy.world

          And that’s going to leave us with a very easily radicalized young male population.

          I feel like something similar already happened

  • glimse@lemmy.world

    I don’t know why this bugs me but it does. It’s like he’s implying Turing was wrong and that he knows better. He reminds me of those “we’ve been thinking about the pyramids wrong!” guys.

    • Ephera@lemmy.ml

      I wouldn’t say he’s implying Turing himself was wrong. Turing merely formulated a test for indistinguishability, and it still shows that.
      It’s just that indistinguishability is not useful anymore as a metric, so we should stop using Turing tests.

    • nova_ad_vitum@lemmy.ca

      The validity of Turing tests at determining whether something is “intelligent” and what that means exactly has been debated since…well…Turing.

    • Lvxferre@mander.xyz

      Nah. Turing skipped this matter altogether. In fact, it’s the main point of the Turing test aka imitation game:

      I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

      In other words, what Turing is saying is “who cares if they think? Focus on their behaviour, dammit; do they behave intelligently?” And consciousness is intrinsically tied to thinking, so… yeah.

      • MacN'Cheezus@lemmy.today

        But what does it mean to behave intelligently? Clearly it’s not enough to simply have the ability to string together coherent sentences, regardless of complexity, because I’d say the current crop of LLMs has solved that one quite well. Yet their behavior clearly isn’t all that intelligent, because they will often either misinterpret the question or even make up complete nonsense. Perhaps that’s still good enough to fool over half of the population, which might be good enough to prove “intelligence” in a statistical sense, but all you gotta do is try to have a conversation that involves feelings, or requires coming up with a genuine insight, in order to see that you’re just talking to a machine after all.

        Basically, current LLMs kinda feel like you’re talking to an intelligent but extremely autistic human being that is unable, or afraid, to take any sort of moral or emotional position at all.

  • SchmidtGenetics@lemmy.world

    Were people maybe not shocked at the action or outburst of anger? Why are we assuming every reaction is because of the death of something “conscious”?

    • braxy29@lemmy.world

      i mean, i just read the post to my very sweet, empathetic teen. her immediate reaction was, “nooo, Tim! 😢”

      edit - to clarify, i don’t think she was reacting to an outburst, i think she immediately demonstrated that some people anthropomorphize very easily.

      humans are social creatures (even if some of us don’t tend to think of ourselves that way). it serves us, and the majority of us are very good at imagining what others might be thinking (even if our imaginings don’t reflect reality), or identifying faces where there are none (see - outlets, googly eyes).

    • A_Very_Big_Fan@lemmy.world

      Seriously, I get that AI is annoying in how it’s being used these days, but has the second guy seriously never heard of “anthropomorphizing”? Never seen Cast Away? Or played Portal?

      Nobody actually thinks these things are conscious, and for AI I’ve never heard even the most diehard fans of the technology claim it’s “conscious.”

      (edit): I guess, to be fair, he did say “imagining” not “believing”. But now I’m even less sure what his point was, tbh.

      • Ephera@lemmy.ml

        My interpretation was that they’re exactly talking about anthropomorphization, that’s what we’re good at. Put googly eyes on a random object and people will immediately ascribe it human properties, even though it’s just three objects in a certain arrangement.

        In the case of LLMs, the googly eyes are our language and the chat interface that it’s displayed in. The anthropomorphization isn’t inherently bad, but it does mean that people subconsciously ascribe human properties, like intelligence, to an object that’s stringing words together in a certain way.

        • A_Very_Big_Fan@lemmy.world

          Ah, yeah you’re right. I guess the part I actually disagree with is that it’s the source of the hype, but I misconstrued the point because of the sub this was posted in lol.

          Personally, (before AI pervaded all the spaces it has no business being in) when I first saw things like LLMs and image generators I just thought it was cool that we could make a machine imitate things previously only humans could do. That, and LLMs are generally very impersonal, so I don’t think anthropomorphization is the real reason.

          • Ephera@lemmy.ml

            I mean, yeah, it’s possible that it’s not as important of a factor for the hype. I’m a software engineer, and even before the advent of generative AI, we were riding on a (smaller) wave of hype for discriminative AI.

            Basically, we had a project which would detect when certain audio cues happened. And there was a very real problem: if it fucked up even once every few minutes, it would cause a lot of issues.

            But when you actually used it, when you’d snap your finger and half a second later the screen turned green, it was all too easy to forget these objective problems, even though it didn’t really have any anthropomorphic features.

            I’m guessing it was a combination of people being used to computers making objective decisions, so they’d be more willing to believe that they had just snapped badly or something, but probably also general optimism: if the fuck-ups you notice are far enough apart, you’ll forget about them.

            Alas, that project got cancelled for political reasons before anyone realized that this very real limitation is not solvable.
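
            To put rough numbers on the error-rate point (all figures assumed, purely for scale): a detector making one decision every 100 ms has to be wrong astonishingly rarely per decision to go even a quarter-hour between fuck-ups.

              # Assumed figures, purely illustrative.
              decisions_per_second = 10        # one detection decision per 100 ms
              per_decision_error_rate = 1e-4   # i.e. 99.99% accurate per decision

              errors_per_second = decisions_per_second * per_decision_error_rate
              minutes_between_errors = 1 / errors_per_second / 60
              print(f"{minutes_between_errors:.0f} minutes between errors")  # -> ~17 minutes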

      • Ech@lemm.ee

        Most discussion I’ve seen about “ai” centers around what the programs are “trying” to do, or what they “know” or “hallucinate”. That’s a lot of agency being given to advanced word predictors.

        • A_Very_Big_Fan@lemmy.world

          That’s also anthropomorphizing.

          Like, when describing how electricity or water flows, we’d say it “wants” to take the path of least resistance, but that doesn’t mean we think it has a mind or is conscious. It’s just a lot simpler than describing all the mechanisms behind how it behaves every single time.

          Both my digital electronics and my geography teachers said stuff like that when I was in high school, and I’m fairly certain neither of them believes water molecules or electrons have agency.
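
          For a concrete version of the electronics case (component values made up): with two resistors in parallel across a source, more current simply flows through the smaller resistance, and that mechanical split is all the “wanting” amounts to.

            # Current divider across a 10 V source, via Ohm's law (I = V/R).
            V = 10.0           # volts
            R1, R2 = 1.0, 9.0  # ohms; R1 is the "path of least resistance"
            I1, I2 = V / R1, V / R2
            print(I1, I2)  # -> 10.0 A through R1, ~1.1 A through R2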

  • Match!!@pawb.social

    Alan Watts, talking on the subject of Buddhist vegetarianism, said that even if vegetables and animals both suffer when we eat them, vegetables don’t scream as loudly. It is not good for your own mental state to perceive something else suffering, whether or not that thing is actually suffering, because it puts you in an unhealthy position of ignoring your own inherent sense of compassion.

    • UnderpantsWeevil@lemmy.world

      If you’ve ever had the pleasure of dealing with an abattoir worker, the emotional strain is telling. Spending day after miserable day slaughtering confused, scared, captive animals until you’re covered head to toe in their blood is… not good for your mental health.

      The Texas Chainsaw Massacre often gets joked about, because its “based on a true story” wasn’t set in Texas, didn’t involve a chainsaw, and wasn’t a massacre. But what it did get right was how Ed Gein, the Butcher of Plainfield, had his mind warped by decades of raising and killing farm animals for a living.

  • Mastengwe@lemm.ee

    We are the only species on Earth that observes “Shark Week”. Sharks don’t even observe “Shark Week”, but we do. For the same reason I can pick up this pencil, tell you its name is Steve and go like this (breaks pencil) and part of you dies just a little bit on the inside, because people can connect with anything. We can sympathize with a pencil, we can forgive a shark, and we can give Ben Affleck an Academy Award for screenwriting.

    ~ Jeff Winger

  • The Snark Urge@lemmy.world

    Stolze and Fink may not be underrated per se, but I wish their work was more widely known. I own several copies of Reign and I think I’m due for a WTNV retread.

  • I think it’s more about how we think about the Turing test and how we use it as a result-- a hammer does a pretty poor job of installing a screw, but does that mean the hammer was designed wrong?

    Turing called this test an “imitation game” because that’s exactly what it was-- the whole point of the test was to determine whether a system could give convincing enough responses that a human couldn’t reliably identify whether they were speaking to a human or a machine. Cleverbot passed the Turing test countless times, but people don’t ask it to solve their homework or do their copywriting.
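
    A minimal sketch of that structure (the ask/respond/guess callables are hypothetical stand-ins, not any real API):

      import random

      def imitation_game(ask, human_respond, machine_respond, guess, rounds=10):
          # Hide which player sits behind which label, as in Turing's setup.
          players = {"A": human_respond, "B": machine_respond}
          machine_label = "B"
          if random.random() < 0.5:
              players = {"A": machine_respond, "B": human_respond}
              machine_label = "A"
          transcript = []
          for _ in range(rounds):
              question = ask(transcript)
              # The judge sees both answers but not who produced them.
              transcript.append((question, players["A"](question), players["B"](question)))
          # The machine "passes" if, over many games, the judge's guess is no
          # better than chance.
          return guess(transcript) == machine_label  # True = judge caught the machine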

    From the wiki article on the Turing Test:

    The test results would not depend on the machine’s ability to give correct answers to questions, only on how closely its answers resembled those a human would give