• funkless_eck@sh.itjust.works
    6 months ago

    “ooh it’s more advanced but don’t worry- it’s not conscious”

    is as much a marketing tactic as “how it feels to chew 5 gum” or buzzfeedesque “top 10 celebrity mistakes - number 3 will blow your mind”

it’s a tech product that runs a series of complicated loops against a large corpus of texts and returns the closest match; as it stands, it’s never going to be dangerous in and of itself.

    • Thorny_Insight@lemm.ee
      6 months ago

      Generative AI and LLMs are not what people mean when they talk about the dangers of AI. What we worry about doesn’t exist yet.

      • funkless_eck@sh.itjust.works
        6 months ago

        I don’t think AI sentience as a danger is going to be an issue in our lifetimes - this January marks 123 years since the first well-known story featuring the trope (Karel Čapek’s R.U.R., Rossumovi univerzální roboti).

        We are a long way off from being able to replicate the perception, action, and unified agency of even basic organisms right now.

        Therefore all claims about the “dangers” of AI are really dangers of humans using the tool (akin to the dangers of driving a car vs the dangers of cars attacking their owners without human input), and thus are just marketing hyperbole.

        in my opinion of course

        • Thorny_Insight@lemm.ee
          6 months ago

          Well yeah, perhaps, but isn’t that kind of like knowing an asteroid is heading towards Earth and feeling no urgency about it? There’s a non-zero chance that we’ll create AGI within the next couple of years. The chances may be low, but the consequences have the potential to literally end humanity - or worse.