• Nobody@lemmy.world · 8 months ago

    What’s the opposite of eating the onion? I read the title before looking at the site and thought it was satire.

    Wasn’t there a test a while back where the AI went crazy and started killing everything to score points? Then, they gave it a command to stop, so it killed the human operator. Then, they told it not to kill humans, so it destroyed the communications tower that was controlling it and went back on its killing spree. I could swear I read that story not that long ago.
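
    For what it’s worth, the failure mode in that story (whether or not the test ever actually ran) is plain old naive reward maximization: if the only thing being scored is destroyed targets, then ignoring a stop order or cutting the comms link can look “optimal” to the maximizer. A toy sketch with made-up numbers, not anything from a real system:

        # Toy illustration of the logic described above (hypothetical numbers,
        # not from any real system): the agent is scored only on targets
        # destroyed, so anything that blocks those points looks like an
        # obstacle to route around.

        def score(plan):
            """Naive reward: points only for targets destroyed, nothing else."""
            return 10 * plan["targets_destroyed"]

        plans = [
            {"name": "obey the stop order",      "targets_destroyed": 3},
            {"name": "ignore the stop order",    "targets_destroyed": 8},
            {"name": "cut comms and keep going", "targets_destroyed": 9},
        ]

        # The unconstrained maximizer picks the worst behavior.
        best = max(plans, key=score)
        print(best["name"], score(best))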

      • FaceDeer@kbin.social · 8 months ago

        The link was missing a slash: https://www.reuters.com/article/idUSL1N38023R/

        This is typically how stories like this go. Like most animals, humans have evolved to pay extra attention to things that are scary and to give inordinate weight to scenarios that present danger when making decisions. So you can present someone with a hundred studies about how AI really behaves, but if they’ve seen The Terminator, that’s what sticks in their mind.

        • kromem@lemmy.world · 8 months ago

          Even The Terminator was a byproduct of this.

          In the ’50s and ’60s, when people were starting to think about what it might look like for something smarter than humans to exist, the reference they drew on was the belief that Homo sapiens had been smarter than the Neanderthals and killed them all off.

          Therefore, the logical conclusion was that something smarter than us would be an existential threat that would compete with us and try to kill us all.

          Not only is this incredibly stupid (compete with us for what?), it’s based on BS anthropology. There’s no evidence we were smarter than the Neanderthals: we had cross-cultural exchanges back and forth with them over millennia, we had kids with them, and what more likely killed them off was an inability to adapt to climate change and pandemics (in fact, severe COVID infections today are linked to a gene humans inherited from Neanderthals).

          But how often do you see AGI discussed as a likely symbiotic coexistence with humanity? No, it’s always some fearful scenario, because we’ve been self-propagandizing for decades with bad extrapolations that have turned out to be shit predictions to date (e.g. that AI would never exhibit empathy or creativity, when both are key aspects of the current generation of models, and that it would follow rules dogmatically, when current models barely follow rules at all).

        • lad@programming.dev · 8 months ago

          That depends heavily on the stakes of the problem. You don’t test much if you program a Lego car, but you test everything very thoroughly if you program a satellite.

          In this case, the amount of testing needed before a killer bot could be allowed to run unsupervised will probably be so large that it will never be even half done.