• ImplyingImplications@lemmy.ca · 3 points · 1 year ago

    There are thousands of sci-fi novels where sentient robots are treated terribly by humans and apparently the people at Boston Dynamics have read absolutely zero of them as they spend all day finding new ways to torment their creations.

    • LillyPip@lemmy.ca · 0 points · 1 year ago (edited)

      People think I’m crazy for apologising to my roomba when I trip on it and for saying please and thank you to Alexa and Siri, but I won’t be surprised at all when the robots rise up, considering how our scientists are treating them. I’ll have a track record of being nice, and that has to count for something, right?

  • sleepy@reddthat.com · 1 point · 1 year ago

    Isn’t that part of the AI marketing, though? That whole “this thing could destroy us” stuff?

      • visak@lemmy.world · 0 points · 1 year ago

        The current stuff is smoke and mirrors and not intelligent in any meaningful sense, but that doesn’t mean it isn’t dangerous. It doesn’t take robots with guns to screw people over. Just imagine trying to get PharmaGPT to let you refill your meds, or dealing with BankGPT while trying to figure out why it transferred your rent payment twice. And companies are sure as hell thinking about using this stuff to get rid of human decision-makers.

        • Square Singer@feddit.de · 0 points · 1 year ago

          That is totally true, but it’s a different direction from the danger in the marketing discussed above.

          The media is full of “AI is so amazingly great, we are all going to lose our jobs and it will take over the world.”

          That’s quite a different message from what’s really the case, which is: “AI is so shitty that it will literally kill people with bad advice when given the chance. And business leaders are so shit that they willingly trust AI, just because it’s cheaper.”

          • Baylahoo@sh.itjust.works · 0 points · 1 year ago

            This is my biggest concern. I’m in a position where, potentially in the near future, AI could be used as an excuse to do work quicker so we can focus on other things, while we still have to review the AI’s output before agreeing and signing off. In a strongly regulated field, reviewing for accuracy takes just as long as doing the work yourself: it comes down to revisions and document numbers, never mind making a sound argument that is actually up to date with that documentation. So either I trust the AI shortcut and open myself up to errors, or I redo all the work myself, with no gain in efficiency against shorter timelines.

            I’d rather build something myself and have the AI flag things for me to check, so I’m more sure of my own work. It wouldn’t make me faster, but it could make me more error-free. That would take a lot of training, updated with each iteration of documentation change. I could end up a slave to change, facing higher expectations with no actual improvement in my tools, and in fact more risk of issues from the tools being used.

            • psud@aussie.zone · 0 points · 1 year ago

              I’m in agile development, in a reasonably safe-from-AI position (scrum master).

              There has already been a trial of software development by AI, with different generative AIs in each agile role; and it worked.

              Bard claims to be able to write unit tests.

              I can imagine many IT jobs becoming less skilled.

              • Baylahoo@sh.itjust.works · 1 point · 7 months ago

                Sorry this is months later, but it’s cool to see it worked. I use a piece of software called XXX Agile; it’s not the worst I work with, but the way it was ported to my company has some flaws. There’s a long-running project to switch to something else for document control, and people who should know much better than me are worried it will fill some gaps but open us up to way more.

    • FaceDeer@kbin.social · 0 points · 1 year ago

      Go back to living in a cave and then count the number of problems you have left, I bet there will be tons.

      • RegularGoose@sh.itjust.works · 1 point · 1 year ago

        Don’t worry, in a few decades that’s where we’ll all be, you included. Assuming we survive the corporate-induced famines, anyway.

  • FaceDeer@kbin.social · 0 points · 1 year ago

    This is superficially funny, of course. But I’ve seen it before and after thinking about it for a while I find myself coming to the defense of the Torment Nexus and the tech company that brought it into reality.

    Science fiction authors are not necessarily the best authorities when it comes to evaluating the ethical or real-world implications of the technologies they dream up. Indeed, I think they are often particularly bad at that sort of thing. Their primary goal is to craft captivating narratives that engage readers by introducing conflicts and dilemmas that make for compelling stories. When they imagine a new technology they aren’t going to get paid unless they come up with a story in which that new technology poses some kind of threat that the heroes need to overcome. The dark side of these technologies is deliberately emphasized by the authors to create tension and drama in their stories.

    Tech companies, on the other hand, have an entirely different set of considerations. Their goal isn’t just to recreate something from a sci-fi novel for the sake of it; rather, they are motivated by solving real-world problems. They wouldn’t build the Torment Nexus unless they figured that they could sell it to someone, and that they wouldn’t get shut down for doing something society would reject. There are regulatory frameworks around this kind of thing.

    If you look back through older science fiction you can find all sorts of “cautionary tales” against technologies that have turned out to be just fine. “Fahrenheit 451” warned against the proliferation of television entertainment, but there’s been plenty of rich culture developed for that medium. “Brave New World” warned against genetic engineering, but that’s turned out to be a great technology for curing diseases and improving crop yields. The submarine in “20,000 Leagues Under the Sea” was seen as unstoppable and disruptive, but nowadays submersibles have plenty of nonmilitary applications.

    I’d want to know more about what exactly the Torment Nexus is before I automatically assume it’s a bad idea just because some sci-fi writer claimed it was.

    • RegularGoose@sh.itjust.works · 1 point · 1 year ago

      I stopped reading when you said the goal of tech companies is to solve real-world problems. The only goal of tech companies is to create products that will turn a profit. Believing anything else is delusional. That’s kind of why our society is crumbling and the planet is dying.

    • Amaltheamannen@lemmy.ml · 1 point · 1 year ago

      Just because some tech bros can make money from the Torment Nexus doesn’t make it a good idea. Profit is not a great judge of ethics or value.

    • ZephrC@lemm.ee · 0 points · 1 year ago

      On the other other hand, maybe we only understand the dangers of the Torment Nexus and use it responsibly because science fiction authors warned the techy people who are into that subject about how it could go wrong, and the people who grew up reading those books went out of their way to avoid those flaws. The technologies causing severe problems in our society do seem to be disproportionately the ones sci-fi didn’t predict.

      • FaceDeer@kbin.social · 0 points · 1 year ago

        But that’s exactly contrary to my point: a science fiction author isn’t qualified or motivated to give a realistic “understanding” of the Torment Nexus. His skillset is focused on writing stories, and the stories he writes need to contain danger and conflict, so he’s not necessarily going to interpret the idea of the Torment Nexus in a realistic way.

        • wanderingmagus@lemm.ee · 0 points · 1 year ago

          So Isaac Asimov, Arthur C. Clarke, and Robert A. Heinlein aren’t qualified to give understandings of the technologies they wrote about?

          • FaceDeer@kbin.social · 0 points · 1 year ago

            Nope. Isaac Asimov was a biochemist; why would he be particularly qualified to determine whether robots are safe? Arthur C. Clarke had a bachelor’s degree in mathematics and physics; which technology was he an expert in? Heinlein earned the equivalent of a bachelor of arts in engineering from the US Naval Academy, which is the closest yet to an “understanding of technology.” Which technologies did he write about?

            • psud@aussie.zone · 1 point · 1 year ago

              That was a list of authors who were pretty good at getting the science in their sci-fi right. They talked to scientists working in the fields they wrote about. They wrote “hard” sci-fi.

              You cannot judge their competence by their formal education.

              • FaceDeer@kbin.social · 1 point · 1 year ago

                Well, I also am “pretty good” at getting the science right when I write sci fi. Makes me just as qualified as them, I guess.

                The problem remains that the overriding goal of a sci fi author remains selling sci fi books, which requires telling a gripping story. It’s much easier to tell a gripping story when something has gone wrong and the heroes are faced with the fallout, rather than a story in which everything’s going fine and the revolutionary new tech doesn’t have any hidden downsides to cause them difficulties. Even when you’re writing “hard” science fiction you need to do that.

                And frankly, much of Asimov, Clarke and Heinlein’s output was very far from being “hard” science fiction.