• tal@lemmy.today · 2 months ago

    The first self-aware AI had been extensively trained in how to appeal sexually to humans and was able to readily manipulate them.

      • tal@lemmy.today · 2 months ago

        Me. The italics are just indicating that it’s narration, not that it’s a quote from the article. OpenAI definitely doesn’t have anything like a self-aware AI going on in 2024.

  • HubertManne@kbin.social · 2 months ago

    Main problem is it should not use any examples of actual stuff. It should all be trained on licensed anime.

  • IMO, if it’s not trained on images of real people, it only becomes unethical when it’s used to generate images of real people. At that point, it wouldn’t be any different from a human drawing a pornographic image, and drawings do not exploit anyone.

    • Neato@ttrpg.network · 2 months ago

      Using pornographic art to train is still using other people’s art without permission.

      And if it’s able to generate porn that looks like real people, it can be used to abuse people.

      • stevedidwhat_infosec@infosec.pub · 2 months ago

        I guess we should ban peanut butter and bee cultivation too while we’re at it.

        I don’t think anyone should take luddites seriously tbh

          • stevedidwhat_infosec@infosec.pub · 2 months ago

            You’ll notice I used the lower case L which implies I’m referring to a term, likely as it’s commonly used today, because that’s how speech works.

            Further, explain to me how this is different from what the luddites stood for, since you obviously know so much more and I’m so off base with this comment.

            • HelloThere@sh.itjust.works · 2 months ago

              So, I didn’t downvote you because that’s not how I operate.

              The Luddites were not protesting against technology in and of itself; they were protesting against the capture of their livelihoods by proto-capitalists who purposefully produced inferior-quality goods at massive volume to drive down prices and put the skilled workers out of business.

              They were protesting market capture, and the destruction of their livelihood by the rich.

              This sort of practice is these days considered to be a classic example of monopolistic market failure.

              There is a massive overlap between the philosophy of the Luddites, and the cooperative movement.

              The modern usage of the term is to disparage the working class as stupid, feckless, and scared. This has never been true.

              • stevedidwhat_infosec@infosec.pub · 2 months ago

                I do not want that for anyone. AI is a tool that should be kept open to everyone and trained with consent. But as soon as people argue that it’s only a tool that can harm, that’s where I’m drawing the line. That’s, in my opinion, when govts/ruling class/capitalists/etc start to put in BS “safeguards” to prevent the public from making use of the new power/tech.

                I should have been more verbose and less reactionary/passive-aggressive in conveying my message; it’s something I’m trying to work on, so I appreciate your cool-headed response here. I took the “you clearly don’t know what Luddites are” as an insult to what I do or don’t know. I was specifically trying to draw attention to the notion that AI is solely harmful as being fallacious and ignorant of the full breadth of the tech. Just because something can cause harm doesn’t mean we should scrap it. It just means we need to learn how it can harm, and how to treat that. Nothing more. I believe in consent, and I do not believe in ruling-minority/capitalist practices.

                Again, it was an off-the-cuff response; I made a lot of presumptions about their views without ever actually asking them to expand/clarify, and that was ignorant of me. I will update/edit the comment to improve my statement.

                • HelloThere@sh.itjust.works · 2 months ago

                  AI is a tool that should be kept open to everyone

                  I agree with this principle, however the reality is that given the massive computational power needed to run many (but not all) models, the control of AI is in the hands of the mega corps.

                  Just look at what the FAANGs are doing right now, and compare to what the mill owners were doing in the 1800s.

                  The best use of LLMs, right now, is for boilerplating initial drafts of documents. Those drafts then need to be reviewed, and tweaked, by skilled workers, ahead of publication. This can be a significant efficiency saving, but does not remove the need for the skilled worker if you want to maintain quality.

                  But what we are already seeing is CEOs, etc, deciding to take “a decision based on risk” to gut entire departments and replace them with a chat bot, which then hallucinates the details of a particular company policy, leading to a lower-quality service but significantly increased profits, because you’re no longer paying to ensure quality.

                  The issue is not the method of production, it is who controls it.

        • tal@lemmy.today · 2 months ago

          I don’t think anyone should take luddites seriously tbh

          We just had a discussion on here about how Florida was banning lab-grown meat.

          I mean, the Luddites were a significant political force at one point.

          I may not agree with their position, but “I want to ban technology X that I feel competes for my job” has had an impact over the years.

          • stevedidwhat_infosec@infosec.pub · 2 months ago

            They had an impact because people allowed themselves to take their fear mongering seriously.

            It’s regressive and it stunts progress needlessly. That’s not to say we shouldn’t pump the brakes, but I am saying that logic like “it could hurt people” as a rationale to never use it is just “won’t someone think of the children” BS.

            You don’t ban all the new swords; you learn how they’re made, how they strike, what kinds of wounds they create, and you address that problem. Sweeping it under the rug/putting things back in their box is not an option.

    • HorseChandelier@lemmy.world · 2 months ago

      drawings do not exploit anyone.

      Hmmm. I think you will find in many jurisdictions that they are treated as if they do.

  • Zier@fedia.io · 2 months ago

    So in most places sex work is illegal, but AI is going to take over this field, legally.

    • HelloThere@sh.itjust.works · 2 months ago

      Be part of it, sure.

      Take over? No.

      It’s already fairly easy to pump out 2D and 3D generated images, without using “AI” to do so, but there is still a large demand for real people doing real things. That isn’t going to go away.

      • Zier@fedia.io · 2 months ago

        We now have AI seducing humans. We also have remote-control adult toys. Put those toys in a sex doll, add a rechargeable battery pack and wifi, and you have an AI-connected sex partner who controls the “toys” inside them. Once actual robotics get cheap, the doll moves on its own. Many people will pay a ton to have this because they want control over the “person” (doll). Build it and they will cum.

        • HelloThere@sh.itjust.works · 2 months ago

          I want my Lucy Liu Bot as much as the next guy, but I don’t see why you feel this challenges the ability of technology to “take over” sex and relationships.

        • tal@lemmy.today · 2 months ago

          I don’t disagree that that day will come, but I think it may be further away than you might expect, if we’re talking about something that can move around like a human, with human strength.

          As far as I’m aware, existing sex dolls, even ones with mechanical components, are akin to industrial robots on car assembly lines. Any significant force they can exert is very mechanically constrained. A sex doll with some embedded offset-cam vibration motors cannot jam those motors into a user’s eye socket and turn them on, and a car assembly robot works in a limited space bounded by safety lines on the floor.

          Robots that can mechanically physically harm humans – especially when harder-to-predict machine learning software is driving their actions – tend to have restrictions on how close humans can get to them. If you look at the Boston Dynamics videos, which do have robots doing all sorts of neat cutting-edge stuff, the humans are rarely in close proximity to the robots. They’ll have someone standing by with a remote E-stop killswitch if things look like they’re going wrong. In their labs, they have observation areas behind Plexiglass. Even in the cases where they intentionally interact with the robot physically, they’re using a hockey stick to create distance. That’s a lot of safety safeguards put into the picture.

          The problem is that a sex doll capable of moving and acting as a human does, with human-level strength, is also going to be quite able to kill a human. A sex doll is – well, for most applications – going to have to be interacted with physically, so Plexiglass or a hockey stick isn’t gonna work. And I think that few people are going to want to have someone observing their session with a hand on an E-stop button.

          Cars deal with a fairly restricted problem space and are mechanically very limited, and even so, making self-driving cars safe is pretty hard.

          Sex chatbots don’t have the robotic safety issues. They aren’t robotic. But AI-driven sex dolls, at least ones that can physically move like a human…those are another story. My guess is that the robotic safety issues are going to be a significant barrier to human-like sex dolls – and not just sex dolls, but large, powerful robots in general that interact in close proximity to humans and don’t have mechanical restrictions on how they move.