The aircraft flew at speeds of up to 1,200 mph. DARPA did not reveal which aircraft won the dogfight.

  • antidote101@lemmy.world
    7 months ago

    Well, that’s all very idealistic, but it’s likely not going to happen.

    Israel already used AI to pick bombing sites; those bombs and missiles would have been programmed with altitudes and destinations (armed), then dropped. The pilot’s only job these days is to avoid interception, fly over the bombing locations, tag the target once acquired, and drop. Most of this is already done in software.

    Eventually humans will leave the loop because unlike self-driving cars, these technologies won’t risk the lives of the aggressor’s citizens.

    If the technology is seen as unstoppable enough, there may be calls for warnings to be given, but I suspect that’s all the mercy that will be shown…

    … especially if it’s a case of a country with automated technologies attacking one without them, or with stochastically meaningless defenses (e.g. defenses that modelling and simulations show won’t be able to prevent such attacks).

    No, in all likelihood the US will tell the country the attack sites, the country either will or won’t have the technical capability to prevent some amount of damage, it will evacuate all necessary personnel, and whoever doesn’t get the message or get out in time will be automatically killed.

    Where defenses are partially successful, that information will go into the training data for the next model, or upgrade, and the war machine will roll on.

    • KeenFlame@feddit.nu
      7 months ago

      You described scenarios where a human was involved at several stages of the killing, so it’s no wonder those don’t hold up.

    • KeenFlame@feddit.nu
      7 months ago

      Sorry, I was stressed when replying. Yeah, in those cases humans have pulled the trigger, at several stages.

      When you arm a murder-bot ship and send it to erase an island of life, you then lose control. That person is not pulling loads and loads of triggers. The triggers are pulled automatically, by a machine making the decision to end those lives.

      And that is a danger, same as with engineered biological warfare. It just cannot be let out of the box at all, or we may all die extremely quickly.

      • antidote101@lemmy.world
        7 months ago

        I imagine there would be overrides built in. Until the atom bombs were physically dropped a simple radio message could have called off the mission.

        Likewise, the atom bombs were only armed/activated at a certain point during the flights to Nagasaki and Hiroshima… And I believe Nagasaki wasn’t even the original target; it became the target because the city originally scheduled for bombing was clouded over that day.

        So we do build in contingencies and overrides.

        • KeenFlame@feddit.nu
          7 months ago

          The entire point of automating the killing is that there is no dead man’s switch or any other human interaction involved in the kill. It’s moot if there is one. Call-offs, dead-switch back doors, and safety contingencies are not a solution to rampant unwanted slaughter: they can fail in so many ways, and by the time wars escalate to the point where they need to be used, it’s too late, because there are five different strains of murder bots and you can only stop the ones you have the codes to, and those codes are only given to, like, three people at top-secret level 28.

          • antidote101@lemmy.world
            7 months ago

            The entire point of automating the killing is that there is no dead man’s switch or any other human interaction involved in the kill.

            Of course someone has to set the mission, jackass. You’re so stupid. What’s your issue?

            It’s moot if there is one. Call-offs, dead-switch back doors, and safety contingencies are not a solution to rampant unwanted slaughter: they can fail in so many ways, and by the time wars escalate to the point where they need to be used, it’s too late, because there are five different strains of murder bots and you can only stop the ones you have the codes to, and those codes are only given to, like, three people at top-secret level 28.

            You really have no idea how technology is developed. You probably think tanks, guns, and nuclear weapons were just made as end products… just designed from scratch and popped into existence one day. No testing, no stages of refinement, no generational changes in protocol… No, in your idiotic mind end products just pop out fully formed.

            This is why I told you I wouldn’t entertain your abstractions - because they’re idiotic. It’s just mental vomit from a moron. Bye.

            • KeenFlame@feddit.nu
              7 months ago

              Extremely childish to resort to personal attacks over me sharing my opinion.

              Good luck with that kind of graceful life lol bye man, if you ever grow up we can continue discussing haha