• HobbitFoot @thelemmy.club · 1 month ago

    I feel like self-driving cars are going to end up being the vanguard of deciding this, and I basically see it mirroring human liability, with a high standard where gross negligence becomes criminal. If a self-driving car can be proven to be safer than a sober human, it will serve the public interest to allow them to operate.

  • palordrolap@fedia.io · 1 month ago

    This has already been tried in at least one court.

    There was that story a while back about the guy who was told by an airline’s AI help-desk bot that he would get a ticket refund if it turned out he was unable to fly, only for the airline to say they had no such policy when he came to claim.

    He had screenshots and said he wouldn’t have bought the tickets in the first place if he had been told the correct policy. The AI basically hallucinated a policy, and the airline was ultimately found liable. Guy got his refund.

    And the airline took down the bot.

      • palordrolap@fedia.io · 1 month ago

        Interesting. A quick search around finds someone confusing a bot into selling them a Chevy Tahoe for $1 at the end of last year.

        Can’t tell whether that one went to court. I can see an argument that a reasonable person ought to have realised something was wrong with the bot or the deal, especially since they deliberately confused the bot; that makes a strong case in favour of the dealership.

        Now, if they’d haggled it down to half price without being quite so obvious, that might have made an interesting court case.

  • superkret@feddit.org · 1 month ago (edited)

    I don’t see any legal issue here.
    When a person or a company publishes software that causes harm or damages, that person or company is fully liable and legally responsible.

    Whether they themselves understand what the software does is completely irrelevant. If they don’t have control over its output, they shouldn’t have published it.

  • BMTea@lemmy.world · 1 month ago

    When it comes to deadly “mistakes” in a military context, there should be strong laws preventing any “appeal to AI fuckery”, so that militaries don’t get comfortable making such “mistakes.”

  • schizo@forum.uncomfortable.business · 1 month ago

    I suspect that it’s going to go the same route as the ‘acting on behalf of a company’ bit.

    If I call Walmart, and the guy on the phone tells me that to deal with my COVID infection I should drink half a gallon of bleach, and I then drink half a gallon of bleach, they’re absolutely going to be found liable.

    If I chat with a bot on Walmart’s site and it tells me the same thing, I’d find it shockingly hard to believe a jury’s decision would be any different.

    It’s probably even more complicated in that, while a human has free will (such as it is), the bot is only going to craft its response from the data it’s trained on. So if it goes off the rails and starts spouting dangerous nonsense, it’s probably an even EASIER case, because that means someone trained the bot that drinking bleach is a cure for COVID.

    I’m pretty sure our legal frameworks will survive stupid AI, because it’s already designed to deal with stupid humans.

    • Letstakealook@lemm.ee · 1 month ago

      Would a court find Walmart liable for your decision to take medical advice from a random employee? I’m sure Walmart could demonstrate that the employee was not acting in the capacity of their role, and that any reasonable person would not consider drinking bleach just because an unqualified Walmart employee told them to.

      • schizo@forum.uncomfortable.business · 1 month ago

        I changed the company name before posting and lost the clarity, sorry.

        Imagine I wasn’t an idiot and had said the Walmart pharmacy, which is somewhere you’d expect that kind of advice.

        • Letstakealook@lemm.ee · 1 month ago

          That would make it more plausible. I don’t think you’re an idiot; I was asking because I was curious whether there’s precedent for a jackass, conspiracy-minded employee handing out medical advice creating liability for a business. I wouldn’t think it should, but I also don’t agree with other legal standards, lol.