A team of researchers from prominent universities – including SUNY Buffalo, Iowa State, UNC Charlotte, and Purdue – turned an autonomous vehicle (AV) running the open-source Apollo driving platform from Chinese web giant Baidu into a deadly weapon by tricking its multi-sensor fusion system, and suggests the attack could be applied to other self-driving cars.
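
The core trick is defeating the cross-checking that sensor fusion provides. As a rough illustration only (a minimal Python sketch; the `fused_obstacle_distance` function, its threshold, and the numbers are hypothetical, not Apollo’s actual code), a fusion layer that distrusts disagreeing sensors catches a single-sensor spoof, so an attacker has to fool every fused sensor consistently:

```python
from dataclasses import dataclass


@dataclass
class Detection:
    """An obstacle distance estimate (in meters) from one sensor."""
    sensor: str
    distance_m: float


def fused_obstacle_distance(detections: list[Detection],
                            max_disagreement_m: float = 2.0) -> float | None:
    """Trust a reading only if the sensors roughly agree with each other."""
    distances = [d.distance_m for d in detections]
    if max(distances) - min(distances) > max_disagreement_m:
        return None  # sensors disagree: reject the reading
    return sum(distances) / len(distances)


# A lidar-only spoof is caught by the cross-check...
print(fused_obstacle_distance([Detection("lidar", 80.0),
                               Detection("camera", 15.0)]))  # -> None

# ...but a coordinated spoof that tells every sensor the same false story
# sails right through, which is the kind of attack described above.
print(fused_obstacle_distance([Detection("lidar", 80.0),
                               Detection("camera", 80.0)]))  # -> 80.0
```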

    • Jesus@lemmy.world · 5 months ago

      Wait until you see what my uncle Jerry can do with a 5th of vodka and his Highlander.

  • EvilBit@lemmy.world · 5 months ago

    https://xkcd.com/1958/

    TL;DR: faking out a self-driving system is always going to be possible, and so is faking out humans. But doing so is basically attempted murder, which is why the existence of an exploit like this is neither interesting nor new. You could also cut the brake lines or rig a bomb to it.

    • Beryl@lemmy.world · 5 months ago

      You don’t even have to rig a bomb, a better analogy to the sensor spoofing would be to just shine a sufficiently bright light in the driver’s eyes from the opposite side of the road. Things will go sideways real quick.

      • EvilBit@lemmy.world · 5 months ago

        It’s not meant to be a perfect example. It’s a comparable principle. Subverting the self-driving like that is more or less equivalent to any other means of attempting to kill someone with their car.

        • Beryl@lemmy.world · 5 months ago

          I don’t disagree; I’m simply trying to present a somewhat less extreme (and therefore, I think, more appealing) version of your argument.

      • Fedizen@lemmy.world · 5 months ago

        I think human responses vary too much: could you follow a strategy that reliably makes 50% of human drivers crash? Probably. Could you follow a strategy that reliably makes 100% of autonomous vehicles crash? Almost certainly.

      • Eggyhead@kbin.run · 5 months ago

        I was so close to finishing, too. Time to look for another doomsday thread, I guess.

    • Uriel238 [all pronouns]@lemmy.blahaj.zone · 5 months ago

      More exciting would be an exploit that immobilizes a car and renders it useless. But exploits like this absolutely will be used in cases where tire-slashing might be used, such as harassing genocidal VIPs or disrupting police services, especially if it’s difficult to trace the drone to its controller.

  • FiveMacs@lemmy.ca · 5 months ago

    And people are clamoring for Chinese EVs… people don’t realize it’s not a car anymore, but a computer.

  • Infynis@midwest.social · 5 months ago

    This is the real reason Elon Musk doesn’t want people tracking his plane. If we knew where he was, Wile E. Coyote could catch up to him and trick his car into crashing into a brick wall by painting a tunnel on it.