• WoodScientist@lemmy.world

    People, and especially journalists, need to get this idea of robots as perfectly logical computer code out of their heads. These aren’t Asimov’s robots we’re dealing with. Journalists still cling to the idea that all computers are hard-coded. You still sometimes see people navel-gazing on self-driving cars, working the trolley problem. “Should a car veer into oncoming traffic to avoid hitting a child crossing the road?” The authors imagine that the creators of these machines hand-code every scenario, like a long series of if statements.

    But that’s just not how these things are made. They are not programmed; they are trained. In the case of self-driving cars, they are given a huge amount of video footage and radar records, along with the driver inputs made in response to those conditions. The AI is then trained to map the camera and radar inputs to whatever the human drivers did.
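
    In code, that training step is essentially supervised learning, often called behavioral cloning. A minimal toy sketch (the model, shapes, and data below are made up purely for illustration; a real stack is vastly more complicated):

    import torch
    import torch.nn as nn

    # Fake dataset: 1000 "sensor snapshots" (flattened camera + radar features)
    # paired with the human driver's [steering, throttle, brake] at that moment.
    sensor_frames = torch.randn(1000, 512)
    driver_actions = torch.randn(1000, 3)

    model = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 3))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(10):
        optimizer.zero_grad()
        predicted = model(sensor_frames)           # what the model would do
        loss = loss_fn(predicted, driver_actions)  # how far from the human driver
        loss.backward()
        optimizer.step()

    Notice that nothing in that loop encodes a traffic law. The model only learns whatever the drivers in the footage actually did.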

    This behavior isn’t at all surprising. Self-driving cars, like any similar AI system, are not hard-coded, coldly logical machines. They are trained on us, on our responses, and they exhibit all of the mistakes and errors we make. The reason Waymo cars don’t stop at crosswalks is that human drivers don’t stop at crosswalks. The machine is simply copying us.

    • snooggums@lemmy.world

      The machine can still be trained to actually stop at crosswalks the same way it is trained to not collide with other cars even though people do that.

    • Wolf314159@startrek.website

      Whether you call it programming or training, the designers still designed a car that doesn’t obey traffic laws.

      People need to get it out of their heads that AI is some kind of magical monkey-see-monkey-do. AI isn’t magic, it’s just a statistical model. Garbage in = garbage out. If the machine fails because it’s only copying us, that’s not the machine’s fault, not AI’s fault, not our fault; it’s the programmer’s fault. It’s fundamentally no different than if they had designed a complicated set of logical rules to follow. Training a statistical model is programming.

      Your whole “explanation” sounds like a tech-bro capitalist news conference sound bite released by a corporation to avoid guilt for running down a child in a crosswalk.

      • WoodScientist@lemmy.world

        It’s not apologia. It’s illustrating the foundational limits of the technology, and it’s why I’m skeptical of most machine learning systems. You’re right that it’s a statistical model. But what people miss is that these models are black boxes. That is the crucial distinction between programming and training that I’m trying to get at. Imagine being handed a 10 million x 10 million matrix of real numbers and being told, “here, change this so it always stops at crosswalks.” It isn’t just some line of code that can be edited.
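
        To make that concrete, here’s a toy illustration (the “policy” below is a made-up stand-in, not anything Waymo actually uses): once trained, the whole “program” is just tensors of numbers, with no line anywhere that a programmer could point to and edit.

        import torch.nn as nn

        # A toy "driving policy": after training, its behavior lives entirely
        # in these weight tensors, not in editable if/else rules.
        policy = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 3))

        for name, tensor in policy.named_parameters():
            print(name, tuple(tensor.shape), tensor.numel(), "numbers")

        # There is no parameter called "stop_at_crosswalk"; changing the behavior
        # means retraining on different data, not editing a line of code.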

        The distinction between training and programming is absolutely critical here. You cannot hand-wave away that distinction. These models are trained the way we train animals; they aren’t taught through hard-coded rules.

        And that is a fundamental limit of the technology. We don’t know how to program a computer to drive a car. We only know how to make a computer mimic human driving behavior. That means the computer can ultimately never perform much better than an attentive, sober human with somewhat better reaction time and visibility. And any common errors that human drivers frequently make will be duplicated in the machine.

        • Wolf314159@startrek.website

          It’s obvious now that you literally don’t have any idea how programming or machine learning works, thus you think no one else does either. It is absolutely not some “black box” where the magic happens. That attitude (combined with your oddly misplaced condescension) is toxic and honestly kind of offensive. You can’t hand-wave away responsibility like this when doing any kind of engineering. That’s like first-day ethics-101 shit.

    • tiramichu@lemm.ee

      I think the reason non-tech people find this so difficult to comprehend is the poor understanding of what problems are easy for (classically programmed) computers to solve versus ones that are hard.

      if ( person_at_crossing ) then { stop }
      

      To the layperson it makes sense that self-driving cars should be programmed this way. After all, this is a trivial problem for a human to solve: just look, and if there is a person, you stop. Easy peasy.

      But for a computer, how do you know? What is a ‘person’? What is a ‘crossing’? How do we know if the person is ‘at/on’ the crossing as opposed to simply near it or passing by?
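
      Even as a sketch, the “real” version of that one-liner ends up looking more like this (everything below is hypothetical and hugely simplified; the point is that the boolean has to be inferred from noisy, learned detections, and every threshold is a judgment call):

      from dataclasses import dataclass

      @dataclass
      class Detection:
          label: str                      # "pedestrian", "cyclist", "unknown", ...
          confidence: float               # the detector is never 100% sure
          distance_to_crosswalk_m: float

      def person_at_crossing(detections, min_confidence=0.7, max_distance_m=1.5):
          # Every threshold here is a design decision someone has to make.
          return any(
              d.label == "pedestrian"
              and d.confidence >= min_confidence
              and d.distance_to_crosswalk_m <= max_distance_m
              for d in detections
          )

      print(person_at_crossing([Detection("pedestrian", 0.64, 0.8)]))  # False: not confident enough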

      To me it’s this disconnect between the common understanding of computer capability and the reality that causes the misconception.

      • snooggums@lemmy.world

        But for a computer, how do you know? What is a ‘person’? What is a ‘crossing’? How do we know if the person is ‘at/on’ the crossing as opposed to simply near it or passing by?

        Most crosswalks are marked. The vehicle is able to identify obstructions in the road, and things at the side of the road that are moving toward it, the same way it identifies cross-street traffic.

        If (thing) is crossing the street, then stop. If (thing) is stationary near a marked crosswalk, stop; if it doesn’t move within a reasonable amount of time (some number of seconds), then go.

        You know, the same way people are supposed to handle the same situation.
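
        Written out, the rule I’m describing is something like this (purely illustrative; a real system would need far more cases and tuning):

        def crosswalk_decision(thing_in_road, thing_near_marked_crosswalk,
                               seconds_waited, patience_s=5.0):
            if thing_in_road:
                return "stop"
            if thing_near_marked_crosswalk and seconds_waited < patience_s:
                return "stop"   # give them a chance to start crossing
            return "go"         # they stayed put long enough; proceed

        print(crosswalk_decision(False, True, 2.0))  # "stop"
        print(crosswalk_decision(False, True, 6.0))  # "go"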

        • hissing meerkat@sh.itjust.works

          Most crosswalks in the US are not marked, and in all places I’m familiar with vehicles must stop or yield to pedestrians at unmarked crosswalks.

          At unmarked crosswalks, and at marked but uncontrolled crosswalks, we have to handle the situation using social cues: which direction the pedestrian wants to cross the street/road/highway, and whether they will feel safer crossing after a vehicle has passed than before (almost always true for homeless pedestrians, and frequently true for pedestrians in moderate traffic).

          If Waymo can’t figure out whether something intends to, or is likely to, enter the roadway, it can’t drive a car. That could be people at crosswalks, people crossing at places other than crosswalks, blind pedestrians crossing anywhere, deaf and blind pedestrians crossing even at controlled intersections, kids or wildlife or livestock running toward the road, etc.

          • snooggums@lemmy.world

            Person, dog, cat, rolling cart, bicycle, etc.

            If the car is smart enough to recognize a stationary stop sign, then it should be able to ignore a permanently mounted crosswalk sign or indicator light at a crosswalk, excluding those from the set of things that might move into the street. Or it could just stop and wait a couple of seconds if it isn’t sure.

            • Dragon Rider (drag)@lemmy.nz

              A woman was killed by a self-driving car because she was walking her bicycle across the road. The car hadn’t been programmed to understand what a person walking a bicycle is. Its AI switched between classifying her as a pedestrian, a cyclist, and “unknown”. It couldn’t decide whether to slow down, and then it hit her. The engineers forgot to add a category, and someone died.

              • snooggums@lemmy.world

                It shouldn’t even matter what category things are when they are on the road. If anything larger than gravel is in the road the car should stop.

      • AA5B@lemmy.world

        You can use that logic to say it would be difficult to do the right thing for all cases, but we can start with the ideal case.

        • For a clearly marked crosswalk with a pedestrian in the street, stop
        • For a pedestrian in the street, stop.

    • Noxy@pawb.social

      That all sounds accurate, but what difference does it make how the shit works if the real world results are poor?

    • Justin@lemmy.jlh.name

      It’s telling that Tesla and Google, worth over 3 trillion dollars, haven’t been able to solve these issues.

    • tibi@lemmy.world

      Training self-driving cars that way would be irresponsible, because the car would behave unpredictably and could be really dangerous. In reality, self-driving cars use AI only for some tasks it is really good at, like object recognition (e.g. recognizing traffic signs, pedestrians and other vehicles). The car uses all this data to build a map of its surroundings and tries to predict what the other participants are going to do. Then it decides whether it’s safe to move the vehicle, and the path it should take. All these things can be done algorithmically; AI is only necessary for object recognition.
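
      As a rough sketch of that kind of hybrid pipeline (entirely illustrative; the function names and canned data are made up, not any vendor’s actual code):

      def detect_objects(camera_frame, lidar_scan):
          # This is the machine-learning part (object recognition).
          # Here we just return a canned result for illustration.
          return [{"type": "pedestrian", "position": (2.0, 1.0), "heading": "toward_road"}]

      def predict_paths(objects):
          # Simple algorithmic prediction: assume things keep moving the way they're heading.
          return [{**obj, "will_enter_lane": obj["heading"] == "toward_road"} for obj in objects]

      def plan(predictions):
          # Deterministic, auditable decision logic; no learning involved here.
          if any(p["will_enter_lane"] for p in predictions):
              return "brake"
          return "continue"

      print(plan(predict_paths(detect_objects(None, None))))  # "brake"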

      In cases such as this, just follow the money to find the incentives. Waymo wants to maximize their profits. This means maximizing how many customers they can serve as well as minimizing driving time to save on gas. How do you do that? Program their cars to be a bit more aggressive: don’t stop on yellow, don’t stop at crosswalks except to avoid a collision, drive slightly over the speed limit. And of course, lobby the shit out of every politician to pass laws allowing them to get away with breaking these rules.

  • acargitz@lemmy.ca

    I’m sure a strong legal case can be made here.

    An individual driver breaking the law is bad enough, but the legal system can be “flexible” there, because it’s hard to enforce the law against a generalized (bad) social norm, and each individual law-breaker can argue an individual case, etc.

    But a company systematically breaking the law on purpose is different. Scale here matters. There are no individualized circumstances and no crying at a judge that the fine will put this single mother in a position to not pay rent this month. This is systematic and premeditated. Inexcusable in every way.

    Like, a single cook forgetting to wash hands once after going to the bathroom is gross but a franchise chain building a business model around adding small quantities of poop in food is insupportable.

    • Sauerkraut@discuss.tchncs.de

      I really want to agree, but conservative Florida ruled that people don’t have the right to clean water, so I doubt the conservative Supreme Court will think we have the right to safe crosswalks.

      • acargitz@lemmy.ca

        I am not intimately familiar with your country’s legal conventions, but there is already a law (pedestrians having priority in crosswalks) that is being broken here, right?

  • BastingChemina@slrpnk.net

    The recent Not Just Bikes video about self-driving cars is really good on this subject. Very dystopian.

  • Phoenixz@lemmy.ca

    And again… If I break the law, I get a large fine or go to jail. If companies break the law, they at worst will get a small fine

    Why does this disconnect exist?

    Am I so crazy to demand that companies are not only treated the same, but held to a higher standard? If I don’t stop at a zebra crossing, that is me breaking the law once. Waymo programming their cars not to do that is multiple violations per day, every day. It’s a company deciding they’re above the law because they want more money. It’s a company deciding to risk the lives of others to earn more money.

    For me, all managers and engineers that signed off on this and worked on it should be jailed, the company should be restricted from doing business for a month and required to immediately ensure all laws are followed, or else…

    This is the only way we get companies to follow the rules.

    Instead, though, we just ask companies to treat laws as suggestions, sometimes requiring small payments if they cross the line too far.

    • trolololol@lemmy.world

      Funny that you don’t mention company owners or directors, who are supposed to oversee what happens, are in practice the ones applying the pressure to make it happen, and are the ones liable before the law.

      • Phoenixz@lemmy.ca

        I thought that was obviously implied.

        If the CEO signed off on whatever is illegal, jail him or her too.

    • isolatedscotch@discuss.tchncs.de

      Why does this disconnect exist?

      Because the companies pay the people who make the law.

      Stating the obvious here but it’s the sad truth

    • DavidDoesLemmy@aussie.zone

      Do you have an example of a company getting a smaller fine than an individual for the same crime? Generally company fines are much larger.

  • Skvlp@lemm.ee

    Being an Alphabet subsidiary I wouldn’t expect anything less, really.

  • Kitathalla@lemy.lol

    The reason is likely to compete with Uber, 🤦

    A few points of clarity, as I have a family member who’s pretty high up at Waymo. First, they don’t want to compete with Uber. Waymo isn’t really concerned with driverless cars that you or I would be owning/using, and they don’t want (at this point anyway) to try to start a new taxi service. Right now you order an Uber and a Waymo car might show up. They want the commercial side of the equation. How much would Uber pay to not have to pay drivers? How much would a shipping company fork over when they can jettison their $75k-150k drivers?

    Second, I know for a fact that upper management was pushing for the cars to drive like this. I can nearly quote said family member opining that if the cars followed all the rules of the road, they wouldn’t perform well, couching it in the language of ‘efficiency.’ It was something like, “being polite creates confusion in other drivers. They expect you to roll through the stop sign or turn right ahead of them even if they have right of way.” So now the Waymo cars do the same thing. Yay, “social norms.”

    A third point is that, as someone else mentioned, the cars are now trained, not ‘programmed’ with instructions to follow. Said family member spoke of when they switched to the machine learning model, and it was better than the highly complicated (and I’m dumbing down my description because I can’t describe it well) series of if-else statements. With that training comes the issue of the folks in charge of things not knowing exactly what is going on. An issue that was described to me was their cars driving right at the edge of the lane, rather than in the center of it, and they couldn’t figure out why or (at that point, anyway) how to fix it.

    As an addendum to that third point, the training data is us, quite literally. They get and/or purchase records of people’s driving. I think at one time it was actual video; not sure now. So if 90% of drivers blast through the light at the moment it changes to red when they can, it’s likely you’ll eventually hear about Waymo doing the same. It’s a weakness that ties right into that ‘social norm’ thing. We’re not really training safer driving by having machine drivers; we’re just removing some of the human factors like fatigue or attention deficits. Again, as I get frustrated with the language of said family member (and I’m paraphrasing), ‘how much do we really want to focus on low-percentage occurrences? Improving the miles-per-collision number is best done on the big things.’

    • trolololol@lemmy.world

      Hmmm yeah no surprises there and I like how you articulated it all really well

      On the social norm thing, it’s still a conscious decision how much they invest in teaching their AI to distinguish good from bad behavior. In AI speak, you can totally mark adequate behavior with rewards and bad behavior with penalties, and that shifts the car’s behavior in the right direction. You can’t predict how it will fine-tune specific behaviors like hugging the lane edge unless you’re willing to start from scratch if necessary, but overall that’s how you teach it that running a red light is a big no-no. Penalties, and if those aren’t enough, start over.
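
      For the sake of illustration, reward shaping of that kind can be as blunt as this (the weights are invented; the point is only that unacceptable behavior gets penalties large enough to dominate any “efficiency” gain):

      def reward(progress_m, ran_red_light, blocked_crosswalk, collision):
          r = 0.1 * progress_m        # small reward for making progress
          if blocked_crosswalk:
              r -= 50.0               # "big no-no" behaviors get large penalties
          if ran_red_light:
              r -= 200.0
          if collision:
              r -= 10_000.0
          return r

      print(reward(progress_m=30, ran_red_light=False, blocked_crosswalk=True, collision=False))  # -47.0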

    • astronaut_sloth@mander.xyz

      A third point is that, as someone else mentioned, the cars are now trained, not ‘programmed’ with instructions to follow.

      As an addendum to that third point, the training data is us, quite literally.

      Yeah, that makes sense. I was in SF a few months ago, and I was impressed with how the Waymos drove–not so much the driving quality (which seemed remarkably average) but how lifelike they drove. They still seemed generally safer than the human-driven cars.

      Improving the ‘miles per collision’ is best at the big things.

      Given the nature of reinforcement learning algorithms, this attitude actually works pretty well. Obviously, it’s not perfect, and the company should really program in some guardrails to override the decision algorithm if it makes an egregiously poor decision (like y’know, not stopping at crosswalks for pedestrians) but it’s actually not as bad or ghoulish as it sounds.
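
      The guardrail idea is basically a hard-coded veto wrapped around whatever the learned policy proposes; something like this (hypothetical names, not Waymo’s actual design):

      def guarded_action(learned_action, pedestrian_in_crosswalk_ahead):
          # The learned policy may have picked up bad habits from human drivers;
          # the guardrail overrides it in the cases we refuse to get wrong.
          if pedestrian_in_crosswalk_ahead and learned_action != "stop":
              return "stop"
          return learned_action

      print(guarded_action("continue", pedestrian_in_crosswalk_ahead=True))  # "stop"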

      • Kitathalla@lemy.lol

        but it’s actually not as bad or ghoulish as it sounds

        We’ll have to agree to disagree on that one. I think decisions made solely to keep the company’s costs as low as possible, while actively choosing not to care about issues just because their chance of occurring is low (we’ve all seen Fight Club, right? If A > B, where B = cost of paying out × chance of occurrence and A = cost of a recall, then no recall), are ghoulish, even when the outcomes are devastating.

  • thann@lemmy.dbzer0.com

    How do you admit to intentionally ignoring traffic laws and not get instantly shutdown by the NTSB?

  • Blackout@fedia.io

    Pedestrians have had it too easy long enough. If elected President I will remove the sidewalks and install moats filled with alligators and sharks with loose 2x4s to cross them. Trained snipers will be watching every crosswalk so if you want a shot at making it remember to serpentine. This is Ford™ country.

  • Not_mikey@slrpnk.net

    Speaking as someone who lives and walks in SF daily, they’re still more courteous to pedestrians than human drivers, and I’d be happy if they replaced human drivers in the city. I’d be happier if we got rid of all the cars, but I’ll take getting rid of the psychopaths blowing through intersections.

  • tiredofsametab@fedia.io

    It is an offense in Japan not to stop if someone is waiting to enter the crosswalk (and technically an offense to proceed until they are fully off the entire street, though I’ve had assholes whip around me for not breaking the law). People do get ticketed for it (though not enough, honestly). I wonder what they would do here.

  • Venator@lemmy.nz

    Looks like the Revisionist History podcast might need to revise their episode about Waymo… 😅