Facewatch, a major biometric security company in the UK, is in hot water after its facial recognition system wrongly identified a 19-year-old girl as a shoplifter.

  • afraid_of_zombies@lemmy.world · 1 month ago

    The developers should be looking at jail time, as they falsely accused someone of committing a crime. This should be treated exactly like SWATing someone.

    • Guest_User@lemmy.world · 1 month ago

      I get your point but totally disagree that this is the same as SWATing. People can die from that. While this is bad, she was excluded from stores, not murdered.

      • FiniteBanjo@lemmy.today · 1 month ago

        In the UK, at least, a SWATing would be many, many times more deadly and violent than a normal police interaction. You can’t make the same argument for the USA or Russia, though.

      • cheesepotatoes@lemmy.world · 1 month ago

        You lack imagination. What happens when the system mistakenly identifies someone as a violent offender and they get tackled by a bunch of cops, likely resulting in bodily injury?

          • blusterydayve26@midwest.social · 1 month ago

            That’s not very reassuring; we’re still only one computer bug away from that situation.

            Presumably she wasn’t identified as a violent criminal only because the facial recognition system didn’t associate her double with that particular crime. The system is capable of associating any set of crimes with a face; it’s not like you get a whole new face for each possible crime. So we’re still one computer bug away from seeing that outcome.
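
            To make that concrete, here’s a toy sketch of the kind of face-to-offence association I mean. Every name and record in it is made up, and it isn’t meant to reflect how Facewatch actually stores anything:

            # Hypothetical watchlist: a matched face ID mapped to a set of offence labels.
            watchlist = {
                "face_001": {"shoplifting"},
                "face_002": {"violent offender"},
            }

            def offences_for(matched_face_id):
                # Whatever offences are attached to the matched face come back together;
                # nothing about the face itself distinguishes theft from violence.
                return watchlist.get(matched_face_id, set())

            # One bad write -- a single bug -- changes the outcome entirely:
            watchlist["face_001"].add("violent offender")
            print(offences_for("face_001"))  # {'shoplifting', 'violent offender'}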

          • cheesepotatoes@lemmy.world · 1 month ago

            No, it wouldn’t be. The base circumstance is the same: the software misidentifying a subject. The severity and context will vary from incident to incident, but the root cause is the same - false positives.

            There’s no process in place to prevent something like this from going very, very badly. It’s random chance that this time it was just a false positive for theft. Until there’s some legislative obligation (such as legal liability) forcing the company to create procedures for identifying and reviewing false positives, it’s only a matter of time before someone gets hurt.

            You don’t wait for someone to die before you start implementing safety nets. Or rather, you shouldn’t.

    • ipkpjersi@lemmy.ml · 1 month ago

      I’m not so sure the blame should solely be placed on the developers - unless you’re using that term colloquially.

      • IllNess@infosec.pub · 1 month ago

        The developers were probably the first people to say it wasn’t ready. Blame the salespeople, who will say anything for money.

        • afraid_of_zombies@lemmy.world · 1 month ago

          They worked on it; they knew what could happen. I could face criminal charges if I did certain things at work that harmed the public.

          • IllNess@infosec.pub · 1 month ago

            I have no idea where Facewatch got its software. The developers could’ve been told it would be used to find missing kids. It’s not really fair to blame the developers; blame the people at the top.

        • yetAnotherUser@feddit.de · 1 month ago

          It’s impossible to have a 0% false positive rate; the system will never be ready, and innocent people will always be affected. The only way to get a 0% false positive rate is with the following algorithm:

          def is_shoplifter(face_scan):
          return False
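
          For a sense of why anything less trivial trades false positives for false negatives, here’s a minimal sketch with a made-up similarity score and threshold; none of the numbers or names come from Facewatch:

          THRESHOLD = 0.8  # hypothetical operating point, not any real system's

          def is_shoplifter(similarity_to_watchlist_face):
              # Flag anyone whose scan looks "close enough" to a watchlist entry.
              return similarity_to_watchlist_face >= THRESHOLD

          # Made-up scans of innocent shoppers; one happens to closely resemble
          # someone on the watchlist.
          innocent_scans = [0.12, 0.31, 0.05, 0.47, 0.83, 0.22]

          flagged = [s for s in innocent_scans if is_shoplifter(s)]
          print(len(flagged))  # 1 false positive: the 0.83 lookalike gets stopped

          # The only threshold that never flags an innocent person is one that
          # never flags anyone at all, i.e. the "return False" version above.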

          • Zagorath@aussie.zone · 1 month ago

            line 2
                return False
                ^^^^^^
            IndentationError: expected an indented block after function definition on line 1
            
            • yetAnotherUser@feddit.de · 1 month ago

              Weird, for me the indentation renders correctly. Maybe because I used Jerboa and single ticks instead of triple ticks?

              • Zagorath@aussie.zone · 1 month ago

                Interesting. This is certainly not the first time there have been markdown parsing inconsistencies between clients on Lemmy, the most obvious example being subscript and superscript, especially when ~multiple words~ ^get used^ or you use ^reddit ^style ^(superscript text).

                But yeah, checking just now on Jerboa, you’re right: it does display correctly the way you did it. I first saw it on the web in lemmy-ui, which doesn’t display it properly unless you use triple backticks.