• Lojcs@lemm.ee
      5 hours ago

      Except a computer isn’t making decisions here? An investigator is making the decisions; the computer is just sorting the investigatees. Even if that weren’t the case, it wouldn’t be ambiguous who to blame: it clearly falls on whoever decided what goes into the algorithm and how it should work.

      • vonxylofon@lemmy.world
        55 minutes ago

        With a clear set of criteria, you can easily argue that the designer of the system is culpable, because they put discriminatory criteria into it. I’m with you there.

        However, with AI, unforeseen discriminatory behaviour may easily emerge, in which case I would argue that, for the purposes of calling decisions discriminatory, it is indistinguishable in practice whether the computer is purely evaluating criteria or making a decision on its own.

        The same happens e.g. when discovering new proteins using AI. The AI comes up with a protein, you confirm it’s better than the previous one, victory. There may be an even better one, but that’s not really a concern there. The same can’t be said when targeting a group of people with repressive measures.