My hypothesis:
Meta is one of those companies wallowing in the idiotic belief that generative AI will “soon” reach intelligence and sentience and the ability to walk your dog, so odds are that it’s deploying it heavily for moderation duties. Except that the crap does not understand a single iota of the pictures and text that it analyses, so it’s bound to produce huge amounts of false positives and false negatives.
Well, here’s an example of a false positive, i.e. the machine mod assuming that the poster is underage b&.
Protip: if you use an “assumer machine” to handle people, you’re trash, your service is trash, and you both deserve to be treated as trash. Not that this conclusion is surprising where Meta is concerned.