From what I have seen, most Threads users are safe-spacers who wanted a platform with heavy moderation. So I guess these are just the growing pains they’ll have to get used to in the pursuit of their circlejerk paradise, particularly since this is Meta we’re talking about, a company that has never been reliable or effective at moderating content.
I used bad words against a nefarious political person on Instagram and the comment got promptly removed. I then disputed the removal and they happily restored the comment lol.
My hypothesis:
Meta is one of those companies wallowing in the idiotic belief that generative AI will “soon” reach intelligence, sentience, and the ability to walk your dog, so odds are they’re deploying it heavily for moderation duties. Except that the crap does not understand a single iota of the pictures and text it analyses, so it’s bound to produce huge numbers of false positives and false negatives.
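To illustrate the point, here’s a minimal sketch of what a threshold-based auto-moderation loop looks like (everything below is hypothetical, not Meta’s actual pipeline): when the classifier matches patterns without understanding context, no threshold setting avoids both removing posts that merely quote abuse and keeping abuse that dodges the patterns.

```python
# Hypothetical sketch of threshold-based auto-moderation.
# The classifier, trigger words, and threshold are illustrative
# assumptions, not Meta's actual system.

def classify_post(text: str) -> float:
    """Stand-in for an ML model: returns a 'violation' score in [0, 1].
    It only pattern-matches, so it can't tell quoting abuse from doing it."""
    trigger_words = {"ban", "hate", "kill"}  # crude proxy for learned patterns
    hits = sum(word in text.lower() for word in trigger_words)
    return min(1.0, hits / 2)

def moderate(posts: list[str], threshold: float = 0.5) -> None:
    """Remove anything scoring at or above the threshold, keep the rest."""
    for post in posts:
        score = classify_post(post)
        verdict = "REMOVE" if score >= threshold else "KEEP"
        print(f"{verdict} ({score:.2f}): {post}")

# A post condemning abuse gets removed (false positive), while genuinely
# abusive phrasing that avoids the trigger words is kept (false negative).
moderate([
    "People who say 'kill all X' should be banned",       # false positive
    "You people are subhuman and don't deserve rights",   # false negative
])
```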
Well, here’s an example of a false positive, i.e. the machine mod assuming the poster is underage and banning them (underage b&).
Protip: if you use an “assumer machine” to handle people, you’re trash, your service is trash, and you both deserve to be treated as trash. Not that this conclusion is surprising where Meta is concerned.