I’ve started to realize that every social media platform, including Facebook, Telegram, and Twitter, has issues with bot spam and fake follower accounts. These platforms typically combat the problem with measures such as ban waves, behavior detection, and more.
What strategies/tools did Lemmy employ to address bots, and what additional measures could further improve these efforts?
Currently, it’s mostly manual removals, which isn’t sustainable if the platform grows. Various instances are experimenting with their own moderation tools outside of Lemmy, and I don’t think Lemmy itself has any features to combat this. Moderation improvements are something that’s been talked about with Sublinks.
Having an ‘automod’, similar to but more advanced than Reddit’s, would help a lot as a first step. No one likes excessive use of automod, but not having one at all would be much worse. An improved automod system, shipped with guides and tips on how to use it effectively, would go a long way toward making moderation easier.
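To make the idea concrete, here’s a minimal sketch of what an automod-style rule engine could look like, loosely modeled on Reddit’s AutoModerator. Everything here is hypothetical: the rule fields, action names, and post shape are invented for illustration, since Lemmy has no such built-in system today.

```python
import re
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    body_regex: str                # pattern matched against the post body
    new_accounts_only_days: int = 0  # if > 0, rule only hits accounts younger than this
    action: str = "report"         # e.g. "report" or "remove" (hypothetical actions)

def apply_rules(post: dict, rules: list[Rule]) -> list[tuple[str, str]]:
    """Return (rule name, action) pairs triggered by a post.

    `post` is a plain dict like {"body": "...", "account_age_days": 3}.
    """
    actions = []
    for rule in rules:
        # Skip account-age-restricted rules for established accounts.
        if rule.new_accounts_only_days and \
           post.get("account_age_days", 0) >= rule.new_accounts_only_days:
            continue
        if re.search(rule.body_regex, post.get("body", ""), re.IGNORECASE):
            actions.append((rule.name, rule.action))
    return actions

rules = [
    Rule("crypto-spam", r"free\s+crypto", new_accounts_only_days=7, action="remove"),
    Rule("link-shortener", r"bit\.ly/", action="report"),
]

spam = {"body": "Claim your FREE crypto now!", "account_age_days": 2}
print(apply_rules(spam, rules))  # -> [('crypto-spam', 'remove')]
```

The point of the account-age condition is that stricter rules can target only brand-new accounts (where most bot spam comes from) while leaving established users alone, which is one way to keep automod from being overused.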