I’ve noticed that Lemmy so far doesn’t have many fake accounts from bots or much AI slop, at least from what I can tell. I’m wondering how the heck we keep this community free of that kind of thing as continuous waves of redditors land here and the platform grows.

EDIT: a potential solution:

I have an idea: people could flag a post or a user as a bot, and if it’s confirmed to be one, moderators would have a tool that essentially shadow-bans it into an inbox that just gets dumped occasionally. My thinking is that the people creating the bots might not realize their bot has been banned, and so wouldn’t create replacement bots. This could effectively reduce the number of bots without bot creators knowing whether their bots have been blocked. The one other thing needed would be a way to request an un-ban for accounts hit as false positives. All of this would have to be built into Lemmy’s moderation tools, and I don’t know whether any of it exists currently.
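To make the idea concrete, here’s a minimal sketch of what such a quarantine could look like. Everything here (the `ShadowQueue` class, its field names, the `appeal` method) is hypothetical and not part of Lemmy’s actual moderation tools:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    body: str

@dataclass
class ShadowQueue:
    """Posts from accounts flagged as bots are silently diverted into a
    quarantine inbox instead of being published; mods dump it occasionally."""
    flagged: set = field(default_factory=set)       # accounts flagged as bots
    quarantine: list = field(default_factory=list)  # diverted posts awaiting review

    def submit(self, post: Post, published: list) -> None:
        # Flagged accounts' posts go to quarantine; everyone else publishes normally.
        if post.author in self.flagged:
            self.quarantine.append(post)
        else:
            published.append(post)

    def appeal(self, account: str) -> None:
        # False-positive path: unflag the account so future posts publish.
        self.flagged.discard(account)
```

The key property is that `submit` never signals failure to the caller, so the bot operator sees nothing different from their side.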

  • Admiral Patrick@dubvee.org · +51/−1 · edited · 6 days ago

    My instance has “Rule 3: No AI Slop. This is a platform for humans to interact” and it’s enforced pretty vigorously.

    As far as “how”:

    1. Sometimes it’s obvious. In those cases, the posts are removed and the account behind them investigated. If the account has a pattern of it, they get a one-way ticket to Ban City.

    2. Sometimes they’re not obvious, but the account owner will slip up and admit to it in another post. I’ve found a handful that way, and, you guessed it, straight to Ban City.

    3. Sometimes it’s difficult at the individual-post level unless there are telltale signs. Typically I have to look for patterns across different posts by the same account and account for writing styles. This is more difficult / time consuming, but I’ve caught a few this way (and let some slide that were likely AI-generated but not close enough to the threshold to ban).

    4. I hate the consumer AI crap (it has its place, but in every consumer product is not one of them), but sometimes if I’m desperate, I’ll try to get one of them to generate a similar post as one I’m evaluating. If it comes back very close, I’ll assume the post I’m evaluating was AI-generated and remove it while looking at other content by that user, changing their account status to Nina Ban Horn if appropriate.

    5. If an account has a high frequency of posts that seems inorganic, the Eye of Sauron will be upon them.

    6. User reports are extremely helpful as well.

    7. I’ve even banned accounts that post legit news articles but use AI to summarize the article in the post body; that violates Rule 3 (no AI slop) and Rule 6 (misinformation), since AI has no place near the news.

    If you haven’t noticed, this process is quite tedious and absolutely cannot scale with a small team. My suggestion: if something seems AI-generated, do the legwork yourself (as described above) and report it; be as descriptive in the report as possible to save the mod/admin quite a bit of work.
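    The comparison in step 4 can be roughed out with a simple text-similarity score. This sketch uses Python’s stdlib `difflib` as a stand-in for whatever comparison a mod would actually do by eye; the function name and the 0.75 threshold are made up for illustration:

```python
from difflib import SequenceMatcher

def looks_regenerated(suspect: str, regenerated: str, threshold: float = 0.75) -> bool:
    """Compare a suspect post against a freshly AI-generated post on the
    same topic; very high similarity is one (weak) signal of AI slop."""
    ratio = SequenceMatcher(None, suspect.lower(), regenerated.lower()).ratio()
    return ratio >= threshold
```

    As the comment says, this is only a hint to trigger a closer look at the account’s other content, never grounds for a ban on its own.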

    • drascus@sh.itjust.works (OP) · +9 · 6 days ago

      That’s interesting; I suppose everyone has their own moderation style. I’m not 100% opposed to all AI. I define AI slop more as really low-effort, bulk posts: someone posting nothing but AI-generated content and cross-posting it to tons of communities. Basically AI spam, I guess you could say. If someone were to, say, generate an AI image and make a post discussing the prompt they used and what they like about the image, and then commenters made derivatives or shared their own results from a similar prompt, I could see that sort of post being useful. Maybe there’s a balance… but at the same time, I can see that some people might prefer an instance that takes more of a hard-line stance.

    • SorteKanin@feddit.dk · +1 · 5 days ago

      Sometimes it’s difficult at the individual-post level unless there are telltale signs. Typically I have to look for patterns across different posts by the same account and account for writing styles.

      The problem is that this is only going to get harder. First of all, AI is going to get better and be able to produce more natural-sounding text.

      But people will inevitably be affected by AI too, and will drift toward sounding more like it. So AI and humans will converge on each other, and they’ll likely be impossible to tell apart in general within not too many years.

      I’m not sure how we solve this tbh.

      • mic_check_one_two@lemmy.dbzer0.com · +3 · 5 days ago

        But people will inevitably be affected by AI too, and will drift toward sounding more like it.

        The “AI checkers” that schools and universities use have found a strong correlation between neurodiversity and sounding like AI. Basically, AI sounds autistic, so autistic people get flagged as AI.

    • jaybone@lemmy.zip · +1/−4 · 6 days ago

      That instance bans people for nothing, and has some automated ban sync system in place. It’s crazy.

      • Admiral Patrick@dubvee.org · +5 · edited · 5 days ago

        We’re not a general-purpose instance; we have a defined mission statement, and the site info clearly states that the rules apply to local and federated accounts. 🤷‍♂️ And the ban syncs are no longer needed, since later versions of the Lemmy server do the same thing automatically (our automod just implemented something almost identical before Lemmy added it natively).

  • fmstrat@lemmy.nowsci.com · +4 · 4 days ago

    The only true solution to this is cryptographically signed identities.

    One method is identity verification tied to a public key, which can be done with claims aggregation (I am X on GitHub, Y on LinkedIn, Z on my national ID, etc.), but this removes anonymous use.

    Another is a central resource that verifies a user’s key belongs to a real human, where a single entity controls the identity verification. While this allows pseudonymous use, it also requires everyone to trust one individual entity, and that has other risks.

    We’ve been discussing this with FedID a lot, lately.
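    As a rough illustration of the claims-aggregation idea only: everything below, including the `Claim` structure and the quorum rule, is hypothetical and is not how FedID actually works. The point is just that several independently verified claims can vouch for one public key:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    service: str    # e.g. "github", "linkedin", "national-id"
    handle: str     # identity asserted on that service
    verified: bool  # did the service-side proof (e.g. a signed gist) check out?

def is_probably_human(claims: list, quorum: int = 2) -> bool:
    """Accept an identity key if enough independent services vouch for it.
    Duplicate claims from the same service count only once."""
    vouching = {c.service for c in claims if c.verified}
    return len(vouching) >= quorum
```

    The trade-off from the comment shows up directly in the parameters: raising the quorum makes Sybil accounts harder but ties the key to more real-world identities, eroding anonymity.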

  • null_dot@lemmy.dbzer0.com · +11 · 6 days ago

    I don’t think there’s really a solution to this.

    Everyone is so fixated on getting more users but honestly I don’t think that will make it a better experience.

    • meyotch@slrpnk.net · +16 · 6 days ago

      Growth for growth’s sake is the destruction of many good things.

      Keep Lemmy Obscure!

    • drascus@sh.itjust.works (OP) · +6 · 6 days ago

      I kind of agree. It seems like there’s some point at which it’s ideal, and after it grows past a certain size, things become unhinged.

    • chicken@lemmy.dbzer0.com · +1 · 5 days ago

      To me it would be worth it for Lemmy to get somewhat Eternal September’d if it meant Reddit being destroyed/replaced with something that isn’t a company.

      • null_dot@lemmy.dbzer0.com · +1 · 5 days ago

        I respect your opinion, and can see some benefit to reddit’s demise, but I think I’m too cynical and jaded to hold that belief.

        It looks like bluesky will be twitter’s replacement, and it’s not clear that bluesky will be better.

        If reddit implodes there’s not really any likelihood that refugees will seek out lemmy.

        That said, at least lemmy is self-hostable and federated. If the larger lemmy network did shit itself, there would be smaller instances that aren’t federated with the majority of other servers, so they might be somewhat sheltered from bots and trolls.

        • chicken@lemmy.dbzer0.com · +1 · 5 days ago

          If reddit implodes there’s not really any likelihood that refugees will seek out lemmy.

          Why not? Isn’t that the reason for the influxes of users that have happened so far?

          • null_dot@lemmy.dbzer0.com · +2 · 5 days ago

            Hmm, I don’t think the past is a good predictor of the future in this case.

            Maybe everyone likely to leave reddit for lemmy already has?

            With the influxes that have occurred in the past, I think Lemmy has retained about a third of the MAUs in each spike. That’s not nothing, but I think it really underlines my point that Lemmy just isn’t a viable alternative for a lot of reddit users. The network effect might be responsible for some of that, but not all.

            Also, as time goes by there are more corporate backed alternatives, like threads.

  • FenderStratocaster@lemmy.world · +9/−1 · 6 days ago

    Keeping bots and AI-generated content off Lemmy (an open-source, federated social media platform) can be a challenge, but here are some effective strategies:

    1. Enable CAPTCHA Verification: Require users to solve CAPTCHAs during account creation and posting. This helps filter out basic bots.

    2. User Verification: Consider account age or karma-based posting restrictions. New users could be limited until they engage authentically.

    3. Moderation Tools: Use Lemmy’s moderation features to block and report suspicious users. Regularly update blocklists.

    4. Rate Limiting & Throttling: Limit post and comment frequency for new or unverified users. This makes spammy behavior harder.

    5. AI Detection Tools: Implement tools that analyze post content for AI-generated patterns. Some models can flag or reject obvious bot posts.

    6. Community Guidelines & Reporting: Establish clear rules against AI spam and encourage users to report suspicious content.

    7. Manual Approvals: For smaller communities, manually approving new members or first posts can be effective.

    8. Federation Controls: Choose which instances to federate with. Blocking or limiting interactions with known spammy instances helps.

    9. Machine Learning Models: Deploy spam-detection models that can analyze behavior and content patterns over time.

    10. Regular Audits: Periodically review community activity for trends and emerging threats.

    Do you run a Lemmy instance, or are you just looking to keep your community clean from AI-generated spam?
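    Point 4 (rate limiting) is the most mechanical item on that list; a minimal token-bucket sketch follows. The class name and parameter values are arbitrary illustrations, not anything Lemmy ships:

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` posts, refilling at `rate` tokens/second.
    New or unverified accounts would get a small bucket; trusted ones a larger one."""
    def __init__(self, capacity: float, rate: float, now=time.monotonic):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity   # start full
        self.now = now           # injectable clock, handy for testing
        self.last = now()

    def allow(self) -> bool:
        # Refill based on elapsed time, then try to spend one token per post.
        t = self.now()
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

    The injectable `now` clock keeps the limiter deterministic under test while defaulting to real monotonic time in production.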

  • ERROR: Earth.exe has crashed@lemmy.dbzer0.com · +5/−1 · 6 days ago

    A shadow ban doesn’t do anything, because the people running the bot could just write a script to check whether its comments are visible from another account (or while logged out). If they aren’t visible, they’ll know there’s a shadowban.
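    The check described here is indeed trivial to automate. A sketch of just the decision logic, where the hypothetical `fetch` callables stand in for real API calls made from a logged-in session versus an anonymous one:

```python
from typing import Callable, Optional

def detect_shadowban(fetch_as_author: Callable[[int], Optional[str]],
                     fetch_as_public: Callable[[int], Optional[str]],
                     comment_id: int) -> bool:
    """A comment the author can see but the public cannot implies a shadowban.
    Returns True when the ban is detected."""
    mine = fetch_as_author(comment_id)
    public = fetch_as_public(comment_id)
    return mine is not None and public is None
```

    Which is the comment’s point: any bot operator willing to write ten lines like these defeats the secrecy the shadowban relies on.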

  • JeSuisUnHombre@lemm.ee · +3 · 6 days ago

    This doesn’t really answer your question, but I’ve been blocking every AI comm that comes up on my feed, except for c/fuckai.

  • CanadaPlus@lemmy.sdf.org · +1 · 5 days ago

    Cunningham’s law helps. You can make a stand-alone website that’s slop and hope an individual user doesn’t notice the hallucinations, but on Lemmy people can reply and someone’s going to raise the alarm.

  • nutsack@lemmy.dbzer0.com · +1/−1 · 5 days ago

    I hate memes and images, so I don’t look at any of them on this platform, so I don’t know what you’re talking about. You’re welcome.