As technology advances and computers become increasingly capable, the line between human and bot activity on social media platforms like Lemmy is becoming blurred.

What are your thoughts on this matter? How do you think social media platforms, particularly Lemmy, should handle advanced bots in the future?

  • simple@lemm.ee · 18 points · 1 month ago

    Not even the biggest tech companies have an answer sadly… There are bots everywhere and social media is failing to stop them. The only reason there aren’t more bots in the Fediverse is because we’re not a big enough target for them to care (though we do have occasional bot spam).

    I guess the plan is to wait until there’s an actual way to detect bots and deal with them.

    • rglullis@communick.news · 10 points · 1 month ago

      > Not even the biggest tech companies have an answer sadly…

      They do have an answer: add friction. Add paywalls, require proof of identity, start using client certificates that need to be validated by a trusted party, etc.

      Their problem is that these answers affect their bottom line.
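
      Of those, client certificates are the most concrete mechanism. A minimal sketch in Python, assuming hypothetical certificate file names (any CA-issued client certificate would do):

      ```python
      # Sketch: a TLS server that refuses clients lacking a certificate
      # signed by a trusted CA -- one concrete form of "friction".
      import socket
      import ssl

      context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
      context.load_cert_chain(certfile="server.crt", keyfile="server.key")
      context.load_verify_locations(cafile="trusted_ca.pem")  # the "trusted party"
      context.verify_mode = ssl.CERT_REQUIRED  # anonymous (bot) clients are rejected

      with socket.create_server(("0.0.0.0", 8443)) as server:
          with context.wrap_socket(server, server_side=True) as tls_server:
              conn, addr = tls_server.accept()  # handshake fails without a valid cert
              print("verified client:", conn.getpeercert().get("subject"))
      ```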

      I think (hope?) we'll actually get to the point where bots become so ubiquitous that the whole internet becomes some type of Dark Forest, and people will be forced to learn how to deal with technology properly.

      • simple@lemm.ee · 5 points · 1 month ago

        > Their problem is that these answers affect their bottom line.

        It's more complicated than that. Adding friction and paywalls would quickly kill their user base; requiring proof of identity or tracking users is a privacy disaster, and I'm sure many people (especially here) would outright refuse to give IDs to companies.

        These measures are more of a compromise than a real solution. Even then, they're probably not foolproof, and bots will still get through.

        • rglullis@communick.news · 3 points · 1 month ago

          > requiring proof of identity or tracking users is a privacy disaster, and I'm sure many people (especially here) would outright refuse to give IDs to companies

          The blockchain/web3/cypherpunk crowd has already developed solutions for that. ZK-proofs let you confirm someone's identity without revealing it to the public, and they make it impossible to correlate one proof with another.
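
          To make that concrete: a toy Schnorr identification proof, the classic building block behind such schemes. The prover shows knowledge of a secret key without ever revealing it. These group parameters are demonstration-sized; a real deployment would use a standardized large group or an elliptic curve.

          ```python
          import hashlib
          import secrets

          # Toy parameters: 2**127 - 1 is prime, but far too small for real security.
          p = 2**127 - 1
          g = 3

          def keygen():
              x = secrets.randbelow(p - 2) + 1   # secret identity key
              return x, pow(g, x, p)             # (secret, public key)

          def prove(x):
              r = secrets.randbelow(p - 2) + 1   # fresh one-time nonce
              t = pow(g, r, p)                   # commitment
              c = int.from_bytes(hashlib.sha256(str(t).encode()).digest(), "big")
              s = (r + c * x) % (p - 1)          # response; useless without knowing r
              return t, s

          def verify(pub, t, s):
              c = int.from_bytes(hashlib.sha256(str(t).encode()).digest(), "big")
              return pow(g, s, p) == (t * pow(pub, c, p)) % p

          x, pub = keygen()
          t, s = prove(x)
          assert verify(pub, t, s)               # verifier learns validity, never x
          ```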

          Add other things like reputation systems built on a web of trust, and we can go a long way toward getting rid of bots, or at least making them as harmless as email spam is nowadays.
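
          And a hypothetical sketch of the web-of-trust side: trust decays with each hop of vouching, so a fresh bot account that nobody vouches for scores zero. The graph, decay factor, and names here are all invented for illustration.

          ```python
          from collections import deque

          # Invented vouch graph: who vouches for whom.
          vouches = {
              "alice": ["bob", "carol"],
              "bob": ["dave"],
              "carol": ["dave"],
          }
          TRUST_DECAY = 0.5                      # assumed per-hop decay factor

          def trust(me, target):
              """Trust decays with the shortest vouch chain; 0.0 if unreachable."""
              frontier, seen = deque([(me, 1.0)]), {me}
              while frontier:
                  user, score = frontier.popleft()   # BFS: shortest chain first
                  if user == target:
                      return score
                  for friend in vouches.get(user, []):
                      if friend not in seen:
                          seen.add(friend)
                          frontier.append((friend, score * TRUST_DECAY))
              return 0.0

          print(trust("alice", "dave"))      # 0.25 -- vouched for at two hops
          print(trust("alice", "bot9999"))   # 0.0  -- nobody vouches for the bot
          ```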

          • FaceDeer@fedia.io · 4 points · 30 days ago

            It’s unfortunate that there’s such a powerful knee-jerk prejudice against blockchain technology these days that perfectly good solutions are sitting right there in front of us but can’t be used because they have an association with the dreaded scarlet letters “NFT.”

            • atrielienz@lemmy.world · 3 points · 30 days ago

              I don't like or trust NFTs, and honestly, I don't think anybody else should for the most part. I feel the same about a lot of new crypto. But I don't necessarily distrust blockchain because of that. I think it has its own set of problems: the record has to be kept somewhere, and wherever that is becomes an important target. We already have problems with leaks of PII. Any blockchain database that stores the data to ID people will be a target too.

          • ericjmorey@discuss.online · 1 point · 30 days ago

            > ZK-proofs

            This is a solution in the same way that PGP keys are a solution. There's a big gulf between the theory and the implementation.

            • rglullis@communick.news · 1 point · edited · 30 days ago

              Right, but the problem with them is “bad usability”, which amounts to “friction”.

              Like I said in the original comment, I kinda believe that things will get so bad that we'll eventually have to accept that the internet can only be used with these tools, and that "the market" will start focusing on building tools to lower these barriers to entry, instead of drawing its profits from surveillance capitalism.

  • Blaze (he/him)@feddit.org · 11 points · 1 month ago

    I saw a comment the other day saying that "the line between the most advanced bot and the least talkative human is getting thinner and thinner."

    Which made me think: what if bots are set up to pretend to be actual users? With a fake life they could talk about, fake anecdotes, fake hobbies, fake jokes, but everything would seem legit and consistent. That would be pretty weird, but probably impossible to detect.

    And then, when that roleplaying bot recommends a product once in a while, you would probably trust it; after all, it gave you advice about your cat last week.

    Not sure what to do in that scenario, really

    • snooggums@lemmy.world · 14 points · 1 month ago

      I've just accepted that if a bot interaction has the same impact on me as someone making up a fictional backstory, I'm not really worried about whether it's a bot or not. A bot shilling for Musk or a person shilling for Musk because they bought the hype are basically the same thing.

      In my opinion, the main problem with bots is not individual accounts pretending to be people, but the damage they can do en masse through a firehose of spam posts, comments, and manipulation of engagement mechanics like up/down votes. At that point there's no need for an individual account to be convincing, because it's lost in a sea of trash.

      • ericjmorey@discuss.online · 2 points · 30 days ago

        > A bot shilling for Musk or a person shilling for Musk because they bought the hype are basically the same thing.

        It's the scale that changes. One bot can be replicated much more easily than a human shill.

  • disguised_doge@kbin.earth · 2 points · 29 days ago

    There was already a wave of bots identified, iirc. They were only caught because:

    1. The bots had random letters for usernames.

    2. The bots did nothing but downvote, instantly downvoting every post by specific people who held specific opinions.
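
    Purely as illustration, a toy check combining those two signals; the entropy threshold and the vote-record format are invented for this sketch:

    ```python
    import math
    from collections import Counter

    def username_entropy(name):
        """Shannon entropy per character; random strings score high."""
        counts = Counter(name)
        return -sum(n / len(name) * math.log2(n / len(name)) for n in counts.values())

    def looks_like_vote_bot(username, votes):
        random_name = username_entropy(username) > 3.5   # assumed threshold
        only_downvotes = len(votes) > 0 and all(v == "down" for v in votes)
        return random_name and only_downvotes

    print(looks_like_vote_bot("xq7vkd92pw4z", ["down"] * 50))   # True
    print(looks_like_vote_bot("catlover", ["up", "down"]))      # False
    ```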

    It turned into a flamewar; by the time I learned about it, I think the mods had deleted a lot of the discussion. But, like on the big tech platforms, the plan for bots is likely going to be "oh crap, we have no idea how to solve this issue." I don't intend to diss the admins; bots are just a pain in the ass to stop.

  • TimLovesTech (AuDHD)(he/him)@badatbeing.social · 1 point · 30 days ago

    For commercial services like Twitter or Reddit, the bots make sense: they let the platforms show inflated "user" numbers while also generating more random nonsense to sell ads against.

    But for the Fediverse, what would the goal be: post random stuff into the void and profit? I guess you could long-game some users into a product they only research on the Fediverse, but it seems more cost-effective for the botnets to attack the commercial networks first.

  • Dizzy Devil Ducky@lemm.ee · 1 point · edited · 30 days ago

    As far as I'm aware, there are no countermeasures implemented; I've got no real idea, since I'm not smart enough for this type of thing. The only solution I can think of is a paywall (I know, disgusting) to raise the barrier to entry and keep bots out. That, and (I don't know if it's currently possible) making it so only people on your instance can comment, vote, and report posts there.

    I personally feel that, depending on the price of joining, that could slightly lessen the bot problem for that specific instance, since getting banned means you wasted money instead of just time. Though it might also keep the instance from growing.
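
    For what it's worth, the "local accounts only" idea could look something like this hypothetical federation filter. This is not Lemmy's actual API; the activity format is simplified ActivityPub-style JSON, and the instance name is made up:

    ```python
    # Hypothetical filter: drop votes/comments/reports from remote actors.
    LOCAL_INSTANCE = "example-instance.social"

    def is_local(actor_url):
        """True if the acting account lives on this instance."""
        return actor_url.split("/")[2] == LOCAL_INSTANCE

    def accept_interaction(activity):
        if activity["type"] in ("Like", "Dislike", "Note", "Flag"):
            return is_local(activity["actor"])
        return True   # everything else passes through untouched

    print(accept_interaction({"type": "Dislike",
                              "actor": "https://example-instance.social/u/alice"}))  # True
    print(accept_interaction({"type": "Dislike",
                              "actor": "https://botfarm.example/u/x1"}))             # False
    ```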