!world@quokk.au

Not going to lie, I got banned, so I made my own World News community. This community differs because there’s no silly bot, I’ll happily listen to the community’s voice, and we’re a bit more lax on rules policing.

Feel free to come on by and comment. I would love to foster a News community that’s active in discussion.

  • tal@lemmy.today · 1 month ago

    I suppose that there’s also a broader technical issue here. Like, Deceptichum’s a real user, a regular on various communities I use. He comments, contributes. I don’t much agree with him on, say, Palestine, but on the other hand, we both happily post images to !imageai@sh.itjust.works. I figure that he probably got in a spat with the !world@lemmy.world mods, was pissed, wanted to help get a little more suction to draw users. That’s relatively harmless as the Threadiverse goes. This is some community drama.

    But you gotta figure that if it’s possible to have one instance reporting bogus vote totals, it’s possible to do the same at greater scale. So you start adding instances to the mix. Maybe generating users. Like, there are probably a lot of ways to manipulate the view of the thing.

    And that’s an attack that will probably come, if the Threadiverse continues to grow. Like, think of all the stuff that happens on Reddit. People buying and selling accounts to acquire reputability, whole websites dedicated to that, stuff like that. There’s money in eyeball time. And there are a lot more routes of attack on the Threadiverse.

    I don’t know if that’s a fundamental vulnerability in ActivityPub. Maybe it could be addressed with cryptographically signed votes and some kind of web of trust or…I don’t know. Reddit dealt with it by (a) not being a federated system and (b) building mechanisms to try to detect bot accounts. But those aren’t options for the Threadiverse. It’s gotta be distributed, and it’s gonna be hard to detect bots. So, I figure this is just the start. Maybe there has to be some sort of “reputability” metric associated with users that is an input to how their voting is reported to other users, though that’s got its own set of issues.
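
    A toy sketch of that last idea, for concreteness – everything here is hypothetical, and nothing like this exists in Lemmy today. The idea would be that a per-user trust score feeds into the displayed totals instead of counting raw votes:

      # Hypothetical "reputability"-weighted tally; illustrative only.
      def weighted_score(votes: list[tuple[str, int]],
                         reputability: dict[str, float]) -> float:
          """votes: (user_id, +1/-1) pairs; reputability: user_id -> trust in [0, 1]."""
          # Unknown users get a small default weight rather than zero, so
          # new accounts count a little but can't swing totals cheaply.
          return sum(direction * reputability.get(user, 0.1)
                     for user, direction in votes)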

    • PhilipTheBucket@ponder.cat · 1 month ago

      Maybe it could be addressed with cryptographically signed votes

      That is how it works, I believe. Each vote has to be signed by the actor of the user that voted.

      There have been people who did transparent vote-stuffing by creating fake accounts en masse and got detected, because they were using random strings of letters for the usernames. It’s probably also happened more subtly and gone undetected at times, but it’s not quite as simple as just reporting a high number.
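
      For what it’s worth, here’s a rough sketch of what that signing looks like mechanically – illustrative Python using the cryptography library, not Lemmy’s actual code (Lemmy is Rust), and the function name and header set are mine. In ActivityPub, an upvote federates as a Like activity delivered over HTTP, and the delivery is signed with the sending actor’s RSA key in a draft-cavage-style HTTP Signature:

        # Illustrative sketch of signing an outgoing vote delivery.
        import base64
        import hashlib
        from datetime import datetime, timezone

        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.asymmetric import padding

        def sign_vote_delivery(private_key_pem: bytes, key_id: str,
                               body: bytes, host: str, path: str) -> dict:
            """Build HTTP headers for POSTing a signed Like activity to an inbox."""
            key = serialization.load_pem_private_key(private_key_pem, password=None)
            digest = "SHA-256=" + base64.b64encode(hashlib.sha256(body).digest()).decode()
            date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
            # The signature covers a fixed list of (pseudo-)headers.
            to_sign = "\n".join([
                f"(request-target): post {path}",
                f"host: {host}",
                f"date: {date}",
                f"digest: {digest}",
            ]).encode()
            sig = key.sign(to_sign, padding.PKCS1v15(), hashes.SHA256())
            return {
                "Host": host,
                "Date": date,
                "Digest": digest,
                "Signature": (f'keyId="{key_id}",'
                              'headers="(request-target) host date digest",'
                              f'signature="{base64.b64encode(sig).decode()}"'),
            }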

      • tal@lemmy.today · 1 month ago

        I believe that the basic metric of trust is instance-level. That is, the TLS certificates and whether or not an instance is federated are the basis of trust. I don’t think that users have individual keys – I mean, without client-side key storage, which definitely doesn’t exist, generating one would be meaningless compared to just trusting the home instance.

        Having client-side keys would potentially, with other work, buy some neat things, like account portability across instances.

        But the problem is that, as you point out, any solution for vote trust can’t just be user-level keys, unless every admin is gonna police who they federate with and maintain a network of only instances they consider legit. Once I federate with an instance, I grant it the right to create as many accounts as it wants and vote however it wants. And keep in mind that ownership of an instance could change. Like, an admin retires, a new one shows up, stuff like that.
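
        To make that concrete – purely illustrative, not anything from Lemmy’s codebase – a malicious admin holds every actor’s private key, so minting “users” whose votes all verify is a few lines of work:

          # Why per-actor keys don't constrain a malicious instance:
          # the admin holds every private key, so fabricated actors
          # produce perfectly valid signatures. Illustrative only.
          from cryptography.hazmat.primitives import hashes
          from cryptography.hazmat.primitives.asymmetric import padding, rsa

          vote = b'{"type": "Like", "object": "https://example.com/post/1"}'

          fake_votes = []
          for i in range(3):  # scale up to taste
              key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
              sig = key.sign(vote, padding.PKCS1v15(), hashes.SHA256())
              fake_votes.append((f"https://bad.instance/u/user{i}",
                                 key.public_key(), sig))

          # Every one verifies; verify() raises InvalidSignature otherwise.
          for actor, pub, sig in fake_votes:
              pub.verify(sig, vote, padding.PKCS1v15(), hashes.SHA256())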

        • PhilipTheBucket@ponder.cat · 1 month ago

          The public key for your actor (https://lemmy.today/u/tal) is:

           -----BEGIN PUBLIC KEY-----
           MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA1VR4k0/gurS2iULVe7D6
           xwlQNTeEsn0EOVuGC2e9ZBPHv4b02Z8mvuJmWIcLxWmaL+cgHu2cJCWx2lxNYyfQ
           ivorluJHQcwPtkx9B0gFBR5SHmQzMuk6cllDMhfqUBCONiy5cpYRIs4LBpChV4vg
           frSquHPl+5LvEs1jgCZnAcTtJZVKBRISNhSp560ftntlFATMh/hIFG2Sfdi3V3+/
           0nf0QDPm77vqykj2aUk8RnnkMG2KfPwSdJMUhHQ6HQZS+AZuZ7Q+t5bs8bISFeLR
           6uqJHcrXtvOIXuFe7d/g/MKjqURaSh/Pqet8dVIwvLFFr5oNkcKhWG1QXL1k62Tr
           owIDAQAB
           -----END PUBLIC KEY-----

          All ActivityPub users have their own private keys. I’m not completely sure – I took a quick look through the code and protocols and couldn’t find the place where vote activity signatures are validated – but I swear I thought that all ActivityPub activities, including votes, were signed with the key of the actor that did them.

          Regardless, I know that when votes federate, they do get identified according to the person who did the vote.

          In practice, you are completely correct that the trust is per-instance, since the instance DB keeps all the actor private keys anyway. So it’s six of one, half a dozen of the other whether you have 100 fake votes from bad.instance signed with that instance’s TLS key or 100 fake votes signed with individual private keys that bad.instance made up. I’m just nitpicking about how it works at a protocol level.
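
          And for completeness, a sketch of the receiving side – again illustrative Python, not Lemmy’s Rust, and verify_vote is my name for it: the receiving instance can fetch the actor document, pull out its publicKeyPem field (the block shown above), and check the signature. A pass proves the vote came from whoever controls that actor’s key, i.e. the home instance – not that a real human voted:

            # Illustrative verification of an incoming vote's signature.
            import json
            import urllib.request

            from cryptography.hazmat.primitives import hashes, serialization
            from cryptography.hazmat.primitives.asymmetric import padding

            def verify_vote(actor_url: str, signed_bytes: bytes,
                            signature: bytes) -> None:
                req = urllib.request.Request(
                    actor_url, headers={"Accept": "application/activity+json"})
                actor = json.load(urllib.request.urlopen(req))
                pem = actor["publicKey"]["publicKeyPem"].encode()
                public_key = serialization.load_pem_public_key(pem)
                # Raises cryptography.exceptions.InvalidSignature on mismatch.
                public_key.verify(signature, signed_bytes,
                                  padding.PKCS1v15(), hashes.SHA256())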

          • tal@lemmy.today · 1 month ago

            Ah, thank you for that, then; that makes sense. And yeah, if there is a per-user key, then I’d expect it to be signing votes.

    • geekwithsoul@lemm.ee · 1 month ago

      Good points. I also think the fediverse, and Lemmy in particular, could be attractive to certain bad actors for misinformation and astroturfing, and vote manipulation would certainly help with that. Some people think we’re safer here from that because of our smaller size, etc. – but I think Lemmy users are more likely to be willing to engage (we wouldn’t be here if we weren’t willing to take leave of places like Reddit), and influencing the conversations on Lemmy could be a significant boost to someone looking to spread misinformation or make a difference in very tight elections.

      On the whole, I think that’s one of the reasons Lemmy needs better built-in moderation tools than what might otherwise be thought appropriate based on its size, and an overall maturity of the platform to protect against that kind of manipulation.

      • PhilipTheBucket@ponder.cat · 1 month ago

        It’s very obvious that someone is doing deliberate astroturfing on Lemmy. How much is unclear, but some amount of it is definitely happening.

        The open question, to me, is why the .world moderation team seems so totally uninterested in dealing with the topic. For example, they’re happy for UniversalMonk to spam for Jill Stein in a way that openly violates the rules, that almost every single member of the community is against, and that objectively makes the community worse. Why that is happening is a baffling and interesting question to me.

        • geekwithsoul@lemm.ee · 1 month ago

          I agree. In terms of the .world mods and some of the specific cases you mentioned, I think at least part of the problem is that they are often looking at stuff on a per-comment or per-post basis and sometimes missing more holistic issues.

          My guess is that a good portion of that comes down to the quality and breadth (or lack thereof) of Lemmy’s built-in moderation tools. Combine that with volunteer moderation and a presidential election year in the US, and I’m sure the moderation load is close to overwhelming and they don’t really have the tools they need to be more sophisticated or efficient about it. Generally I’ve actually been impressed with a lot of the work they do, though there have been obvious missteps too.

          Everyone talks about Lemmy needing to grow in terms of users and activity, but without better moderation tools and likely some core framework changes, I think that would be a disaster. We have all the same complexities of a place like Reddit, but with the addition of different instances, all with different rules, etc. (not to mention different approaches to moderation).

          • PhilipTheBucket@ponder.cat · 1 month ago

            My guess is that a good portion of that comes down to the quality and breadth (or lack thereof) of Lemmy’s built-in moderation tools. Combine that with volunteer moderation and a presidential election year in the US, and I’m sure the moderation load is close to overwhelming and they don’t really have the tools they need to be more sophisticated or efficient about it.

            I completely agree. I have a whole mini-essay that I’ve been meaning to write about this, about problems of incentives and social contracts on Lemmy-style servers in the fediverse that I think lead to a lot of these issues that keep cropping up.