I saw this article, which made me think about it…

Kids under 16 to be banned from social media after Senate passes world-first laws


Seeing the kind of brainrot kids are watching makes me think it’s a good idea. I wouldn’t say all content is bad, but most kids will get hooked on trash content that is intentionally designed to grab their attention.

What would be an effective way to enforce a restriction with the fewest possible side effects? And who should be the one enforcing that restriction in your opinion?

  • IninewCrow@lemmy.ca · ↑35 ↓1 · 28 days ago

    There is no real need to regulate kids on devices … leave that up to the parents to figure out.

    What we need is to regulate every major corporately owned social media company. Regulate and control them like they do for newspapers, magazines or television. Put them under complete regulatory control across the board so that we can regain some normalcy in public perception of reality and politics everywhere.

    It’s a pipe dream I know … but in the meantime, no matter what anyone says or does … if social media companies are not regulated, everything and everyone is going to hell in a hand basket.

  • Dave@lemmy.nz · ↑23 ↓2 · 28 days ago (edited)

    I can’t remember which article I was reading, probably one on Lemmy, but it said that we know social media algorithms are bad for people and their mental and physical health, that they are divisive, drive extremism, and just in general are not safe for society.

    Drugs are regulated to ensure they are safe, so why aren’t social media algorithms regulated the same way? Politicians not understanding the technical details of algorithms is not an excuse - politicians also don’t understand the technical details of drugs, so they have a process involving experts that ensures they are safe.

    I think I’m on the side of that article. Social media algorithms are demonstrably unsafe in a range of ways, and not just for under-16s. So I think we should be regulating the algorithms, requiring companies that wish to use them to prove they are safe before they do so. You could pre-approve certain basic ones (rank by date, rank by upvotes minus downvotes with time decay like Lemmy, etc.). You could issue patents to them like we do with drugs. But all in all, I think I am on the side of fixing the problem rather than pretending to care in the name of saving the kids.
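    The “basic” rankings mentioned above are simple enough to audit at a glance, which is exactly what would make them pre-approvable. Here is a minimal Python sketch of the two examples (rank by date, and score with time decay in the spirit of Lemmy’s “hot” sort); the constants and function names are illustrative assumptions, not Lemmy’s actual implementation.

```python
import math
from datetime import datetime, timedelta, timezone

def hot_rank(upvotes, downvotes, published, now, gravity=1.8):
    """Score-with-time-decay ranking, loosely in the spirit of Lemmy's
    'hot' sort. Net score helps logarithmically; age hurts polynomially,
    so a post must keep earning votes to stay near the top.
    The constants here are illustrative, not Lemmy's exact values."""
    score = upvotes - downvotes
    hours = max((now - published).total_seconds() / 3600.0, 0.0)
    return math.log(max(score, 1) + 2) / (hours + 2) ** gravity

def rank_by_date(posts):
    """The even simpler pre-approvable baseline: newest first,
    no engagement signal at all."""
    return sorted(posts, key=lambda p: p["published"], reverse=True)

now = datetime.now(timezone.utc)
# Same net score, different age: the day-old post decays well below
# the hour-old one.
fresh = hot_rank(10, 2, now - timedelta(hours=1), now)
stale = hot_rank(10, 2, now - timedelta(hours=24), now)
```

    The point a regulator could check is that the entire ranking is a fixed, auditable formula of public inputs (votes, timestamps), with no per-user engagement optimization anywhere in it.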

    • orcrist@lemm.ee · ↑2 ↓2 · 27 days ago

      I recall that some years ago Facebook was looking into their algorithm and they found that it was potentially leading to overuse, which might be what you’re thinking of, but what actually happened is that they changed it so that people wouldn’t be using Facebook as much. Of course people who are opposed to social media ignored the second half of the above statement.

      Anyway, when you say the algorithms are demonstrably unsafe, you know you’re wrong because you didn’t demonstrate anything, and you didn’t cite anyone demonstrating anything. You can say you think they’re unsafe, but that’s a matter of opinion and we all have our own opinions.

      • Dave@lemmy.nz · ↑2 · 27 days ago (edited)

        I recall that some years ago Facebook was looking into their algorithm and they found that it was potentially leading to overuse, which might be what you’re thinking of,

        No, it was recent, and it was an opinion style piece not news.

        but what actually happened is that they changed it so that people wouldn’t be using Facebook as much.

        Can you back this up? Were they forced to by a court, or was this before the IPO, when Facebook was trying to gain ground and didn’t answer to the share market? I can’t imagine they would be allowed to take actions that reduce profits; companies are legally required to maximise value to shareholders.

        Anyway, when you say the algorithms are demonstrably unsafe, you know you’re wrong because you didn’t demonstrate anything, and you didn’t cite anyone demonstrating anything. You can say you think they’re unsafe, but that’s a matter of opinion and we all have our own opinions.

        I mean it doesn’t take long to find studies like A nationwide study on time spent on social media and self-harm among adolescents or Does mindless scrolling hamper well-being? or How Algorithms Promote Self-Radicalization but I think this misses the point.

        You’ve grabbed the part where I made a throwaway comment but missed the point of my post. Facebook is one type of social media, and they use a specific algorithm. Ibuprofen is a specific type of drug. Sometimes ibuprofen can be used in a way that is harmful, but largely it is considered safe. But the producers still had to prove it was safe.

        • orcrist@lemm.ee · ↑2 ↓1 · 27 days ago

          Here’s one example of Facebook adjusting its algorithm several years ago. You can remark that it ought to do more, and I may agree with you, but that’s totally different from saying it doesn’t do anything positive. https://www.washingtonpost.com/technology/interactive/2021/how-facebook-algorithm-works/

          If your argument is that there can be drawbacks to using social media, I think everyone agrees. But remember, we were told horror stories about pinball, pool, comic books, chewing gum, Dungeons and Dragons, the list goes on and on. So with that in mind, I hope you can understand why I’m not convinced by a few studies that social media is net negative in value.

          And the reason we have laws requiring careful drug testing is because of damage that was done in the past, proven damage that actually happened, people whose lives ended short because they were doing things like imbibing radioactive chemicals. Your suggestion that we ought to treat social media the same is putting the cart before the horse. The burden of proof is on you, not on social media companies.

          • Dave@lemmy.nz · ↑1 · 26 days ago

            I think we ultimately have different beliefs about how things should work. I think companies should prove their products are safe, you think things should be allowed unless you can prove it’s not safe.

            I get it, and I think it’s OK to have different opinions on this.

  • freethemedia@lemmy.dbzer0.com · ↑13 · 28 days ago (edited)

    Controversial opinion:

    In the future, we are going to look back on children using iPads that connect them directly to the most sophisticated engagement and manipulation algorithms ever built as something as horrid as a child smoking a cigarette or doing any other drug.

    Now obviously this is an issue, but many of the suggested solutions are lacking.

    Remember: the phones in our pockets are Turing-complete; any software solution can be undone by another software solution.

    Hardware flaws baked into chipsets will inevitably be exploited by the worst of humanity.

    What we need is a LEGAL framework for this issue.

    We need to recognize that letting a child get their hands on a full 5G, 2.5 GHz portal to the unknown without parental or educational supervision is simply harmful.

    I suspect it really should work like regulating a drug: allow more and more unsupervised compute and networking as the child ages.

    That way kids can still have dumb phones for basic safety and comms.

    I suspect laws will be applied like alcohol within the home, to allow for family owned game systems and such

    But lapses that lead to actual demonstrated harm, such as mental illness leading to bodily harm or violence due to radicalization, need to be treated as if the parent had fed their child alcohol without care, or at least enabled it, if it’s evident that they didn’t even try to prevent it.

    Straight up, it’s also a cultural shift: 13-to-16-year-olds gaming at home under parental guidance, but not being bought significant personal compute, because it would not be sold to them or for the purpose of being given to them.

    Usage in school, where they get education on information technology and the harm and good it can do, is all fine and good. But seeing babies with iPads at the mall should be viewed as badly as letting them smoke (with the secondhand smoke being all the brainrot that produces brainrotted adults).

    • freethemedia@lemmy.dbzer0.com · ↑5 · 28 days ago (edited)

      I really am curious whether anyone could demonstrate a link between the amount of compute and network bandwidth a child has access to as they age and the incidence of anxiety, social, or mood disorders.

      One of the things I feel really thankful for is that the compute and network I had access to essentially grew up with me, allowing me to see the harms of full-scale manipulative social algorithms and avoid them.

      I feel like my mental health has been greatly benefitted by staying away from such platforms.

      • freethemedia@lemmy.dbzer0.com · ↑2 · 28 days ago (edited)

        This isn’t even just a social media thing. There are so many worse things a kid could get their eyes and ears on with the compute we hand them willy-nilly.

  • shortwavesurfer@lemmy.zip · ↑9 · 28 days ago

    Absolutely not. Anything you put in place is likely going to have privacy issues for both adults and children, and you forget how smart children are. I know we had firewalls and all kinds of shit when I was in school, and I was the person who taught everybody else how to bypass them in like five minutes. There is not a filter in the world you can put up that is going to keep children from the content they actually want to look at.

  • stinky@redlemmy.com · ↑8 · 28 days ago

    Me: there should be an agency like the FDA that brands news and other media with veracity labels according to guidelines we as voters agree on to prevent fake news and misinformation

    Them: YOU CAN’T BECAUSE OF FREE SPEECH DIE HEATHEN DIE

    Me: ok what about banning kids from social media?

    Them: that’s fine :)

    Hypocrites.

  • Kichae@lemmy.ca · ↑8 · 27 days ago

    All prohibitions do is create a space where kids are doing it anyway, but without any discussion of the risks. It’s the abstinence-only education model, or the “war on drugs” model.

    It doesn’t work, especially when the “authorities” are doing it anyway, and they’re not even quiet about it.

  • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 ℹ️@yiffit.net · ↑8 · 28 days ago (edited)

    I don’t think kids should be barred from social media, since at its core, social media is just people talking and sharing things with each other.

    The problem is not with the medium or generally who is using it, it’s with the rate of consumption, poor parenting and poor moderation.

    I also think it is an even larger problem to enforce in the first place, since it will destroy one of the good things about the Internet: anonymity. The only way to truly enforce an age restriction is to require ID to verify a user’s identity. I’m not as hardcore about my privacy as some parts of Lemmy are, but this is one thing I absolutely do not want to see happen.

  • Lvxferre@mander.xyz · ↑7 · 28 days ago

    I don’t think that kids should be banned from social media. Instead they should be taught how to handle it in an individually and socially healthy way. Namely:

    • how to spot misinformation
    • how to spot manipulation
    • how to protect yourself online
    • how to engage constructively with other people
    • etc.

    This could be taught by parents, school, or even their own peers. But I think that all three should play a role.

  • chicken@lemmy.dbzer0.com · ↑7 · 28 days ago (edited)

    The biggest reason why not is that it requires the implementation of centralized tracking systems for everyone to confirm ID for accessing these services, which is a privacy nightmare and takes way too much agency away from individuals. If Reddit or something bans me for a stupid reason or because their broad brush modbots malfunctioned, I should be able to evade that ban with enough care and effort, and the government shouldn’t help them make sure I can’t. I should also have the ability to use social media pseudonymously without being subject to corporate tracking.

    The other reason, of course, is that banning children from social media cuts them off from participating in society or having any sort of a public voice. That’s fucked up too.

  • BougieBirdie@lemmy.blahaj.zone · ↑5 · 28 days ago (edited)

    I don’t think a ban is coming at the issue from the right angle. Social media misuse is fundamentally a problem of addiction, and we have a checkered past of causing harm when banning things. For a historical analogue, look at the Prohibition era of the United States.

    Ultimately, bans for these things don’t work because people will get around it anyway. And that’s exactly when dangerous things happen. Using the Prohibition example again, people poisoned themselves trying to make illegal hooch because they were determined to drink anyway.

    I think education is the answer. And honestly, isn’t education always the answer? But you’ve got to educate your kids about the content they’re consuming, and we’ve got to educate parents about the danger of unlimited access to screens. If people don’t understand the danger, they won’t recognize it until they’ve already stumbled into it.

    I’m sure everyone has heard a story about a straight-laced kid who grew up with strict parents, and then at the first opportunity to party in college goes on a bender to destroy their life. Those kids’ parents really did them a disservice by not preparing them for reality. If their only education on drugs and alcohol is “don’t do them,” then the child isn’t really aware of the risks. They just see that everyone else is doing them and having fun, and then they go off the deep end before they realize how bad things are getting.

    Social media’s the same thing. The day your kid turns seventeen, they’ll have every chance to succumb to brainrot of their own volition. Without being informed of how or why that happens, there’s nothing stopping someone from falling into any internet rabbithole.

  • Cochise@lemmy.eco.br · ↑3 · 28 days ago

    We can’t regulate half a dozen corporations by prohibiting algorithmic feeds and targeted ads, so instead we will ban millions of people from using the apps that have these features.