You’re on it right now.
It was called a newspaper back in the day. Printing was expensive, so the quality had to be good enough that people were willing to pay to read it. And the social part was provided by mailing a letter to the newspaper's address in the hope of being printed.
As someone who has lived through some of the time with newspapers and without social media, no, quality was pretty bad back then too.
I think it’s possible, but it needs lightning to strike, being at the right place and the right time in the proverbial sense, for it to be successful longer term. Everybody’s trying to meet a metric in this world where clicks, views, and conversions are easy to measure, while something like quality is difficult to define at best and impossibly subjective at worst.
Yes, check out tildes.net.
Define quality.
I’m trying to build such a thing as well, but it always comes down to this. Options:
- users self-moderate - they’ll work themselves into echo chambers
- community moderators - will likely create echo chambers
- corporate moderators - motivated by money, so expect ads and probably echo chambers
I think the first is the best option, so I’m looking at algorithmic solutions based on user behavior, but it’s likely to end up in the same spot.
I think you are not seeing the whole scope of the problem. Echo chambers are only one of the problems, lowest common denominator posts are another issue of self-moderation/voting.
That’s why there needs to be a difference between agree/disagree and relevant/spam. I’m planning to have both, and hopefully people use them to good effect.
I am not even necessarily talking about relevant/spam. Some content might just naturally lose out because e.g. an interesting mathematical proof has less mass appeal than a cute cat picture even though the former might be higher quality and effort.
Sure, not all content is relevant to all people. That’s why Lemmy organizes things into communities, and self moderation can also differ by community. A good resource on experimental math may not be as good of a resource on cute cat pics.
The Something Awful forums did exactly this with a $9.95 one-time membership fee.
How did it work out?
They ran for years with minimal shit content and trolls.
No, because a focus on quality would require defining quality and then curating the content through some kind of process that would not end up being ‘social media’.
Quality will never be defined by popularity, which is the entire focus of social apps.
But there are ways to better incentivise it.
E.g. the default Lemmy sort, “active”, counts replies, upvotes, and downvotes all as “activity” and promotes posts that get a lot of any of them. This tends to promote controversial content.
If you sort by top, it’s instead based only on upvotes, and the sort promotes less divisive and controversial stuff and more “quality” stuff.
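A minimal sketch of the contrast between the two rules, assuming simplified scoring functions (these are illustrations of the idea described above, not Lemmy’s actual ranking code, which also applies time decay):

```python
# Hypothetical scoring rules: "active" rewards any engagement,
# "top" rewards only approval. Weights and fields are assumptions.

def active_score(upvotes: int, downvotes: int, replies: int) -> int:
    # Any engagement counts, so controversy is rewarded.
    return upvotes + downvotes + replies

def top_score(upvotes: int, downvotes: int, replies: int) -> int:
    # Only approval counts.
    return upvotes

# A divisive post vs. a quieter, well-liked post:
divisive = (100, 90, 50)   # 100 up, 90 down, 50 replies
quality = (120, 5, 10)     # 120 up, 5 down, 10 replies

print(active_score(*divisive) > active_score(*quality))  # True: divisive wins under "active"
print(top_score(*quality) > top_score(*divisive))        # True: quality wins under "top"
```

The same two posts rank in opposite orders depending on which rule is in effect, which is the whole point of the comment above.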
I wouldn’t say that upvotes always mean quality, they could also just indicate mass appeal while quality but niche content is hidden that way.
Sure. But will it be profitable and will enough people want to use it? I think most likely the answer is no.
Professional communities with invite-only registration, where invites are only distributed to people with high ratings. You can also add higher barriers, like a requirement to write a valuable on-topic comment before your rating can rise above a certain level, regardless of the comment’s own rating. Basically a self-moderated, narrowly focused community with invite-only registration.
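One way the gating described above could look, as a sketch. All names and thresholds here are illustrative assumptions, not a reference to any existing system:

```python
# Hypothetical invite-gating rule: invites go only to members above a
# rating threshold, and rating is capped until the member has written
# at least one comment marked "valuable on-topic".
from dataclasses import dataclass

INVITE_MIN_RATING = 100
RATING_CAP_WITHOUT_VALUABLE = 50  # assumed cap

@dataclass
class Member:
    rating: int
    valuable_comments: int  # count of comments flagged valuable/on-topic

def effective_rating(m: Member) -> int:
    if m.valuable_comments == 0:
        return min(m.rating, RATING_CAP_WITHOUT_VALUABLE)
    return m.rating

def can_invite(m: Member) -> bool:
    return effective_rating(m) >= INVITE_MIN_RATING

print(can_invite(Member(rating=150, valuable_comments=0)))  # False: capped at 50
print(can_invite(Member(rating=150, valuable_comments=2)))  # True
```

The design choice is that raw popularity alone never unlocks invites; some curated signal of substance has to exist first.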
What’s meritable often isn’t popular. By what metric should comments be rated?
Many will rate high. By what means can the set be further narrowed?
I wonder if that is one of the areas where AI might be useful in the future. LLMs could potentially be useful to identify non-trivial statements that are not just a rephrased version of statements that have already been made in other comments.
In the future?
Well, as far as I know, nobody has done that yet, and current LLMs seem to focus more on general applications than on being efficient for specialized use cases like this.
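A toy sketch of the rephrase-detection idea floated above. A real system would compare sentence embeddings from an LLM encoder; here a plain bag-of-words cosine similarity stands in, and the threshold is an arbitrary assumption:

```python
# Flag comments that are mostly rephrasings of earlier ones.
# Bag-of-words cosine similarity is a crude stand-in for embeddings.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def is_novel(comment: str, earlier: list[str], threshold: float = 0.8) -> bool:
    # Novel if it isn't too similar to any earlier comment.
    vec = Counter(comment.lower().split())
    return all(cosine(vec, Counter(e.lower().split())) < threshold for e in earlier)

earlier = ["the proof uses induction on n"]
print(is_novel("the proof uses induction on n basically", earlier))  # False: near-duplicate
print(is_novel("cats are cute", earlier))                            # True: no overlap
```

With embeddings instead of word counts, the same `all(... < threshold ...)` structure would also catch paraphrases that share no surface vocabulary.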
An LLM?
Edit: Everything is of far less significance relative to IRL relationships. The overriding goal of an ML analysis model with a subordinated LLM hasn’t been to create a space for the best mental masturbation, but instead to better focus subsequent human efforts in organizational recruitment for education and praxis.
Of course it is (as long as it’s “tries to promote”, with no expectation it will always succeed). But no one’s interested because it won’t make as much money as the current outrage farming.