Originality.AI looked at 8,885 long Facebook posts made over the past six years.
Key Findings
- 41.18% of current Facebook long-form posts are Likely AI, as of November 2024.
- Between 2023 and November 2024, the average percentage of monthly AI posts on Facebook was 24.05%.
- This reflects a 4.3x increase in monthly AI Facebook content since the launch of ChatGPT. In comparison, the monthly average was 5.34% from 2018 to 2022.
Note that this does not appear to be an independent study. Tell me I’m wrong?
You know what they say about Al…
this is ai gen so stop it
how tf did it take 6 years to analyze 8000 posts
I’m pretty sure they selected posts from a 6-year period, not that they spent six years on the analysis.
In that case, how/why did they only choose 8000 posts over 6 years? Facebook probably gets more than 8000 new posts per minute.
I was wondering how far I’d have to scroll before getting to someone who doesn’t understand statistics complaining about the sample size…
There have likely been trillions of posts on Facebook during that time frame. Is a sample size of 8000 really sufficient for a corpus that large?
Every study uses sampling. They don’t have the resources to check everything. I have to imagine it took a lot of work to verify conclusively whether something was or was not generated. It’s a much larger sample size than a lot of studies.
> I have to imagine it took a lot of work to verify conclusively whether something was or was not generated
The study is by a company that creates software to detect AI content, so it’s literally their whole job
(it also means there’s a conflict of interest, since they want to show how much content their detector can detect)
> It’s a much larger sample size than a lot of studies.
It’s an extremely small proportion of the total number of Facebook posts though. Nowhere near enough for statistical significance.
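For what it’s worth, here is a back-of-the-envelope check (a sketch only, assuming simple random sampling, which the study may or may not have used): the margin of error for a proportion depends on the sample size, not on the total number of Facebook posts.

```python
# Back-of-the-envelope check (not from the study): under simple random
# sampling, the 95% margin of error on a proportion depends on n, not on
# how many posts Facebook has in total. Monthly sub-samples are smaller,
# so per-month figures carry a wider margin than this.
from math import sqrt

n = 8885       # posts analysed overall
p = 0.4118     # reported "Likely AI" share for November 2024
z = 1.96       # 95% confidence

margin = z * sqrt(p * (1 - p) / n)
print(f"~±{margin * 100:.1f} percentage points")  # roughly ±1.0 pp
```

Whether the sample was drawn randomly, and whether the detector’s false-positive rate swamps that margin, are separate questions.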
I wouldn’t be surprised, but I’d be interested to see what they used to make that determination. All of the AI detectors I know of are prone to a lot of false positives.
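For illustration, here is one common heuristic behind such detectors: scoring text by its perplexity under a small language model. This is not Originality.AI’s method (which isn’t described here); it’s a minimal sketch of why false positives happen, since plain, formulaic human writing also scores as very predictable.

```python
# Minimal sketch of a perplexity-based heuristic (NOT Originality.AI's
# actual method): text the model finds highly predictable gets flagged as
# "AI", which is exactly why bland but human-written prose can trigger
# false positives.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

THRESHOLD = 30.0  # arbitrary cutoff, purely for illustration
text = "The quarterly report shows steady growth across all regions."
print(perplexity(text), "-> flagged as AI?", perplexity(text) < THRESHOLD)
```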
When I was looking for a job, I ran into a guide to make money using AI:
- Choose a top-selling book.
- Ask ChatGPT to give a summary of each chapter.
- Paste the summaries into Google Docs.
- Export as PDF.
- Sell on Amazon as a digital “short version” or “study guide” for the original book.
- Repeat with other books.
Blew my mind how much hot stinking garbage is out there.
Thanks.
Now do Reddit comments. There’s an AI reply option now. Interested to know how far off that is from just being part of the regular comments.
Deleted my account a little while ago, but for my feed I think it was higher. You couldn’t block them fast enough, and it was mostly obviously-AI pictures that people believed were real, if the comments were actually from humans. It was a total nightmare land. I’m sad that I’ve now lost contact with the few distant friends I had on there, but otherwise NOTHING lost.
and, is the jury already in on which ai is most fuckable?
I’d tell you, but my area network appears to have already started blocking DeepSeek.
DeepSeek, which was not encrypting data:
https://www.theregister.com/2025/01/30/deepseek_database_left_open/
According to Wiz, DeepSeek promptly fixed the issue when informed about it.
:-/
I was wondering who Facebook was for; good to know AI has low standards.
Dead internet theory
And the other 58.82% is likely human-generated junk, then.
If you want to visit your old friends in the dying mall, go to Feeds, then Friends. That should filter everything else out.
That’s an extremely small sample size for this
If you could reliably detect “AI” using an “AI” you could also use an “AI” to make posts that the other “AI” couldn’t detect.
Sure, but then the generator AI is no longer optimised to generate whatever you wanted initially, but to generate text that fools the detector network, thus making the original generator worse at its intended job.
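A minimal sketch (hypothetical names, not any real system) of the trade-off described above: if the generator’s objective gains a term for fooling the detector, that term competes directly with its original task loss.

```python
# Hypothetical sketch of the trade-off above: a generator fine-tuned with
# its original task loss plus an adversarial term rewarding outputs the
# detector scores as "human". Raising lam buys evasion at the cost of the
# original objective.
import torch

def combined_loss(task_loss: torch.Tensor,
                  detector_ai_prob: torch.Tensor,
                  lam: float) -> torch.Tensor:
    # task_loss: how badly the generator does its intended job (lower is better)
    # detector_ai_prob: detector's estimated probability the output is AI-written
    adversarial = -torch.log(1.0 - detector_ai_prob + 1e-8)
    return task_loss + lam * adversarial

print(combined_loss(torch.tensor(2.0), torch.tensor(0.9), lam=0.0))  # pure task objective
print(combined_loss(torch.tensor(2.0), torch.tensor(0.9), lam=1.0))  # heavily penalised for "sounding AI"
```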
I see no reason why “post right-wing propaganda” and “write so you don’t sound like AI” should be conflicting goals.
The actual argument for why I don’t find such results credible is that the “creator” is trained to sound like a human, so the “detector” has to be trained to find stuff that does not sound like a human. This means both basically have to solve the same task: decide whether something sounds like a human.
To be able to find the “AI” content, the “detector” would have to be better at deciding what sounds like a human than the “creator” is. So for the results to have any kind of accuracy, you’re already banking on the “detector” company having more processing power, better training data, or more money than, say, OpenAI or Google.
But also, if the “detector” were better at the job, it could be used as a better “creator” itself. Then how would we distinguish the content it created?
FB has been junk for more than a decade now, AI or no.
I check mine every few weeks because I’m a sports announcer and it’s one way people get in contact with me, but it’s clear that FB designs its feed to piss me off and try to keep me doomscrolling, and I’m not a fan of having my day derailed.
I deleted Facebook in like 2010 or so because I hardly ever used it anyway. It wasn’t really bad back then, just not for me. Six or so years later, a friend of mine wanted to show me something on FB but couldn’t find it, so he was just scrolling, and I was blown away by how bad it was: just ads and auto-played videos and absolute garbage. From what I understand, it has only gotten worse since. Everyone I know who still uses Facebook is on it for Marketplace.
It’s such a cesspit.
I’m glad we have the Fediverse.