- 133 Posts
- 9 Comments
prototype_g2@lemmy.ml to Fuck AI@lemmy.world • ‘Don’t ask what AI can do for us, ask what it is doing to us’: are ChatGPT and co harming human intelligence? · 0 · 2 months ago
Microsoft did a study on this, and they found that those who made heavy use of AI tools said they felt dumber:
“Such consternation is not unfounded. Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved. As Bainbridge [7] noted, a key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise.”
Cognitive ability is like a muscle. If it is not used regularly, it will decay.
It also said it made people less creative:
“users with access to GenAI tools produce a less diverse set of outcomes for the same task, compared to those without. This tendency for convergence reflects a lack of personal, contextualised, critical and reflective judgement of AI output and thus can be interpreted as a deterioration of critical thinking.”
How does this surprise anyone?
LLMs are just pattern recognition machines. You give them a sequence of words and they tell you the most statistically likely word to follow, based solely on probability, with no logic or reasoning.
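To make the "statistically likely next word" point concrete, here is a toy sketch: a bigram frequency model. This is a drastic simplification (real LLMs use neural networks over subword tokens, not raw word counts), but it illustrates the core mechanic of prediction-by-frequency with no reasoning involved. The corpus and words are made up for illustration:

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus"
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word):
    # Return the most statistically likely continuation --
    # pure frequency lookup, no logic or reasoning.
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- it followed "the" most often in the corpus
```

If the corpus happens to contain mostly true statements, the model will often "answer correctly" — but only because the truth was the most frequent pattern, not because it understood anything.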
prototype_g2@lemmy.ml to Mildly Infuriating@lemmy.world • Romance scammers are now in the fediverse · English · 2 · 5 months ago
Well, it was only a matter of time till the bot got to me!
“veeeeery powerful tool for self-education”
You know these are pattern recognition machines, right? Not an actually reliable source of information. These are hallucination machines that sometimes get things right simply because truthful statements are repeated often enough in the training data for the machine to pick them up. They can give contradictory answers depending on what keywords you use. As such, they are of little educational value.
If anything, it makes self-education harder, as search results will be filled with perfectly SEO-optimized LLM articles with no quality standards and a complete disregard for the truthfulness of the text, poisoning the well of knowledge the internet was supposed to be. And, as a bit of irony, these poor-quality pattern-based texts will later end up in future machine learning datasets, lowering the quality of that data and causing the machines' own decay.
prototype_g2@lemmy.ml to Fuck AI@lemmy.world • Thoughts on AI generated illustrations for articles · 4 · 5 months ago
AI just screams "lazy" and "lack of care". If they don't care that every article they put out has a completely unnecessary AI image in it, what guarantee do I have that they care about any of the content on their website?
If they can’t afford an image, then it’s better to have none than to give money to a company that will DDoS their servers with web scrapers.
In my eyes, AI = Complete disregard for quality control.
Why don’t you ask them yourself? !askchapo@hexbear.net
Probably a bad idea to ask about a Marxist instance on a .world community, since .world is known to be quite biased against Marxism.
I could spend an hour writing a long-winded explanation of why capitalists “earn” their wealth through wage theft… but Comrade Hakim already made a video that explains this concept pretty well, so here.
prototype_g2@lemmy.ml to memes@lemmy.world • Starter Kit to browse Web in 2024/2025 (Must-have) · 1 · 8 months ago
Really? I didn’t know that! What is it called? How can I use it?
The classic Ad Hominem. Instead of actually refuting the arguments, you instead attack the ones making them.
So, tell me, which part of “As Bainbridge [7] noted, a key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise.” is affected by the conflict of interest with the company? This is a note made by Bainbridge. The argument is as follows:
1. If you use the machine to think for you, you will stop thinking.
2. Not thinking leads to a degradation of thinking skills.
3. Therefore, using a machine to think for you will lead to a degradation of thinking skills.
It is not too hard to see that if you stop doing something for a while, your skill at that thing will degrade over time. Part of getting better is learning from your own mistakes, and the AI will rob you of those learning experiences.
What is the problem with the second quote? It is not an opinion, it is an observation.
Others have noticed this already:
https://www.darrenhorrocks.co.uk/why-copilot-making-programmers-worse-at-programming/
https://www.youtube.com/watch?v=8DdEoJVZpqA
https://nmn.gl/blog/ai-illiterate-programmers
https://www.youtube.com/watch?v=cQNyYx2fZXw
This, of course, only happens if you use the AI to think for you.