- cross-posted to:
- fuck_ai@lemmy.world
Yeah…what we have today as “AI” makes a ton of mistakes. Well, maybe not a ton, but enough that it cannot be relied on without a human to correct it.
I use it as a foundation at work.
ChatGPT, write me a script that does this, this, and that.
Often, maybe 98% of the time, I won't get exactly what I asked for, or I'll get something it interpreted incorrectly. It's common sense to me, though maybe not to everyone, not to blindly run whatever it spits out. Review the output, then test it somewhere safe. I usually recreate a similar file structure somewhere else and test there; after a few rounds of testing, reviewing, and modifying, I feel comfortable running what it gave me for real.
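For what it's worth, that sandbox step can be as simple as copying the target directory into a temp folder and pointing the script at the copy first. Here's a minimal Python sketch, where `ai_script.py` and `./project_data` are hypothetical stand-ins for whatever ChatGPT produced and the real directory it touches:

```python
import shutil
import subprocess
import sys
import tempfile
from pathlib import Path

def sandbox_run(script: Path, real_dir: Path) -> int:
    """Copy the real directory tree into a throwaway sandbox and run the
    (already reviewed) AI-generated script against the copy, never the
    original. Returns the script's exit code."""
    with tempfile.TemporaryDirectory() as tmp:
        sandbox = Path(tmp) / real_dir.name
        shutil.copytree(real_dir, sandbox)  # replicate the file structure
        result = subprocess.run(
            [sys.executable, str(script), str(sandbox)],
            capture_output=True, text=True,
        )
        print(result.stdout)
        print(result.stderr, file=sys.stderr)
        # Inspect what the script did to the sandbox (diff it against
        # real_dir, check logs) before ever running it on the real thing.
        return result.returncode

if __name__ == "__main__":
    code = sandbox_run(Path("ai_script.py"), Path("./project_data"))
    print(f"sandbox run exited with {code}")
```

A nonzero exit code, or unexpected changes in the sandbox, sends you back to the review-and-modify loop instead of costing you real files.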
But I don't think I'll ever stop double-checking whatever any kind of AI spits out in response to whatever I've asked. Humans should always have the last word before action, especially when it comes to healthcare.
Oh hey, this same quote is relevant yet again:
> In other words, an AI-supported radiologist should spend exactly the same amount of time considering your X-ray, and then see if the AI agrees with their judgment, and, if not, they should take a closer look. AI should make radiology more expensive, in order to make it more accurate.
>
> But that's not the AI business model. AI pitchmen are explicit on this score: The purpose of AI, the source of its value, is its capacity to increase productivity, which is to say, it should allow workers to do more, which will allow their bosses to fire some of them, or get each one to do more work in the same time, or both. The entire investor case for AI is "companies will buy our products so they can do more with less." It's not "business customers will buy our products so their products will cost more to make, but will be of higher quality."