I’m a data analyst moving into data science. I have been ranting about this since the beginning, but everyone’s too obsessed with the shiny new thing.
It’s just an optimisation algorithm that’s learned that certain words have different probabilities of occurring depending on context.
I had to remind my friend, “when it tells you something, it has no idea what it’s just told you”, because that’s all it really does: it spits out text based on a guessed context, with no comprehension of that context.
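To make the “learned word probabilities conditioned on context” point concrete, here is a toy bigram sketch in Python. It is illustrative only — a real LLM is a vastly larger transformer conditioning on long contexts, not a single previous word — and the corpus, function names, and numbers here are all made up for the example:

```python
import random
from collections import Counter, defaultdict

# Tiny made-up corpus; a real model trains on trillions of tokens.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count how often each word follows each context word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(context):
    """Conditional probability of each next word, given the previous word."""
    total = sum(counts[context].values())
    return {w: c / total for w, c in counts[context].items()} if total else {}

def generate(start, n=8):
    """Sample a continuation word by word — the model never 'knows' what it said."""
    out = [start]
    for _ in range(n):
        probs = next_word_probs(out[-1])
        if not probs:  # dead end: this word never appears mid-corpus
            break
        out.append(random.choices(list(probs), weights=list(probs.values()))[0])
    return " ".join(out)

print(next_word_probs("the"))  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
```

The `generate` loop is the whole trick: pick a plausible next word given the last one, append it, repeat. Nowhere is there a representation of what the output *means*.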
Reminds me of an article I read a few years ago about an Israeli start-up aiming to use AI to analyse images of people and predict what they are likely to work as: doctor, teacher, terrorist, etc.
Basically phrenology. They were making a digital racist uncle.
If the model is accurate though, I’d be okay with it. I understand it’ll never be perfect, and laws should be put in place banning people from being arrested or interrogated just on that kind of suspicion, but still.
Except that is literally judging you from how you look. And you don’t think actual bad actors will try to trick the system?
Yes, yes. We do this already, and we are probably safer for it. If we had a model, we could point to evidence that, in general, this helps. It should never be admissible evidence in court, but it would help police figure out who to keep eyes on.
Well, sorry if it’s something I’m super skeptical of as a minority.
Wow, they’re using Artificial Intelligence to do the same thing predictive analytics has been doing for over 50 years. What a time to be alive!
It’s almost like they just started calling the old thing by a new name!
Blockchain machine learning synergy.
The meme in every statisticians’ LinkedIn group is that AI and ML are just general regression models with a fancy hat.
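There is something to the “regression with a fancy hat” joke: a “neural network” with a single sigmoid unit trained on log-loss *is* logistic regression. A minimal sketch, with a made-up linearly separable dataset and plain gradient descent:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Made-up toy data: label is 1 when x1 + x2 > 1.
X = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (0, 2)]
y = [0, 0, 0, 1, 1, 1]

# One sigmoid "neuron": p = sigmoid(w·x + b), fit by gradient descent
# on log-loss — exactly the logistic regression estimator.
w, b = [0.0, 0.0], 0.0
lr = 0.5
for _ in range(2000):
    for (x1, x2), t in zip(X, y):
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - t  # gradient of log-loss with respect to the logit
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

print(sigmoid(w[0] * 2 + w[1] * 2 + b))  # close to 1: confidently class 1
```

The fancy hat, of course, is stacking many of these units with nonlinearities in between — but the single-unit case really is the old regression model under a new name.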