The narrative that OpenAI, Microsoft, and freshly minted White House “AI czar” David Sacks are now pushing to explain why DeepSeek was able to create a large language model that outpaces OpenAI’s while spending orders of magnitude less money and using older chips is that DeepSeek used OpenAI’s data unfairly and without compensation. Sound familiar?
Both Bloomberg and the Financial Times are reporting that Microsoft and OpenAI have been probing whether DeepSeek improperly trained the R1 model that is taking the AI world by storm on the outputs of OpenAI models.
It is, as many have already pointed out, incredibly ironic that OpenAI, a company that has been obtaining large amounts of data from all of humankind largely in an “unauthorized manner,” and in some cases in violation of the terms of service of those it has been taking from, is now complaining about the very practices by which it has built its company.
OpenAI is currently being sued by the New York Times for training on its articles, and its argument is that this is perfectly fine under copyright law’s fair use protections.
“Training AI models using publicly available internet materials is fair use, as supported by long-standing and widely accepted precedents. We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness,” OpenAI wrote in a blog post. In its motion to dismiss in court, OpenAI wrote “it has long been clear that the non-consumptive use of copyrighted material (like large language model training) is protected by fair use.”
If OpenAI argues that it is legal for the company to train on whatever it wants for whatever reason it wants, then it stands to reason that it doesn’t have much of a leg to stand on when competitors use common strategies from the world of machine learning to make their own models.
It is effing hilarious. First, OpenAI & friends steal creative works to “train” their LLMs. Then they are insanely hyped for what amounts to glorified statistics, get “valued” at insane amounts while burning money faster than a Californian forest fire. Then, a competitor appears that has the same evil energy but slightly better statistics… bam. A trillion of “value” just evaporates as if it never existed.
And then suddenly people are complaining that DeepSuck is “not privacy friendly” and stealing from OpenAI. Hahaha. Fuck this timeline.
You can also just run deepseek locally if you are really concerned about privacy. I did it on my 4070ti with the 14b distillation last night. There’s a reddit thread floating around that described how to do it with ollama and a chatbot program.
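A minimal sketch of what that local setup looks like, assuming the ollama Python client (the model tag and prompt here are illustrative; the chatbot UI mentioned in the comment is a separate program and not shown):

```python
# Minimal sketch: chat with a locally hosted DeepSeek-R1 14B distillation via Ollama.
# Assumes the Ollama server is running locally and the ollama Python client is
# installed (`pip install ollama`). The weights are downloaded once from the Ollama
# registry; after that, inference happens entirely on your own hardware.
import ollama

ollama.pull("deepseek-r1:14b")  # one-time download of the 14B distilled weights

response = ollama.chat(
    model="deepseek-r1:14b",
    messages=[{"role": "user", "content": "Summarize the fair use doctrine in two sentences."}],
)
print(response["message"]["content"])
```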
And how does that help with the privacy?
If you’re running it on your own system it isn’t connected to their server or sharing any data. You download the model and run it on your own hardware.
From the thread I was reading, people tracked the outgoing packets and they seemed to just be coming from the chatbot program as analytics, not anything going to deepseek.
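A rough way to do a similar spot check in Python, assuming the psutil library (this is not the packet-tracking method from the thread; it just lists which remote hosts a local ollama process currently has connections open to):

```python
# Rough spot check: list remote endpoints held open by any local "ollama" process.
# Assumes psutil is installed (`pip install psutil`); on some platforms you may need
# elevated privileges to see other processes' sockets. Seeing nothing during inference
# is reassuring, but it's no substitute for a real packet capture.
import psutil

for conn in psutil.net_connections(kind="inet"):
    if not conn.raddr or conn.pid is None:
        continue  # skip listening sockets and entries with no owning process
    try:
        proc_name = psutil.Process(conn.pid).name()
    except psutil.NoSuchProcess:
        continue
    if "ollama" in proc_name.lower():
        print(f"{proc_name} (pid {conn.pid}) -> {conn.raddr.ip}:{conn.raddr.port}")
```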
How do you know it isn’t communicating with their servers? Obviously it needs an internet connection to work, so what’s stopping it from sending your data?
That is true, and running locally is better in that respect. My point was more that privacy was hardly ever an issue until suddenly now.
Absolutely! I was just expanding on what you said for others who come across the thread :)
You know what else isn’t privacy friendly? Like all of social media.