Office Space meme:
“If y’all could stop calling an LLM ‘open source’ just because they published the weights… that would be great.”
Even worse is calling a proprietary, absolutely closed-source, closed-data, closed-weights company “OpenAI”
Especially after it was founded as a nonprofit with the mission to push open source AI as far and wide as possible, ensuring a multipolar AI ecosystem in which AIs keep each other in check so that AI stays respectful and prosocial.
Sorry, that was a PR move from the get-go. Sam Altman doesn’t have an altruistic cell in his whole body.
It’s even crazier that Sam Altman and other ML devs said years ago that they had reached the peak of what current machine-learning models were capable of
But that doesn’t mean shit to the marketing departments
I like how when America does it we call it AI, and when China does it it’s just an LLM!
I’m including Facebook’s LLM in my critique. And I dislike the current hype on LLMs, no matter where they’re developed.
And LLMs are not “AI”. I’ve been calling them “so-called ‘AIs’” since waaay before this.
There are lots of problems with the new lingo. We need to come up with new words.
How about “Open Weightings”?
That’s fat shaming
Weights available?
Yeah, this shit drives me crazy. Putting aside the fact that it all runs off stolen data from regular people who are being exploited, most of this “AI” shit is basically just freeware, if anything; it’s about as “open source” as Winamp was back in the day.
Judging by OP’s salt in the comments, I’m guessing they might be an Nvidia investor. My condolences.
Nah, just a 21st century Luddite.
Arguably they are a new type of software, which is why the old categories do not align perfectly. Instead of arguing over how to best gatekeep the old name, we need a new classification system.
… Statistical engines are older than personal computers, with the first statistical package developed in 1957. And AI professionals would have called them trained models. The interpreter is code, the weights are not. We have had terms for these things for ages.
There were efforts. Facebook didn’t like those. (Since their models wouldn’t be considered open source anymore.)
I don’t care what Facebook likes or doesn’t like. The OSS community is us.
Open weights
Yes please, let’s use this term, and reserve “open source” for its existing definition in the academic ML setting: weights, methods, and training data. These models don’t readily fit into existing terminology for structural and logistical reasons, but when someone says “it’s got open weights” I know exactly what set of licenses and implications it may have without further explanation.
I mean, that’s all a model is, so… Once again someone who doesn’t understand anything about training or models is posting borderline misinformation about AI.
Shocker
A model is an artifact, not the source. We also don’t call binaries “open-source”, even though they are literally the code that’s executed. Why should these phrases suddenly get turned upside down for AI models?
A model can be represented only by its weights in the same way that a codebase can be represented only by its binary.
Training data is a closer analogue of source code than weights.
Yet another so-called AI evangelist accusing others of not understanding computer science if they don’t want to worship their machine god.
Praise the Omnissiah! … I’ll see myself out.
Do you think your comments here are implying an understanding of the tech?
It’s not like you need specific knowledge of Transformer models and whatnot to argue against LLM bandwagon simps. A basic knowledge of machine learning is fine.
And you believe you’re portraying that level of competence in these comments?
I at least do.
I mean, if you both think this is overhyped nonsense, then by all means short some Nvidia stock. If you know something the hedge fund teams don’t, why not sell your insider knowledge and become rich?
Or maybe you guys don’t understand it as well as you think. Could be either, I guess.
Because over-hyped nonsense is what the stock market craves… That’s how this works. That’s how all of this works.
I didn’t say it’s all overhyped nonsense; my only point is that I agree with the opinion stated in the meme, and I don’t think people who disagree really understand AI models or what “open source” means.
Yeah, let’s all base our decisions and definitions on what the stock market dictates. What could possibly go wrong?
/s 🙄
I have spent a very considerable amount of time tinkering with AI models of all sorts.
Personally, I don’t know shit. I learned about… cross-entropy loss functions (?) the other day. That was interesting. I don’t know a lick of calculus and was still able to grok what was going on thanks to a very excellent YouTube video. Anyway, I guess my point is that suddenly everyone is an expert.
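For anyone else curious, here’s the gist as I understood it, in toy Python (my own sketch, not from the video; the numbers are made up):

```python
import numpy as np

def cross_entropy(predicted_probs, true_index):
    # Penalize the model by how little probability it put on the right answer:
    # -log(p) is near 0 when p is near 1, and blows up as p approaches 0.
    return -np.log(predicted_probs[true_index])

print(cross_entropy(np.array([0.05, 0.90, 0.05]), 1))  # ~0.105 (confident and right)
print(cross_entropy(np.array([0.80, 0.10, 0.10]), 1))  # ~2.303 (mostly wrong)
```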
I’m not. But I think it’s neat.
Like. I’ve spent hundreds or possibly thousands of hours learning as much as I can about AI of all sorts (as a hobby) and I still don’t know shit. I trained a GAN once. On Reddit porn. Terrible results. Great learning.
It’s a cool state to be in ’cuz there’s so much out there to learn about.
I’m not entirely sure what my point is here beyond the fact that most people I’ve seen grandstanding about this stuff online tend to get schooled by an actual expert.
I love it when that happens.
Seems kinda reductive about what makes it different from most other LLMs. Reading the comments, I see the issue is that the training data is why some consider it not open source, but isn’t that just trained from the other AI? It’s not why this AI is special. And the way it uses that data, afaik, is open and editable, and the license to use it is open. What’s the issue here?
Seems kinda reductive about what makes it different from most other LLMs
The other LLMs aren’t open source, either.
isn’t that just trained from the other AI?
Most certainly not. If it were, it wouldn’t output coherent text, since LLM output degenerates if you human-centipede its outputs.
And the way it uses that data, afaik, is open and editable, and the license to use it is open.
From that standpoint, every binary blob should be considered “open source”, since the machine instructions are readable in RAM.
Well that’s the argument.
AI condensing AI is what’s being talked about here. From my understanding, DeepSeek is two parts: they start with known datasets already in use, and the two parts bounce ideas against each other and calculate fitness. So the problem of degrading recursive results is being directly tackled here. But training sets are tokenized gathered data. The gathering of data sets is a rights issue, but that’s not part of the conversation here.
It could be I don’t have a complete concept of what open source is, but from looking into it, all the boxes are checked. The data set is not what is different; it’s just data. DeepSeek says its weights are available and open to be changed (https://api-docs.deepseek.com/news/news250120), but the processes that handle that data at unprecedented efficiency are what make it special.
The point of open source is access to reproducibility. The weights are the end product (like a binary blob); to be open source, you need to supply the way the end product is created.
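To make that concrete, here’s a toy sketch (assuming PyTorch; the file name is made up). The training script plus its data are the “source”; the saved weights file is the artifact the build produces:

```python
import torch
import torch.nn as nn

# --- the "source": training code plus the data it consumes ---
model = nn.Linear(2, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data = torch.tensor([[0.0, 0.0], [1.0, 1.0]])  # stand-in for a training corpus
targets = torch.tensor([[0.0], [2.0]])

for _ in range(100):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(data), targets)
    loss.backward()
    optimizer.step()

# --- the "end product": an opaque blob of learned numbers ---
torch.save(model.state_dict(), "weights.pt")
```

Publishing only weights.pt is like publishing only the compiled binary: you can run it and even fine-tune it, but you can’t reproduce it without the script and the data above.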
So it’s not how it tokenized the data you’re looking for, it’s not how the weights are applied you want, and it’s not how it functions to structure the output you want, because these are all open… it’s the entirety of the bulk unfiltered data you want. Data which DeepSeek was provided from other AI projects for initial training, which can be changed to fit user needs, and which doesn’t touch at all on how this LLM is different from other LLMs? This would be, as I understand it, like saying that an open source game emulator can’t be open source because Nintendo games are encapsulated? I don’t consider the training data to be the LLM. I consider the system that manipulates that data to be the LLM. Is that where the difference in opinion is?
it’s the entirety of the bulk unfiltered data you want
Or more realistically: a description of how you could source the data.
doesn’t touch at all on how this LLM is different from other LLMs?
Correct. Llama isn’t open source, either.
like saying that an open source game emulator can’t be open source because Nintendo games are encapsulated
Not at all. It’s like claiming an emulator is open source because it has a plugin system, even though it needs a closed-source build dependency that the developer doesn’t disclose to the public.
A closed-source build dependency… so you don’t have a problem with the LLM at all! You have a problem with the data collection process or the pre-training! So an emulator can’t be open source if the methodology of how the developers discovered how to read Nintendo ROMs was not disclosed? Or which games were dissected in order to reverse-engineer that info? I don’t consider that a prerequisite to say an emulator is open.
So if I say… remove the data set from DeepSeek, would what remains be considered open source by you?
So an emulator can’t be open source if the methodology of how the developers discovered how to read Nintendo ROMs was not disclosed?
No. The emulator is open source if it supplies the way to get to the binary in the end. I don’t know how else to explain it to you: no LLM is open source.
A closer analogy would be providing only the binary output of the emulator build and calling it open source. If you can’t reproduce building the output from what they provide, in what way is it reproducible? The model is the output; the training data and the algorithm that builds the model from the training data are the input.
Edit: Say I have a Java project I want to open source. Normally (oversimplifying a bit) it goes: .java source files are used with a compiler to build intermediate bytecode in .class files, then there’s just-in-time (JIT) compilation to create the binary code as it runs in the JVM. It’s not open source if I only share the .class files, even if I can use them to recreate source files that can be recompiled into the same .class files. Starting at an intermediate step of the process isn’t the source.
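Same idea in Python terms, if that’s more familiar (hello.py is a made-up file): the bytecode is an intermediate build artifact, not the source.

```python
import py_compile

# hello.py is the source; this produces intermediate bytecode,
# the Python analogue of the .class files above.
py_compile.compile("hello.py", cfile="hello.pyc")

# Shipping only hello.pyc starts at an intermediate step of the
# process, so it isn't sharing the source.
```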
Would it? Not sure how that would be a better analogy. The argument is that it’s nearly all open… but it still does not count because the data set before it’s manipulated by the LLM (in my analogy, the data set the emulator is using would be a Nintendo ROM) is not open. A data set that, if provided, would be so massive it would render the point of tokenization pointless and be completely unusable by literally ANYONE without multiple data centers redlining for WEEKS. Under that standard of scrutiny, not only could there never be an LLM that would qualify, but projects that are considered open source would not be. Thus making the distinction meaningless.
An emulator without a ROM mounted is still an emulator, even if not usable.
I don’t understand your objections. Even if the amount of data is rather big, it doesn’t change that this data is part of the source, and leaving it out makes the whole project non-open-source.
Under that standard of scrutiny, not only could there never be an LLM that would qualify, but projects that are considered open source would not be. Thus making the distinction meaningless.
What? No? Open-source projects literally do meet this standard.
Open source will eventually surpass all closed-source software someday, no matter how many billions of dollars are invested in it.
Just look at Blender vs. Maya, for example.
Or like a human learning from all the previous people’s examples without paying them, aka normal life.
Would you accept a Smalltalk image as Open Source?
Meta’s “open source AI” ad campaign is so frustrating.
Source: it’s about open source, not access to the database.
So, where’s the source, then?
It’s not open, so it doesn’t matter.
It’s constantly referred to as “open source”.
Yeah, but it isn’t.
Great, so we agree. ᕕ(ᐛ)ᕗ
k