I’ve recently noticed this opinion seems unpopular, at least on Lemmy.
There is nothing wrong with downloading public data and doing statistical analysis on it, which is pretty much what these ML models do. They are not redistributing other people’s works (well, sometimes they do, unintentionally, and safeguards to prevent this are usually built in). The training data is generally much, much larger than the model itself, so it is generally not possible for the model to reconstruct arbitrary specific works. They are not creating derivative works, in the legal sense, because they do not copy and modify the original works; they generate “new” content based on probabilities.
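To put a rough number on that size gap: the back-of-the-envelope sketch below uses approximate, publicly reported figures for Stable Diffusion v1 (a ~4 GB checkpoint trained on the ~2.3-billion-image LAION dataset) purely as an illustration, not an exact accounting:

```python
# Rough illustration of why a model can't store its training set:
# divide the model's size by the number of training images.
# Figures are approximate public numbers for Stable Diffusion v1,
# used here only as an example.
training_images = 2_300_000_000   # LAION-2B-en: ~2.3 billion images
model_size_bytes = 4 * 10**9      # checkpoint: roughly 4 GB

bytes_per_image = model_size_bytes / training_images
print(f"~{bytes_per_image:.2f} bytes of model weight per training image")
```

At under two bytes of weights per training image, the model plainly cannot contain the images; it can only retain aggregate statistical patterns (memorization of individual works is the rare exception, typically when an image was heavily duplicated in the training data).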
My opinion on the subject is pretty much in agreement with this document from the EFF: https://www.eff.org/document/eff-two-pager-ai
I understand the hate for companies using data you would reasonably expect to be private. I understand the hate for purposely over-fitting a model on someone’s data to reproduce their “likeness.” I understand the hate for AI-generated shit (because it is shit). I really don’t understand where all this hate for using public data to build a “statistical” model that “learns” general patterns is coming from.
I can also understand the anxiety people may feel if they believe all the AI hype that it will eliminate jobs. I don’t think AI is going to be able to directly replace people any time soon. It will probably improve productivity (with things like background removers, better autocomplete, etc.), which might eliminate some jobs, but that’s really just a problem with capitalism, and productivity increases are generally considered good.
I agree with some other comments that this is a question of public domain vs. copyright. However, even copyright has exceptions, notably fair use in the US.
One of the chief AI critics, Sarah Andersen, claimed 9 months ago that when an AI generated output for the prompt “Sarah Andersen comic”, it clearly imitated her style, and if any AI company is to be believed, the imitation will only get more accurate with later models, possibly producing a believable comic including text.
Regardless of how accurately the AI can draw the comics (as long as they aren’t effectively identical to a single specific comic of hers), shouldn’t this just qualify as fair use? I can imitate SA’s style too and make a parody comic, or even just go the lazy way and change some text like alt-right “memers” did. As long as the content is distributed as “homage”, “parody”, “criticism”, etc., doesn’t directly harm Sarah Andersen’s financial interests, and makes it clear that the author is not her, I think there should be no issue even if it features the likeness of trademarked characters, phrases, and concepts.
Makes me ashamed there is a book by her in my house (my sister received it as a gift).
This argument is more along the lines of what is actually being argued by AI companies in court. Style cannot be copyrighted. They argue AI is simply recreating a style.
The problem with this is that, in order to recreate a style, the AI needs to be trained on that content. So if an AI starts reproducing art in the same style as a popular artist, it must have been fed a whole bunch of that artist’s work. Artists claim this is a violation of copyright, since they never agreed for their art to be used in that way. The AI companies argue that fair use also allows the use of copyrighted works for teaching or training: an art class can use a popular artist’s work as an example of how to recreate a certain style. Of course, training an AI is different from training a group of students. Whether it is different enough that fair use no longer applies is the question being decided in court.