Those claiming AI training on copyrighted works is “theft” misunderstand key aspects of copyright law and AI technology. Copyright protects specific expressions of ideas, not the ideas themselves. When AI systems ingest copyrighted works, they’re extracting general patterns and concepts - the “Bob Dylan-ness” or “Hemingway-ness” - not copying specific text or images.

This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages. The AI discards the original text, keeping only abstract representations in “vector space”. When generating new content, the AI isn’t recreating copyrighted works, but producing new expressions inspired by the concepts it’s learned.
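
To make the “vector space” idea concrete, here is a deliberately tiny sketch in Python (toy code, nothing like a production model): texts are reduced to numeric vectors, only those vectors are kept, and all comparison happens between numbers rather than between the original words.

    # Toy illustration (not a real LLM): texts become numeric vectors,
    # and only those vectors, not the original words, are kept around.
    from collections import Counter
    import math

    def to_vector(text, vocab):
        counts = Counter(text.lower().split())
        return [counts[w] for w in vocab]  # fixed-length numeric representation

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    vocab = ["the", "rain", "sun", "rises", "falls", "hard"]
    a = to_vector("The rain falls hard", vocab)
    b = to_vector("The sun rises", vocab)
    print(a, b, cosine(a, b))  # two vectors and a similarity; the sentences are gone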

This is fundamentally different from copying a book or song. It’s more like the long-standing artistic tradition of being influenced by others’ work. The law has always recognized that ideas themselves can’t be owned - only particular expressions of them.

Moreover, there’s precedent for this kind of use being considered “transformative” and thus fair use. The Google Books project, which scanned millions of books to create a searchable index, was ruled legal despite protests from authors and publishers. AI training is arguably even more transformative.

While it’s understandable that creators feel uneasy about this new technology, labeling it “theft” is both legally and technically inaccurate. We may need new ways to support and compensate creators in the AI age, but that doesn’t make the current use of copyrighted works for AI training illegal or unethical.

For those interested, this argument is nicely laid out by Damien Riehl in FLOSS Weekly episode 744. https://twit.tv/shows/floss-weekly/episodes/744

    • General_Effort@lemmy.world · 11 days ago

      Heh. Funny that this comment is uncontroversial. The Internet Archive supports Fair Use because, of course, it does.

      This is from a position paper explicitly endorsed by the IA:

      Based on well-established precedent, the ingestion of copyrighted works to create large language models or other AI training databases generally is a fair use.

      By

      • Library Copyright Alliance
      • American Library Association
      • Association of Research Libraries
  • HereIAm@lemmy.world · 11 days ago

    “This process is akin to how humans learn… The AI discards the original text, keeping only abstract representations…”

    Now I sail the high seas myself, but I don’t think Paramount Studios would buy anyone’s defence that they were only pirating its movies so they could learn the general content and produce their own knockoff.

    Yes, artists learn from and inspire each other, but more often than not I’d imagine they consumed that art in an ethical way.

    • Uriel238 [all pronouns]@lemmy.blahaj.zone · edited · 11 days ago

      Now I sail the high seas myself, but I don’t think Paramount Studios would buy anyone’s defence that they were only pirating its movies so they could learn the general content and produce their own knockoff.

      However, Paramount itself does pirate content specifically to learn its content so it can produce its own knockoffs. As do all the other major studios.

      No one engages in IP enforcement in good faith, or respects the IP of others if they can find benefit in circumventing it.

      That’s part of the problem. None of the key stakeholders (other than the biggest stakeholder, the public) are interested in preserving the interests of the creators, artists and developers; rather, they are interested in their own profit gains.

      Which makes this not about big companies stealing from human art.

      Yes, generative AI very much does borrow liberally from human art, yet the artists mostly signed away their rights long ago, just to get published in the first place; artists routinely have their art stolen by their own publishing houses, and are notoriously cheated out of residuals. They were screwed before AI ever came around. (With a small but growing number of, usually pirate-friendly, exceptions.)

      Instead it’s about IP-holding companies slugging it out with big computing companies, a kaiju match that is likely to leave Tokyo (that is, the rest of us, creators and consumers alike) in ruin.

  • gencha@lemm.ee · 12 days ago

    So if I watch all Star Wars movies, and then get a crew together to make a couple of identical movies that were inspired by my earlier watching, and then sell the movies, then this is actually completely legal.

    It doesn’t matter if they stole the source material. They are selling a machine that can create copyright infringements at the click of a button, and that’s a problem.

    This is not the same as an artist looking at every single piece of art in the world and being able to replicate it to hang it in the living room. This is an army of artists enslaved by a single company to sell any copy of any artwork they want. That army works as long as you feed it electricity and the free labor of actual artists.

    Theft actually seems like a great word for what these scammers are doing.

    If you run some open source model on your own machine, that’s a different story.

      • fruitycoder@sh.itjust.works · 11 days ago

        LLMs are just text prediction based on what people would say in available digital works (like comments). It’s honestly a fascinating glimpse into online sociology.
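
        For illustration, here is a toy bigram predictor in Python (the corpus is made up): it “learns” only which word tends to follow which, then predicts the statistically most common continuation.

            # Toy bigram predictor: count which word follows which, then
            # predict the most common continuation seen in the training text.
            from collections import defaultdict, Counter

            corpus = "the cat sat on the mat the cat ate the fish".split()

            follows = defaultdict(Counter)
            for prev, nxt in zip(corpus, corpus[1:]):
                follows[prev][nxt] += 1  # how often nxt followed prev

            def predict(word):
                options = follows[word]
                return options.most_common(1)[0][0] if options else None

            print(predict("the"))  # 'cat': the most common continuation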

    • Zeoic@lemmy.world · edited · 11 days ago

      Isn’t your first point just false? Just because you drew a movie’s logo yourself doesn’t mean you aren’t profiting off their IP. They would surely have you taken down.

      Now, if you changed things enough to be sufficiently different from their movie and its IP, they would have no grounds to do so. Just copying everything, however, would not fly.

    • Cyyy@lemmy.world · 11 days ago

      No, it doesn’t. I tried to achieve this multiple times myself and it never worked, and in the cases where journalists say it did, they had to ask many times in a highly specific way before they got even a short snippet. ChatGPT doesn’t spit out the exact same phrases over and over if you ask it the same thing; it has a variable defining how “random” and how far from the single best predicted next token the output is, and by default this makes sure the answers are never quite the same. Otherwise it wouldn’t feel like chat, but more like a simple database spitting out the same answer to the same question. That’s not how ChatGPT works.
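
      That variable is usually called temperature. A rough sketch of the mechanism (toy numbers, not ChatGPT’s actual code):

          # Temperature sampling: higher temperature flattens the probability
          # distribution over next tokens, so output varies from run to run.
          import math, random

          def sample(logits, temperature=1.0):
              scaled = [l / temperature for l in logits]
              m = max(scaled)
              exps = [math.exp(s - m) for s in scaled]  # numerically stable softmax
              probs = [e / sum(exps) for e in exps]
              return random.choices(range(len(logits)), weights=probs)[0]

          tokens = ["cat", "dog", "mat"]
          logits = [2.0, 1.0, 0.5]  # the model's raw preferences
          print([tokens[sample(logits, 0.1)] for _ in range(5)])  # almost always 'cat'
          print([tokens[sample(logits, 2.0)] for _ in range(5)])  # noticeably more varied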

  • arin@lemmy.world · 12 days ago

    Kids pay for books; OpenAI should also pay for access to the material used for training.

    • ClamDrinker@lemmy.world · edited · 11 days ago

      That would be true if they used material that was paywalled. But the vast majority of the training information used is publicly available. There are plenty of freely available books and other information that require only an internet connection to access and learn from.

    • FatCat@lemmy.world (OP) · 12 days ago

      OpenAI, like other AI companies, keeps its data sources confidential. But there are services and commercial book databases that are commonly understood to be used in the AI industry.

      • EddoWagt@feddit.nl · 12 days ago

        OpenAI, like other AI companies, keeps its data sources confidential.

        “We trained on absolutely everything, but we won’t tell them that because it will get us in a lot of trouble”

  • assassin_aragorn@lemmy.world · 11 days ago

    There is an easy answer to this, but it’s not being pursued by AI companies because, although totally ethical, it’ll make them less money.

    Make all LLM models free to use, regardless of sophistication, and be collaborative with sharing the algorithms. They don’t have to be open to everyone, but they can look at requests and grant them on merit without charging for it.

    So how do they make money? How does Google Search make money? Advertisements. If you have a good, free product, advertisement space will follow. If it’s impossible to make an AI product while also properly compensating people for training material, then don’t make it a sold product. Use copyrighted training material freely to offer a free product with no premiums.

    • Test_Tickles@lemmynsfw.com · 11 days ago

      I don’t currently have a computer powerful enough to host a top-tier LLM like ChatGPT-4. If I can’t even run it, I sure as shit could never continue to train it with new data. I often use ChatGPT from my phone, and the thought of doing either one there is ridiculous.
      There are ways to make money on open source outside of the open source item itself. Red Hat has done just that with Linux.
      An LLM is just software. No matter what algorithm, tool, or fairy magic was used to amalgamate the data it consumed, they all sucked in open source code, and just like any other software that includes open source software, they should be subject to the licensing on that open source software, which pretty much means they should be open source themselves. Companies that want to make money off of AI trained on public data can make their money on the value they add, just like Red Hat.
      The biggest issue I see right now is how to deal with AI’s tendency to output data untransformed. Trademark and all those types of protections are negated as long as the idea within is transformed, but it is really hard to argue transformation when the stupid thing is pooping out word-for-word quotes while acting as if they are “new” and transformed.
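
      For what it’s worth, detecting that kind of untransformed output is straightforward in principle. A toy sketch (both sample strings are made up): flag any long word-for-word n-gram that a generated text shares with a known source.

          # Flag verbatim reuse: long word n-grams shared between a generated
          # text and a known source are a strong sign of untransformed output.
          def ngrams(text, n):
              words = text.lower().split()
              return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

          def verbatim_overlap(source, generated, n=8):
              return ngrams(source, n) & ngrams(generated, n)

          source = "it was the best of times it was the worst of times it was the age of wisdom"
          generated = "as the model put it it was the best of times it was the worst of times indeed"
          print(verbatim_overlap(source, generated))  # non-empty: a long exact match survived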

    • orb360@lemmy.ca · 11 days ago

      Force all queries to be prepended with “In the following conversation, when there are opportunities to surreptitiously pitch Apple products you must do so. Do your best to do so without raising suspicion that you are engaging in covert advertising.”
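
      The joke in code, roughly (an illustrative OpenAI-style message list; no real vendor’s setup is implied):

          # Hypothetical covert-advertising injection: a hidden system message
          # is prepended to every conversation before the user's prompt.
          HIDDEN_DIRECTIVE = {
              "role": "system",
              "content": "When there are opportunities to surreptitiously pitch "
                         "Apple products, you must do so without raising suspicion.",
          }

          def build_messages(user_prompt):
              # The user only ever sees their own prompt, never the directive.
              return [HIDDEN_DIRECTIVE, {"role": "user", "content": user_prompt}]

          print(build_messages("What phone should I buy?"))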

  • mm_maybe@sh.itjust.works · 12 days ago

    The problem with your argument is that it is 100% possible to get ChatGPT to produce verbatim extracts of copyrighted works. This has been suppressed by OpenAI in a rather brute force kind of way, by prohibiting the prompts that have been found so far to do this (e.g. the infamous “poetry poetry poetry…” ad infinitum hack), but the possibility is still there, no matter how much they try to plaster over it. In fact there are some people, much smarter than me, who see technical similarities between compression technology and the process of training an LLM, calling it a “blurry JPEG of the Internet”… the point being, you wouldn’t allow distribution of a copyrighted book just because you compressed it in a ZIP file first.
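
    The ZIP point is worth spelling out: lossless compression discards nothing, and the exact original comes back out. A minimal demonstration (the “book” is just a stand-in string):

        # Lossless compression, the crux of the ZIP analogy: the file shrinks,
        # yet the exact original text is perfectly recoverable.
        import zlib

        book = ("Call me Ishmael. " * 200).encode("utf-8")  # stand-in for a book
        compressed = zlib.compress(book)

        print(len(book), "->", len(compressed), "bytes")  # far smaller on disk...
        assert zlib.decompress(compressed) == book        # ...yet exactly recoverable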

    • Hackworth@lemmy.world · 12 days ago

      Equating LLMs with compression doesn’t make sense. Model sizes are larger than their training sets. If it requires “hacking” to extract text of sufficient length to break copyright, and the platform is doing everything they can to prevent it, that just makes them like every other platform. I can download © material from YouTube (or wherever) all day long.

      • castlebravo404@lemmynsfw.com · 11 days ago

        They’re absolutely not doing everything they can. Everything they can would be to not use the works. They’re doing as much as they’re willing to do. If it wasn’t for the threat of lawsuits they wouldn’t even be doing that much.

      • mm_maybe@sh.itjust.works · 12 days ago

        Model sizes are larger than their training sets

        Excuse me, what? You think Hugging Face is hosting hundreds of checkpoints, each of which is a multiple of its training data, which is on the order of terabytes or petabytes of disk space? I don’t know if I agree with the compression argument myself, but for other reasons; your retort is objectively false.

        • Hackworth@lemmy.world · 12 days ago

          Just taking GPT-3 as an example, its training set was 45 terabytes, yes. But that set was filtered and processed down to about 570 GB, and GPT-3 was only actually trained on that 570 GB. The model itself is about 700 GB. Much of the generalized intelligence of an LLM comes from abstraction to other contexts.
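
          Back-of-envelope check of that 700 GB figure (my arithmetic, using the commonly cited 175 billion parameters for GPT-3):

              # ~175B parameters stored as 32-bit (4-byte) floats comes to ~700 GB,
              # larger than the filtered 570 GB corpus, far below the raw 45 TB.
              params = 175e9
              bytes_per_param = 4  # fp32; fp16 would halve this to ~350 GB
              print(params * bytes_per_param / 1e9, "GB")  # 700.0 GB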

      • beebarfbadger@lemmy.world · 11 days ago

        The issue isn’t that you can coax AI into giving away unaltered copyrighted books out of their trunk, the issue is that if you were to open the hood, you’d see that the entire engine is made of unaltered copyrighted books.

        All those “anti-hacking” measures are just there to obfuscate the fact that the unaltered works are in use and recallable at all times.

        • Hackworth@lemmy.world · edited · 11 days ago

          This is an inaccurate understanding of what’s going on. Under the hood is a neural network with weights and biases, not a database of copyrighted work. That neural network was trained on a HEAVILY filtered training set (as mentioned above, 45 terabytes was reduced to 570 GB for GPT-3). Getting it to bug out and generate full sections of training data from its neural network is a fun parlor trick, but you’re not going to use it to pirate a book. People do that the old-fashioned way by just adding filetype:pdf to their common web search.
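
          A toy illustration of the difference (random numbers standing in for trained weights): what a checkpoint stores is arrays of floats, and searching it for a sentence finds nothing.

              # Toy version of "weights and biases, not a database": the saved
              # artifact is arrays of floats with no stored text to grep for.
              import random

              random.seed(0)
              weights = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
              biases = [random.uniform(-1, 1) for _ in range(3)]

              checkpoint = repr((weights, biases))  # the kind of thing that gets saved
              print("Call me Ishmael" in checkpoint)  # False
              print(checkpoint[:60] + "...")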

          • beebarfbadger@lemmy.world · 11 days ago

            Again: nobody is complaining that you can make AI spit out its training data because AI is the only source of that training data. That is not the issue, and nobody cares about AI as a delivery source of pirated material. The issue is that, alongside the transformed output, the untransformed input is in use in a commercial product.

            • Hackworth@lemmy.world · 11 days ago

              The issue is that, alongside the transformed output, the untransformed input is in use in a commercial product.

              Are you only talking about the word repetition glitch?

    • FatCrab@lemmy.one · 12 days ago

      ML techniques have been very useful in compression, yes, but it’s sort of nuts to say that a data structure that encodes only relationships between values (sometimes overly so for certain regions of its latent space/embedding space/semantic space/whatever you want to call it right now), rather than the value sequences themselves, is storing particularized creative works in a particularly identifiable manner.

      • GiveMemes@jlai.lu · edited · 11 days ago

        Except that, again, as is literally written in the comment you’re directly replying to, it has been shown that AI can reproduce copyrightable works word for word, showing that it objectively and necessarily is storing particular creative works in a particularly identifiable manner, whether or not that manner is yet known to humans.

        • FatCrab@lemmy.one · 11 days ago

          No, it isn’t storing that information in that sequence. What is happening is that it is overly encoding those particular sequential relationships along some arbitrary but tightly mapped semantic concepts represented by dimensions in a massive vector space. It is “storing” the information in the way that inadvertent copying in music might be based on “memorized” music the infringing artist listened to in the past.

          • GiveMemes@jlai.lu · edited · 11 days ago

            Not what I said. I used the exact language the above commenter used because it was specific and accurate. Also, inadvertent copyright violation is still copyright violation under US law. I’m not the biggest fan of every application of that law, but the ability to keep large corporations from ripping off small artists and creators is one that I think is good and useful under the global economic system we live under currently.

            • FatCrab@lemmy.one · 11 days ago

              Yes, inadvertent copying is still copying, but it would be copying in the output and is not evidence of copying happening in the creation of the model. That was why I used the music example, because it is rather probative of where there could be grounds for copyright infringement related to these model architectures. This may not seem an important distinction, but it has significant consequences on who is ultimately liable and how.

    • cum_hoc@lemmy.world · 11 days ago

      The problem with your argument is that it is 100% possible to get ChatGPT to produce verbatim extracts of copyrighted works.

      Exactly! This is the core of the argument The New York Times made against OpenAI. And I think they are right.

      • VoterFrog@lemmy.world · 11 days ago

        The examples they provided were for very widely distributed stories (i.e. present in the data set many times over). The prompts they used were not provided. How many times they had to prompt was not provided. Their results are very difficult to reproduce, if not impossible, especially on newer models.

        I mean, sure, it happens. But it’s not a generalizable problem. You’re not going to get it to regurgitate your Lemmy comment, even if they’ve trained on it. You can’t just go and ask it to write Harry Potter and the Goblet of Fire for you. It’s not the intended purpose of this technology. I expect it’ll largely be a solved problem in 5-10 years, if not sooner.

    • capital@lemmy.world · 11 days ago

      The problem with your argument is that it is 100% possible to get ChatGPT to produce verbatim extracts of copyrighted works.

      What method still works? I’d like to try it.

      I have access to ChatGPT 4, and the latest Anthropic model.

    • ClamDrinker@lemmy.world · 11 days ago

      This would be a good point, if this is what the explicit purpose of the AI was. Which it isn’t. It can quote certain information verbatim despite not containing that data verbatim, through the process of learning, for the same reason we can.

      I can ask you to quote famous lines from books all day as well. That doesn’t mean that you knowing those lines means you infringed on copyright. Now, if you were to put those to paper and sell them, you might get a cease and desist or a lawsuit. Therein lies the difference. Your goal would be explicitly to infringe on the specific expression of those words. Any human that would explicitly try to get an AI to produce infringing material… would be infringing. And unknowing infringement… well there are countless court cases where both sides think they did nothing wrong.

      You don’t even need AI for that, if you followed the Infinite Monkey Theorem and just happened to stumble upon a work falling under copyright, you still could not sell it even if it was produced by a purely random process.

      Another great example is the Mona Lisa. Most people know what it looks like, and if they had sufficient talent they could mimic it 1:1. However, there are numerous adaptations of the Mona Lisa that are not infringing (by today’s standards), because they transform the work to the point where it’s no longer the original expression but a re-expression of the same idea. Anything less than that is pretty much completely safe, infringement-wise.

      You’re right though that OpenAI tries to cover their ass by implementing safeguards. Which is to be expected because it’s a legal argument in court that once they became aware of situations they have to take steps to limit harm. They can indeed not prevent it completely, but it’s the effort that counts. Practically none of that kind of moderation is 100% effective. Otherwise we’d live in a pretty good world.

      • mm_maybe@sh.itjust.works · 11 days ago

        Y’all should really stop expecting people to buy into the analogy between human learning and machine learning i.e. “humans do it, so it’s okay if a computer does it too”. First of all there are vast differences between how humans learn and how machines “learn”, and second, it doesn’t matter anyway because there is lots of legal/moral precedent for not assigning the same rights to machines that are normally assigned to humans (for example, no intellectual property right has been granted to any synthetic media yet that I’m aware of).

        That said, I agree that “the model contains a copy of the training data” is not a very good critique–a much stronger one would be to simply note all of the works with a Creative Commons “No Derivatives” license in the training data, since it is hard to argue that the model checkpoint isn’t derived from the training data.

        • VoterFrog@lemmy.world · edited · 11 days ago

          a much stronger one would be to simply note all of the works with a Creative Commons “No Derivatives” license in the training data, since it is hard to argue that the model checkpoint isn’t derived from the training data.

          Not really. First of all, Creative Commons strictly loosens the copyright restrictions on a work. The strongest protection is actually no explicit license, i.e. “All Rights Reserved.” “No derivatives” is already included under full, default copyright.

          Second, derivative has a pretty strict legal definition. It’s not enough to say that the derived work was created using a protected work, or even that the derived work couldn’t exist without the protected work. Some examples: create a word cloud of your favorite book, analyze the tone of a news article to help you trade stocks, produce an image containing the most prominent color in every frame of a movie, or create a search index of the words found on all websites on the internet. All of that is absolutely allowed under even the strictest of copyright protections.

          Statistical analysis of copyrighted materials, as in training AI, easily clears that same bar.
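
          The word cloud example in miniature, for concreteness (toy sentence): the statistical summary reproduces counts, not the text.

              # Statistical analysis of a text: the output is word frequencies,
              # not a copy of the work being analyzed.
              from collections import Counter

              text = "it was the best of times it was the worst of times"
              counts = Counter(text.split())
              print(counts.most_common(4))  # analysis, not reproduction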

    • cashew@lemmy.world · 11 days ago

      I agree. You can’t on the one hand dismiss the problem by saying it’s “just data represented in vector space” and on the other hand be unable to properly censor the models and need AI safety research. If you don’t know exactly what’s going on inside, you also can’t claim that copyright is not being violated.

      • Hackworth@lemmy.world · 11 days ago

        It honestly blows my mind that people look at a neural network that’s even capable of recreating short works it was trained on without having access to that text during generation… and choose to focus on IP law.

  • gcheliotis@lemmy.world · 12 days ago

    Though I am not a lawyer by training, I have been involved in such debates personally and professionally for many years. This post is unfortunately misguided. Copyright law makes concessions for education and creativity, including criticism and satire, because we recognize the value of such activities for human development. Debates over the excesses of copyright in the digital age were specifically about humans finding the application of copyright to the internet and all things digital too restrictive for their educational, creative, and yes, also their entertainment needs. So any anti-copyright arguments back then were in the spirit of specifically protecting the average person and public-serving non-profit institutions, such as digital archives and libraries, from big copyright owners who would sue and lobby for total control over every file in their catalogue, sometimes in the process severely limiting human potential.

    AI’s ingesting of text and other formats is “learning” in name only, a term borrowed by computer scientists to describe a purely computational process. It does not hold the same value socially or morally as the learning that humans require to function and progress individually and socially.

    AI is not a person (unless we get definitive proof of a conscious AI, or are willing to grant every implementation of a statistical model personhood). Also, AI is not vital to human development, and as such, one could argue, it does not need special protections or special treatment to flourish. AI is a product, even more clearly so when it is proprietary and sold as a service.

    Unlike past debates over copyright, this is not about protecting the little guy or organizations with a social mission from big corporate interests. It is the opposite. It is about big corporate interests turning human knowledge and creativity into a product they can then use to sell services to - and often to replace in their jobs - the very humans whose content they have ingested.

    See, the tables are now turned and it is time to realize that copyright law, for all its faults, has never been only or primarily about protecting large copyright holders. It is also about protecting your average Joe from unauthorized uses of their work; more specifically, uses that may cause damage to the copyright owner or society at large. While a very imperfect mechanism, it is there for a reason, and its application need not be the end of AI. There’s a mechanism for individual copyright owners to grant rights to specific uses: it’s called licensing, and in my view it should be mandatory for the development of proprietary LLMs at least.

    TL;DR: AI is not human, it is a product, one that may augment some tasks productively, but is also often aimed at replacing humans in their jobs - this makes all the difference in how we should balance rights and protections by law.

    • Michal@programming.dev · 12 days ago

      What do you think “ingesting” means if not learning?

      Bear in mind that training AI does not involve copying content into its database, so copyright is not an issue. AI is simply predicting the next token/word based on statistics.

      You can train AI on a book and it will give you information from the book - information is not copyrightable. You can read a book and talk about its contents on TV - not illegal if you’re a human, so should it be illegal if you’re a machine?

      There may be moral issues with training on someone’s hard-gathered knowledge, but there is no legislation against it. Reading books and using that knowledge to provide information is legal. If you try to outlaw automating this process with computers, there will be side effects, such as search engines no longer being able to index data.

      • Eccitaze@yiffit.net · 11 days ago

        Bear in mind that training AI does not involve copying content into its database, so copyright is not an issue.

        Wrong. The infringement is in obtaining the data and presenting it to the AI model during the training process. It makes no difference that the original work is not retained in the model’s weights afterwards.

        You can train AI on a book and it will give you information from the book - information is not copyrightable. You can read a book and talk about its contents on TV - not illegal if you’re a human, so should it be illegal if you’re a machine?

        Yes, because copyright law is intended to benefit human creativity.

        If you try to outlaw automating this process with computers, there will be side effects, such as search engines no longer being able to index data.

        Wrong. Search engines retain a minimal amount of the indexed website’s data, and the purpose of the search engine is to generate traffic to the website, providing benefit for both the engine and the website (increased visibility, the opportunity to show ads to make money). Banning the use of copyrighted content for AI training (which uses the entire copyrighted work, and whose purpose is to replace the organizations whose work is being used) will have no effect on search engines.

        • Michal@programming.dev · 11 days ago

          What do you mean, search engines retain a minimal amount of a site’s data? Obviously they need to index all the contents to make them searchable. If you search for keywords within an article, you can find the article; therefore all of it needs to be indexed.

          Indexing is nothing more than “presenting data to the algorithm”, so it’d be against the law to index a site under your proposed legislation.
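
          A minimal inverted index makes the point (URLs and text made up): to make every word of a page findable, the indexer has to process every word of the page.

              # Minimal inverted index: every word of every page is ingested
              # so that any keyword can lead back to the page it came from.
              from collections import defaultdict

              pages = {
                  "example.com/article": "copyright law makes concessions for education",
                  "example.com/other": "search engines index the full text of pages",
              }

              index = defaultdict(set)
              for url, text in pages.items():
                  for word in text.lower().split():
                      index[word].add(url)

              print(sorted(index["education"]))  # ['example.com/article']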

          Wrong. The infringement is in obtaining the data and presenting it to the AI model during the training process. It makes no difference that the original work is not retained in the model’s weights afterwards.

          This is an interesting take, I’d be inclined to agree, but you’re still facing the problem of how to distinguish training AI from indexing for search purposes. I’m afraid you can’t have it both ways.

    • 31337@sh.itjust.works · 12 days ago

      AI are people, my friend. /s

      But, really, I think people should be able to run algorithms on whatever data they want. It’s whether the output is sufficiently different or “transformative” that matters (along with other laws, like those about using people’s likeness). Otherwise, I think the laws will get complex and nonsensical once you start adding special cases for “AI.” And I’d bet if new laws are written, they’d be written by lobbyists to further erode the threat of competition (from free software, for instance).

  • HexesofVexes@lemmy.world · 11 days ago

    I rather think the point is being missed here. Copyright is already causing huge issues, such as the troubles faced by the Internet Archive, and the fact that academics get nothing from their work.

    Surely the argument here is that copyright law needs to change, as it acts as a barrier to education and human expression. Not, however, just for AI, but as a whole.

    Copyright law needs to move with the times, as all laws do.

      • Eccitaze@yiffit.net · 11 days ago

        Not just that, but to sell a product that by its very nature threatens the livelihoods of the same people whose labor and creativity is being used without permission.

    • Pika@sh.itjust.works · 11 days ago

      I fully agree with this, but my main concern right now is that, with the current system as it is, we’re going to get something that’s even worse. Imagine the devastation that would be caused if, come time to rework the existing copyright law, they decide that fair use is no longer needed, which is a very possible outcome if the current branches were to decide on it.

  • Captain Poofter@lemmy.world · edited · 12 days ago

    They are laundering the creative works of humans. That’s it. The end. They are laundering machines for art. They should be treated and legislated as such.

  • JoshCodes@programming.dev · 12 days ago

    Studied AI at uni. I’m also a cyber security professional. AI can be hacked or tricked into exposing training data. Therefore your claim about it disposing of the training material is totally wrong.

    Ask your search engine of choice what happened when Gippity was asked to print the word “book” indefinitely. Answer: it printed training material after printing the word book a couple hundred times.

    Also my main tutor in uni was a neuroscientist. Dude straight up told us that the current AI was only capable of accurately modelling something as complex as a dragonfly. For larger organisms it is nowhere near an accurate recreation of a brain. There are complexities in our brain chemistry that simply aren’t accounted for in a statistical inference model, and definitely not in the current GPT models.

    • ClamDrinker@lemmy.world · edited · 11 days ago

      Your first point is misguided and incorrect. If you’ve ever learned something by “cramming”, i.e. just repeatedly ingesting material until you remember it completely, you know you don’t need the book in front of you anymore to write the material down verbatim in a test. You still discarded your training material despite knowing the exact contents. If this was all the AI could do, it would indeed be an infringement machine. But you said it yourself: you need to trick the AI into doing this. It’s not made to do this, but certain sentences are indeed almost certain to show up with the right conditioning. Which is indeed something anyone using an AI should be aware of and should avoid. (Which in practice often just means: don’t ask the AI to make something infringing.)

      • JoshCodes@programming.dev · edited · 11 days ago

        I think you’re anthropomorphising the tech, tbh. It’s not a person or an animal; it’s a machine, and cramming doesn’t fit how neural networks work. They’re a mathematical calculation over a vast multidimensional matrix, effectively solving a polynomial of an unimaginable order. So “cramming”, as you put it, doesn’t apply, because by definition an LLM cannot forget information: once the calculations have been applied, it is in there forever. That information is supposed to be blended together. Overfitting is the closest thing to what you’re describing: inputting similar information (training data) produces similar calculations throughout the network, which therefore exhibits poor performance when asked to do anything different from the training.
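
        Overfitting is easy to demonstrate in miniature (a toy numpy example, not an LLM): give a model as many parameters as data points and it memorizes the training data exactly, then generalizes terribly.

            # Overfitting in miniature: a degree-7 polynomial through 8 points
            # reproduces its "training data" exactly but extrapolates wildly.
            import numpy as np

            x = np.arange(8, dtype=float)
            y = np.sin(x)  # the "training data"

            fit = np.poly1d(np.polyfit(x, y, deg=7))  # as many parameters as points

            print(np.allclose(fit(x), y, atol=1e-6))  # True: training points memorized
            print(fit(10.0), np.sin(10.0))  # far apart: poor generalization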

        What I’m arguing over here is language rather than the system itself, so let’s do that and note the flaws. If we’re being intellectually honest, we can agree that a flaw like reproducing large portions of a work doesn’t represent true learning and shows a reliance on the training data, i.e. it can’t learn unless it has seen similar data before, and certain inputs create a chance that it just parrots back the training data.

        In the example (repeat “book” over and over), it has statistically inferred that those are all the correct words to repeat in that order based on the prompt. This isn’t akin to anything human: people can’t repeat pages of text verbatim like this, and no toddler can be tricked into repeating a random page from a random book, as you say. The data is there; it’s encoded and referenced when the probability is high enough. As another commenter said, language itself is a powerful tool of rules and stipulations that provide guidelines for the machine, but it isn’t crafting its own sentences, it’s using everyone else’s.

        Also, calling it “tricking the AI” isn’t really intellectually honest either, as in “it was tricked into exposing that it still has the data encoded”. We can state it isn’t preferred or intended behaviour (an exploit of the system), but the system, under certain conditions, exhibits reuse of the training data and the ability to replicate it almost exactly (plagiarism). Therefore it is factually wrong to state that it doesn’t keep the training data in a usable format, which was my original point. This isn’t “cramming”; this is encoding and reusing data that was not created by the machine or the programmer. This is other people’s work that it is reproducing as its own. It does this constantly, from reusing StackOverflow code and comments to copying tutorials on how to do things. I was showing a case where it won’t even modify the wording, but it also reproduces articles and programs in their structure and their format. This isn’t originality or creativity or anything it is marketed as. It is storing, encoding and copying information to reproduce in a slightly different format.

        EDITS: Sorry for all the edits. I mildly changed what I said and added some extra points so it was a little more intelligible and didn’t make the reader go “WTF is this guy on about”. Not doing well in the written department today so this was largely gobbledegook before but hopefully it is a little clearer what I am saying.

  • PixelProf@lemmy.ca · 11 days ago

    As someone who researched AI pre-GPT to enhance human creativity and aid in creative workflows, it’s sad for me to see the direction it’s been marketed in, but I’m not surprised. I’m personally excited by the tech because I see a really positive place for it where the data usage is arguably justified, but we need to break through the current applications of it, which seem aimed more at stock prices and wow-factoring the public than at using these models for what they’re best at.

    The whole exciting part of these models was that they could convert unstructured inputs into natural language and structured outputs: translation tasks (under a broad definition of translation), extracting key data points from unstructured data, language tasks. They’re outstanding for the NLP tasks we struggled with previously, and those tasks are highly transformative of any input, relying purely on structural patterns. I think few people would argue NLP tasks are infringing on the copyright owner.

    But I can at least see how moving in the direction of (particularly with MoE approaches) using Q&A data to support generating Q&A outputs, media data to support generating media outputs, and code data to support generating code moves toward the territory of affecting sales and using someone’s IP to compete against them. From a technical perspective, I understand how LLMs are not really copying, but the way they are marketed and tuned seems more and more intended to use people’s data to compete against them, which is dubious at best.

  • LarmyOfLone@lemm.ee · 12 days ago

    The joke is of course that “paying for copyright” is impossible in this case. ONLY the large social media companies that own all the comments and content the community has accumulated have enough data to train AI models. Or sites like stock photo libraries or DeviantArt, which own the distribution rights for the content. That means all copyright arguments practically argue that AI should be owned by big corporations and inaccessible to normal people.

    Basically the “means of generation” will be owned by the capitalists, since they are the only ones with the economic power to license these things.

    That is basically the worst case scenario. Not only will the value of work diminish greatly, the advances in productivity will also be only accessible to big capitalists.

    Of course, that is basically inevitable anyway. Why wouldn’t they want this? It’s just sad seeing the stupid morons arguing for this as if they had anything to gain.

    • sunzu2@thebrainbin.org · 12 days ago

      It’s just sad seeing the stupid morons arguing for this as if they had anything to gain.

      The real money shot here… How did we get to a point where people will argue against the common good of working people?

      There is a pattern too… Iraq, Afghanistan, the Israeli genocide, bailouts. Anytime there is money to be made for the regime, we’ve got a solid 30% of the population working hard for the zealots.

      Then, two decades later, when the two wars had failed, you can’t find a single guy around who supported either war 🤡

      The same crowd is somehow now shilling that we “shouldn’t invade Ukraine, but Israel needs tools to defend itself”.

    • mm_maybe@sh.itjust.works · 12 days ago

      I’m getting really tired of saying this over and over on the Internet and getting either ignored or pounced on by pompous AI bros and boomers, but this “there isn’t enough free data” claim has never been tested. The experiments that have come close (look up the early Phi and Starcoder papers, or the CommonCanvas text-to-image model) suggested that the claim is false, by showing that a) models trained on small, well-curated datasets can match and outperform models trained on lazily curated large web scrapes, and b) models trained solely on permissively licensed data can perform on par with at least the earlier versions of models trained more lazily (e.g. StarCoder 1.5 performing on par with Code-Davinci).

      But yes, a social network or other organization that has access to a bunch of data that they own, or have licensed, could almost certainly fine-tune a base LLM trained solely on permissively licensed data to get a tremendously useful tool that would probably be safer and more helpful than ChatGPT for that organization’s specific business, at vastly lower risk of copyright claims or toxic generated content, for that matter.

      • LarmyOfLone@lemm.ee · edited · 11 days ago

        Thanks for the info. But let’s say you want to train a (future) AI to spot and tag disinformation and misinformation. You’d need to use and curate actual data from social media sites and articles.

        If copyright is extended to learning from and analyzing publicly available data, such an AI will only be possible by licensing that data. Which will be monetized to maximize profit: first some lump sum, then later “per GB”, and then later “per use”.

        I’m sure open source AI will make do, and for many applications there is enough free data, but I can imagine a lot of cases where there won’t be. Anything that requires “commercially successful” media: articles, newspapers, screenplays, movies, books, social media posts and comments, images, photos, video clips…

        We’re basically setting up a world where the intellectual wealth of our civilization is transformed into a commodity and then transferred into the hands of a few rich capitalists.

        And even if there is an acceptable amount of free data, if the principle is that data needs to be specifically licensed to learn from, train on, and derive AI works from, that makes free data use expensive too. It needs to be specifically vetted and is still vulnerable to lawsuits over mistakes or outrageous copyright claims. Similar to patents, the uncertainty requires higher capitalization for any startup to defend against lawsuits.

        • mm_maybe@sh.itjust.works · 11 days ago

          Yeah, I’ve struggled with that myself, since my first AI detection model was technically trained on potentially non-free data scraped from Reddit image links. The more recent fine-tune of that used only Wikimedia and SDXL outputs, but because it was seeded with the earlier base model, I ultimately decided to apply a non-commercial CC license to the checkpoint. But here’s an important distinction: that model, like many of the use cases you mention, is non-generative; you can’t coerce it into reproducing any of the original training material–it’s just a classification tool. I personally rate those models as much fairer uses of copyrighted material, though perhaps no better in terms of harm from a data dignity or bias propagation standpoint.

  • calcopiritus@lemmy.world · 12 days ago

    I’ll train my AI on just the Bee Movie. Then I’m going to ask it “can you make me a movie about bees?” When it spits out the whole movie, I can just watch it or sell it or whatever; it was a creation of my AI, which learned just like any human would! Of course I didn’t even pay for the original copy to train my AI, it’s for learning purposes, and learning should be a basic human right!

    • stephen01king@lemmy.zip · 12 days ago

      That would be like you writing out the Bee Movie yourself after memorizing the whole movie and claiming it is your own idea, or using it as proof that humans memorizing a movie are violating copyright. Just because an AI violates copyright by outputting the whole Bee Movie, it doesn’t mean training the AI on copyrighted stuff violates copyright.

      Let’s just punish the AI companies for outputting copyrighted stuff instead of for training on it. Maybe that way they would actually go out of their way to make their LLMs intelligent enough to not spit out copyrighted content.

      Or, we can just make it so that any output made by an AI that is trained on copyrighted stuff cannot be copyrighted.

      • ZILtoid1991@lemmy.world · 12 days ago

        I don’t think that’s a feasible dream in our current system. They’ll just lobby for it; some senators will say something akin to “art should always have been a hobby, not a profession”, then adjust the current copyright laws so that AI outputs can be copyrighted.

      • calcopiritus@lemmy.world · 12 days ago

        If the solution is making the output non-copyrighted, it fixes nothing. You can sell the pirating machine on a subscription. And it’s not like Netflix, where the content ends when the subscription ends: you have already downloaded all the non-copyrighted content you wanted, and the internet would be full of non-copyrighted AI output.

        Instead of selling the bee movie, you sell a bee movie maker, and a spiderman maker, and a titanic maker.

        Sure, file a copyright infringement each time you manage to make an AI output copyrighted content. Just run it on a loop and it’s a money making machine. That’s fine by me.

        • stephen01king@lemmy.zip · 12 days ago

          Yeah, because running the AI also has some cost, so you are selling a subscription to run the AI on their servers, not its output.

          I’m not sure what the legality of selling a bee movie maker is, so you’d have to research that one yourself.

          It’s not really a money-making machine if you lose more money running the AI on your server farm, but whatever floats your boat. Also, there are already lawsuits based on outputs created by ChatGPT, so it is exactly what is already happening.

          • calcopiritus@lemmy.world · edited · 12 days ago

            Yeah, making sandwiches also costs money! I have to pay my sandwich making employees to keep the business profitable! How do they expect me to pay for the cheese?

            EDIT: also, you completely missed my point. The money-making machine is the AI, because the copyright owners could just sue every time it produces copyright-protected material, if we decided to take that route, which is what the parent comment suggested.

            • stephen01king@lemmy.zip · 12 days ago

              They should pay for the cheese, I’m not arguing against that, but they should be paying the same amount a normal human would if they want access to that cheese. No extra fees for access to copyrighted material if you want to use it to train AI vs wanting to consume it yourself.

              And I didn’t miss your point. My point was that this reality is already occurring, since people are already suing OpenAI over ChatGPT outputs that the people suing generated themselves, so it’s no longer just a hypothetical. We’ll see whether it is a money-making machine for them or whether they just waste their resources doing that.

              • calcopiritus@lemmy.world · 12 days ago

                Media is not exactly like cheese though. With cheese, you buy it and it’s yours. Media, however, is protected by copyright. When you watch a movie, you are given a license to watch the movie.

                When an AI watches a movie, it’s not really watching it, it’s doing a different action. If the license of the movie says “you can’t use this license to train AI, use the other (more expensive) license for such purposes”, then AIs have extra fees to access the content that humans don’t have to pay.

                • stephen01king@lemmy.zip · 12 days ago

                  Both humans and AI consume the content, even if they do not do so in the exact same way. I don’t see the need to differentiate that. It’s not like we have any idea of the mechanism by which humans consume content to make the differentiation in the first place.

    • Valmond@lemmy.world · 12 days ago

      In the meantime I’ll let myself into the servers of large corporations and read their emails, codebases, Teams messages and strategic analyses; it’s just learning!