I’ve recently noticed this opinion seems unpopular, at least on Lemmy.

There is nothing wrong with downloading public data and doing statistical analysis on it, which is pretty much what these ML models do. They are not redistributing other people's works (well, sometimes they do, unintentionally, and safeguards to prevent this are usually built in). The training data is generally much, much larger than the model itself, so it is generally not possible for a model to reconstruct arbitrary specific works. They are not creating derivative works, in the legal sense, because they do not copy and modify the original works; they generate "new" content based on probabilities.

My opinion on the subject is pretty much in agreement with this document from the EFF: https://www.eff.org/document/eff-two-pager-ai

I understand the hate for companies using data you would reasonably expect to be private. I understand the hate for purposely overfitting a model on data to reproduce people's "likeness." I understand the hate for AI-generated shit (because it is shit). I really don't understand where all this hate for using public data to build a "statistical" model that "learns" general patterns is coming from.

I can also understand the anxiety people may feel, if they believe all the AI hype, that it will eliminate jobs. I don't think AI is going to be able to directly replace people any time soon. It will probably improve productivity (with stuff like background removers, better autocomplete, etc.), which might eliminate some jobs, but that's really just a problem with capitalism, and productivity increases are generally considered good.

  • Hamartiogonic@sopuli.xyz · edited · 3 months ago

    I think of an LLM as a tool, just like a drill or a hammer. If you buy or rent these tools, you pay the tool company. If you use the tools to build something, your client pays you for that work.

    Similarly, OpenAI can charge me for extensive use of ChatGPT. I can use that tool to write a book, but it's not 100% AI work. I need to spend several hours prompt-crafting, structuring, reading, and editing the book in order to make something acceptable. I don't really act as a writer in this workflow, but more like an editor or a publisher. When I publish and sell my book, I'm entitled to some compensation for the time and effort that I put into it. Does that sound fair to you?

    • wewbull@feddit.uk · edited · 3 months ago

      Yes, of course you are.

      …but do you agree that if you use an AI in that way, you are benefiting from another author's work? You may even, unknowingly, violate the copyright of the original author. You can't be held liable for that infringement because you did it unwittingly. OpenAI, or whoever, must bear responsibility for that possible outcome through the use of their tool.

      • Hamartiogonic@sopuli.xyz · 3 months ago

        Yes, it's true that countless authors contributed to the development of this LLM, but they were not compensated for it in any way. That doesn't sound fair.

        Can we compare this to some other situation where the legal status has already been determined?

        • wewbull@feddit.uk · 3 months ago

          I was thinking about money laundering when I wrote my response, but I'm not sure it's a good analogy. It still feels to me like constructing a generative model is a form of "copyright washing."

          Fact is, the law has yet to be written.