When Adobe Inc. released its Firefly image-generating software last year, the company said the artificial intelligence model was trained mainly on Adobe Stock, its database of hundreds of millions of licensed images. Firefly, Adobe said, was a “commercially safe” alternative to competitors like Midjourney, which learned by scraping pictures from across the internet.

But behind the scenes, Adobe was also relying in part on AI-generated content to train Firefly, including images from those same AI rivals. In numerous presentations and public posts about how Firefly is safer than the competition because of its training data, Adobe never made clear that its model was actually trained on images from some of those same competitors.

  • Mereo@lemmy.ca
    • Garbage in -> Garbage out (x2)
    • Garbage in (x2) -> Garbage out (x4)
    • Garbage in (x4) -> Garbage out (x8)
    • Garbage in (x8) -> Garbage out (x16)
  • seaQueue@lemmy.world

    Oh hey, look. The cycle of AI ingesting garbage output from another AI model has begun. This can’t possibly impact quality or reliability in any way /s

  • Zink@programming.dev

    We always thought the singularity would be the moment our technology took off, advancing without us.

    Maybe the moment it decides it doesn’t need us will be a rapid disintegration by machine circle jerk.

    • uriel238@lemmy.blahaj.zone

      They [the Golgafrincham] sent the B ship off first, but of course, the other two-thirds of the population stayed on the planet and lived full, rich and happy lives until they were all wiped out by a virulent disease contracted from a dirty telephone.

  • LEX@lemm.ee

    Okay, so that settles it. Every single AI engine should be open source, as they ALL use our collective knowledge. They should be treated like libraries: publicly owned stores of knowledge for everyone’s use.

    I thought maybe Firefly was the one exception, although I suspected some kind of shenanigans. But nope. These corpos stole our collective knowledge and culture and are now ransoming it back to us for profit.

  • airrow@hilariouschaos.com

    The problem is “intellectual property” existing at all. Just get rid of it entirely and make everything public domain.

  • SeaJ@lemm.ee

    I’ve seen Multiplicity enough times to know how this turns out.

    • GamingChairModel@lemmy.world

      You’ve been watching the original movie multiple times? I just watch the most recent recording of myself describing the movie, and then record a new description over that, with each successive generation.

    • General_Effort@lemmy.world

      No.

      I feel I should explain this but I got nothing. An image is an image. Whether it’s good or bad is a matter of personal preference.

      • bionicjoey@lemmy.ca

        When you process an image through the same pipeline multiple times, artifacts will appear and become amplified.
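
        A rough way to see that kind of generation loss in miniature, as a toy sketch rather than anyone's real pipeline (assumes Pillow and NumPy; "input.png" is a placeholder path):

        ```python
        # Toy generation-loss loop: push the same image through a lossy
        # step repeatedly and measure how far it drifts from the original.
        import io

        import numpy as np
        from PIL import Image

        original = Image.open("input.png").convert("RGB")  # placeholder path
        reference = np.asarray(original, dtype=np.float64)

        current = original
        for generation in range(1, 21):
            buffer = io.BytesIO()
            current.save(buffer, format="JPEG", quality=75)  # the lossy step
            buffer.seek(0)
            current = Image.open(buffer).convert("RGB")

            diff = np.asarray(current, dtype=np.float64) - reference
            rmse = np.sqrt((diff ** 2).mean())
            print(f"generation {generation:2d}: RMSE vs original = {rmse:.2f}")
        ```

        The error climbs fastest over the first few re-encodes and then levels off; add any small transform between passes (a resize, a crop) and it keeps growing.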

        • General_Effort@lemmy.world

          What’s happening here is nothing like that. There is no amplifier, and images aren’t run through a pipeline.

            • General_Effort@lemmy.world

              Yes, but the model is the end of that pipeline. The image is not supposed to come out again. A model can “memorize” an image, but then you wouldn’t necessarily expect an amplification of artifacts. Image generators are not supposed to do lossy compression, though the tech could be used for that.

              • Grimy@lemmy.world

                If an image has errors that are hard for the human eye to spot, and the model gets trained on these images, those errors get amplified beyond how they’d occur naturally in real data.

                It’s not a model killer, but it is something to watch out for.
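
                One way to watch that amplification in miniature, as a toy sketch with made-up numbers rather than anything a real image model does: fit a distribution to data, sample from the fit, retrain on the samples, and repeat.

                ```python
                # Toy "train on your own output" loop: each fit inherits
                # the previous generation's sampling noise, so small
                # errors compound across generations.
                import numpy as np

                rng = np.random.default_rng(0)

                real_data = rng.normal(0.0, 1.0, size=1000)
                mu, sigma = real_data.mean(), real_data.std()

                for gen in range(1, 11):
                    synthetic = rng.normal(mu, sigma, size=1000)
                    mu, sigma = synthetic.mean(), synthetic.std()
                    print(f"gen {gen:2d}: mean={mu:+.3f} std={sigma:.3f}")
                ```

                No single generation’s error is visible on its own, but the mean and std drift further from the real data’s 0 and 1 as the errors stack.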

      • hyper@lemmy.zip

        I’m not so sure about that… if you train an AI on images with disfigured anatomy, which it then treats as the “right” way, it will generate new images with messed-up anatomy. That creates a feedback loop, like when a mic picks up its own signal.
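
        That mic analogy in loop form, as a toy sketch with arbitrary numbers:

        ```python
        # Feedback-loop sketch: a signal fed back through the same loop
        # fades below unity gain and runs away above it.
        signal = 0.01  # tiny initial noise the "mic" picks up

        for gain in (0.9, 1.1):
            level = signal
            history = []
            for _ in range(10):
                level *= gain  # one pass: mic -> amp -> speaker -> mic
                history.append(round(level, 5))
            print(f"loop gain {gain}: {history}")
        ```

        Below unity gain the noise dies out; above it, every pass amplifies the last, which is the failure mode being described.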

        • General_Effort@lemmy.world

          Well, you wouldn’t train on images that you consider bad, or rather you’d use them as examples of what not to do.

          Yes, you have to be careful when training a model on its own output. It already has a tendency to produce that kind of output, so it’s easy to “overshoot”, so to speak. But it’s not a problem in principle. It’s also not what’s happening here: Adobe doesn’t use the same model as Midjourney.

        • abhibeckert@lemmy.world

          Midjourney doesn’t generate disfigured anatomy. You’re thinking of Stable Diffusion, which is a smaller model that can generate an image in 30 seconds on my laptop GPU. Even SD is pretty good at avoiding that with decent hardware and larger models (which need more memory).

      • General_Effort@lemmy.world

        Yes, though that’s not what they’re doing. They train on images uploaded to their marketplace and, of course, some of these are AI generated.

            • Balder@lemmy.world

              Data augmentation has been a thing for a long time, but of course if the majority of your data is synthetic, your model will suck on real-world data. Though as these generative models get better and better at mimicking real-world data, and we select the results we want to use (removing the nonsense, hallucinations, artifacts, etc.), we’re still feeding them “more data”.

              I guess we’ll have to wait and see what effect it’ll have on future models. I think the improvements to LLMs have been good overall; even in slow steps, we’re still figuring out how to turn them into more useful tools. I don’t know how much the image generation models have improved in the last 2 years, though.

              • General_Effort@lemmy.world

                > we’re still feeding them “more data”.

                Yes, that’s one way of putting it. What gets into the Adobe Stock database is already curated. They also have the sales and tracking data.

                > Though as these generative models get better and better at mimicking real world data

                Also yes on this. It doesn’t matter whether your data is synthetic, only whether it’s fit for purpose. That’s especially true in this case, where the distinction between synthetic and real is so unclear: you’re already including drawings, renders, photomanips, etc. I have no idea what misconception leads people to think it matters whether some piece of digital art is AI-generated.

              • General_Effort@lemmy.world

                It doesn’t matter how the image was made. It only matters what the image is like and how it affects the model.

                • Even_Adder@lemmy.dbzer0.com

                  That’s what I’m saying. Synthetic images can help your model’s output look better, but if you’re aiming for “realistic” output, synthetic images are fundamentally not real images, and too many of them will bias your model in a slightly different direction.
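
                  A toy sketch of that bias with made-up numbers (the 0.2 offset just stands in for “slightly different”):

                  ```python
                  # Mix real data with slightly-off synthetic data,
                  # then fit, and see how the estimate shifts with
                  # the synthetic fraction.
                  import numpy as np

                  rng = np.random.default_rng(42)
                  n = 10_000
                  bias = 0.2  # arbitrary synthetic offset

                  for frac in (0.0, 0.05, 0.25, 0.5, 0.9):
                      n_synth = int(n * frac)
                      real = rng.normal(0.0, 1.0, n - n_synth)
                      synth = rng.normal(bias, 1.0, n_synth)
                      mean = np.concatenate([real, synth]).mean()
                      print(f"synthetic {frac:4.0%}: mean = {mean:+.3f}")
                  ```

                  The fitted mean moves toward the synthetic offset roughly in proportion to the synthetic fraction: a few percent is negligible, a majority dominates.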

  • jimmydoreisalefty@lemmy.world

    Adobe said a relatively small amount — about 5% — of the images used to train its AI tool was generated by other AI platforms. “Every image submitted to Adobe Stock, including a very small subset of images generated with AI, goes through a rigorous moderation process to ensure it does not include IP, trademarks, recognizable characters or logos, or reference artists’ names,” a company spokesperson said.

    Adobe Stock’s library has boomed since it began formally accepting AI content in late 2022. Today, about 57 million images, or about 14% of the total, are tagged as AI-generated. Artists who submit AI images must specify that the work was created using the technology, though they don’t need to say which tool they used. To feed its AI training set, Adobe has also offered to pay contributors to submit photos in bulk for AI training, such as images of bananas or flags.