• Sabata11792@kbin.social · 7 months ago

    Thinly veiled and successful porn site blocks porn. Everyone leaves. They killed themselves for money.

    The lifers clinging to the site blame AI because the bots are the only thing keeping the lights on. All the humans left with the porn.

    • Blackmist@feddit.uk · 7 months ago

      My missus used to post drawings on there about 10-15 years ago.

      Think all the actual art is on Twitter these days (although some have gone to Mastodon).

      Just seems a bit of a niche social network when bigger ones exist, with larger audiences and more chance of someone actually wanting something drawn. Even if it’s mostly really weird smut.

    • Sylvartas@lemmy.world · 7 months ago

      I’d argue all the humans left when ArtStation became big. All my artist friends used to upload their (non-porn) work to DeviantArt before ArtStation was popular. But banning the porn was the first nail in the coffin, for sure.

      • Sabata11792@kbin.social · 7 months ago

        I hadn’t heard of ArtStation till now. What’s the difference from DeviantArt? From 30 seconds of googling, it seems like it’s a censored platform as well.

        • Sylvartas@lemmy.world · 7 months ago

          The UI looks more “slick”, and it is censored, so your portfolio or whatever you want to showcase isn’t displayed alongside some MLP porn or pregnant Sonic comics. Which doesn’t mean there isn’t tons of “artistic nudity” on the site though, last time I checked.

          I’m not an artist myself, but I know the artists in my industry (videogames) love to use it.

  • sfantu@lemmy.world · 7 months ago

    To take it from the publishing industry, A.I. is already decimating once-common job prospects. An April report from the Society of Authors found that 26 percent of the illustrators surveyed “have already lost work due to generative A.I.” and about 37 percent of illustrators “say the income from their work has decreased in value because of generative A.I.”

    I have to say … I LOVE THIS !

    Adapt or else …

  • atrielienz@lemmy.world · 7 months ago

    Angelo ran dA into the ground long before this. Not gonna lie, I’m not surprised. Not even disappointed.

  • haywire@lemmy.world · 7 months ago

    AI-generated content is great and all, but it drowns out everything else on there. It seems anyone can type a prompt and generate a great-looking image within a couple of attempts these days.

    The people spending days, weeks, months and more on a piece can’t keep up.

    • istanbullu@lemmy.ml · 7 months ago

      The same way people using shovels can’t keep up with an excavator.

      Technology changes the world. This is nothing new.

    • LEX@lemm.ee · 7 months ago

      People spending that much time on their work can and should create things in meatspace.

    • HelloThere@sh.itjust.works · 7 months ago

      It’s almost like low-quality mechanisation is something that should be resisted. I wonder where I’ve heard that before…

      • Kusimulkku@lemm.ee · 7 months ago

        I don’t know for what product that’d be desirable. What did you have in mind?

    • lurch (he/him)@sh.itjust.works · 7 months ago

      There’s some stuff image-generating AI just can’t do yet; it just can’t understand some things. A big problem seems to be referring to the picture itself, like position or its border. Another problem is combining things that usually don’t belong together, like a skin of sky. Those are things a human artist/designer does with ease.

      • 100@fedia.io · 7 months ago

        Think of an episode of any animated series, with countless handmade backgrounds. Good luck generating those with any sort of consistency or accuracy; you’ll end up calling for an artist who can actually take instructions and iterate.

      • tal@lemmy.today · 7 months ago

        there’s some stuff image generating AI just can’t do yet

        There’s a lot.

        Some of it doesn’t matter for certain things. And some of it you can work around. But try creating something like a graphic novel with Stable Diffusion, and you’re going to quickly run into difficulties. You probably want to display a consistent character from different angles – that’s pretty important. That’s not something that a fundamentally 2D-based generative AI can do well.

        On the other hand, there’s also stuff that Stable Diffusion can do better than a human – it can very quickly and effectively emulate a lot of styles, if given a sufficient corpus to look at. I spent a while reading research papers on simulating watercolors, years back. Specialized software could do a kind of so-so job. Stable Diffusion wasn’t even built for that, and with a general-purpose model, it can already turn out stuff that looks rather more impressive than those dedicated software packages.
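
        As a minimal sketch of how little code that style emulation takes with the Hugging Face diffusers library (the checkpoint name and the watercolor prompt here are illustrative assumptions, not a record of any particular run):

        ```python
        # Minimal txt2img sketch with diffusers; model and prompt are assumed
        # for illustration. Style emulation is purely prompt text: there is
        # no watercolor-specific code anywhere.
        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5",  # assumed general-purpose model
            torch_dtype=torch.float16,
        ).to("cuda")

        image = pipe(
            "a harbor at dusk, loose watercolor, wet-on-wet, soft pigment bleeds",
            num_inference_steps=30,
        ).images[0]
        image.save("watercolor_harbor.png")
        ```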

        • just another dev@lemmy.my-box.dev · 7 months ago

          I think creating a LoRA for your character would help in that case. Not really easy to do as of yet, but technically possible, so it’s mostly a UX problem.
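
          For illustration, a rough sketch of what applying such a character LoRA looks like with diffusers, assuming the weights were already trained (the file name and trigger token are hypothetical):

          ```python
          # Sketch: applying a pre-trained character LoRA on top of a base
          # Stable Diffusion checkpoint. "my_character_lora.safetensors" and
          # the "mychar" trigger token are hypothetical; they'd come from
          # your own fine-tuning run.
          import torch
          from diffusers import StableDiffusionPipeline

          pipe = StableDiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
          ).to("cuda")

          pipe.load_lora_weights("my_character_lora.safetensors")

          # The trigger token keys the LoRA; the rest of the prompt varies the scene.
          image = pipe("mychar walking through a rainy market, seen from behind").images[0]
          image.save("character_scene.png")
          ```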

          • tal@lemmy.today · 7 months ago

            I think creating a LoRA for your character would help in that case.

            A LoRA is good for replicating a style where there’s existing stuff; it helps add training data for a particular subject. There are problems that existing generative AIs smack into that a LoRA is good at fixing. But it’s not a cure-all for all limitations of such systems. The problem there is kinda fundamental to how the system works today – it’s not a lack of training data, but simply how the system deals with the world.

            The problem is that the LLM-based systems today think of the world as a series of largely-decoupled 2D images, linked only by keywords. A human artist thinks of the world as 3D, can visualize something – maybe using a model to help with perspective – and then render it.

            So, okay. If you want to create a facial portrait of a kinda novel character, that’s something that you can do pretty well with AI-based generators.

            But now try and render that character you just created from ten different angles, in unique scenes. That’s something that a human is pretty good at. Here’s a page from a Spiderman comic:

            https://spiderfan.org/images/title/comics/spiderman_amazing/031/18.jpg

            Like, try reproducing that page in Stable Diffusion, with the same views. Even if you can eventually get something even remotely approximating that, a human, traditional comic artist is going to be a lot faster at it than someone sitting in front of a Stable Diffusion box.

            Is it possible to make some form of art generator that can do that? Yeah, maybe. But it’s going to have to have a much more sophisticated “mental” model of the world, a 3D one, and have solid 3D computer vision to be able to reduce scenes to 3D. And while people are working on that, it has its own extensive set of problems.

            Look at your training set. The human artist slightly stylized things or made errors that human viewers can ignore pretty easily, but that a computer vision model which doesn’t work exactly like human vision and the mind might go into conniptions over. For example, look at the fifth panel there. The artist screwed up – the ship slightly overlaps the dock, right above the “THWIP”. A human viewer probably wouldn’t notice or care. But if you have some kind of computer vision system that looks for line intersections to determine relative 3D positioning – something that we do ourselves – it can very easily look at that image and have no idea what the hell is going on there.

            Or to give another example, the ship’s hull isn’t the same shape from panel to panel. In panel 4, the curvature goes one way; in panel 5, the other way. Say I’m a computer vision system trying to deal with that. Is what’s going on there that the ship is a sort of amorphous thing that changes shape from frame to frame? Is it important for the shape to change, to create a stylized effect, or is it just the artist doing a good job of identifying what matters to a human viewer? Does this show two Spidermen in different dimensions, in alternating views? Are the views from different characters, who have intentional vision distortions? I mean, understanding what’s going on there entails identifying that something is a ship, knowing that ships don’t change shape, having some idea of what is important to a human viewer in the image, knowing from context that there’s one Spiderman, in one dimension, and so on.

            The proportions aren’t exactly consistent from frame to frame, don’t perfectly reflect reality, and might be more effective at conveying movement or whatever than an actual rendering of a 3D model would be. That works for human viewers. And existing 2D systems can kind of dodge the problem (as long as they’re willing to live with the limitations that intrinsically come with a 2D model) because they’re looking at a bunch of already-stylized images.

            But now imagine that they’re trying to take images, reduce them into a coherent 3D world, and then learn to re-apply stylization. That may involve creating not just a 3D model, but enough understanding of the objects in that world to know what stylization is reasonable, and when. Is it technically possible? Probably. But is it a minor effort to get there from here? No, probably not. You’re going to have to make a system that works wildly differently from the way the existing systems do. And that’s even though what you’re trying to do might seem small from the standpoint of a human observer – just being able to get arbitrary camera angles of the image being rendered.

            The existing generative AIs don’t work all that much the way a human does. If you think of them as a “human” in a box, that means that there are some things that they’re gonna be pretty impressively good at that a human isn’t, but also some things that a human is pretty good at that they’re staggeringly godawful at. Some of those things that look minor (or even major) to a human viewer can be worked around with relatively-few changes, or straightforward, mechanical changes. But some of those things that look simple to a human viewer are really, really hard to improve on.

            • tal@lemmy.today · 7 months ago

              On the other hand, there are things that a human artist is utterly awful at that LLM-based generative AIs are amazing at. I mentioned that LLMs are great at producing works in a given style and can switch up virtually effortlessly. I’m gonna do a couple of Spiderman renditions in different styles; it takes about ten seconds a pop on my system:

              Spiderman as done by Neal Adams:

              Spiderman as done by Alex Toth:

              Spiderman in a noir style done by Darwyn Cooke:

              Spiderman as done by Roy Lichtenstein:

              Spiderman as painted by early-19th-century English landscape artist J. M. W. Turner:

              And yes, I know, fingers, but I’m not generating a huge batch to try to get an ideal image, just doing a quick run to illustrate the point.

              Note that none of the above were actually Spiderman artists, other than Adams, and that briefly.

              That’s something that’s really hard for a human to do, given how a human works, because for a human, the style is a function of the workflow and a whole collection of techniques used to arrive at the final image. Stable Diffusion doesn’t care about techniques, how the image got the way it is – it only looks at the output of those workflows in its training corpus. So for Stable Diffusion, creating an image in a variety of styles or mediums – even ones that are normally very time-consuming to work in – is easy as pie, whereas for a single human artist, it’d be very difficult.
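
              As a sketch of how mechanical that switch is on the Stable Diffusion side (the style strings and subject below are illustrative assumptions, not the exact prompts used above):

              ```python
              # Sketch: one subject, several styles, changed only by editing
              # prompt text. Style fragments are illustrative assumptions.
              import torch
              from diffusers import StableDiffusionPipeline

              pipe = StableDiffusionPipeline.from_pretrained(
                  "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
              ).to("cuda")

              styles = [
                  "in the style of a 1970s comic book",
                  "as a stark noir ink illustration",
                  "as a pop-art halftone print",
                  "as a loose watercolor painting",
              ]
              for i, style in enumerate(styles):
                  # Same workflow every time; only the text changes.
                  image = pipe(f"a superhero swinging between skyscrapers, {style}").images[0]
                  image.save(f"style_{i}.png")
              ```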

              I think that that particular aspect is what gets a lot of artists concerned. Because it’s (relatively) difficult for humans to replicate artistic styles, artists have treated their “style” as something of their stock-in-trade, where they can sell someone the ability to have a work in their particular style resulting from their particular workflow and techniques that they’ve developed. Something for which switching up styles is little-to-no barrier, like LLM-based generative AIs, upends that business model.

              Both of those are things that a human viewer might want. I might want to say “take that image, but do it in watercolor” or “make that image look more like style X, blend those two styles”. LLMs are great at that. But I equally might want to say “show this scene from another angle”, and that’s something that human artists are great at.

      • anlumo@lemmy.world · 7 months ago

        It’s even hard to impossible to generate an image of a person doing a handstand. All models assume a right-side-up person.

    • Thorny_Insight@lemm.ee · 7 months ago

      I’m a bit surprised at how quickly I got tired of seeing AI content (mostly porn and non-nudes). Somehow it all just looks the same. You’d think that being AI-generated would give you infinite variety, but apparently not.

  • istanbullu@lemmy.ml · 7 months ago

    DeviantArt is probably having the best time of its existence thanks to generative models.

  • metaStatic@kbin.social · 7 months ago

    DeviantArt died a very, very long time ago. The creature currently wearing its skin can fuck off and die for all I or any other actual artist cares.

  • A'random Guy@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    7 months ago

    There should always be a home for you degenerates to enjoy whatever category of poorly drawn unicorn porn you like.

    • FiniteBanjo@lemmy.today · 7 months ago

      I didn’t really move to another platform when I stopped using deviantart a few years ago, I just started sharing my work with small circles and local galleries instead.

  • iAvicenna@lemmy.world · 7 months ago

    Wow, I had forgotten about this website for such a long time. Like, maybe 15-20 years ago it was a great resource for fantasy-themed drawings and inspiration for RPG games.

  • Optional@lemmy.world · 7 months ago

    Worse still, DeviantArt showed little desire to engage with these concerns

    Well. There it tis.

  • DFWSAM@lemmy.world · 7 months ago

    It’s obvious: generative AI could not exist without human work on which to train, and rather than ask, or pay, for access to it, tech companies (and the assholes running them) feel free to appropriate it as they see fit.

    Fuck them running.

    • istanbullu@lemmy.ml · 7 months ago

      The coolest AI work these days is open source, and developed by enthusiastic communities across the world.

  • stoy@lemmy.zip · 7 months ago

    Welp, I had no idea about this; time to delete the gallery I’ve had for 20 years.

    Stopped using it last year as it was just so slow.

    • BigFig@lemmy.world · 7 months ago

      Is there really a point? It’s likely already been scraped into the data pool

      • stoy@lemmy.zip · 7 months ago

        Sure, I may be too late now, but removing real content makes their platform less valuable overall.