• 0 Posts
  • 154 Comments
Joined 1 year ago
Cake day: June 5th, 2023




  • papertowels@lemmy.one to Fuck AI@lemmy.world · Agree? · 27 days ago

    Regardless of where folks are located, I haven’t seen anything suggesting that affecting things in the physical realm is anywhere near as easy as in the digital realm, so it still makes sense to me that purely digital improvements arrive faster.

    Re: robot painter

    I actually found this video, which is fascinating, although the pieces it seems to make are… currently mediocre. Idk if you’ve found any videos on machines leveraging brush angle/rotation - using “techniques” like these is actually what interests me about this space and why I differentiated it from “printing”.


  • papertowels@lemmy.one to Fuck AI@lemmy.world · Agree? · 28 days ago

    > The hardware part isn’t the difficult stuff, well it might be difficult but not research difficult, the control software is.

    That’s exactly the “AI” aspect of this, which is why I’ve been saying it’s a harder problem. Your control software has to account for so much, because the real world throws countless unforeseen environmental issues at it.

    > Solving everyday problems using proven methods? Try getting VC funding for that.

    Fwiw, one of my friends works at John Deere helping with automation by working on depth estimation via stereo cameras, I believe for picking fruit. That’s about as “automate menial labor” as you can get, imo. The work is being done, it’s just hard.
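
    For the curious, the textbook version of stereo depth estimation is surprisingly compact: match patches between two calibrated camera views, and the horizontal shift (disparity) maps to depth via the focal length and baseline. A minimal sketch with OpenCV - file names and calibration numbers are made up, and this is the classic block-matching approach, not whatever John Deere actually runs:

    ```python
    import cv2
    import numpy as np

    # Load a rectified stereo pair (file names are placeholders).
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Classic block matching: for each pixel, search along the epipolar
    # line in the other image; the horizontal shift is the disparity.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

    # Depth falls out of similar triangles: depth = focal_length * baseline / disparity.
    # Focal length (px) and baseline (m) come from calibration; these values are made up.
    focal_px, baseline_m = 700.0, 0.12
    valid = disparity > 0
    depth_m = np.zeros_like(disparity)
    depth_m[valid] = focal_px * baseline_m / disparity[valid]
    ```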

    > Going for AGI is the big stuff. Impressing people is the big stuff. Dazzling people with hype is the big stuff.

    I think you’ve introduced some circular thinking here - the Spot and Atlas bots from Boston Dynamics do dazzle people. In fact, I’d argue that with all the recent glue memes, people are disillusioned with AGI.

    At the end of the day, manipulating things in the real world is simply harder than manipulating purely digital things.

    Once we’ve established that, it makes sense why digital image generation was tackled before many physical things. As a particularly relevant example, you’ll notice there’s no robot physically painting (not printing!) the images these models generate, because that part is that much harder.

    Ultimately, however, I don’t think society is ready for robots that do our menial labor. We need some form of UBI, otherwise the robots will in fact just be terking our jerbs.


  • papertowels@lemmy.one to Fuck AI@lemmy.world · Agree? · 28 days ago

    I wasn’t able to find exact R&D costs for Stability AI; however, all the numbers mentioned in this article are at least an order of magnitude lower than Boston Dynamics’ R&D costs.

    Again, I assert that working in the physical realm brings with it far higher R&D costs, on the basis that there’s far more that can go wrong and must be accounted for. Yes, as you described, the problem itself is well defined; however, the environment introduces countless factors that need to be handled before a safe product can be released to consumers.

    As an example, chess is about as well-defined as problems get, since it’s a human game with set rules. However, a chess robot still grabbed and broke a child’s finger a while back. There’s a reason we don’t really have general manipulator arms in the house yet.

    There is no threat of physical harm in image generation.


  • papertowels@lemmy.one to Fuck AI@lemmy.world · Agree? · 28 days ago

    I was thinking more about development costs.

    Like I said, it’s not really a clear analog, but I’m hoping this shows the difference that working in the real world makes when it comes to R&D costs. Idk how accurate this website is, but we’re talking about literal billions in R&D. Versus most Stable Diffusion models, which are trained up by a grad student or something - and subsequently released for free, potentially a reflection of the R&D price.

    > I don’t think my washing machine needs to be sentient to do a good job.

    But you will need it to be able to recognize the type of clothing and its orientation in 3D space before manipulating it, and that’s where training models comes in.
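
    To make that concrete, here’s a deliberately toy sketch (PyTorch; entirely hypothetical, real laundry-robot perception is far more elaborate) of what “recognize the clothing and its orientation” looks like as a learning problem - one camera frame in, a garment class and a rough pose out, plus mountains of labeled laundry photos to train on:

    ```python
    import torch
    import torch.nn as nn

    # Toy perception model: from one RGB frame, predict the garment class
    # (shirt, towel, sock, ...) plus a rough pose/grasp target so a gripper
    # knows where to grab. Everything here is hypothetical and simplified.
    class GarmentNet(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.class_head = nn.Linear(64, num_classes)  # what garment is it?
            self.pose_head = nn.Linear(64, 6)             # where/how is it lying?

        def forward(self, x):
            feats = self.backbone(x)
            return self.class_head(feats), self.pose_head(feats)

    model = GarmentNet()
    logits, pose = model(torch.randn(1, 3, 224, 224))  # one dummy 224x224 frame
    ```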


  • papertowels@lemmy.one to Fuck AI@lemmy.world · Agree? · 28 days ago

    > Why don’t you ask the direct question: Are advances in laundry and dish washing technology easier or harder than in image generation?

    Image generation, being purely software, is far easier than automating physical tasks: there’s very little risk of danger, you can iterate much faster, and costs are lower. Not really a clear analog, but Boston Dynamics’ Spot robot is like $75k, whereas most image generation models can be downloaded for free. Once you start acting in the physical world, things get expensive and hard.
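
    To illustrate the iteration-speed gap: running one of those free models locally is a pip install and a dozen lines, assuming the diffusers library and a consumer GPU (the model id below is one commonly used example; availability can change). Tweak a parameter, rerun, repeat - nothing physical to break:

    ```python
    # pip install torch diffusers transformers
    import torch
    from diffusers import StableDiffusionPipeline

    # The weights download for free from the Hugging Face hub.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Every knob here is a cheap, instant experiment.
    image = pipe(
        "a watercolor painting of a lighthouse at dusk",
        num_inference_steps=30,
        guidance_scale=7.5,
    ).images[0]
    image.save("lighthouse.png")
    ```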

    > Consider the billions sunk into buying shovels from nvidia, how much progress could’ve been made on the laundry front?

    Automating laundry would’ve also required this, as the shovels are for general machine learning. In fact, as far as I can tell, these GPUs aren’t even being bought for image generation anymore, but for large language models.



  • papertowels@lemmy.one to Fuck AI@lemmy.world · Agree? · 29 days ago

    > Typing “big boobs anime girl pink hair rain low lighting trending on artstation” into a text box is not human involvement in art

    First, go ahead and get the shaming of me for asking this out of the way, and after that’s over with, why not?

    You said it yourself:

    > No, there just needs to be some sort of human involvement.

    A human was involved in dreaming up a scene depicting something, and proceeded to use their tools to manifest their imagination into an image. They likely chose their tool (different models), used their preferred technique (selecting the right settings and keywords, and probably regenerating areas that depicted 8 fingers on a hand), and the result was an image they had visualized.

    Sounds like human involvement to me; it’s not like lightning struck wood and boom, a big-boobed anime girl with pink hair in low lighting manifested.
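
    That workflow even has a concrete shape in code. Here’s a hedged sketch of the “regenerate the bad hand” step using an inpainting pipeline from the diffusers library (model id and file names are placeholders) - note that the model choice, the prompt, and the masked region are all human decisions:

    ```python
    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    # The human masks the botched region (say, the 8-fingered hand) and
    # regenerates only that area. Model id and file names are placeholders.
    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    init = Image.open("draft.png").convert("RGB")
    mask = Image.open("hand_mask.png").convert("RGB")  # white = redo this region

    fixed = pipe(prompt="a hand with five fingers", image=init, mask_image=mask).images[0]
    fixed.save("draft_fixed.png")
    ```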

    Again, gatekeeping whether or not it is art seems kinda silly. The proper attack vectors, IMO, are whether it’s good art, with a side of whether it’s stolen art.


  • papertowels@lemmy.one to Fuck AI@lemmy.world · Agree? · 29 days ago

    Are you saying there needs to be an arbitrarily decided amount of human effort for something to be art?

    IMO any level of human effort (including picking a model and figuring out how to use it) should qualify something as art. Whether it’s good or shitty art is a whole other ballgame.


  • papertowels@lemmy.one to Fuck AI@lemmy.world · Agree? · 29 days ago

    > I would also say that a tree growing freely in the forest isn’t art, am I gatekeeping plants from art?

    Is bonsai an art? I’d say it is. In that case, the difference between it and your example is humans providing artistic direction.

    Does the same not happen with generative models? In the typical use case, humans provide artistic direction.



  • papertowels@lemmy.one to Fuck AI@lemmy.world · Agree? · 29 days ago

    > No: It makes certain specialised subskills more accessible. AI can generate a song for you, you still need to know which song fits your game.

    Okay, so at this point it’s sounding like an issue of semantics - you’re clearly saying that artists can use AI to help with their tasks.

    I believe the other guy you’re responding to defines an artist as someone who is able to create without AI.

    Y’all are hung up on what the definition of “artist” is, but you’re in agreement that generative AI can help those who are less skilled in the production of art.