• barsoap@lemm.ee · 1 month ago

    Versus most stable diffusion models are trained up by a grad student or something

    Most SD models are fine-tunes of stuff StabilityAI produces. Training those things from scratch is neither cheap nor easy. PonyXL is the one coming closest, as there's more of its own weights in there than of the SDXL base model, but that wasn't a single-person enterprise: one lead dev, yes, but they had a community behind them.

    If it were that easy, people wouldn't be griping about StabilityAI removing any and all NSFW content from their dataset, forcing the community to work for months and months to make a model, erm, "usable".
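
    To make that concrete, here's a minimal sketch with Hugging Face diffusers: a fine-tune is the same architecture with re-trained weights, so it loads exactly like the base checkpoint (the community repo ID below is a made-up placeholder):

    ```python
    import torch
    from diffusers import StableDiffusionXLPipeline

    # Official StabilityAI base checkpoint.
    base = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    )

    # A community fine-tune is the same architecture, just different weights,
    # so it loads identically ("some-user/sdxl-finetune" is a placeholder).
    finetune = StableDiffusionXLPipeline.from_pretrained(
        "some-user/sdxl-finetune", torch_dtype=torch.float16
    )

    image = finetune.to("cuda")("a watercolor fox").images[0]
    ```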

    • papertowels@lemmy.one · 1 month ago

      I wasn't able to find exact R&D costs for Stability AI; however, all the numbers mentioned in this article are at least an order of magnitude lower than Boston Dynamics' R&D costs.

      Again, I assert that working in the physical realm brings with it far more R&D cost, on the basis that there's far more that can go wrong and has to be accounted for. Yes, as you described, the problem itself is well defined; however, the environment introduces countless factors that need to be handled before a safe product can be released to consumers.

      As an example, chess is potentially the most well-defined problem there is, being a human game with set rules. However, a chess robot did grab and break a child's finger a while back. There's a reason we don't really have general manipulator arms in the house yet.

      There is no threat of physical harm in image generation.

      • barsoap@lemm.ee · 1 month ago

        The robot had been publicly used for over a decade with grandmasters and this is the first time an accident like this has occurred.

        I think that's the problem right there. A decade ago they were only just beginning to figure out how to integrate physical feedback; that robot might not even have any of it, but it's vital for fine motor skills and any kind of reactivity. The hardware isn't the difficult part (it might be difficult, but not research difficult); the control software is. Really, hardware isn't the problem (that thing is human-operated).
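
        To make "vital for fine motor skills" concrete, here's a toy sketch of a compliant grasp loop; the gripper and sensor interfaces are hypothetical stand-ins for a real robot API, only the control idea matters. Without the sensed force, all you can do is drive blindly to a position, which is exactly how fingers get broken:

        ```python
        # Toy compliant-grasp loop; gripper and force_sensor are hypothetical
        # interfaces standing in for real robot APIs.
        TARGET_FORCE_N = 2.0   # stop squeezing at roughly 2 N
        GAIN = 0.5             # proportional gain
        DT = 0.001             # 1 kHz control loop

        def close_gripper(gripper, force_sensor):
            position = gripper.position()
            while True:
                force = force_sensor.read()        # the physical feedback
                error = TARGET_FORCE_N - force
                if abs(error) < 0.05:              # gentle contact achieved
                    return
                position += GAIN * error * DT      # inch closed until force builds
                gripper.move_to(position)
        ```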

        But anyhow, I think all this is kinda missing the point of the quip. It's about neither laundry nor furry porn, but the more general "automating what we like doing vs. automating what we don't like doing". And that might have a deeper reason buried in Silicon Valley culture and psychology: Going for AGI is the big stuff. Impressing people is the big stuff. Dazzling people with hype is the big stuff. Throwing techbro pipe dreams at problems is the big stuff. Building cars with sub-10-micron tolerances is the big stuff. Solving everyday problems using proven methods? Try getting VC funding for that. Silicon Valley doesn't do anything short of "this will bring about the singularity".

        Or, put differently: Forget about doing the laundry. How about having to work in a washing-machine factory to be able to afford following your creativity in your off time, while our tech is more than good enough to run a washing-machine factory with just a small team of supervisors? Even the software needed for automatic inspection of parts can nowadays be had off the shelf; no assembly or QA workers necessary.
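
        To give an idea of how off-the-shelf that is, here's a crude golden-sample check in plain OpenCV; file names and thresholds are made up for illustration:

        ```python
        import cv2
        import numpy as np

        # Compare each part against a known-good reference image.
        reference = cv2.imread("golden_part.png", cv2.IMREAD_GRAYSCALE)
        candidate = cv2.imread("part_under_test.png", cv2.IMREAD_GRAYSCALE)

        # Pixels that differ strongly from the golden sample count as defects.
        diff = cv2.absdiff(reference, candidate)
        _, defects = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

        defect_ratio = np.count_nonzero(defects) / defects.size
        print("REJECT" if defect_ratio > 0.01 else "PASS")
        ```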

        That kind of stuff has a gigantic ROI. Alas, it's also quite long-term; short-term it's cheaper to hire human workers, not to mention human workers in the global south. But it's going to get cheaper as more and more of this becomes standard engineering practice.

        • papertowels@lemmy.one · 30 days ago

          The hardware isn't the difficult part (it might be difficult, but not research difficult); the control software is.

          That's exactly the "AI" aspect of this, which is why I've been saying it's a harder problem. Your control software has to account for so much, because the real world throws up countless unforeseen environmental issues.

          Solving everyday problems using proven methods? Try getting VC funding for that.

          FWIW, one of my friends works at John Deere helping with automation, working on depth estimation via stereo cameras, I believe for picking fruit. That's about as "automate menial labor" as you can get, imo. The work is being done; it's just hard.
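
          For context, the textbook building block there is stereo block matching, which OpenCV ships out of the box (parameters below are illustrative). Getting a disparity map is easy; making it robust to foliage, dust, and changing light is the hard part:

          ```python
          import cv2

          left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
          right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

          # numDisparities must be a multiple of 16; blockSize must be odd.
          stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
          disparity = stereo.compute(left, right)  # larger disparity = closer

          # With a calibrated rig: depth = focal_length * baseline / disparity.
          ```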

          Going for AGI is the big stuff. Impressing people is the big stuff. Dazzling people with hype is the big stuff.

          I think you've introduced some circular thinking here: the Spot and Atlas robots from Boston Dynamics do dazzle people. In fact, I'd argue that with all the recent glue memes, people are disillusioned with AGI.

          At the end of the day, manipulating things in the real world is simply harder than manipulating purely digital things.

          Once we've established that, it makes sense why digital image generation is tackled before many physical things. As a particularly relevant example, you'll notice there's no robot physically painting (not printing!) the images that image generators produce, because it's that much harder.

          Ultimately, however, I don't think society is ready for robots that do our menial labor. We need some form of UBI; otherwise, the robots will in fact just be terking our jerbs.

          • barsoap@lemm.ee · 30 days ago

            John Deere

            Illinois.

            boston dynamics

            Massachusetts.

            …I mean, they may recruit in the valley and have offices there, but they're not part of the business/VC culture there. And neither seeks trillions of dollars to sink into silicon.

            Ultimately, however, I don't think society is ready for robots that do our menial labor. We need some form of UBI; otherwise, the robots will in fact just be terking our jerbs.

            UBI is going to come one way or another; the question is whether we'll have to fight neofeudal lords first.

            And it won't be Musk or any of the valley guys: ketamine and executing on evil plans don't mix. It might not even be American. It might be… wait, yes: a Nestlé+Siemens merger buying up Boston Dynamics and other companies with actually useful products. Nestlé has evilness nailed down; Siemens is a big investment bank (with an attached household-appliance factory) that's still miffed that laws forbid it from bribing foreign officials; and the rest provide the tech.

            The Chinese are another possibility, in the sense that they have the capacity, though I can't quite see tankies actually going for the abolition of work.

            As a particularly relevant example, you'll notice there's no robot physically painting (not printing!) the images that image generators produce, because it's that much harder.

            I don't think it's that hard. The robot would be a slightly more involved plotter, or a standard 6-axis arm. And to train the model you don't need to hook it up to the robot; you could hook it up to a painting program. We're quite good at simulating oil paint, including brush angle and rotation and everything; graphics tablets can detect those things, and programs have been making use of them for quite a while. That might not go the whole way, but far enough that you'd only need fine-tuning once you hook up the robot.
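
            As a toy version of the idea, purely illustrative: greedily stamp square dabs of the target's local colour wherever they reduce the error. A real system would emit stroke parameters (position, angle, pressure) for the arm or a paint simulator instead:

            ```python
            import numpy as np

            def paint(target: np.ndarray, n_strokes: int = 2000, radius: int = 6) -> np.ndarray:
                """Greedy stroke-based rendering of an HxWx3 image onto a white canvas."""
                h, w, _ = target.shape
                canvas = np.full_like(target, 255)
                rng = np.random.default_rng(0)
                for _ in range(n_strokes):
                    y = rng.integers(radius, h - radius)
                    x = rng.integers(radius, w - radius)
                    colour = target[y, x].astype(int)
                    patch = canvas[y - radius:y + radius, x - radius:x + radius]
                    tpatch = target[y - radius:y + radius, x - radius:x + radius]
                    # Stamp the dab only if it brings the canvas closer to the target.
                    if np.abs(patch.astype(int) - colour).mean() > \
                       np.abs(tpatch.astype(int) - colour).mean():
                        patch[:] = colour.astype(target.dtype)
                return canvas
            ```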

            • papertowels@lemmy.one · 29 days ago

              Regardless of where folks are located, I haven't seen anything suggesting that affecting things in the physical realm is as easy as affecting things in the digital realm, so it still makes sense to me that purely digital improvements come faster.

              Re: robot painter

              I actually found this video, which is fascinating, although the pieces it seems to make are… currently mediocre. Idk if you've found any videos of machines leveraging brush angle/rotation; using "techniques" like these is actually what interests me about this space, and why I differentiated it from "printing".

              • barsoap@lemm.ee · 29 days ago

                The robot seems to use angle at least a bit; I have no idea how oil actually works, so don't ask me to judge its proficiency. All I know is that my tablet has angle and rotation support and I'm not using it, because sculpting doesn't need them. I never went beyond pencil when it comes to 2D, and I realised that an outline is not a planar cut through an object not while practising drawing, but while writing a shader.