“The real benchmark is: the world growing at 10 percent,” he added. “Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we’ll be fine as an industry.”

Needless to say, we haven’t seen anything like that yet. OpenAI’s top AI agent — the tech that people like OpenAI CEO Sam Altman say is poised to upend the economy — still moves at a snail’s pace and requires constant supervision.

  • surph_ninja@lemmy.world · 12 days ago

    That’s standard for emerging technologies. They tend to be loss leaders for quite a long period in the early years.

    It’s really weird that so many people gravitate to anything even remotely critical of AI, regardless of context or even accuracy. I don’t really understand the aggressive need for so many people to see it fail.

    • Furbag@lemmy.world · 12 days ago

      I just can’t see AI tools like ChatGPT ever being profitable. It’s a neat little thing that has flaws but generally works well, though I’m just putzing around in the free version. There’s no dollar amount I’d be willing to pay for the service it provides, and I think OpenAI has its sights set way too high with talk of $200/month subscriptions for its top-of-the-line product.

    • Blakdragon@lemmy.ca · 12 days ago

      For me personally, it’s because it’s been so aggressively shoved in my face in every context. I never asked for it, and I can’t escape it. It actively gets in my way at work (github copilot) and has already re-enabled itself at least once. I’d be much happier to just let it exist if it would do the same for me.

  • AA5B@lemmy.world · 12 days ago

    For a lot of years, computers added no measurable productivity improvements. They sure revolutionized the way things work in all segments of society for something that doesn’t increase productivity.

    AI is an inflating bubble: excessive spending, unclear use cases. But it won’t take long for the pop to clear out the failures, making the successful use cases clearer and letting the winning approaches emerge. This is basically the definition of capitalism.

    • capybara@lemm.ee · 12 days ago

      What time span are you referring to when you say “for a lot of years”?

      • AA5B@lemmy.world · 12 days ago

        Vague memories of many articles over much of my adult life decrying that the costs of whatever the current computing trend was outweighed the benefits.

        And I believe it; it’s technically true. There seems to be a pattern of bubbles where everyone jumps on the new hot thing and spends way too much money on it. It’s counterproductive, right up until the bubble pops, leaving the transformative successes.

        I believe it was also a long-term thing with electronic forms and printers. As long as you were just adding steps to existing business processes, you didn’t see productivity gains. It took many years for businesses to reinvent the way they worked to really see the gains.

        • Snowstorm@lemmy.ca · 12 days ago

          If you want a reference, there’s a Rational Reminder Podcast episode (a nerdy, factual personal-finance podcast from a Canadian team) about this concept. It was illustrated with trains and phone infrastructure 100 years ago: new technology looks nice -> people invest stupid amounts in a variety of projects -> some crash brings stock valuations back to reasonable levels, and at that point the technology gets adopted, its infrastructure having been subsidized by those who lost money on the stock market’s hot thing. Then a new hot thing emerges. The Internet got its cycle in 2000; maybe AI is the next one. Usually every few decades the top 10 in the S&P 500 changes.

  • halcyoncmdr@lemmy.world · 13 days ago

    Correction: LLMs being used to automate shit don’t generate any value. The underlying AI technology is generating tons of value.

    AlphaFold 2 has advanced protein-folding research in biochemistry by multiple decades in just a couple of years, taking us from 150,000 known protein structures to 200 million in a year.

    • shaggyb@lemmy.world · 12 days ago

      Well sure, but you’re forgetting that the federal government has pulled the rug out from under health research and therefore has made it so there is no economic value in biochemistry.

    • DozensOfDonner@mander.xyz · 13 days ago

      Yeah, tbh, AI has been an insanely helpful tool in my analysis and writing. Never would I have been able to thoroughly investigate appropriate statistical tests on my own. After following the sources and double-checking, of course, but still, super helpful.

    • Artyom@lemm.ee · 12 days ago

      I think you’re confused, when you say “value”, you seem to mean progressing humanity forward. This is fundamentally flawed, you see, “value” actually refers to yacht money for billionaires. I can see why you would be confused.

  • WalnutLum@lemmy.ml · 12 days ago

    He probably saw that SoftBank and Masayoshi Son were heavily investing in it and figured it was dead.

      • CodexArcanum@lemmy.dbzer0.com · 12 days ago

        Like all good sci-fi, they just took what was already happening to oppressed people and made it about white/American people, adding a little misdirection by extrapolating from existing tech research. It only took about 20 years for Foucault’s boomerang to fully swing back around. And keep in mind that all the basic ideas behind LLMs had been worked out by the ’80s; we just needed 40 more years of Moore’s law to make computation fast enough and data sets large enough.

        • trolololol@lemmy.world · edited · 12 days ago

          Ah yes, same with Boolean logic: it only took a century for Moore’s law to pick up, with a small milestone along the way when the transistor was invented. All of computer science was already laid out by Boole from day one, including everything that AI does now or will ever do.

          /s

        • TimewornTraveler@lemm.ee · 12 days ago

          Foucault’s boomerang

          fun fact, that idea predates foucault by a couple decades. ironically, it was coined by a black man from Martinique. i think he called it the imperial boomerang?

  • funkless_eck@sh.itjust.works · 12 days ago

    I’ve been working on an internal project for my job: a quarterly report on the most bleeding-edge use cases of AI, and the stuff being achieved is genuinely impressive.

    So why is the AI at the top end amazing, yet everything we use is a piece of literal shit?

    The answer is the chatbot. If you have the technical nous to program machine-learning tools, it can accomplish truly stunning work at speeds not seen before.

    If you don’t know how to do, e.g., a Fourier transform, you lack the skills to use the tools effectively. That’s no one’s fault; not everyone needs that knowledge, but it does explain the gap between promise and delivery. It can only help you do faster what you already know how to do.

    Same for coding: if you understand what your code does, it’s a helpful tool for unsticking part of a problem, but it can’t write the whole thing from scratch.

    • mr_jaaay@lemmy.ml · 11 days ago

      Exactly - I find AI tools very useful and they save me quite a bit of time, but they’re still tools. Better at some things than others, but the bottom line is that they’re dependent on the person using them. Plus the more limited the problem scope, the better they can be.

      • wordcraeft@lemm.ee · 11 days ago

        Yes, but the problem is that a lot of these AI tools are very easy to use, but the people using them are often ill-equipped to judge the quality of the result. So you have people who are given a task to do, and they choose an AI tool to do it and then call it done, but the result is bad and they can’t tell.

        • mr_jaaay@lemmy.ml · 11 days ago

          True, though this applies to most tools, no? For instance, I’m forced to sit through horrible presentations because someone was given a task, created a PowerPoint (badly), and gave a presentation (badly). I don’t know if this is inherently a problem with AI…

    • earphone843@sh.itjust.works · edited · 12 days ago

      For coding it’s also useful for doing the menial grunt work that’s easy but just takes time.

      You’re not going to replace a senior dev with it, of course, but it’s a great tool.

      My previous employer was using AI for intelligent document processing, and the results were absolutely amazing. They did sink a few million dollars into getting the LLM fine-tuned properly, though.

  • ToaLanjiao@lemmy.world · 13 days ago

    LLMs in non-specialized application areas basically reproduce search. In specialized fields, most do the work that automation, data analytics, pattern recognition, purpose-built algorithms, and brute force did before. And yet the companies charge n× the price for what are essentially these very conventional approaches, plus statistics. Not surprising at all. I’m just in awe that the parallels to snake oil weren’t immediately obvious.

    • Arghblarg@lemmy.ca · 13 days ago

      I think AI is generating negative value … the huge power usage is akin to speculative blockchain currencies. Barring some biochemistry and other very, very specialized uses it hasn’t given anything other than, as you’ve said, plain-language search (with bonus hallucination bullshit, yay!) … snake oil, indeed.

      • themurphy@lemmy.ml · 13 days ago

        It’s a little more complicated than that, I think. LLMs and AI are not remotely the same thing, and they have very different use cases.

        I believe in AI for sure in some fields, but I understand the skeptics around LLMs.

        But the difference AI is already making in the medical industry and hospitals is no joke. X-ray scanning and early detection of severe illness are being used today, and will save thousands of lives and millions of dollars/euros.

        My point is, it’s not that black and white.

        • FauxLiving@lemmy.world · 12 days ago

          On this topic, the vast majority of people seem to think that AI means the free tier of ChatGPT.

          AI isn’t a magical computer demon that can grant all of your wishes, but that doesn’t mean that it is worthless.

          For example, AlphaFold essentially solved protein folding, and diffusion models built on that discovery let us generate novel proteins with specific properties with the same ease as we can make a picture of an astronaut on a horse.

          Image classification is massively useful in manufacturing. Instead of custom-designed programs purpose-built for each client ($$$), you can fine-tune existing models with generic tools, using labor that doesn’t need to be a software engineer.

          Robotics is another field. The amount of work required for humans to design and code their control systems was enormous. Now you can use standard models, give them arbitrary limbs and configurations and train them in simulated environments. This massively cuts down on the amount of engineering work ($$$) required.

  • straightjorkin@lemmy.world · 12 days ago

    Makes sense that the company that just announced its qubit advancement would be disparaging the only “advanced” thing other companies have shown in the last 5 years.

  • bearboiblake@pawb.social · edited · 11 days ago

    microsoft rn:

    ✋ AI

    👉 quantum

    can’t wait to have to explain the difference between asymmetric-key and symmetric-key cryptography to my friends!

  • Mak'@pawb.social
    link
    fedilink
    English
    arrow-up
    0
    ·
    12 days ago

    Very bold move, in a tech climate in which CEOs declare generative AI to be the answer to everything, and in which shareholders expect line to go up faster…

    I half expect to next read an article about his ouster.

    • enkers@sh.itjust.works · 12 days ago

      My theory is that it’s only a matter of time until the firing sprees generate a backlog of actual work that the minor productivity gains from AI can’t cover, and investors start asking hard questions.

      Maybe this is the start of the bubble bursting.

      • Mak'@pawb.social
        link
        fedilink
        English
        arrow-up
        0
        ·
        12 days ago

        I’ve basically given up hope of the bubble ever bursting, as the market lives in La La Land, where no amount of bad decision-making seems to make a dent in the momentum of “line must go up”.

        Would it be cool for negative feedback to step in and correct the death spiral? Absolutely. But, I advise folks to not start holding their breath so soon…

  • finitebanjo@lemmy.world · 12 days ago

    YES

    YES

    FUCKING YES! THIS IS A WIN!

    Hopefully they curtail their investments and stop wasting so much fucking power.

    • Echo Dot@feddit.uk · 12 days ago

      I think the best way I’ve heard it put is: “If we absolutely have to burn down a forest, I want warp drive out of it. Not a crappy Python app.”