• mrvictory1@lemmy.world

    In two interviews, the interviewees claim that investors may lose faith in the return on investment if “the killer application of AI” is not available within 18 months. In other words, if AI is a bubble, it will burst in only 18 months.

    • Alphane Moon@lemmy.world

      That seems like a fair assumption. I would argue we are at the peak of the bubble, and only recently have we seen the suits (Goldman Sachs and, more broadly, analysts at banks) start asking questions about ROI and real use cases.

  • Echo Dot@feddit.uk

    Yeah but it’s Goldman Sachs saying it. Presumably because they haven’t invested in AI.

    Perhaps we could get a non-biased opinion and also from an actual expert rather than some finance ghoul who really doesn’t know anything?

    • demonsword@lemmy.world

      “Presumably because they haven’t invested in AI.”

      “Presumably” is carrying all the weight of your whole post here.

      “Perhaps we could get a non-biased opinion and also from an actual expert rather than some finance ghoul who really doesn’t know anything?”

      I also hate banks, but usually those guys can sniff out market failures way ahead of the rest of us. All their bacon rides on that, after all.

    • frezik@midwest.social

      It’s noteworthy because it’s Goldman Sachs. Money people are dumping lots of it into AI. When a major outlet for money people starts to show skepticism, that could mean the bubble is about to pop.

    • Balder@lemmy.world

      The problem is experts in AI are biased towards AI (it pays their salaries).

    • 0x0@programming.dev

      I’d say they know a thing or two about finance… so maybe they didn’t invest because they see it as overhyped?

  • nondescripthandle@lemmy.dbzer0.com

    We know, you guys tried using the buzz around it to push down wages. You either got what you wanted and changed your tune, or realized you fell for another tech bro middle-manning unsolicited solutions into already-working systems.

  • coffee_with_cream@sh.itjust.works

    It’s weird to me that people on Lemmy are so anti-ML. If you aren’t impressed, you haven’t used it enough. “Oh, it’s not 100% perfect.”

    • smiletolerantly@awful.systems

      I was fully on board until, like, a year ago. But the more I used it, the more obviously it came undone.

      I initially felt like it could really help with programming. And it looked like it, too, when you fed it toy problems where you don’t really care how the solution looks, as long as it’s somewhat OK. But once you start giving it constraints that stem from a real project, it just stops being useful. It ignores constraints (use this library, do not make additional queries, …), and when you point out its mistake and ask it to do better, it goes “oh, sorry! Here, let me do the same thing again, with the same error!”

      If you’re working in a less common language, it even dreams up non-existing syntax.

      Even the one thing it should be good at - plain old language - it sucks ass at. It’s become so easy to spot LLM garbage, just due to its style.

      Worse, if you ask it to proofread a text for spelling and grammar mistakes, but explicitly tell it not to change the wording or style, there’s about a 50/50 chance it will either

      • change your wording or style, or
      • point out errors that are not even in the original text in the first place!

      I could honestly go on and on, but what it boils down to is: it is able to string together words that make it sound like it knows what it is doing, but it is just that, a facade. And it looks like for more and more people, the spell is finally breaking.

  • jet@hackertalks.com

    Yeah… It’s machine learning with a hype team.

    There are some great applications, but they are very narrow.

  • themurphy@lemmy.ml

    Haha, there’s a company that didn’t invest in AI in time.

    Sounds just like Republican Elon Musk when he cried over AI being years ahead of his own.

    • Laser@feddit.org

      Even a broken clock is right twice a day.

      I don’t want to imply GS did a responsible thing, but… if they assessed the situation two years ago, decided ROI was unlikely, and as such didn’t invest — wouldn’t their current stance actually be reasonable?

    • nyan@lemmy.cafe

      Even a stopped clock is right twice a day. Provided it’s an analog clock.

  • simple@lemm.ee

    AI was a promise more than anything. When ChatGPT came out, all the AI companies and startups promised exponential improvements that would chaaangeee the woooooorrlllddd.

    Two years later, it’s becoming insanely clear they hit a wall, and there isn’t going to be much change unless someone makes a miraculous discovery. All of that money was dumped into just making bigger models that are 0.1% better than the last one. I’m honestly surprised the bubble hasn’t popped yet; it’s obvious we’re going nowhere with this.

      • bamboo@lemm.ee

        There are millions of people devoting huge amounts of time and energy to improving AI capabilities, publishing paper after paper finding new ways to improve models, training, etc. Perhaps some companies are using AI hype to get free money, but that doesn’t discredit the hard work of others.

        • henrikx@lemmy.dbzer0.com

          Can’t believe you get downvoted for saying that. No worries though as the haters will all be proven wrong eventually.

    • bluGill@kbin.run

      AI has been doing that trick since the 1950s. A lot of useful things have come out of AI, but they were never called AI once they became successful, and the field has never lived up to the early hype. Some in the know about all those previous cycles were surprised by the hype and not surprised about where it has gone, while others pushed the hype.

      The details have changed but nothing else.

      • rottingleaf@lemmy.world

        (Repeating myself due to being banned from my previous instance for offering to solve a problem with nukes)

        Bring back Lisp machines. I like what was called AI when they were being made.

      • bionicjoey@lemmy.ca

        Yeah the only innovation here is that OpenAI had the balls to use the entire internet as a training set. The underlying algorithms aren’t really new, and the limitations have been understood by data scientists, computer scientists, and mathematicians for a long time.

        • Frozengyro@lemmy.world

          So now it just has to use every conversation that happens as a dataset. They could use microphones from all over the world to listen and learn and understand better…

    • henrikx@lemmy.dbzer0.com

      You should all read the story of the invention of blue LEDs. No one believed it could work except one Japanese engineer, Shuji Nakamura, who kept working on it despite his company telling him to stop. No one believed the problem could ever be solved, even though he was so close. He solved it, and the rewards were astronomical.

      • zbyte64@awful.systems

        I mean, if you ignore all the papers that point out how dubious the gen-AI benchmarks are, then it is very impressive.