• Blackmist@feddit.uk
    3 hours ago

    Thank fuck. Can we have cheaper graphics cards again please?

    I’m sure an RTX 4090 is very impressive, but it’s not £1800 impressive.

  • LovableSidekick@lemmy.world
    3 hours ago

    Marcus is right: incremental improvements in AIs like ChatGPT will not lead to AGI, and they were never on that course to begin with. What LLMs do is fundamentally not “intelligence”; they just imitate human responses based on existing human-generated content. This can produce usable results, but not because the LLM has any understanding of the question. Since the current AI surge is based almost entirely on LLMs, the delusion that the industry will soon achieve AGI is doomed to fall apart - but not until a lot of smart speculators have gotten in and out and made a pile of money.
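    The "imitation" point can be made concrete with a toy sketch. This is not how any real LLM is implemented (real models use neural networks, not lookup tables), but a minimal bigram model shows the same core mechanism: predict the next token purely from patterns in the training text, with no understanding involved. The corpus and all names here are invented for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy bigram "language model": it learns only which token tends to
# follow which in the training text -- pure imitation, no understanding.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n=5):
    out = [start]
    for _ in range(n):
        counts = follows[out[-1]]
        if not counts:
            break
        # Sample the continuation proportionally to how often it was seen.
        tokens, weights = zip(*counts.items())
        out.append(random.choices(tokens, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

    The output often looks like plausible English, which is exactly the trap: fluent-sounding text produced by nothing more than statistics over existing human writing.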

  • acargitz@lemmy.ca
    6 hours ago

    It’s so funny how all this is only a problem within a capitalist frame of reference.

  • rational_lib@lemmy.world
    8 hours ago

    As I use copilot to write software, I have a hard time seeing how it’ll get better than it already is. The fundamental problem of all machine learning is that the training data has to be good enough to solve the problem. So the problems I run into make sense, like:

    1. Copilot can’t read my mind and figure out what I’m trying to do.
    2. I’m working on an uncommon problem where the typical solutions don’t work.
    3. Copilot is unable to tell when it doesn’t “know” the answer, because of course it’s just simulating communication and doesn’t really know anything.

    Items 2 and 3 could be alleviated, though probably not completely solved, by more and better data or by engineering changes - but obviously AI developers started by training the models on the most useful data and the strategies they thought would work best, so the easy gains are already taken. Item 1 seems fundamentally unsolvable.
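    Point 3 has a simple mechanical explanation, sketched below under the usual assumption about how language models produce output: the final softmax layer turns arbitrary scores (logits) into a probability distribution that always sums to 1, so some token always looks like an answer, even when the scores carry no real information. The logit values here are invented for illustration.

```python
import math

def softmax(logits):
    # Standard numerically-stable softmax: shift by the max logit,
    # exponentiate, then normalize so the outputs sum to 1.
    mx = max(logits)
    exps = [math.exp(x - mx) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for a question the model has no basis to answer:
# near-uniform scores, yet sampling still confidently emits a token.
logits = [0.1, 0.05, 0.12, 0.08]
probs = softmax(logits)
print(probs)  # always a valid distribution; a token always gets picked
```

    There is no built-in "abstain" output: whatever the logits are, normalization guarantees a distribution to sample from, which is why "I don’t know" has to be bolted on afterwards rather than emerging from the model itself.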

    I think there could be some more advances in finding more and better use cases, but I’m a pessimist when it comes to any serious advances in the underlying technology.

  • j4p@lemm.ee
    12 hours ago

    Sigh. I hope LLMs get dropped from the AI hype bandwagon, because I do think they have some really cool use cases, and I love just running my little local models. Cutting government spending like a madman, writing the next great American novel, or eliminating actual jobs are not those use cases.

  • KeenFlame@feddit.nu
    12 hours ago

    I am so tired of the AI hype and hate. Please give me my generative art interest back; just make programming art obscure again, I beg of you.

  • Decker108@lemmy.ml
    13 hours ago

    Nice, looking forward to it! So much money and time wasted on pipe dreams and hype. We need to get back to some actually useful innovation.

  • Greg Clarke@lemmy.ca
    24 hours ago

    largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence

    Who said that LLMs were going to become AGI? LLMs as part of an AGI system make sense, but not LLMs alone becoming AGI. Only articles and blog posts from people who didn’t understand the technology were making those claims, which helped feed the hype.

    I 100% agree that we’re going to see an AI market correction. It’s going to take a lot of hard human work to realize the real value of LLMs, and the hype is distracting from that valuable and interesting work.