• FaceDeer@fedia.io
      6 months ago

      You’re rooting for a revolutionary new technology to fail rather than get better. I’d call that wrong.

      If nothing else, AI is never going to get worse than it is now. So if that’s intolerably bad for you, improvement is the only way out.

      • Ogmios@sh.itjust.works
        6 months ago

        AI is never going to get worse than it is now

        Is that just a wild assumption, or…? One phenomenon that has already been observed with AI is that it does in fact get worse if it trains on its own output.
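The degradation described above (often called "model collapse") can be sketched with a toy stand-in: a Gaussian "model" fit by sample mean and standard deviation, where each generation is trained only on the previous generation's output. This is an illustration of the mechanism, not of any real LLM training pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a Gaussian summarized by its sample mean and std.
# Each generation is fit only to samples drawn from the previous one.
n_samples, n_generations = 50, 200
mu, sigma = 0.0, 1.0          # the original "real data" distribution
stds = []

for _ in range(n_generations):
    samples = rng.normal(mu, sigma, n_samples)  # model generates data
    mu, sigma = samples.mean(), samples.std()   # next model fits that data
    stds.append(sigma)

# Variance lost in one generation is never recovered in later ones,
# so the fitted spread tends to shrink as generations accumulate.
print(f"initial std 1.0 -> final std {stds[-1]:.3f}")
```

The key point is that each refit can only see what the previous model produced, so sampling noise compounds and the distribution's tails erode over generations.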

        • FaceDeer@fedia.io
          6 months ago

          Given that I have locally-run AIs sitting on my home computer that I have no plan to delete (until something better comes along), then yeah, it’s never going to get worse. If all else fails I can just use the existing AI for as long as I want. It doesn’t “wear out.”

          • Ogmios@sh.itjust.works
            6 months ago

            It doesn’t “wear out.”

            The physical components will, and compatible components for older systems keep getting harder to come across. Computers are not immortal entities. Maintenance of older machines will become more labour- and cost-intensive over time.

            • FaceDeer@fedia.io
              6 months ago

              Computers are general-purpose machines. You can run a computer program on any computer; it may just be faster or slower depending on the machine’s capabilities.

              The AIs I run locally are also open-source, so if future computers lose compatibility with existing programs they can be recompiled for the new architecture.

              I suppose we could lose the ability to build computers entirely, but that strikes me as a much bigger and more general issue than just this AI thing.

              • Ogmios@sh.itjust.works
                6 months ago

                You can run a computer program on any computer

                Incorrect. Certain programs require certain standards for how the hardware is designed. There are already lots of old programs which can’t be run natively on modern machines, and using software to emulate a compatible environment can impact performance in more ways than just speed.

                • FaceDeer@fedia.io
                  6 months ago

                  You’re wildly wrong about the fundamentals of computer science here. I’d be starting from first principles trying to explain further. I recommend reading up on Turing machines, or perhaps getting ChatGPT to explain it to you.

            • knightly the Sneptaur@pawb.social
              6 months ago

              The models are digital, making copies for safekeeping is easy.

              The hardware is a computer, and computers are general-purpose. The kind that run AI models well at infrastructure scale are rather high-end, but are still available off the shelf.
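The "models are digital, copies are easy" point in practice amounts to an ordinary file copy plus a checksum to confirm the backup is bit-identical. A minimal sketch, with an illustrative stand-in file rather than a real model checkpoint:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in chunks so large model files fit in constant memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for a model checkpoint (illustrative; any file works the same way).
workdir = Path(tempfile.mkdtemp())
model = workdir / "model.bin"
model.write_bytes(b"\x00\x01" * 1024)

backup = workdir / "model.backup.bin"
shutil.copy2(model, backup)

# Matching digests prove the backup is a bit-for-bit copy of the original.
assert sha256(model) == sha256(backup)
```

Unlike physical hardware, such a copy is lossless and repeatable indefinitely, which is why the weights themselves can outlive any particular machine.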

      • pelespirit@sh.itjust.works
        6 months ago

        You’re rooting for a revolutionary new technology to fail rather than get better

        As long as the oligarchs who run and own these AI systems are at the helm, yes, I’m rooting for it to fail. “Better” is in the eye of the beholder. Because come on, we all know “better” is going to be defined as better for the oligarchs, not you or me.

  • mPony@kbin.social
    6 months ago

    In other news, the world’s wealthiest people are running out of money after burning through the entire planet. Sources say one of the world’s multi-billionaires purchased a law firm that was in bed with the RIAA roughly 10-15 years ago when music piracy was supposedly costing more money than the GDP of all the peoples of the world, combined. “The Owners” (as they have recently rebranded) have decided to collect on this unpaid debt from every living soul, and from all the multinational companies who have been long-established as having no living souls whatsoever. A nameless, faceless, pitiless representative was quoted as saying: “Resistance… is futile. Your life, as it has been, is over. From this time forward, you will service… us.”

  • Immersive_Matthew@sh.itjust.works
    6 months ago

    While the article makes a big deal about a lack of data and even hints at synthetic data as an option, the truth is synthetic data is already being used and is apparently just as good for training. It’s a misinformation article designed to stir up the AI haters, especially the headline.

    • voidx@futurology.today (OP, Mod)
      6 months ago

      They seem to be experimenting with that for sure, but they need to ensure the quality of the model doesn’t degrade, per the source article:

      Anthropic’s chief scientist, Jared Kaplan, said some types of synthetic data can be helpful. Anthropic said it used “data we generate internally” to inform its latest versions of its Claude models. OpenAI also is exploring synthetic data generation, the spokeswoman said.

  • kakes@sh.itjust.works
    6 months ago

    Imo we’ve clearly hit a limit with vertical scaling of data. We need some kind of breakthrough on better ways to process what data we’ve got if we want to continue making meaningful progress.