• nickwitha_k (he/him)@lemmy.sdf.org · 3 months ago

      They are typically closed-loop for home computers. Datacenters are a different beast and a fair amount of open-loop systems seem to be in place.

  • Aceticon@lemmy.world · 3 months ago

    What! A! Surprise!

    I’m shocked, I tell you, totally and utterly shocked by this turn of events!

      • vane@lemmy.world · edited · 3 months ago

        But their operating cost is $5 billion per year, and they plan to raise $6.5 billion from Microsoft, Apple, and Nvidia this year, which they haven't raised yet. If their model fails next year and the sales don't happen, will the shareholders of those big three pay another $6.5 billion in 2026? A couple of companies raised that kind of money early on, for example Docker Inc. Where is Docker now in the enterprise? They had to change their licensing model just to survive, and their operating cost is just storing Docker containers. I doubt OpenAI will survive this decade. Sam Altman is just preparing for a Microsoft takeover before the ship sinks.

    • dan@upvote.au · edited · 3 months ago

      It’s amusing. Meta’s AI team is more open than "Open"AI ever was - they publish so many research papers for free, and the latest versions of Llama are very capable models that you can run on your own hardware (if it’s powerful enough) for free as long as you don’t use it in an app with more than 700 million monthly users.

      • a9cx34udP4ZZ0@lemmy.world · 3 months ago

        That’s because Facebook is selling your data and access to advertise to you. The better AI gets across the board, the more money they make. AI isn’t the product, you are.

        OpenAI makes money off selling AI to others. AI is the product, not you.

        The fact that Facebook releases more code, in this instance, isn't a good thing. It's a reminder of how fucked we all are, because they make so much off our personal data that they can afford to give away literally BILLIONS of dollars in IP.

        • dan@upvote.au · 3 months ago

          Facebook doesn’t sell your data, nor does Google. That’s a common misconception. They sell your attention. Advertisers can show ads to people based on some targeting criteria, but they never see any user data.

  • celsiustimeline@lemmy.dbzer0.com · edited · 3 months ago

    Whoops. We made the most expensive product ever designed, paid for entirely by venture capital seed funding. Wanna pay for each ChatGPT query now that you’ve been using it for 1.5 years for free with barely-usable results? What a clown. Aside from the obvious abuse that will occur with image, video, and audio generating models, these other glorified chatbots are complete AIDS.

    • sunbeam60@lemmy.one · 3 months ago

      Barely usable results?! Whatever you may think of the pricing (which is obviously below cost), there are an enormous number of fields where language models provide an insane amount of business value. Whether that translates into a better life for the everyday person is currently unknown.

    • assassin_aragorn@lemmy.world · 3 months ago

      > paid for entirely by venture capital seed funding.

      And stealing from other people’s works. Don’t forget that part.

    • flo@infosec.pub · 3 months ago

      > barely usable results

      Using chatgpt and copilot has been a huge productivity boost for me, so your comment surprised me. Perhaps its usefulness varies across fields. May I ask what kind of tasks you have tried chatgpt for, where it’s been unhelpful?

      • wholookshere@lemmy.blahaj.zone · edited · 3 months ago

        Literally anything that requires knowing facts to inform writing. That’s something LLMs are incapable of doing reliably right now.

        Just ask how many R’s are in “strawberry” and watch ChatGPT get it wrong.
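        For what it’s worth, the count itself is trivial for ordinary deterministic code, which is exactly the kind of task LLMs stumble on. A throwaway Python check:

        ```python
        # Count the letter "r" in "strawberry" deterministically.
        word = "strawberry"
        count = word.lower().count("r")
        print(count)  # 3
        ```

        An LLM predicts tokens rather than inspecting individual characters, which is why such a simple question trips it up.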

  • N0body@lemmy.dbzer0.com · 3 months ago

    There’s an alternate timeline where the non-profit side of the company won, Altman the Conman was booted and exposed, and OpenAI kept developing machine learning in a way that actually benefits actual use cases.

    Cancer screenings approved by a doctor could be accurate enough to save so many lives and so much suffering through early detection.

    Instead, Altman turned a promising technology into a meme stock with a product released too early to ever fix properly.

      • mustbe3to20signs@feddit.org · 3 months ago

        AI models can outmatch most oncologists and radiologists at recognizing early tumor stages in MRI and CT scans.
        Further developing this strength could lead to earlier diagnosis with less invasive methods, not only saving countless lives and prolonging the remaining quality of life for the individual, but also saving a shit ton of money.

        • msage@programming.dev · 3 months ago

          Wasn’t it shown that an AI was getting amazing results because it noticed the cancer screens had a doctor’s signature at the bottom? Or did they do another run with the signatures hidden?

          • mustbe3to20signs@feddit.org · 3 months ago

            More than one system has been shown to “cheat” because of biased training material. One model told ducks and chickens apart because it was trained on pictures of ducks in water and chickens on sandy ground, if I remember correctly.
            Since multiple image recognition systems are in development, I can’t imagine they’re all this faulty.

            • msage@programming.dev · 3 months ago

              They aren’t ‘faulty’; they were fed the wrong training data.

              That’s the most important aspect of any AI: it’s only as good as its training dataset. If you don’t know the dataset, you know nothing about the AI.

              That’s why every claim of ‘super efficient AI’ needs to be investigated more deeply. But that goes against the line-goes-up principle, so don’t expect it to happen a lot.

    • Petter1@lemm.ee · 3 months ago

      Or we get to a time where we send a reprogrammed Terminator back in time to kill Altman 🤓

  • Chaotic Entropy@feddit.uk · 3 months ago

    > The restructuring could turn the already for-profit company into a more traditional startup and give CEO Sam Altman even more control — including likely equity worth billions of dollars.

    I can see why he would want that, yes. We’re supposed to ooh and aah at a technical visionary, who is always, ultimately, a money-guy executive who wants more money and more executive power.

    • toynbee@lemmy.world · 3 months ago

      I saw an interesting video about this. It’s outdated (from ten months ago, apparently) but added some context that I, at least, was missing - and that also largely aligns with what you said. Also, though it’s not super evident in this video, I think the presenter is fairly funny.

      https://youtu.be/L6mmzBDfRS4

      • Melatonin@lemmy.dbzer0.com · 3 months ago

        That was a worthwhile watch, thank you for making my life better.

        I await the coming AI apocalypse with hope that I am not awake, aware, or sensate when they do whatever it is they’ll do to use or get rid of me.