For OpenAI, o1 represents a step toward its broader goal of human-like artificial intelligence. More practically, it does a better job at writing code and solving multistep problems than previous models. But it’s also more expensive and slower to use than GPT-4o. OpenAI is calling this release of o1 a “preview” to emphasize how nascent it is.

The training behind o1 is fundamentally different from its predecessors, OpenAI’s research lead, Jerry Tworek, tells me, though the company is being vague about the exact details. He says o1 “has been trained using a completely new optimization algorithm and a new training dataset specifically tailored for it.”

OpenAI taught previous GPT models to mimic patterns from its training data. With o1, it trained the model to solve problems on its own using a technique known as reinforcement learning, which teaches the system through rewards and penalties. It then uses a “chain of thought” to process queries, similarly to how humans process problems by going through them step-by-step.

At the same time, o1 is not as capable as GPT-4o in a lot of areas. It doesn’t do as well on factual knowledge about the world. It also doesn’t have the ability to browse the web or process files and images. Still, the company believes it represents a brand-new class of capabilities. It was named o1 to indicate “resetting the counter back to 1.”

I think this is the most important part (emphasis mine):

As a result of this new training methodology, OpenAI says the model should be more accurate. “We have noticed that this model hallucinates less,” Tworek says. But the problem still persists. “We can’t say we solved hallucinations.”

  • Lucidlethargy@sh.itjust.works · 5 days ago

    I think I’ve used it if this is the latest available, and it’s terrible. It keeps feeding me wrong information, and when you correct it, it says you’re right… But if you ask it again, it again feeds you the wrong information.

    • leftzero@lemmynsfw.com · 5 days ago

      if you ask it again, it again feeds you the wrong information

      Well, it’s an LLM; they can’t learn anything without rebuilding the whole model from scratch, which I wouldn’t exactly call learning anyway… all they “know” is which word is most likely to follow a given sequence of words according to their model.
      Any other facts or information are completely inconsequential to how they operate and what they produce.
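That “most likely next word” mechanism can be sketched with a toy bigram counter (a deliberately tiny stand-in; real LLMs are transformers trained on vastly more data, but the point about truth not entering the picture is the same):

```python
from collections import Counter, defaultdict

# Toy "training data"; a real model sees trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which -- the model "knows" nothing else.
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def predict(word):
    # Most frequent follower wins; truth never enters into it.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- it follows "the" twice in this corpus
```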

  • ulkesh@lemmy.world · 5 days ago

    I just love how people seem to want to avoid using the word lie.

    It’s either misinformation, or alternative facts, or hallucinations.

    Granted, a lie does tend to have intent behind it, so with ChatGPT, it’s probably better to say falsehood, instead. But either way, it’s not fact, it’s not truth, and people, especially schools, should stop using it as a credible source.

    • JustTesting@lemmy.hogru.ch · 5 days ago

      There was a recent paper arguing that “bullshitting” is the most apt analogy, i.e. saying something to satisfy the other person without caring about the truth of what you say.

  • Etterra@lemmy.world · 5 days ago

    That’s not what reasoning is. Reasoning is understanding what you’re talking about and being able to draw logical conclusions based on what you’ve learned. It’s being able to say, “I didn’t know, but wait a second and I’ll look it up,” and then summing that info up in original language.

    All OpenAI did was make it less stupid and slap a new coat of paint on it, hoping nobody asks too many questions.

  • Buffalox@lemmy.world · 6 days ago

    trained to answer more complex questions, faster than a human can.

    I can answer math questions really really fast. Not correct though, but like REALLY fast!

    • tee9000@lemmy.world · 6 days ago

      It scores 83% on a qualifying exam for the International Mathematical Olympiad, compared to the previous model’s 13%, so…

      • average_joe@lemmynsfw.com · 5 days ago

        When you say “previous model”, do you mean Gemini with AlphaGeometry (an actual RL method), which scored a silver?

        I mean, not only did Google do it before, they also released their details, unlike OpenAI’s “just trust me bro, it’s RL”.

        OpenAI also said that we should reserve 25k tokens for this “reasoning”, and that they’ll be billed at the same rate as output tokens, which is exorbitantly high ($60 per 1M tokens).

        And the cherry on top is that they won’t even give us these “reasoning” tokens. How the hell am I supposed to improve my prompts if I can’t even see them? How would I reduce hallucinations without them?

        My personal experience is that it does have an extra reasoning thing going for it, but in no way does that make OpenAI’s tactics tolerable. The quality doesn’t increase enough to justify the cost per token, let alone their “reasoning tokens” BS.

    • Echo Dot@feddit.uk · 6 days ago

      I’m the same with any programming question as long as the answer is Hello World

      • VirtualOdour@sh.itjust.works · 5 days ago

        That’s a flat-out lie. I use it for code all the time, and it’s fantastic at writing useful functions if you tell it what you want. It’s also fantastic if you ask it to explain code or lay out options for problem solving.

    • hedgehog@ttrpg.network · 6 days ago

      I’m more concerned about them using the word “sapient.” My dog is sentient; it’s not a high bar to clear.

    • Echo Dot@feddit.uk · edited · 6 days ago

      Is that even the goal? Do we want an AI that’s self-aware? Because I thought the whole point was basically to have an intelligence without a mind.

      We don’t really want sapient AI, because if we make one, then we have to feel bad about putting it in robots and making them do boring jobs. Don’t we basically want guiltless servants? Isn’t that the point?

    • nave@lemmy.ca (OP) · 7 days ago

      At the same time, o1 is not as capable as GPT-4o in a lot of areas. It doesn’t do as well on factual knowledge about the world. It also doesn’t have the ability to browse the web or process files and images. Still, the company believes it represents a brand-new class of capabilities. It was named o1 to indicate “resetting the counter back to 1.”

      I think it’s more of a proof of concept than a fully functioning model at this point.

  • Chozo@fedia.io · 7 days ago

    Technophobes are trying to downplay this because “AI bad”, but this is actually a pretty significant leap from GPT and we should all be keeping an eye on this, especially those who are acting like this is just more auto-predict. This is a completely different generation process than GPT which is just glorified auto-predict. It’s the difference between learning a language by just reading a lot of books in that language, and learning a language by speaking with people in that language and adjusting based on their feedback until you are fluent.

    If you thought AI comments flooding social media was already bad, it’s soon going to get a lot harder to discern who is real, especially once people get access to a web-connected version of this model.

    • Voroxpete@sh.itjust.works · 6 days ago

      It’s weird how so many of these “technophobes” are IT professionals. Crazy that people would line up to go into a profession they so obviously hate and fear.

      • Chozo@fedia.io · 6 days ago

        I’ve worked in tech for 20 years. Luddites are quite common in this field.

        • Voroxpete@sh.itjust.works · 6 days ago

          Read some history, mate. The Luddites weren’t technophobes either. They hated the way capitalism was reaping all the rewards of industrialization. They were all for technological advancement; they just wanted it to benefit everyone.

          • Chozo@fedia.io · 6 days ago

            I’m using the current-day usage of the term, but I think you knew that.

    • BetaDoggo_@lemmy.world · 7 days ago

      All signs point to this being a finetune of GPT-4o with additional chain-of-thought steps before the final answer. It has exactly the same pitfalls as the existing model (the 9.11 > 9.8 tokenization error, failing simple riddles, being unable to assert that the user is wrong, etc.). It’s still a transformer and it’s still next-token prediction. They hide the thought steps to mask this fact and to prevent others from benefiting from all of the finetuning data they paid for.
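The 9.11 > 9.8 failure is easy to mimic outside a model: compare the numbers piece by piece, as if they were version strings. This is only an analogy for how tokenization can split the digits, not a claim about the model's internals:

```python
# As floats, 9.11 is the smaller number.
print(9.11 < 9.8)  # True

def version_compare(a, b):
    # Split on the dot and compare the pieces as integers,
    # the way version numbers (or naive token chunks) compare.
    return tuple(map(int, a.split("."))) > tuple(map(int, b.split(".")))

print(version_compare("9.11", "9.8"))  # True: (9, 11) > (9, 8)
```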

      • Echo Dot@feddit.uk · 6 days ago

        They hide the thought steps to mask this fact and to prevent others from benefiting from all of the finetuning data they paid for.

        Well, possibly, but they also hide the chain-of-thought steps because, as they point out in their article, it needs to be able to think about things outside of what it’s normally allowed to say, which obviously means you can’t show that content. If you’re trying to come up with worst-case scenarios for a situation, you actually have to be able to think about those worst-case scenarios.

  • Nougat@fedia.io · 7 days ago

    “We have noticed that this model hallucinates less,” Tworek says. But the problem still persists. “We can’t say we solved hallucinations.”

    On one hand, yeah, AI hallucinations.

    On the other hand, have you met people?

  • wizardbeard@lemmy.dbzer0.com · 7 days ago

    So for those not familiar with machine learning, which was the practical business use case for “AI” before LLMs took the world by storm: that is what they are describing as reinforcement learning. Both are valid terms for it.

    It’s how you can make an AI that plays Mario Kart. You establish goals that grant points, stuff to avoid that loses points, and what actions it can take each “step”. Then you give it the first frame of a Mario Kart race, have it try literally every input it can put in that frame, then evaluate the change in points that results. You branch out from that collection of “frame 2s” and do the same thing again and again, checking more and more possible future states.

    At some point you use certain rules to eliminate certain branches on this tree of potential future states, like discarding branches where it’s driving backwards. That way you can start optimizing towards the options at any given time that get the most points in the end, and keep the number of options being evaluated to an amount you can push through your hardware.

    Eventually you try enough things enough times that you can pretty consistently use the data you gathered to make the best choice on any given frame.
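A minimal sketch of that branch, evaluate, prune loop (the “game” here is a made-up stand-in, not actual Mario Kart: state is just a position, and points reward moving forward):

```python
# Minimal sketch of the branch-evaluate-prune loop described above.
ACTIONS = ["left", "right", "accelerate", "brake"]

def step(state, action):
    # Hypothetical transition: returns (new_state, points gained).
    deltas = {"left": -1, "right": 1, "accelerate": 2, "brake": 0}
    d = deltas[action]
    return state + d, d

def search(state, depth, beam=8):
    # Try every input on this "frame", keep the best few branches.
    frontier = [(0, state, [])]  # (points, state, actions so far)
    for _ in range(depth):
        nxt = []
        for pts, st, acts in frontier:
            for a in ACTIONS:
                st2, gained = step(st, a)
                if gained < 0:  # prune "driving backwards"
                    continue
                nxt.append((pts + gained, st2, acts + [a]))
        # Keep only the top `beam` branches so hardware can cope.
        frontier = sorted(nxt, reverse=True)[:beam]
    return max(frontier)  # highest-scoring branch found

best = search(state=0, depth=3)
print(best[2])  # the best action sequence: accelerate three times
```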

    The jank comes from how the points are configured. An AI for a delivery robot, for example, could learn to jump off balconies if the points prioritize speed over self-preservation.

    Some of these pitfalls are easy to create rules around for training. Others are far more subtle and difficult to work around.

    Some people in the video game TAS community (custom building a frame by frame list of the inputs needed to beat a game as fast as possible, human limits be damned) are already using this in limited capacities to automate testing approaches to particularly challenging sections of gameplay.

    So it ends up coming down to complexity. Making an AI to play Pac-Man is relatively simple: there are only 4 options every step (the direction the joystick is held), so you have 4^n states to keep track of, where n is the number of steps forward you want to look.
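The growth is easy to see in numbers:

```python
# 4 joystick directions, n frames of lookahead -> 4**n raw branches.
branches = {n: 4 ** n for n in (5, 10, 20)}
print(branches)
# Even 20 frames ahead is already over a trillion raw branches,
# which is why the pruning rules above matter so much.
```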

    Trying to do that with language, and arguing that you can get reliable results with any kind of consistency, is blowing smoke. They can’t even clearly state what outcomes they are optimizing for with their “reward” function. God only knows what edge cases they’ve overlooked.


    My complete out-of-my-ass guess is that they did some analysis on responses to previous GPT output, tried to distinguish between positive and negative responses (or at least identify responses indicating that it was incorrect), and then used that as some sort of positive/negative-points heuristic.

    People have been speculating for a while that you could do that: crank up the “randomness”, have it generate multiple responses behind the scenes, pit those “pre-responses” against each other, and use some scoring criterion to choose the best of the “pre-responses”. They could even A/B test the responses over multiple users and feed the user reactions back in as further positive/negative-points reinforcement, in a giant loop.
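That best-of-n loop, sketched out (to be clear, this mirrors the comment's speculation, not anything OpenAI has confirmed; every function here is a stand-in):

```python
import random

def generate(prompt, temperature):
    # Stand-in for sampling a model with the "randomness" cranked up.
    return f"draft#{random.randint(0, 99)} for {prompt!r} at t={temperature}"

def score(response):
    # Stand-in reward heuristic, e.g. learned from past user reactions.
    return random.random()

def best_of_n(prompt, n=4):
    # Generate several hidden "pre-responses", keep the best-scoring one.
    drafts = [generate(prompt, temperature=1.2) for _ in range(n)]
    return max(drafts, key=score)

print(best_of_n("why is the sky blue?"))
```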

    Again, completely pulled from my ass. Take with a boulder of salt.

    • Echo Dot@feddit.uk · 6 days ago

      To be a little nitpicky, most of the AIs that can play Mario Kart are trained not with a reinforcement learning algorithm but with a genetic algorithm, which is a somewhat different thing.

      Reinforcement learning is rather like how you teach a child. Show them a bunch of good stuff, and show them a bunch of bad stuff, and tell them which is the good stuff and which is the bad stuff.

      Genetic algorithms are where you just leave it alone, simulate the evolutionary process on an accelerated time scale, and let normal evolutionary processes take over. Much easier and less processor-intensive, plus you don’t need huge corpora of data. But it takes ages, and it also sometimes results in weird behaviors because evolution finds a solution you never thought of, or it finds a solution to a different problem to the one you were trying to get it to find a solution to.
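A bare-bones genetic loop, with a toy fitness function standing in for “points in the game” (here plain numbers evolve toward a target; real uses evolve things like network weights or input sequences):

```python
import random

TARGET = 42  # toy goal: evolve a number close to 42

def fitness(genome):
    # Higher is better; here, just closeness to the target.
    return -abs(genome - TARGET)

def evolve(pop_size=20, generations=100):
    population = [random.uniform(-100, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Mutation: children are noisy copies of the survivors.
        children = [g + random.gauss(0, 1) for g in survivors]
        population = survivors + children
    return max(population, key=fitness)

print(evolve())  # prints a value near 42
```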

      • Nougat@fedia.io · 6 days ago

        … sometimes results in weird behaviors because evolution finds a solution you never thought of, or it finds a solution to a different problem to the one you were trying to get it to find a solution to.

        Those outcomes seem especially beneficial.

        But it takes ages, …

        Is this process something that distributed computing could be leveraged for, akin to SETI@home?

        • Echo Dot@feddit.uk · 6 days ago

          I work in computer science, though not really anything to do with AI, so I’m only adjacently knowledgeable about it. But my understanding is that, unfortunately, no, not really. The problem is that if you run a bunch of evolutions in parallel, you just get a bunch of independent AIs, all with slightly different parameters, that are incapable of working together because they weren’t evolved to work together; they were evolved independently.

          In theory you could come up with some kind of file format that allowed transferring AIs between clusters, but you’d probably spend as much time transferring AIs as you saved by having multiple iterations run at the same time. It’s an n^n problem, where n is the number of AIs you have.

          • FatCrab@lemmy.one · 5 days ago

            Genetic algorithms are a broad category, and there are certainly ways you could federate and parallelize. I think AutoML basically applies this within the ML space (multiple training runs explore a solution topology, and convergence progress is compared between epochs, with low performers dropping out). Keep in mind, you can also use a genetic algorithm to learn how to explore an old-fashioned state tree.

    • Nougat@fedia.io · 7 days ago

      Again, completely pulled from my ass. Take with a boulder of salt.

      You’re under arrest. That’s ass-salt.

  • drspod@lemmy.ml · 7 days ago

    Can’t wait to read about it telling someone to put glue on pizza.