LLMs are acing the MCAT, the bar exam, the SAT, etc. like they're nothing. At this point their performance is superhuman. However, they'll often trip on super simple common-sense questions, and they struggle with creative thinking.

Is this proof that standardized tests are not a good measure of intelligence?

  • givesomefucks@lemmy.world · 8 months ago

    OP picked standardized tests that only require memorization because they have zero idea what a real IQ test like the WAIS is like.

    Also how those IQ tests work. You kind of have to go in “blind” to get an accurate result. An LLM can’t do anything “blind” because it has to be trained.

    A chatbot can’t even take a real IQ test, and if we trained a chatbot to take a real IQ test, the result would be meaningless.

    • JackGreenEarth@lemm.ee · 8 months ago

      Nobody is a blank slate. Everyone has knowledge from their past experience, and instincts from their genetics. AIs are the same. They are trained on various things just as humans have experienced various things, but they can be just as blind as each other on the contents of the test.

      • givesomefucks@lemmy.world · 8 months ago

        No, they wouldn’t.

        Because real IQ tests aren’t just multiple-choice exams.

        You would have to train it to handle the different tasks, and training it on the tasks would make it better at the tasks, raising its scores.

        I don’t know if the issue is you don’t know how IQ tests work, or what LLMs can do.

        But it’s probably both instead of one or the other.

      • Ottomateeverything@lemmy.world · edited · 8 months ago

        You’re entirely missing the point.

        The whole basis of an IQ test is that it presents problems you haven’t seen before. An LLM works by recognizing patterns in its training data and returning what came next in the training set.
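        The “returns what came next in the training data” idea can be sketched with a toy bigram model. This is a deliberate oversimplification (real LLMs are neural networks that generalize from soft statistical patterns, not lookup tables), but the training objective, predicting the next token, is the same:

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Toy next-token predictor: for each word in the corpus,
    memorize which word most often followed it."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    # Map each word to its single most frequent successor.
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

model = train_bigram("the cat sat on the mat the cat sat down")
print(model["cat"])  # prints "sat" -- the word that always followed "cat"
print(model["the"])  # prints "cat" -- "cat" followed "the" more often than "mat"
```

        A model like this can only echo continuations it has already seen, which is the commenter’s point about why it can’t face a genuinely novel problem “blind.”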

        LLMs work in direct opposition to how an IQ test works.

        Things like past experience are all the shit IQ tests need to avoid in order to be accurate. And they’re exactly what LLMs work off of.

        By definition, LLMs have no IQ.