• Lugh@futurology.today (OP, Mod) · 6 months ago

    This sounds like marketing hype. Giving AI reasoning is a problem researchers have been failing to solve since Marvin Minsky in the 1960s, and there is still no fundamental breakthrough on the horizon. Even DeepMind’s latest effort is tame; it just suggests getting AI to check itself more accurately against external sources.

    • FaceDeer@fedia.io · 6 months ago

      there is still no fundamental breakthrough on the horizon.

      I mean, we’re currently in the midst of one, so that might be obscuring the horizon somewhat. Modern AIs are able to reason in ways that no AIs could previously; don’t let the perfect be the enemy of the good.

    • Phoenix5869@futurology.today · 6 months ago

      This sounds like marketing hype.

      Yeah, these are exactly my thoughts as well. If AI truly had reasoning capabilities, it would be global front-page news.

  • A_A@lemmy.world · 6 months ago

    Hi Lugh,
    thanks for this nice link (and article).

    Researchers in the field are expressing increasing concern on this topic:

    "… Reinforcement learning (RL) agents that plan over a long time horizon far more effectively than humans present particular risks. (…)

    https://www.science.org/doi/10.1126/science.adl0625
    (I don’t have access to the full article)

    My opinion is that we should hope and worry (at least a little bit).