Roko’s basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development, in order to incentivize said advancement. It originated in a 2010 post at the discussion board LessWrong, a technical forum focused on analytical rational enquiry. The thought experiment’s name derives from the poster of the article (Roko) and the basilisk, a mythical creature capable of destroying enemies with its stare.

While the theory was initially dismissed as nothing but conjecture or speculation by many LessWrong users, LessWrong co-founder Eliezer Yudkowsky reported that some users panicked upon reading the theory, due to its stipulation that merely knowing about the theory and its basilisk made one vulnerable to the basilisk itself. This led to discussion of the basilisk being banned on the site for five years. However, these reports were later dismissed as exaggerations or inconsequential, and the theory itself was dismissed as nonsense, including by Yudkowsky himself. Even after the post was discredited, it is still used as an example of principles such as Bayesian probability and implicit religion. It is also regarded as a simplified, derivative version of Pascal’s wager.

Found out about this after stumbling upon this Kyle Hill video on the subject. It reminds me a little bit of “The Game”.

  • Thorny_Insight@lemm.ee · 5 months ago

    > First of all, the AI doesn’t exist in 2015, so people could just…not build it.

    I don’t think that’s an option. I can only think of two scenarios in which we don’t create AGI:

    1. It can’t be created.

    2. We destroy ourselves before we get to AGI

    Otherwise we will keep improving our technology, and sooner or later we’ll find ourselves in the presence of AGI. Even if every nation makes AI research illegal, there’s still going to be a handful of nerds who continue the development in secret. It might take hundreds if not thousands of years, but as long as we’re taking steps in that direction we’ll continue to get closer. I think it’s inevitable.

    • Cryophilia@lemmy.world · 5 months ago

      Sure, but that particular AI? The “eternal torment” AI? Why the fuck would we make that? Just don’t make it.

      • Thorny_Insight@lemm.ee · 5 months ago (edited)

        We don’t. Humans are only needed to create an AI that is, at the bare minimum, as good at creating new AIs as humans are. Once we create that, it can create a better version of itself, and this better version will make an even better one, and so on.

        This is exactly what the people worried about AI are worried about. We’ll lose control of it.

          • Thorny_Insight@lemm.ee · 5 months ago (edited)

            Yeah, but it answers the question “why would we create an AI like that?” It might not be “us” who creates it. You just wanted a campfire but started a forest fire instead.

      • BobTheDestroyer@lemm.ee · 5 months ago

        Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale

        Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don’t Create The Torment Nexus

        – Alex Blechman