A shocking story was promoted on the “front page” or main feed of Elon Musk’s X on Thursday:

“Iran Strikes Tel Aviv with Heavy Missiles,” read the headline.

This would certainly be a worrying development in world news. Earlier that week, Israel had conducted an airstrike on Iran’s embassy in Syria, killing two generals as well as other officers. Retaliation from Iran seemed plausible.

But there was one major problem: Iran did not attack Israel. The headline was fake.

Even more concerning, the fake headline was apparently generated by X’s own official AI chatbot, Grok, and then promoted by X’s trending news product, Explore, on the very first day of an updated version of the feature.

  • IninewCrow@lemmy.ca

    Something similar happened with major newspapers about 100 to 150 years ago … governments realized that if any one group or company had control over all the information without regulation, businesses would quickly figure out ways to monetize information for the benefit of those with all the money and power. They then had to figure out how to start regulating newspapers and news media in order to maintain some sort of control and sanity in the entire system.

    But like the newspapers of old … no one will do anything about all this until it causes a major crisis or causes a terrible event … or events.

    In the meantime … big corporations controlling 99% of all media and news information will stay unregulated or regulated as little as possible until terrible things happen and society breaks down.

  • Otter@lemmy.ca

    I don’t really understand this headline

    The bot made it? So why was it promoted as trending?

    • Deceptichum@sh.itjust.works

      It’s pretty simple: trending is based on … what’s trending among users.

      Or, as the article explains for those who can’t comprehend what trending means:

      Based on our observations, it appears that the topic started trending because of a sudden uptick of blue checkmark accounts (users who pay a monthly subscription to X for Premium features including the verification badge) spamming the same copy-and-paste misinformation about Iran attacking Israel. The curated posts provided by X were full of these verified accounts spreading this fake news alongside an unverified video depicting explosions.
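
      To make that mechanism concrete, here is a minimal, purely illustrative sketch (in Python) of a volume-based trending scorer. This is not X’s actual algorithm; the names, the verified-account weight, and the example topics are all assumptions. It only shows how a coordinated burst of copy-and-paste posts from weighted “verified” accounts can outrank organic discussion.

          # Hypothetical toy model, not X's real ranking code.
          from collections import Counter
          from dataclasses import dataclass

          @dataclass
          class Post:
              topic: str
              author_verified: bool

          def trending_scores(posts: list[Post], verified_weight: float = 2.0) -> Counter:
              """Score each topic by weighted post count; verified authors count extra."""
              scores: Counter = Counter()
              for post in posts:
                  scores[post.topic] += verified_weight if post.author_verified else 1.0
              return scores

          # 40 organic posts on one topic vs. 30 copy-paste posts from paid/verified accounts.
          organic = [Post("Local election results", False) for _ in range(40)]
          spam = [Post("Iran strikes Tel Aviv", True) for _ in range(30)]
          print(trending_scores(organic + spam).most_common(1))
          # [('Iran strikes Tel Aviv', 60.0)] -- the fabricated story "trends" on volume alone.

      Any ranking that rewards raw repetition this way can be gamed by a few dozen coordinated accounts, which is essentially what the article describes.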

      • h3rm17@sh.itjust.works

        Nah, it’s a bit like government. It’s only his responsibility if it is no one else’s responsibility. Like, they can have the most corrupt cabinets, and most presidents do not resign/abdicate, whatever the word is.

          • h3rm17@sh.itjust.works

            Ok, you just downvote and say no, but no explanation is given. In my government several cases of corruption arose during the last couple of years, and way more in the past. They affect high-ranking ministers, and yet the president does not resign. Same with companies: they get paid the most, do the least, claim it is because they have “lots of responsibilities”, but still never pay the price.

            • baru@lemmy.world

              but no explanation given

              You didn’t explain, so why should I? I did see you made things up.

            • maynarkh@feddit.nl

              Corporations are completely authoritarian, while most governments are not, or at least not completely. If there really is a “rogue engineer”, Musk can very easily fire them. Even if there was, it’s his responsibility to organize a company in such a way that this cannot happen, with people having oversight over other people.

              He is very clearly failing to do any of that.

      • umbraroze@lemmy.world

        Yup. It also got added to the Jargon File, which was an influential collection of hacker slang.

        If there’s one thing that Elon is really good at, it’s taking obscure beloved nerd tidbits and then pigeon-shitting all over them.

    • expr@programming.dev

      In case you’re not familiar, https://en.m.wikipedia.org/wiki/Grok.

      It’s somewhat common slang in hacker culture, which of course Elon is shitting all over as usual. It’s especially ironic since the word roughly means “deep or profound understanding”, which their AI has anything but.

  • cmnybo@discuss.tchncs.de

    Oh, what a surprise. Another AI spat out some more bullshit. I can’t wait until companies finally give up on trying to do everything with AI.

  • style99@kbin.social

    People who deploy AI should be held responsible for the slander and defamation the AI causes.

    • Fedizen@lemmy.world

      “Somebody I don’t like said something bad about one of the world’s richest oligarchs, therefore all criticism of him is invalid.”

      The guy has enough money to protect himself from bad criticism and address narratives he doesn’t like; he doesn’t need sad pathetic losers defending him on the internet like he’s a defenseless baby.

  • kadu@lemmy.world

    I wonder how legislation is going to evolve to handle AI. Brazilian law would punish a newspaper or social media platform for claiming that Iran just attacked Israel; this is dangerous information that could affect somebody’s life.

    If it were up to me, if your AI hallucinated some dangerous information and provided it to users, you’d be personally responsible. I bet that if such a law existed, in less than a month all those AI developers would very quickly abandon the “oh no you see it’s impossible to completely avoid hallucinations for you see the math is just too complex tee hee” excuse and would actually fix this.