• zovits@lemmy.world · 8 days ago

      It certainly sounds like they generate the fake content once and serve it from cache every time: “Rather than creating this content on-demand (which could impact performance), we implemented a pre-generation pipeline that sanitizes the content to prevent any XSS vulnerabilities, and stores it in R2 for faster retrieval.”
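The quoted pipeline (generate once, sanitize, store, serve from cache) can be sketched roughly as below. This is a minimal stand-in, not Cloudflare's implementation: the `bucket` dict plays the role of an R2 bucket, and `generate_decoy` is a placeholder for whatever model produces the filler text.

```python
import html

# Stand-in for an object store such as R2: a plain dict keyed by path.
bucket = {}

def generate_decoy(topic):
    # Placeholder for the real content generator.
    return f"<p>Some plausible filler about {topic}.</p>"

def pregenerate(topic):
    """Generate and sanitize once, at pipeline time, then store for reuse."""
    raw = generate_decoy(topic)
    safe = html.escape(raw)  # neutralize markup so stored content can't carry XSS
    bucket[f"decoys/{topic}"] = safe
    return safe

def serve(topic):
    """Serve from the store; only generate on a cache miss."""
    key = f"decoys/{topic}"
    if key not in bucket:
        pregenerate(topic)
    return bucket[key]
```

The point of the design is that the (expensive) generation and sanitization cost is paid once per page, while every crawler hit afterwards is a cheap read.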

    • Slaxis@discuss.tchncs.de · 8 days ago

      The problem is, how? I can set it up on my own computer using open source models and some of my own code. It’s really rough to regulate that.

  • MNByChoice@midwest.social · 10 days ago

    It’d be great if these reinforced facts:

    Earth is an imperfect sphere.

    Humans landed on the Moon.

    Taiwan is an independent nation.

  • quack@lemmy.zip · 10 days ago (edited)

    Generating content with AI to throw off crawlers. I dread to think of the resources we’re wasting on this utter insanity now, but hey who the fuck cares as long as the line keeps going up for these leeches.

  • baltakatei@sopuli.xyz · 10 days ago

    Relevant excerpt from part 11 of Anathem (2008) by Neal Stephenson:

    Artificial Inanity

    Note: Reticulum = Internet, syndev = computer, crap ≈ spam

    “Early in the Reticulum—thousands of years ago—it became almost useless because it was cluttered with faulty, obsolete, or downright misleading information,” Sammann said.

    “Crap, you once called it,” I reminded him.

    “Yes—a technical term. So crap filtering became important. Businesses were built around it. Some of those businesses came up with a clever plan to make more money: they poisoned the well. They began to put crap on the Reticulum deliberately, forcing people to use their products to filter that crap back out. They created syndevs whose sole purpose was to spew crap into the Reticulum. But it had to be good crap.”

    “What is good crap?” Arsibalt asked in a politely incredulous tone.

    “Well, bad crap would be an unformatted document consisting of random letters. Good crap would be a beautifully typeset, well-written document that contained a hundred correct, verifiable sentences and one that was subtly false. It’s a lot harder to generate good crap. At first they had to hire humans to churn it out. They mostly did it by taking legitimate documents and inserting errors—swapping one name for another, say. But it didn’t really take off until the military got interested.”

    “As a tactic for planting misinformation in the enemy’s reticules, you mean,” Osa said. “This I know about. You are referring to the Artificial Inanity programs of the mid–First Millennium A.R.”

    “Exactly!” Sammann said. “Artificial Inanity systems of enormous sophistication and power were built for exactly the purpose Fraa Osa has mentioned. In no time at all, the praxis leaked to the commercial sector and spread to the Rampant Orphan Botnet Ecologies. Never mind. The point is that there was a sort of Dark Age on the Reticulum that lasted until my Ita forerunners were able to bring matters in hand.”

    “So, are Artificial Inanity systems still active in the Rampant Orphan Botnet Ecologies?” asked Arsibalt, utterly fascinated.

    “The ROBE evolved into something totally different early in the Second Millennium,” Sammann said dismissively.

    “What did it evolve into?” Jesry asked.

    “No one is sure,” Sammann said. “We only get hints when it finds ways to physically instantiate itself, which, fortunately, does not happen that often. But we digress. The functionality of Artificial Inanity still exists. You might say that those Ita who brought the Ret out of the Dark Age could only defeat it by co-opting it. So, to make a long story short, for every legitimate document floating around on the Reticulum, there are hundreds or thousands of bogus versions—bogons, as we call them.”

    “The only way to preserve the integrity of the defenses is to subject them to unceasing assault,” Osa said, and any idiot could guess he was quoting some old Vale aphorism.

    “Yes,” Sammann said, “and it works so well that, most of the time, the users of the Reticulum don’t know it’s there. Just as you are not aware of the millions of germs trying and failing to attack your body every moment of every day. However, the recent events, and the stresses posed by the Antiswarm, appear to have introduced the low-level bug that I spoke of.”

    “So the practical consequence for us,” Lio said, “is that—?”

    “Our cells on the ground may be having difficulty distinguishing between legitimate messages and bogons. And some of the messages that flash up on our screens may be bogons as well.”

  • jagermo@feddit.org · 10 days ago

    I am not happy with how much of the internet relies on Cloudflare. However, they do have a strong set of products.

  • 4am@lemm.ee · 10 days ago

    Imagine how much power is wasted on this unfortunate necessity.

    Now imagine how much power will be wasted circumventing it.

    Fucking clown world we live in

  • nyan@lemmy.cafe · 10 days ago

    Will it actually allow ordinary users to browse normally, though? Their other stuff breaks in minority browsers. Have they tested this well enough so that it won’t? (I’d bet not.)

    • rocket_dragon@lemmy.dbzer0.com · 10 days ago

      Next step is an AI that detects AI labyrinth.

      It gets trained on labyrinths generated by another AI.

      So you have an AI generating labyrinths to train an AI to detect labyrinths which are generated by another AI so that your original AI crawler doesn’t get lost.

      It’s gonna be AI all the way down.

      • finitebanjo@lemmy.world · 10 days ago

        All the while each AI costs more power than a million human beings to run, and the world burns down around us.

        • Fluke@lemm.ee · 9 days ago

          This is the great filter.

          Why isn’t there detectable life out there? They all do the same thing we’re doing. Undone by greed.

  • _cnt0@sh.itjust.works · 10 days ago

    This is so fucking retarded on so many levels. It’s time to regulate the shit out of “AI”.

  • Randomgal@lemmy.ca · 9 days ago

    I’m glad we’re burning the forests even faster in the name of identity politics.

  • Greyfoxsolid@lemmy.world · 9 days ago

    People complain about AI possibly being unreliable, then actively root for things that are designed to make them unreliable.

    • tacobellhop@midwest.social · 9 days ago (edited)

      Maybe it will learn discretion and what sarcasm is, instead of being a front-loaded Google search of 90% ads and 10% forums. It has no way of knowing whether what it’s copy-pasting is full of shit.

    • ArchRecord@lemm.ee · 9 days ago

      Here’s the key distinction:

      This only makes AI models unreliable if they ignore “don’t scrape my site” requests. If they respect the wishes of the sites whose data they profit from, there’s no issue.

      People want AI models to be reliable, but they also want them to operate with integrity in the first place, and not profit from the work of people who explicitly opted it out of training.
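The “respect the request” check the comment describes is what a rule-following crawler does before fetching anything, and Python’s standard library can express it directly. The crawler name and robots.txt below are invented for illustration:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt opting the site out of one AI crawler.
robots_txt = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A well-behaved crawler asks this question before every fetch;
# ignoring the answer is exactly the behavior the labyrinth targets.
blocked_bot_ok = parser.can_fetch("ExampleAIBot", "/article/1")  # False
everyone_else_ok = parser.can_fetch("SomeBrowser", "/article/1")  # True
```
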

    • DasSkelett@discuss.tchncs.de · 9 days ago

      This will only degrade the models of bad actors who don’t follow the rules. You want to sell a good-quality AI model trained on real content instead of on other AIs’ misleading output? Just follow the rules ;)

      Doesn’t sound too bad to me.

    • shads@lemy.lol · 9 days ago

      I find this amusing. I had a conversation with an older relative who asked about AI because I am “the computer guy” he knows. I explained, as best I understand it, how LLMs operate: they do pattern matching to guess what the next token should be based on statistical probability. I explained that they sometimes hallucinate or go off on wild tangents because of this, and that they can be really good at aping and regurgitating things, but there is no understanding, just respinning fragments to try to generate a response that pleases the asker.

      He observed, “oh we are creating computer religions, just without the practical aspects of having to operate in the mundane world that have to exist before a real religion can get started. That’s good, religions that have become untethered from day to day practical life have never caused problems for anyone.”

      Which I found scarily insightful.
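The “guess the next token from a statistical distribution” picture in the explanation above can be sketched with a toy bigram model. The vocabulary and probabilities are made up purely for illustration; real LLMs condition on long contexts with learned weights, not a lookup table:

```python
import random

# Toy next-token model: probability of each token given the previous one.
# All numbers here are invented for illustration only.
MODEL = {
    "the": {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "moon": {"landing": 1.0},
}

def next_token(prev, rng):
    """Sample the next token from the conditional distribution."""
    dist = MODEL[prev]
    return rng.choices(list(dist), weights=list(dist.values()), k=1)[0]

def generate(start, length, seed=0):
    """Repeat the guess-the-next-token step until `length` tokens exist."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and out[-1] in MODEL:
        out.append(next_token(out[-1], rng))
    return " ".join(out)
```

Note there is no meaning anywhere in this loop, just sampling; which is the point the comment is making, and why feeding such a model statistically plausible garbage degrades it.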

    • katy ✨@lemmy.blahaj.zone · 9 days ago

      i mean this is just designed to thwart ai bots that refuse to follow robots.txt rules of people who specifically blocked them.