- cross-posted to:
- technology@lemmy.world
A pseudonymous coder has created and released an open source “tar pit” that indefinitely traps AI training web crawlers in an infinite series of randomly generated pages, wasting their time and computing power. The program, called Nepenthes after the genus of carnivorous pitcher plants that trap and consume their prey, can be deployed by webpage owners to protect their own content from being scraped, or can be deployed “offensively” as a honeypot trap to waste AI companies’ resources.
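The core idea described above is simple enough to sketch. This is not Nepenthes’ actual code, just a hypothetical minimal version in Python’s standard library: every request returns filler text plus links to fresh random paths, so a crawler that follows them never runs out of pages.

```python
import random
import string
from http.server import BaseHTTPRequestHandler, HTTPServer

def random_slug(n=8):
    """Random path segment, so every link points at a 'new' page."""
    return "".join(random.choices(string.ascii_lowercase, k=n))

def generate_page(path, n_links=10):
    """Build an HTML page of junk text plus links to deeper random paths.

    A crawler following these links never reaches a leaf: the maze is
    effectively infinite, which is the whole point of the tar pit.
    """
    links = "\n".join(
        f'<a href="{path.rstrip("/")}/{random_slug()}">more</a>'
        for _ in range(n_links)
    )
    filler = " ".join(random_slug(random.randint(3, 10)) for _ in range(50))
    return f"<html><body><p>{filler}</p>\n{links}\n</body></html>"

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = generate_page(self.path).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Serve the maze on localhost; any crawler that wanders in can
    # keep requesting pages forever.
    HTTPServer(("127.0.0.1", 8080), TarpitHandler).serve_forever()
```

A real deployment would also throttle responses to maximize wasted crawler time rather than its own CPU; the sketch omits that.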
Registration bypass: https://archive.is/3tEl0
Probably so. It’s always going to be an arms race, just like with malware.
I mean… not really. This isn’t even a defence. Any web crawler worth its salt will just stop after a while, and they’ve been doing so for literally decades already.
Indeed. And any modern AI training system is going to be extensively curating any training data that ends up being fed into the AI, probably processing it through other AIs to generate synthetic data from it. The days of early ChatGPT where LLMs were trained by just dumping giant piles of random text on them and hoping it’ll figure it out somehow are long past.
This reminds me of Nightshade, the supposed anti-art-AI technique that could be defeated by resizing the image (which all art AI training systems do as a matter of course). It may make people “feel better” but it’s not going to have any real impact on anything.
Sure, it is easy to detect, and they will. At the moment, however, they don’t seem to be doing it. The author said this after deploying a POC:
So no, it is not a silver bullet. But it is a defense strategy, and one that seems to work at the moment.
No, a few million hits from bots is routine for anything that’s facing the public at all. Others have posted on this thread (or others like it, this article’s been making the rounds a lot in the past few days) that even the most basic of sites can get that sort of bot traffic, and that it’s just a simple recursion depth limit setting to avoid the “infinite maze” aspect.
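The “recursion depth limit” mentioned above can be illustrated with a hypothetical breadth-first crawler sketch (names and the simulated link function are my own, not from any real crawler): capping how many hops the crawler follows from its starting URL bounds the total requests no matter how deep the maze goes.

```python
from collections import deque

def crawl(start, get_links, max_depth=3, max_pages=1000):
    """Breadth-first crawl that stops max_depth hops from the start URL.

    get_links(url) -> list of outgoing links; a stand-in for a real
    HTTP fetcher. A depth cap like this is why an 'infinite maze'
    only ever costs a crawler a bounded number of requests.
    """
    seen = {start}
    queue = deque([(start, 0)])
    visited = []
    while queue and len(visited) < max_pages:
        url, depth = queue.popleft()
        visited.append(url)
        if depth >= max_depth:
            continue  # don't expand links beyond the depth cap
        for link in get_links(url):
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return visited

# Simulated infinite maze: every page links to two deeper pages.
maze = lambda url: [url + "/a", url + "/b"]
pages = crawl("http://example.com", maze, max_depth=3)
print(len(pages))  # 1 + 2 + 4 + 8 = 15 pages, then the crawler stops
```

Real crawlers layer on per-site page budgets and rate limits as well, which bound the damage even further.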
As for AI training, the access log says nothing about that. As I said, AI training sets are not made by just dumping giant piles of randomly scraped text on AIs any more. If a trainer scraped one of those “infinite maze” sites the quality of the resulting data would be checked, and if it was generated by anything remotely economical for the site to be running it’d almost certainly be discarded as junk.