I just feel like OpenAI might accept this and leave the website alone, although it’s very unlikely they will actually do that.
Is there not some way to just blacklist the AI domain or IP range?
No, because there isn’t a single IP range or user agent to block, and many developers go to great lengths to defeat anti-scraping measures, including user-agent spoofing and VPNs to mask the source of the traffic.
If you read the articles from recent months about sites being hammered by AI crawlers, they all tell the same story: it’s not possible. The AI companies deliberately target other sites and work non-stop to actively evade any blocking that might be in place: they rotate IPs regularly, they change user agents, they ignore robots.txt, they spread requests across a bunch of IPs, they drop to a single request per IP the moment they detect they’re being blocked, they swap user agents as soon as one is blocked, etc etc etc.
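To make the cat-and-mouse concrete, here’s a minimal sketch of the per-IP rate limiter most sites reach for first, plus a toy simulation of an IP-rotating crawler slipping under its threshold. Everything here (the limiter, the 60-requests-per-minute threshold, the crawler loop, the 203.0.113.x pool) is a hypothetical illustration, not any real site’s or crawler’s code:

```python
import time
from collections import defaultdict

# Hypothetical per-IP rate limiter: allow at most MAX_PER_WINDOW requests
# from one IP within WINDOW seconds. This is the naive defense that IP
# rotation defeats.
MAX_PER_WINDOW = 60
WINDOW = 60.0
hits = defaultdict(list)  # ip -> timestamps of recent requests

def allow(ip: str, now: float | None = None) -> bool:
    now = time.monotonic() if now is None else now
    recent = [t for t in hits[ip] if now - t < WINDOW]
    hits[ip] = recent
    if len(recent) >= MAX_PER_WINDOW:
        return False  # this IP is over the limit
    recent.append(now)
    return True

# Toy "crawler": 6000 requests spread over a pool of 250 IPs works out to
# 24 requests per IP, far under the threshold, so every request is allowed.
blocked = 0
for i in range(6000):
    ip = f"203.0.113.{i % 250}"  # rotate through the address pool
    if not allow(ip, now=float(i) * 0.01):
        blocked += 1
print(f"blocked {blocked} of 6000 rotated requests")  # blocked 0 of 6000
```

Per-IP limits only see the per-IP rate, and rotation makes that rate arbitrarily small; that’s why the blocking attempts described above keep failing.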
whitelists and the end of anonymity
Or just decent regulation. You’re offering an AI product? You can’t attest that it’s been trained in a legitimate way?
Into the shadow realm with you.
You can download a torrent of the whole thing; they don’t need to give it to anyone.
This release is powered by our Snapshot API’s Structured Contents beta, which outputs Wikimedia project data in a developer-friendly, machine-readable format. Instead of scraping or parsing raw article text, Kaggle users can work directly with well-structured JSON representations of Wikipedia content—making this ideal for training models, building features, and testing NLP pipelines.
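As a rough illustration of what “well-structured JSON” buys you over scraping raw article text, here’s a sketch of reading one of those files. The file name and the field names (name, abstract, sections) are assumptions for illustration only; check the Structured Contents beta’s actual schema before relying on them:

```python
import json

# Hypothetical reader for the structured Wikipedia dump. The path and the
# field names ("name", "abstract", "sections") are assumed for illustration;
# the Structured Contents beta's real schema may differ.
def iter_articles(path: str):
    with open(path, encoding="utf-8") as f:
        for line in f:  # assuming one JSON object per line (JSONL)
            yield json.loads(line)

for article in iter_articles("enwiki_structured_sample.jsonl"):
    title = article.get("name", "?")
    abstract = article.get("abstract", "")
    n_sections = len(article.get("sections", []))
    print(f"{title}: {n_sections} sections; {abstract[:80]}")
```

The point is that titles, abstracts, and section boundaries arrive as fields you can index into directly, instead of something you have to recover from wikitext or HTML.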