this post was submitted on 21 Mar 2025

Technology


Indeed, GNOME has been experiencing these issues since last November; as a temporary measure, they rate-limited non-logged-in users' access to merge requests and commits, which obviously also caused problems for real human guests.

The solution they eventually settled on was switching to Anubis. Anubis presents a challenge page to the browser, which has to spend time doing some math and submit the solution back to the server. If the answer is correct, you get access to the website.
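
As a rough illustration of the general proof-of-work idea (a minimal sketch, not Anubis's actual code; the difficulty value and function names here are made up):

```python
# Hash-based proof-of-work: the client burns CPU finding a nonce,
# the server verifies it with a single cheap hash.
import hashlib
import itertools

DIFFICULTY = 4  # hypothetical target: 4 leading zero hex digits

def solve(challenge: str) -> int:
    """Client side: brute-force a nonce whose hash meets the target."""
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce

def verify(challenge: str, nonce: int) -> bool:
    """Server side: checking a claimed solution is one hash call."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

nonce = solve("server-issued-token")         # slow: many attempts on average
assert verify("server-issued-token", nonce)  # fast: a single hash
```

The asymmetry is the point: each page load costs the client some wasted compute, which a single human barely notices but a scraper fetching millions of pages cannot afford.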

According to the developer, this project is "a bit of a nuclear response, but AI scraper bots scraping so aggressively have forced my hand. I hate that I have to do this, but this is what we get for the modern Internet because bots don't conform to standards like robots.txt, even when they claim to".

top 2 comments
megopie@beehaw.org 12 points 1 month ago (last edited 1 month ago)

I wonder how effective it would be just to put a bunch of data on servers meant to poison the training data they’re scraping. Like, make it data that only a bot trying to get everything would find, not something that users would see or encounter.
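
In the spirit of that idea, a purely hypothetical sketch (Flask, the /trap path, and the word list are all invented for illustration) of pages reachable only through links humans never see:

```python
# Trap pages linked only from markup hidden to human visitors (e.g. via CSS);
# an exhaustive crawler follows them into an endless maze of junk text.
import random
from flask import Flask

app = Flask(__name__)
WORDS = ["quantum", "teapot", "gradient", "walrus", "ledger", "fennel"]

@app.route("/trap/<int:page>")
def trap(page: int):
    random.seed(page)  # deterministic junk, so revisits look like stable content
    text = " ".join(random.choices(WORDS, k=200))
    links = " ".join(
        f'<a href="/trap/{random.randrange(10**9)}">more</a>' for _ in range(5)
    )
    return f"<html><body><p>{text}</p> {links}</body></html>"
```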

BentiGorlich@gehirneimer.de 8 points 1 month ago

I wholeheartedly agree... My mbin servers have been targeted multiple times by stupid AI scraper bots, and even Google didn't adhere to the request limit set in the robots.txt...
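
For reference, the request limit mentioned here is usually expressed with the non-standard Crawl-delay directive, which is purely advisory and honored inconsistently across crawlers; a minimal robots.txt along these lines (paths and values illustrative):

```
User-agent: *
Crawl-delay: 10   # ask for at most one request every 10 seconds (advisory)
Disallow: /api/
```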