#crawlers


@camwilson #AI #Crawlers are not only increasing bandwidth costs for #Wikipedia; in their hunt for code to train on, they are similarly weighing down open-source code hosting.

It's like some giant monster devouring resources, requiring nuclear fusion and all the fresh drinking water, to do not very much. Interesting that animal intelligence gets by without consuming all the data in the world, running instead on a few worms, insects, or a peanut butter and jelly sandwich.

Thanks to Fijxu's use of Anubis, videos can still be watched on inv.nadeko.net. 🫡

I feel like the aggressive bot scraping that intensified not long ago is going to make it impossible to keep using feed readers, and interacting with websites will end up restricted to web browsers only.
Opening videos in mpv from my RSS-subscribed Invidious feeds already no longer works, and it was my preferred way to watch videos. To clarify, I know RSS itself still works; the only thing that doesn't is opening video links directly with mpv or any other video player that can do the same. On top of that, I fear that at some point reading full articles inside an RSS reader will stop working too, forcing me to open article links in a web browser, even if some feeds can fetch full articles and minimize the need to do so.

I'm not trying to minimize the impact these scrapers have on free and open source projects and on the web admins who have to deal with this onslaught of bot activity; they are the ones hit worst.

"#AI" #crawlers are a cancerous disease that must be eradicated, not just fended off.

»The costs are both technical and financial. The Read the Docs project reported that blocking AI crawlers immediately decreased their #traffic by 75 percent, going from 800GB per day to 200GB per day. This change saved the project approximately $1,500 per month in bandwidth costs, according to their blog post "AI crawlers need to be more respectful."«
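
A quick back-of-the-envelope check of those figures (my own arithmetic, not from the article): the 600GB/day saved comes to roughly 18TB a month, and $1,500 divided by 18,000GB is about $0.08 per GB, which lines up with typical cloud egress pricing.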

arstechnica.com/ai/2025/03/dev

[Image: a man sitting on a sofa in a flooded living room, feet in the water, writing on a laptop]
Ars Technica · Open source devs say AI crawlers dominate traffic, forcing blocks on entire countries (by Benj Edwards)

The pushback against AI crawlers threatens the transparency & open borders of the web that allow non-AI apps to flourish. If unfixed, the web will increasingly be fortified with logins, paywalls, & access tolls.

#ai #crawlers #transparency #openborders
technologyreview.com/2025/02/1

MIT Technology Review · AI crawler wars threaten to make the web more closed for everyone (by Shayne Longpre)

It looks like the LLM-producing companies that are massively #crawling the #web require website owners to take action to opt out. Although I am not intrinsically against #generativeai or the acquisition of #opendata, reading about hundreds of dollars in rising #cloud costs for hobby projects is quite concerning. How is it acceptable that hypergiants skyrocket the costs of tightly budgeted projects through massive spikes in egress traffic and increased processing requirements? Projects that run on a shoestring budget and are operated by volunteers who dedicate hundreds of hours without any reward other than believing in their mission?

I am mostly concerned about opt-out being the default. Are the owners of those projects really required to take action? Seriously? As an #operator, is it my responsibility to methodically work my way through the crawling documentation of hundreds of #LLM #web #crawlers? Am I the one responsible for maintaining a bespoke crawling specification in my robots.txt, because hypergiants make it inherently hard to have a generic #opt-out configuration that targets LLM projects specifically?
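
To make the burden concrete, here is a rough sketch of what such a hand-maintained opt-out could look like; the user-agent tokens below are just a few of the ones the big operators document, and the list goes stale the moment a new crawler appears:

# Illustrative, incomplete opt-out list; every token has to be tracked down and added by hand
User-agent: GPTBot
User-agent: CCBot
User-agent: ClaudeBot
User-agent: Google-Extended
User-agent: Bytespider
Disallow: /

# Everyone else remains allowed
User-agent: *
Allow: /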

I refuse to accept that this is our new norm. A norm in which hypergiants not only methodically exploit the work of thousands of individuals for their own benefit without returning a penny, but in which the resource owner is also expected to stop these crawlers from skyrocketing their own operational costs.

We need a new #opt-in. Often, public and open projects are keen to share their data. They just don't like the idea of carrying the unpredictable, multitudinous financial burden of these crawlers taking that data without notice. Even #CommonCrawl has fail-safe mechanisms to reduce the burden on website owners. Why are LLM crawlers above the guidelines of good #Internet citizenship?

To counter the most common argument up front: yes, you can deny by default in your robots.txt, but that excludes any non-mainstream crawler, too.
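
And for illustration, this is roughly what that deny-by-default looks like; anything not explicitly allow-listed, however well-behaved, gets locked out along with the LLM crawlers (the allowed tokens here are only examples):

# Block everything by default
User-agent: *
Disallow: /

# Only explicitly listed crawlers get back in; small search engines,
# archivers, and feed fetchers are shut out with the rest
User-agent: Googlebot
User-agent: Bingbot
Allow: /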

Some concerning #news articles on the topic: