Today I added some numbers to my firewall block lists again. I feel somewhat bad about them, because I guess my robots.txt was not set up correctly. At the same time, I feel like I don’t owe anything to unwatched crawlers.
So for the moment, I banned the offending numbers.
What’s the country for global companies that don’t pay taxes? “From the Internet‽”
I’m still not quite sure what to do now. I guess I just don’t know how I feel about crawling in general. What would a network look like that doesn’t crawl? Crawling means that somebody is accumulating data. Valuable data. Toxic data. Haven’t we been through all this? I got along well with the operator of GUS when we exchanged a few emails. And yet, the crawling makes me uneasy.
Data parsimony demands that we don’t collect the data we don’t need; that we don’t store the data we collect; that we don’t keep the data we store. Delete that shit! One day somebody inherits, steals, leaks, or buys that data store and does things with it that we don’t want. I hate that defending against leeches (eager crawlers I feel are misbehaving) means I need to start tracking visitors. Logging IP numbers. Seeing what pages the active IP numbers are looking at. Are they too fast for a human? Is the sequence of links they are following a natural reading sequence? I hate that I’m being forced to do this every now and then. And what if I don’t? Perhaps somebody is going to use Soweli Lukin to index Gopherspace? Perhaps somebody is going to use The Transjovian Vault to index Wikipedia via Gemini? Unsupervised crawlers will do anything.
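To make concrete what that tracking looks like, here is a rough sketch in Python of the kind of check I mean. The log path, the combined log format, and the threshold are just examples, not what actually runs on my server:

```
# Sketch: which IP numbers request pages faster than a human reader
# plausibly could? Log path, log format and threshold are examples.
import re
from collections import Counter

LOG = "/var/log/apache2/access.log"
# "1.2.3.4 - - [10/Oct/2023:13:55:36 +0000] ..." -> capture IP and the minute
LINE = re.compile(r'^(\S+) \S+ \S+ \[([^:]+:\d+:\d+):\d+ [^\]]+\]')
THRESHOLD = 30  # more requests than this within one minute looks non-human

requests_per_minute = Counter()
with open(LOG, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = LINE.match(line)
        if match:
            ip, minute = match.groups()
            requests_per_minute[(ip, minute)] += 1

# most_common() puts the busiest (IP, minute) pairs first
for (ip, minute), count in requests_per_minute.most_common():
    if count < THRESHOLD:
        break
    print(f"{ip} made {count} requests during {minute}")
```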
There’s something about the whole situation that’s struggling to come out. I’m having trouble putting it into words.
Like… There’s a certain lack of imagination out there.
People say: that’s the only way a search engine can work. Maybe? Maybe not? What if sites sent updates, or compiled databases? A bit like the Sitemap format? A sort of compiled and compressed word/URI index? And if very few people then actually sent in those indexes, wouldn’t that be a statement in itself? Right now people don’t object because objecting takes effort. But perhaps they wouldn’t opt in, either!
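Here is a rough sketch in Python of what such a self-compiled index might look like. The .gmi extension, the JSON-in-gzip layout, and the file name are assumptions of mine, not an existing format:

```
# Hypothetical: a site compiles its own word -> URI index and publishes
# it as a single compressed file, instead of getting crawled page by page.
# The page extension, file name and layout are assumptions, not a standard.
import gzip
import json
import re
from pathlib import Path

def build_index(root: Path) -> dict[str, list[str]]:
    """Map each word to the site-relative URIs of the pages containing it."""
    index: dict[str, set[str]] = {}
    for page in root.glob("**/*.gmi"):
        uri = "/" + page.relative_to(root).as_posix()
        words = set(re.findall(r"[a-z0-9]+", page.read_text(encoding="utf-8").lower()))
        for word in words:
            index.setdefault(word, set()).add(uri)
    return {word: sorted(uris) for word, uris in index.items()}

if __name__ == "__main__":
    # A search engine could fetch this one file instead of every page.
    with gzip.open("site-index.json.gz", "wt", encoding="utf-8") as f:
        json.dump(build_index(Path("pages")), f)
```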
People say: anything you publish is there for the taking. Well, maybe if you’re a machine. But if there is a group of people sitting around a cookie jar, you wouldn’t say “nobody is stopping me from taking them all.” Human behaviour can be nuanced, and if we cannot imagine technical solutions that are nuanced, then I don’t feel it’s on me to reduce my expectations. Perhaps it’s on implementors to design more nuanced solutions! And yes, those solutions are going to be more complicated. Obviously so! We’ll have to design ways to negotiate consent, privacy, and data ownership.
It’s a failure of design if “anything you publish is there for the taking” is the only option. Since I don’t want this, I think it’s on me and others who dislike this attitude to confidently set boundaries. I use fail2ban to ban user agents that make too many requests, for example. Somebody might say: “why don’t you use a caching proxy?” The answer is that I don’t feel it is on me to build a technical solution that scales to the corpocaca net; I should be free to run a site built for the smol net. If you don’t behave like a human on the smol net, I feel free to defend my vision of the net as I see fit – and I encourage you to do the same.
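For what it’s worth, the fail2ban part can be as small as one filter and one jail. Here is a sketch of that idea; the jail name, the log path, and the thresholds are placeholders rather than my actual configuration, and note that fail2ban bans the offending IP number, not the user agent string:

```
# /etc/fail2ban/filter.d/too-many-requests.conf (the name is made up)
[Definition]
# match every request line in the Apache access log, per client IP
failregex = ^<HOST> .*"(GET|POST|HEAD)
ignoreregex =

# /etc/fail2ban/jail.local (excerpt)
[too-many-requests]
enabled  = true
port     = http,https
filter   = too-many-requests
logpath  = /var/log/apache2/access.log
# more than 30 requests within 60 seconds earns a one hour ban
findtime = 60
maxretry = 30
bantime  = 3600
```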
People say: ah, I understand – you’re using a tiny computer. I like tiny computers. That’s why you want us to treat your server like it was smol. No. I want you to treat my server like it was smol because we’re on the smol net.
For my websites, I took a look at my log files and saw that at the very least (!) 21% of my hits are bots (18253 / 88862). Of these, 20% are by the Google bot, 19% by the Bing bot, 10% by the Yandex bot, 5% by the Apple bot, and so on. And that is with a long robots.txt already in place, and a huge Apache config file blocking a gazillion more user agents! Is this what you want for Gemini? The corpocaca Gemini? Not me!
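If you want to pull similar numbers out of your own logs, a rough Python sketch could look like this. The log path and the bot names are examples, and real bot detection is messier than substring matching on the user agent:

```
# Sketch: count hits by well known bots in an Apache access log.
# Log path and bot names are examples; substring matching is crude.
from collections import Counter

BOTS = ["Googlebot", "bingbot", "YandexBot", "Applebot"]

total = 0
hits = Counter()
with open("/var/log/apache2/access.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        total += 1
        for bot in BOTS:
            if bot in line:
                hits[bot] += 1
                break

bot_total = sum(hits.values())
if total:
    print(f"{bot_total} of {total} hits are bots ({bot_total / total:.0%})")
for bot, count in hits.most_common():
    print(f"  {bot}: {count / bot_total:.0%} of the bot hits")
```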