Recently I was writing about my dislike of crawlers. They turned into a kind of necessary evil on the web – but it’s not too late to choose a different future for Gemini. I want to encourage all server authors and crawler authors to think long and hard about alternatives.
One feature I dislike about crawlers is that they follow all the links. Sure, we have a semi-useful “robots.txt” specification but it’s easy to get wrong on both sides. I’ve had bugs in my “robots.txt” file for a long time without noticing them.
Now, if the argument is that I cannot prevent crawlers from leeching my site, then my reply is that I will try to defend myself anyway, even if it is impossible to get it 100% right. The first line of defence is going to be my “robots.txt” file. It’s not perfect, and that’s fine. I know it’s not perfect because I only need to look at the Apache config file I use to block all the misbehaving bots and user agents that ignore it.
Ugh, look at the bots hitting my websites:
```
$ /home/alex/bin/bot-detector < /var/log/apache2/access.log.1
                --------------Bandwidth-------Hits-------Actions--Delay
      Everybody           2416M         102520
       All Bots            473M          23063   100%      19%
---------------------------------------------------------------------
        bingbot         240836K           8157    35%      31%    10s
      YandexBot          36279K           3905    16%       3%    22s
      Googlebot          65808K           3679    15%      34%    23s
         Adsbot          20187K           3115    13%       0%    27s
       Applebot          66607K            908     3%       0%    95s
        Facebot           1611K            390     1%       0%   220s
       PetalBot           1548K            329     1%      12%   257s
            Bot           2101K            308     1%       0%   280s
         robots            525K            231     1%       0%   374s
       Slackbot           1339K            224     0%      96%   382s
     SemrushBot            572K            194     0%       0%   438s
```
A full 22% of all hits come from user agents with something like “bot” in their name. Just look at them! Let’s take the last one, SemrushBot. The user agent string also contains a link, and if you want, you can take a look. All the goals it lists are disgusting, or they benefit corporations rather than me or other humans. Barf with me as you read statements such as “the Brand Monitoring tool to index and search for articles” or “the On Page SEO Checker and SEO Content template tools reports”. 🤮
Have a look at your own webserver logs. 22% of my CPU resources, of the CO₂ my server produces, of the electricity it eats – all of it for machines that do not have my best interests in mind. I don’t want a web that’s 20% bots crawling all over my site. I don’t want a Gemini space that’s 20% bots crawling all over my capsules.
OK, so let’s talk about defence.
When I look at my Gemini logs, I see that plenty of requests come from Amazon hosts. I take that as a sign of autonomous agents. I might sound like a fool on the Butlerian Jihad, but if I need to block entire networks, then I will. Looking up WHOIS data also costs resources. It would be better if we could identify these bots by looking at their behaviour.
The first mistake crawlers make is that they are too fast. So here’s what I’m currently doing: for every IP, I’m keeping track of the last 30 requests in the last 60s. If there are more requests, the IP number is blocked. Thus, if your average clicking rate is more than 1 click per 2s over a 1min window, you’re probably a bot and you get blocked. I might have to turn this up. Perhaps 1 click per 5s makes more sense for a human.
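The check itself is cheap. Here’s a minimal sketch of the idea in Perl – not the actual Phoebe code, just an illustration with made-up names:

```
# a minimal sketch of the sliding window idea, not the actual speed bump code
my %requests;    # IP number → timestamps of recent requests

sub too_fast {
  my ($ip, $now) = @_;
  my $log = $requests{$ip} //= [];
  push(@$log, $now);
  @$log = grep { $_ > $now - 60 } @$log;   # forget everything older than 60s
  return @$log > 30;                       # more than 30 requests in 60s?
}
```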
But there’s more. I see the crawlers clicking on all the links. All the HTML renderings of the pages are already available via Gemini. It makes no sense to request all of these. All the raw wiki text of the pages are available as well. It makes no sense to request all of these, either. All the links to leave a comment are also on every page. It makes no sense to request all of these either.
Here’s what I’m talking about. I picked an IP number from the logs and checked what they’ve been requesting:
2020-12-25 08:32:37 gemini://transjovian.org:1965/page/Linking/2
2020-12-25 08:32:45 gemini://alexschroeder.ch:1965/history/Perl
2020-12-25 08:32:59 gemini://communitywiki.org:1965/page/CategoryWikiProcess
2020-12-25 08:33:18 gemini://transjovian.org:1965/page/Titan/5
2020-12-25 08:33:30 gemini://communitywiki.org/page/CultureOrganis%C3%A9e
2020-12-25 08:33:57 gemini://transjovian.org:1965/history/Spaces
2020-12-25 08:34:23 gemini://transjovian.org:1965/gemini/page/common%20wiki%20structure/TimurIsmagilov
2020-12-25 08:34:30 gemini://alexschroeder.ch:1965/tag/Hex%20Describe
2020-12-25 08:34:56 gemini://communitywiki.org:1965/page/SoftwareBazaar
2020-12-25 08:35:02 gemini://communitywiki.org:1965/page/DoTank
2020-12-25 08:35:22 gemini://transjovian.org:1965/test/history/Welcome
2020-12-25 08:36:56 gemini://alexschroeder.ch:1965/tag/Gadgets
2020-12-25 08:38:20 gemini://alexschroeder.ch:1965/tag/Games
2020-12-25 08:45:58 gemini://alexschroeder.ch:1965/do/comment/GitHub
2020-12-25 08:46:05 gemini://alexschroeder.ch:1965/html/GitHub
2020-12-25 08:46:12 gemini://alexschroeder.ch:1965/raw/Comments_on_GitHub
2020-12-25 08:46:19 gemini://alexschroeder.ch:1965/raw/GitHub
2020-12-25 08:47:45 gemini://alexschroeder.ch:1965/page/2018-08-24_GitHub
2020-12-25 08:47:51 gemini://alexschroeder.ch:1965/do/comment/Comments_on_2018-08-24_GitHub
2020-12-25 08:47:57 gemini://alexschroeder.ch:1965/html/Comments_on_2018-08-24_GitHub
2020-12-25 09:21:26 gemini://alexschroeder.ch:1965/do/more
See what I mean? This is not a human. This is an unsupervised bot, otherwise the operator would have discovered that this makes no sense.
The solution I’m using for my websites is logging IP numbers and using fail2ban to ban IP numbers that request too many pages. The ban lasts 10min, and if you’re a “recidive”, meaning you got banned three times for 10min each, then you’re banned for a week. The problem is that I would prefer a solution that doesn’t log IP numbers at all: that’s better for privacy, and we should write our software such that privacy comes first.
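For the websites, the setup looks roughly like the following jail definitions – a hedged sketch, not my actual configuration: the first jail name and its filter are invented here, while the “recidive” jail ships with fail2ban and watches fail2ban’s own log for repeat offenders:

```
# /etc/fail2ban/jail.local – a sketch; the [alex-apache] jail and its filter
# are hypothetical, the recidive jail is a standard part of fail2ban
[alex-apache]
enabled  = true
port     = http,https
logpath  = /var/log/apache2/access.log
findtime = 2m
maxretry = 20
bantime  = 10m

[recidive]
enabled  = true
logpath  = /var/log/fail2ban.log
findtime = 1d
maxretry = 3
bantime  = 1w
```

The exact thresholds matter less than the escalation: short bans at first, a week-long ban for repeat offenders.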
So I wrote a Phoebe extension called “speed bump”. Here’s what it currently does.
For every IP number, Phoebe records the last 30 requests in the last 60 seconds. If there are more than 30 requests in the last 60 seconds, the IP number is blocked. If somebody is faster on average than two seconds per request, I assume it’s a bot, not a human.
For every IP number, Phoebe records whether the last 30 requests were suspicious or not. A suspicious request is a request that is “disallowed” for bots according to “robots.txt” (more or less). If 10 requests or more of the last 30 requests in the last 60 seconds are suspicious, the IP number is also blocked. That is, even if somebody is as slow as three seconds per request, if they’re all suspicious, I assume it’s a bot, not a human.
When an IP number is blocked, it is blocked for 60s, and there’s a 120s probation time. When you’re blocked, Phoebe responds with a “44” response. This means: slow down!
If the IP number sends another request while it is blocked, or if it gives cause for another block during the probation time, it is blocked again and the blocking time is doubled: the IP is blocked for 120s and probation is extended by 240s. And if it happens again, it is doubled again: blocked for 240s and probation extended by 480s.
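Taken together, the bookkeeping per IP amounts to something like the following sketch. Again, this is an illustration of the rules above with made-up names and a simplified notion of “suspicious”, not the actual extension:

```
# an illustration of the rules above, not the actual speed bump extension
my %speed_data;   # per IP: recent visits, warnings, block time, probation

sub is_suspicious {
  my $path = shift;
  # HTML and raw renderings, comment links, page histories: bots love these
  return $path =~ m!^/(html|raw|history)/! || $path =~ m!^/do/comment/!;
}

sub handle_request {
  my ($ip, $path, $now) = @_;
  my $data = $speed_data{$ip} //= { visits => [], warnings => [] };
  # a request while blocked doubles the block and extends probation
  if ($now < ($data->{blocked_until} // 0)) {
    $data->{block} *= 2;
    $data->{blocked_until} = $now + $data->{block};
    $data->{probation} = $data->{blocked_until} + 2 * $data->{block};
    return "44 $data->{block}\r\n";          # slow down!
  }
  push(@{$data->{visits}}, $now);
  push(@{$data->{warnings}}, is_suspicious($path) ? 1 : 0);
  # forget requests older than 60s
  while (@{$data->{visits}} and $data->{visits}[0] < $now - 60) {
    shift(@{$data->{visits}});
    shift(@{$data->{warnings}});
  }
  my $warnings = grep { $_ } @{$data->{warnings}};
  if (@{$data->{visits}} > 30 or $warnings >= 10) {
    # a fresh block is 60s; a new block during probation doubles the old one
    $data->{block} = ($now < ($data->{probation} // 0)) ? $data->{block} * 2 : 60;
    $data->{blocked_until} = $now + $data->{block};
    $data->{probation} = $data->{blocked_until} + 2 * $data->{block};
    return "44 $data->{block}\r\n";
  }
  return;                                    # not blocked: serve the page
}
```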
The “/do/speed-bump/debug” URL (which requires a known client certificate) shows you the raw data, and the “/do/speed-bump/status” URL (which also requires a known client certificate) shows you a human readable summary of what’s going on.
Here’s an example:
```
Speed Bump Status
 From    To  Warns  Block  Until  Probation  IP
  n/a   n/a   0/ 0    60s    n/a       100m  3.8.145.31
  n/a   n/a   0/ 0    60s     4h        14h  35.176.162.140
  n/a   n/a   0/ 0    60s    n/a         9h  18.134.198.207
-280s   -1s   7/30    n/a    n/a        n/a  3.10.221.60
```
All four of these numbers belong to “Amazon Data Services UK”.
If there are numbers in the “From” and “To” columns, that means the IP made a request in the last 60s. The “Warns” column says how many of the requests were considered “suspicious”. “Block” is the block time. As you can see, none of the bots managed to increase the block time. Why is that? The “Probation” column offers a glimpse into what happened: as the bots kept making requests while they were blocked, they kept adding to their own block.
A bit later:
```
Speed Bump Status
 From    To  Warns  Block  Until  Probation  IP
  n/a   n/a   0/ 0    60s    n/a        83m  3.8.145.31
  n/a   n/a   0/ 0    60s     4h        13h  35.176.162.140
  n/a   n/a   0/ 0    60s    n/a         9h  18.134.198.207
-219s   -7s   3/30    n/a    n/a        n/a  3.10.221.60
```
It seems that the last IP number is managing to thread the needle.
Clearly, this is all very much in flux. I’m still working on it – and finding bugs in my “robots.txt”, unfortunately. I’ll keep this page updated as I learn more. One idea I’ve been thinking about is the size of the time window: how many pages would an enthusiastic human read on a new site? 60 pages in an hour, one minute per page? Or maybe twice that? That would point towards keeping a counter for a long-term average as well: if you’re requesting more than 60 pages in 30min, perhaps a timeout of 30min is appropriate.
The smol net is also a slow net. There’s no need for almost all activity to be crawlers. If anything, crawlers should be a small minority! So, if my sites had 95% human activity and 5% robot activity, I’d be more understanding. But right now, it’s crazy. All the CO₂ wasted, for bots.
I’m on The Butlerian Jihad!
#Gemini #Phoebe
Yep, Phoebe is at fault.
So now I have to find a way to reproduce this result locally. Ideally we’ll find that there’s a single config file at fault... If I’m unlucky it’s because Phoebe 2 switched from Net::Server to Mojo::IOLoop and I’m somehow not closing the connections correctly, or some reference remains that prevents them from closing 100%. Yikes.
– Alex 2021-01-03 09:57 UTC
OK, I think I have a test setup that works. Let me know if you see an error in my thinking.
The test starts a server in the background, listening to a random port ($port) like all the other tests. I need its process id so that I can determine the memory it uses:
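In outline, with the caveat that the script path and the options are assumptions, not a quote from the actual test file:

```
# a sketch of the test setup; script path and options are assumptions
my $port = 10000 + int(rand(10000));   # random port, like the other tests
my $pid = fork();
die "Cannot fork: $!" unless defined $pid;
if ($pid == 0) {
  # child process: run Phoebe in the foreground on the test port
  exec("perl", "blib/script/phoebe", "--port=$port");
  die "Cannot exec phoebe: $!";
}
sleep 1;   # give the server a moment to start listening
```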
Given the process id ($pid), I can now ask for the memory size of the process:
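On Linux that boils down to asking ps for the resident set size. A sketch:

```
# ask ps for the resident set size (RSS) of the process, in KiB
sub resident_size {
  my $pid = shift;
  my $size = `ps --no-headers -o rss -p $pid`;
  chomp($size);
  return $size;
}
```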
So what I’m doing is I get this size after starting the server, then I run 100 requests, and I get the size again. I run just Phoebe, no problem. If I install the speed bump extension (see 2020-12-25 Defending against crawlers), size goes up.
Well, I guess I’m fine with it going up a bit, since we need to keep some numbers in memory, right? It’s only truly a problem if the memory keeps going up. Let’s try.
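The requests themselves are plain Gemini requests, one TLS connection each. A sketch of the loop, with certificate verification turned off because the test server uses a self-signed certificate; host and page name are illustrative:

```
# a sketch of the request loop; host and page name are illustrative
use IO::Socket::SSL;
for my $n (1 .. 100) {
  my $socket = IO::Socket::SSL->new(
    PeerHost => "localhost",
    PeerPort => $port,
    SSL_verify_mode => SSL_VERIFY_NONE)    # self-signed test certificate
      or die "Cannot connect: $!, $SSL_ERROR";
  print $socket "gemini://localhost:$port/page/Test\r\n";
  local $/ = undef;                        # slurp the whole response
  my $response = <$socket>;
  close($socket);
}
```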
100 requests:
1000:
10000:
OK, clearly it just stays the same. Hm...
Perhaps it’s time to take a look at Devel::MAT. Too bad Devel::MAT::Dumper doesn’t install on the server. Something about xlocale.h missing. Installing libnewlib-dev (which provides /usr/include/newlib/xlocale.h) appears to have no effect:
OK. Before doing anything else, I’m going to use Perlbrew to upgrade Perl 5.26 to 5.32. Then we’ll see about installing Devel::MAT.
– Alex 2021-01-03 17:38 UTC
OK, that worked! I wrote a little extension that allows me to write a heap dump to disk on the server.
Install it in your conf.d directory, set the known fingerprints to the hash (fingerprint) of your favourite client certificate in your config file (or edit the heap-dump.pl file):
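The file itself isn’t reproduced here. As a rough idea, an extension along these lines would do it – the @extensions hook and the fingerprint handling are assumptions modelled on how other Phoebe extensions look, only Devel::MAT::Dumper::dump is the essential call:

```
# a rough sketch, not the actual heap-dump.pl; the extension hook and the
# fingerprint check are assumptions modelled on other Phoebe extensions
package App::Phoebe;
use Devel::MAT::Dumper;
our (@extensions, @known_fingerprints);

push(@extensions, \&heap_dump);

sub heap_dump {
  my ($stream, $url) = @_;
  return 0 unless $url =~ m!^gemini://[^/]*/do/heap-dump$!;
  my $fingerprint = $stream->handle->get_fingerprint();
  if ($fingerprint and grep { $_ eq $fingerprint } @known_fingerprints) {
    Devel::MAT::Dumper::dump("phoebe.pmat");   # write the heap dump to disk
    $stream->write("20 text/plain\r\n");
    $stream->write("Heap Dump Saved\n");
  } elsif ($fingerprint) {
    $stream->write("61 Your client certificate is not authorized\r\n");
  } else {
    $stream->write("60 You need a client certificate\r\n");
  }
  return 1;
}
```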
Install the Devel::MAT::Dumper package on your server, and Devel::MAT on your development machine. Restart Phoebe.
Finally, visit the path /do/heap-dump on your Phoebe instance. You’ll be prompted for your client certificate. Point your Gemini client to the one whose fingerprint you added to your config above, and now it should say: “Heap Dump Saved” – copy the file it created to your local machine and examine it, following the user guide linked above.
Here’s the output right after I restarted it:
I’m going to let it run over night and we’ll see how it does tomorrow.
– Alex 2021-01-03 21:34 UTC
OK, a few hours later...
That is not very encouraging. There apparently is no single thing that grew in limitless ways. 😢
What I don’t really understand is that the numbers I’m seeing are all around 20 MiB, and yet Munin is telling me that the minimum memory is 97.31 MiB and the maximum is 266.54 MiB (also the current value). So where are the 170 MiB I’m missing?
Hm. Let’s take a look at the two scalar values we saw up in the list of “largest” SVs. Surprisingly, they are exactly the same size. Let’s examine them.
OK, that is surprising. I don’t recognize this text. Perhaps there are more like it? After all, the count shows 200878 scalars. What’s with the refcount? Who is keeping those references?
OK, nobody is referring to the first one, which is even more surprising. Perhaps it’s about to be garbage collected? The second value gives us a hint:
The SSL stuff. 😭 This reminds me of the only test in gemini-diagnostics Phoebe is currently failing: “Server should send a close_notify alert before closing the connection.”
Perhaps that’s my problem. The SSL/TLS connections aren’t closed correctly, so they hang around.
I’ll have to think about this some more.
– Alex 2021-01-04 08:12 UTC
Ten hours later, Munin says 418.04 MiB, and the heap dump says:
– Alex 2021-01-04 17:40 UTC
OK, let’s look at the ever-growing hash at 0x558c8d32a720:
Ugh, SSL again! Verify callback – that also rings a bell. Phoebe code:
Hm. So now I’m thinking: a hash with 16,000 SSL sockets?
– Alex 2021-01-04 20:05 UTC
I’ve made a small change: as the stream closes, I undefine the $data reference. I figured that something close to this loop (“for every socket?”) must be keeping references around.
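In terms of Mojo::IOLoop, the change is tiny – roughly this, where $data stands for whatever per-connection state the closure was holding on to:

```
# roughly the change in question: when the stream closes, explicitly drop
# the per-connection data so the closure no longer pins it in memory
$stream->on(close => sub {
  my $stream = shift;
  undef($data);
});
```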
Twelve hours later and Munin reports a current RSS of 206 MiB. That’s much better than anticipated! I’ll keep it running for another day or so, see how it develops.
– Alex 2021-01-06 07:55 UTC
A few more hours and we’re at 174 MiB, so memory actually went down. Amazing! 😄
– Alex 2021-01-06 12:14 UTC
Too bad memory is climbing again. That wasn’t the answer, or not the entire answer, sadly! Back at 325 MiB.
When I look at this, at all the complications, and the memory used, and compare it to the implementation based on Net::Server (“old style”) with its 23 MiB (used for the Gopher and Finger servers on ports 70 and 79, as well as the SSL-enabled Gopher on port 7334) – I don’t have happy thoughts.
I don’t understand what this means… It looks like everything is OK?
Comparing the two heap dumps I have:
The difference seems minimal. And yet, ps -e rss reports 325 MiB instead of 174 MiB used.
– Alex 2021-01-06 21:30 UTC
Well… What do we have? I have three heap dump files for the current process:
RSS reported by ps is 174 MiB, 325 MiB, and 822 MiB. It’s not looking good. What amazes me, however, is that the memory usage as reported by Munin using “/proc/meminfo” doesn’t seem to reflect that. Or maybe it does?
Anyway, I’m experimenting with the pmat tools. The output of “pmat-leakreport” is extremely long. It’s pages and pages of this:
I don’t know what to make of this. The documentation says:
So, all of these are new appearances, and they don’t disappear again. 😟
Let’s look at them, starting at the end:
OK, I’d say it’s pretty clear by now: the “ssleay_verify_callback” stuff is bonkers!
Verify callback – remember the Phoebe code? Also note how “%default” is mentioned above.
But I just read the IO::Socket::SSL man page. It says: “If you have a server and it looks like you have a memory leak you might check the size of your session cache. Default for Net::SSLeay seems to be 20480, see the example for SSL_create_ctx_callback for how to limit it.”
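Following that example, the option to pass when setting up the listener looks something like this – how exactly it gets wired into Phoebe’s TLS setup isn’t shown here, and 128 is just the limit I picked:

```
# along the lines of the IO::Socket::SSL documentation: cap the session
# cache when the SSL context is created
use Net::SSLeay;
my %ssl_options = (
  SSL_create_ctx_callback => sub {
    my $ctx = shift;
    Net::SSLeay::CTX_sess_set_cache_size($ctx, 128);
  },
);
```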
OK. I’m going to try that instead. The previous code had no effect. I’m limiting the session cache to 128, like in the example provided. Restarted the server. Memory is down below 100 MiB again:
– Alex
I’m surprised. This seems to have helped!
Ignoring the spike of Emacs Wiki and just looking at Phoebe: we seem to be stable at 250 MiB.
Cool!
Hm… Then again, at the same time, I changed the “phoebe.service” file for systemd:
That is… uncannily close! So here’s what I’m doing now: I decreased the SSL session cache from 128 to 64, and I didn’t change the “phoebe.service” file. Let’s see whether memory use is halved (i.e. the cache is relevant) or not (i.e. the memory limits are what’s doing the work).
– Alex 2021-01-09 15:00 UTC
OK, reducing the cache size didn’t help. We’re back at 250 MiB. This is a memory leak of sorts, but the resource limit I have in the service file is working.
I think I’m going to reduce those limits:
Let’s see whether it works well enough!
– Alex 2021-01-10 09:54 UTC
It works well enough. I added this to the documentation and stopped trying to understand where Mojo::IOLoop::TLS, IO::Socket::SSL, Net::SSLeay, or OpenSSL are failing.
– Alex 2021-01-11 11:26 UTC