Comments on 2020-12-25 Defending against crawlers

Wouldn’t you get most of them by just blocking everything with “[Bb]ot” in the User-Agent?

Adam 2020-12-25 16:15 UTC


It depends on what your goal is, and on the protocol you’re talking about. In the second half of my post I was talking about Gemini. That is a very simple protocol: establish a TCP/IP connection with TLS, send a URI, get back a status header line + content. That is, the request does not contain any header lines, unlike HTTP.
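To show just how simple, here is a sketch of a complete request in Perl, using IO::Socket::SSL; certificate verification is skipped for brevity, since Gemini commonly relies on self-signed certificates:

use Modern::Perl;
use IO::Socket::SSL;
# open a TLS connection to a Gemini server
my $socket = IO::Socket::SSL->new(
  PeerHost => 'transjovian.org',
  PeerPort => 1965,
  SSL_verify_mode => SSL_VERIFY_NONE) # sketch only: no certificate check
  or die "error=$!, ssl_error=$SSL_ERROR";
print $socket "gemini://transjovian.org/\r\n"; # the entire request: one URI
my $header = <$socket>;                        # status header, e.g. "20 text/gemini"
print $header;
print while <$socket>;                         # the content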

As for HTTP, which I mention in the first half: if a search engine were to crawl the new pages on my sites, slowly, then I wouldn’t mind so much, as long as the search engine is one intended for humans (these days that would be Google and Bing, I guess). I’d like to block those that misbehave, or that have goals I disagree with, and I’d like not to block the future search engine that is going to dethrone Google and Bing. I need to keep that hope alive, in any case. So if I want a nuanced result, I need a nuanced response. Slow down bots that can take a hint. Block bots that don’t. Block bots from dubious companies. And so on.
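In terms of robots.txt, such a nuanced policy might look something like the following sketch; MJ12bot stands in for a bot I’d block outright, and Crawl-delay is a non-standard directive that only some crawlers honour:

# block a bot from a dubious company
User-agent: MJ12bot
Disallow: /

# slow down everybody else who can take a hint
User-agent: *
Crawl-delay: 20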

– Alex 2020-12-25 21:47 UTC


Here’s the current status of my “speed bump” extension to Phoebe:

Speed Bump Status
 From    To Warns Block Until Probation IP
 -10m   -9m 11/11  365d  364d      729d 3.11.81.100
 -12h  -12h 11/11  365d  364d      729d 18.130.221.176
 -12h  -12h 11/13  365d  364d      729d 3.9.134.250
 -14h  -14h 11/15  365d  364d      729d 3.8.127.24
 -14h  -14h 11/13  365d  364d      729d 167.114.7.65
 -10h  -10h 11/12  365d  364d      729d 18.134.146.76
 -16m  -14m 11/12  365d  364d      729d 3.10.232.193

All of these IP numbers have blocked themselves for over a year (or until I restart the server). Using “whois” to identify the organisation (and verifying my guess for tilde.team using “dig”) we get the following:

3.11.81.100     Amazon Data Services UK
18.130.221.176  Amazon Data Services UK
3.9.134.250     Amazon Data Services UK
3.8.127.24      Amazon Data Services UK
167.114.7.65    Tilde Team
18.134.146.76   Amazon Data Services UK
3.10.232.193    Amazon Data Services UK

Oh well. Every new IP number is going to make 10–20 requests, and each one adds a line. We could improve upon the model: once an IP number is blocked for a year (the maximum), use WHOIS to look up its IP number range. Taking the first number as an example, we find that the “NetRange” is 3.8.0.0 - 3.11.255.255 and the “CIDR” is 3.8.0.0/14. Keep watching: once three IP numbers from that entire range are blocked, there’s no need to block them all individually, we can just block the whole range. In our example, we would have reacted once we had blocked 3.11.81.100, 3.9.134.250, and 3.8.127.24. At that point, 3.10.232.193 would have been blocked preemptively.
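A sketch of that escalation in Perl, where whois_cidr and block_range are hypothetical helpers for the WHOIS lookup and for adding a whole range to the block list:

my %blocked_in_range; # CIDR range => count of individually blocked IPs
sub escalate {
  my $ip = shift;               # an IP number that just blocked itself
  my $range = whois_cidr($ip);  # hypothetical: WHOIS lookup, e.g. "3.8.0.0/14"
  # once three IP numbers from the same range are blocked, block the range
  block_range($range) if ++$blocked_in_range{$range} >= 3; # hypothetical
}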

Compare this to how GUS works. Indexing runs are made a few times a month. The IP numbers the requests come from are documented. They don’t change, unlike the crawler (or crawlers?) running on Amazon. I’m tempted to say the bot operators hosting their bots on Amazon look like they are actively trying to evade the block. It feels like trespassing and it makes me angry.

– Alex 2020-12-26


Tilde Team is probably people, not a crawler. I gave more details in a reply to your toot.

petard 2020-12-26 19:21 UTC


For those who don’t follow us on Mastodon… 😁 I replied with a screenshot of more or less the following, saying that the requests made from Tilde Team seem to indicate that this is an unsupervised crawler, not humans. The vast majority of requests are from a bot.

2020-12-27 01:20:31 gemini://alexschroeder.ch:1965/2008-05-09_Ontology_of_Twitter
2020-12-27 01:20:40 gemini://alexschroeder.ch:1965/2011-02-14_The_Value_of_a_Web_Site
2020-12-27 01:20:48 gemini://alexschroeder.ch:1965/2013-01-23_Security_of_Code_Downloaded_from_Online_Sources
2020-12-27 01:20:54 gemini://alexschroeder.ch:1965/2016-05-28_nginx_as_a_caching_proxy
2020-12-27 01:21:01 gemini://alexschroeder.ch:1965/Comments_on_2011-02-14_The_Value_of_a_Web_Site
2020-12-27 01:24:54 gemini://transjovian.org:1965/gemini/diff/common%20wiki%20structure/1
2020-12-27 01:25:01 gemini://transjovian.org:1965/gemini/diff/common%20wiki%20structure/2
2020-12-27 01:25:08 gemini://transjovian.org:1965/gemini/diff/common%20wiki%20structure/3
2020-12-27 01:25:15 gemini://transjovian.org:1965/gemini/do/atom
2020-12-27 01:25:23 gemini://transjovian.org:1965/gemini/do/rss
2020-12-27 01:25:29 gemini://transjovian.org:1965/gemini/page/common%20wiki%20structure/1
2020-12-27 01:25:37 gemini://transjovian.org:1965/gemini/page/common%20wiki%20structure/2
2020-12-27 01:25:43 gemini://transjovian.org:1965/gemini/page/common%20wiki%20structure/3
2020-12-27 01:46:49 gemini://communitywiki.org:1965/do/comment/BestPracticesForWikiTheoryBuilding
2020-12-27 01:46:58 gemini://communitywiki.org:1965/html/BestPracticesForWikiTheoryBuilding
2020-12-27 01:47:04 gemini://communitywiki.org:1965/page/PromptingStatement
2020-12-27 01:47:11 gemini://communitywiki.org:1965/page/WeLoveVolunteers
2020-12-27 01:47:18 gemini://communitywiki.org:1965/raw/BestPracticesForWikiTheoryBuilding
2020-12-27 01:47:26 gemini://communitywiki.org:1965/raw/Comments_on_BestPracticesForWikiTheoryBuilding
2020-12-27 01:47:33 gemini://communitywiki.org:1965/tag/inprogress
2020-12-27 01:47:41 gemini://communitywiki.org:1965/tag/practice
2020-12-27 01:47:48 gemini://communitywiki.org:1965/tag/practices
2020-12-27 01:47:56 gemini://communitywiki.org:1965/tag/prescription
2020-12-27 01:48:02 gemini://communitywiki.org:1965/tag/prescriptions
2020-12-27 01:48:11 gemini://communitywiki.org:1965/tag/recommendation
2020-12-27 01:48:16 gemini://communitywiki.org:1965/tag/recommendations
2020-12-27 01:48:23 gemini://communitywiki.org:1965/tag/theorybuilding
2020-12-27 01:51:05 gemini://communitywiki.org:1965/do/comment/HansWobbe
2020-12-27 01:51:08 gemini://communitywiki.org:1965/html/HansWobbe
2020-12-27 01:57:51 gemini://communitywiki.org:1965/page/BlikiNet
2020-12-27 02:17:04 gemini://communitywiki.org:1965/page/ChainVideo
2020-12-27 02:28:46 gemini://communitywiki.org:1965/page/CwbHwoAg
2020-12-27 02:58:36 gemini://communitywiki.org:1965/page/DfxMapping

Suspicious signs: the requests arrive at a steady machine rhythm of one every few seconds, the pages come in alphabetical order per site, and they include all the alternate views (diffs, raw copies, tag pages) that no human reads in bulk.

These are not people. This is a crawler verifying its database. And ignoring robots.txt.

I think the main problem is that I run multiple sites served via Gemini with thousands of pages, and all the pages have links to alternate views (page history, page diff, HTML copy, raw copy, comments prompt), so perhaps mine are the only sites where crawlers might actually get to their limits. If somebody new sets up a Gemini server and serves two score static gemtext files, then these crawlers do little harm. But as it stands, there’s a constant barrage on my servers that bears no relation to the amount of human activity.

Some of these URIs are violating robots.txt. But it’s not just that. I also feel a moral revulsion: all the CO₂ wasted shows a disregard for resources these people are not paying for. This is exactly the problem our civilisation faces, on a small scale.

Thus, whereas GoogleBot and BingBot might be nominally useful (the wealth concentration we’ve seen as a consequence of their data gathering notwithstanding), the ratio of change to crawling is and remains important. Once a site is crawled, how often should you crawl it again, and which URLs? The current system is so wasteful.

Anyway, I have a lot of anger in me.

– Alex 2020-12-27


That’s a good summary of our conversation. My suggestion that requests from Tilde Team were probably people was based on the fact that it’s a public shell host that people use to browse Gemini. (I have an account there and use it happily. It’s mostly a nice place with people I like to talk to. I am not otherwise affiliated.)

Seeing that log dump makes it clear that someone on that system is behaving badly.

petard 2020-12-27 14:32 UTC


Current status:

Speed Bump Status
 From    To Warns Block Until Probation IP            CIDR
 -33m  -33m 30/30   28d   27d       55d 78.47.222.156 78.46.0.0/15
 -17h  -17h 11/11   28d   27d       55d 3.9.165.84 3.8.0.0/14
 -46h  -46h 17/17   28d   26d       54d 18.130.170.163 18.130.0.0/16
  -2d   -2d 11/11   28d   26d       54d 18.134.12.41 18.132.0.0/14
 -44h  -44h 11/11   28d   26d       54d 18.132.209.113 18.132.0.0/14
 -22h  -22h 13/13   28d   27d       55d 35.178.128.94 35.178.0.0/15
 -38h  -38h 12/12   28d   26d       54d 3.8.185.90 3.8.0.0/14
 -17h  -17h 12/12   28d   27d       55d 35.177.73.123 35.176.0.0/15
 -42h  -42h 11/11   28d   26d       54d 18.130.151.101 18.130.0.0/16
  -5h   -5h 13/13   28d   27d       55d 167.114.7.65 167.114.0.0/17
 -17h  -17h 14/14   28d   27d       55d 52.56.225.165 52.56.0.0/16
 -42h  -42h 12/12   28d   26d       54d 18.135.104.61 18.132.0.0/14
  -8h   -8h 12/12   28d   27d       55d 35.179.91.110 35.178.0.0/15
  -4h   -4h 11/11   28d   27d       55d 18.130.166.9 18.130.0.0/16
 -20h  -20h 11/11   28d   27d       55d 52.56.232.202 52.56.0.0/16
 -36h  -36h 13/13   28d   26d       54d 35.178.91.123 35.178.0.0/15
 -36h  -36h 11/11   28d   26d       54d 3.8.195.248 3.8.0.0/14

Until CIDR
  27d 18.130.0.0/16
  27d 3.8.0.0/14
  27d 35.178.0.0/15
  26d 18.132.0.0/14

Almost all of them are Amazon Data Services UK, a few are Hetzner, and some are OVH Hosting.

Seeing whole net ranges being blocked makes me happy. The code seems to work as expected.

– Alex 2020-12-29 16:35 UTC


The list keeps growing. I decided to write a script that would retrieve this page for me, and call WHOIS for all the networks identified.

#!/usr/bin/perl
use Modern::Perl;
use Net::Whois::IP qw(whoisip_query);
say "Requesting data";
# fetch the speed bump status page, authenticating with my client certificate
my $data = qx(gemini --cert_file=/home/alex/.emacs.d/elpher-certificates/alex.crt --key_file=/home/alex/.emacs.d/elpher-certificates/alex.key gemini://transjovian.org/do/speed-bump/status);
say "Reading blocked networks";
my %seen;
# match the network part of every IPv4 or IPv6 CIDR range on the page
while ($data =~ /(\d+\.\d+\.\d+\.\d+|[0-9a-f]+:[0-9a-f]+:[0-9a-f:]+)\/\d+/g) {
  my $ip = $1;
  next if $seen{$ip}; # query each network only once
  $seen{$ip} = 1;
  my $response = whoisip_query($ip);
  # different registries use different keys for the organisation name
  my $name = $response->{OrgName} || $response->{netname} || $response->{Organization};
  if ($name) {
    say "$ip $name";
  } else {
    # unknown format: dump all the keys so the list above can be extended
    say "$ip";
    for (keys %$response) {
      say "  $_: $response->{$_}";
    }
  }
}

Result:

Requesting data
Reading blocked networks
3.8.0.0 Amazon Data Services UK
35.178.0.0 Amazon Data Services UK
18.130.0.0 Amazon Data Services UK
35.176.0.0 Amazon Data Services UK
52.56.0.0 Amazon Data Services UK
78.46.0.0 HETZNER-nbg1-dc1
167.114.0.0 OVH Hosting, Inc.
18.132.0.0 Amazon Data Services UK
67.205.144.0 DigitalOcean, LLC

Oh hey, Digital Ocean is new.

– Alex 2020-12-30 11:10 UTC


Let’s check the number of requests blocked, relying on the Phoebe log files. “Looking at <some URL>” is an info log message it prints for every request. Let’s count them:

# journalctl --unit phoebe --since 2020-12-29|grep "Looking at"|wc -l
11700

Let’s see how many are caught by network range blocks:

# journalctl --unit phoebe --since 2020-12-29|grep "Net range is blocked"|wc -l
1812

Let’s see how many of them are just lone IP numbers being blocked:

# journalctl --unit phoebe --since 2020-12-29|grep "IP is blocked"|wc -l
2862

And first time offenders:

# journalctl --unit phoebe --since 2020-12-29|grep "Blocked for"|wc -l
8

I guess that makes 4682 blocked bot requests (1812 + 2862 + 8) out of 11700 requests, or 40% of all requests.

The good news is that more than half seem to be legit? Or are they? I’m growing more suspicious all the time.

Let’s check HTTP access!

# journalctl --unit phoebe --since 2020-12-29|grep "HTTP headers"|wc -l
320
# journalctl --unit phoebe --since 2020-12-29|grep "HTTP headers"|perl -e 'while(<STDIN>){m/(\w*bot\w*)/i; print "$1\n"}'|sort|uniq --count
      1 
     22 bingbot
      2 Bot
     80 googlebot
     34 Googlebot
     88 MJ12bot
     32 SeznamBot
     61 YandexBot

That is, of the 11700 requests I’m looking at, I’ve had 320 web requests, of which 319 (!) were from bots.

I think the next step will be to change the robots.txt served via the web to disallow them all.
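Presumably something like this:

User-agent: *
Disallow: /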

– Alex 2020-12-30 11:40 UTC


Hm, but blocking IP addresses in the style you mention would e.g. block my hacker space, where I’ve told a bunch of nerds that Gemini is cool, and they should have a look at … your site. And if it isn’t a hacker space, it’s a student dorm, or similar, behind NAT.

I understand your anger, but blocking IP addresses in the end isn’t better than Hotmail & Google not accepting mail from my host: they think it’s suspicious because it’s small (it has proper DNS, is on no blacklist, and so on); they just ASSUME it could be wrong. The Internet is “everyone can talk to everyone”, and my approach is to make that happen. Every counter-approach breaks the Internet, IMHO. YMMV.

– Götz 2021-01-05 23:40 UTC


How would you defend against bad actors, then? Simply accept it as a fact of life and add better infrastructure, or put the “smol net” behind a login? If all I have is the IP number of a peer connecting to my server, then any consequences must relate to that IP number, or there can be no consequences at all. That’s how I understand the situation.

– Alex Schroeder 2021-01-06 11:09 UTC

