2020-01-31 Banning myself with fail2ban

Recently I have noticed that I’m sometimes banned from my own websites. That is, the site is not reachable, but when I check any of the “is the site down for everybody or is it just me?” sites, it’s always just me. I also cannot SSH to the machine unless I use the IPv6 address directly.

OK, so fail2ban is banning the IPv4 of my home network. Why? Is some app I’m using bombarding the site with requests? Let’s check.

Visiting ip4.me tells me my IPv4 address. Grepping /var/log/fail2ban.log, I see that it has been banned at 11:03 and at 13:03 today, by the alex-apache rule.
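
That is, something like this, where 203.0.113.45 stands in for my actual address:

# the IP is a placeholder for my real one
grep 203.0.113.45 /var/log/fail2ban.log

And since I don't want to wait for the ban to expire, fail2ban-client can lift it right away:

fail2ban-client set alex-apache unbanip 203.0.113.45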

This rule counts every access as a potential fail and ignores some URLs that I deem harmless (static files such as CSS files, fonts, pictures, podcast episodes, PDF files, and so on):

failregex = ^(www\.)?(alexschroeder\.ch|arabisch-lernen\.org|campaignwiki\.org|communitywiki\.org|emacswiki\.org|flying-carpet\.ch|korero\.org|oddmuse\.org|orientalisch\.info):[0-9]+ <HOST>

ignoreregex = ^[^"]*"GET /(robots\.txt |favicon\.ico |[^/ ]+.(css|js) |cgit/|css/|fonts/|pics/|export/|podcast/|1pdc/|gallery/|static/|munin/|osr/|indie/|rpg/|face/|traveller/|hex-describe/|text-mapper/|contrib/pics/)
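
To illustrate, here's a made-up access log line in the format these regexps run against, with the virtual host and port up front so that <HOST> ends up matching the client IP:

alexschroeder.ch:443 203.0.113.45 - - [31/Jan/2020:11:02:59 +0100] "GET /pdfs/ HTTP/1.1" 200 1234 "-" "Mozilla/5.0"

The failregex matches it (a request to one of my sites) and the ignoreregex does not (pdfs/ is not on the list), so it counts towards the limit.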

Both regexps are from /etc/fail2ban/filter.d/alex-apache.conf. The actual limits are defined in /etc/fail2ban/jail.d/alex.conf:

[alex-apache]
enabled = true
port    = http,https
logpath = %(apache_access_log)s
findtime = 40
maxretry = 20

So basically you’re allowed 20 requests in 40s, not counting the requests matching ignoreregex.
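
fail2ban ships with fail2ban-regex, which tests a filter against a log file; something like this shows how many lines match and how many are ignored (assuming the Debian default path for the Apache access log):

# second argument is the filter file; failregex and ignoreregex are read from it
fail2ban-regex /var/log/apache2/access.log /etc/fail2ban/filter.d/alex-apache.conf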

OK, let’s see the requests. Here’s a little Perl script I wrote:

#!/usr/bin/env perl
use strict;
use warnings;

# Parse Apache access log lines (combined format, prefixed with vhost:port)
# from STDIN and count how often each request shows up.
my (%requests, $total);
while (<STDIN>) {
  m/^(\S+:\d+) ([0-9.]+) - - \[(.*?)\] "(.*?)" (\d+) (\d+|-) "(.*?)" "(.*?)"/
    or warn "Cannot parse:\n$_" and next;
  my ($host, $ip4, $date, $request, $code, $size, $referrer, $agent)
    = ($1, $2, $3, $4, $5, $6, $7, $8);
  $requests{$request}++;
  $total++;
}
# Sort requests by frequency and print each with its share of the total.
my @result = sort { $requests{$b} <=> $requests{$a} } keys %requests;
foreach my $label (@result) {
  printf "%70s %10d   %3d%%\n", $label, $requests{$label},
    100 * $requests{$label} / $total;
}

Let’s use it to report on today’s log entries from my IP:
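
Something like this, where count-requests.pl is a made-up name for the script above and 203.0.113.45 stands in for my address:

grep 203.0.113.45 /var/log/apache2/access.log | perl count-requests.pl

The top of the output: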

    GET /pdfs/spellcasters/ HTTP/1.1         12    19%
                 GET /pdfs/ HTTP/1.1         12    19%
               GET /jewelry HTTP/1.1          8    13%
             GET /wiki/Apps HTTP/1.1          8    13%

Hm, who the hell is requesting /pdfs/spellcasters/ and /pdfs/‽ Not me, that’s for sure! At least not that I know of. Then again, I’d say that the /pdfs directory contains just static files, so adding it to the ignoreregex should be no problem.
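
Something like this, with pdfs/ slotted into the alternation (untested):

ignoreregex = ^[^"]*"GET /(robots\.txt |favicon\.ico |[^/ ]+.(css|js) |cgit/|css/|fonts/|pics/|export/|podcast/|pdfs/|1pdc/|gallery/|static/|munin/|osr/|indie/|rpg/|face/|traveller/|hex-describe/|text-mapper/|contrib/pics/)

After that, fail2ban needs a reload to pick up the change:

fail2ban-client reload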

I still wonder who does this, though. Is it the Firefox app on iOS?
