2019-01-20 fail2ban to watch over my sites

I’m hosting my sites on a tiny server, and it’s all dynamic web apps and wikis and CGI scripts that take CPU and memory resources. That’s not a problem when humans are browsing the site. But when people try to download an entire site using automated tools that don’t wait between requests (leeches), or that ignore the meta information in HTTP headers and HTML tags, they overload my sites and get lost in the maze of links: page history, recent changes, old page revisions. You can keep downloading forever if you’re not careful.

Enter fail2ban. This tool watches log files for regular expressions (filters) and if it finds matches, it adds IP numbers to the firewall. You then tell it which filters to apply to which log files and how many hits you’ll allow (a jail).
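
Under the hood, a ban is just a firewall rule. A minimal sketch of checking this, assuming fail2ban 0.9 or later with the default iptables action, where chains are named f2b- plus the jail name (the jails are defined further down; older versions used a fail2ban- prefix instead):

# banned hosts show up as REJECT rules in the jail's chain
iptables -L f2b-alex-apache -n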

When writing the rules, I need to be careful: it’s OK to download a lot of static files. I just don’t want leeches, or spammers trying to brute-force the questions I sometimes ask before people get to edit their first page on my sites.

Here’s my setup:

alex-apache.conf

/etc/fail2ban/filter.d/alex-apache.conf

This is for the Apache web server with virtual hosts. The comment shows an example entry.

Notice the ignoreregex making sure that requests for some of the apps and directories don’t count.

Note that only newer versions of fail2ban will be able to match IPv6 hosts.

# Author: Alex Schroeder <alex@gnu.org>

[Definition]
# ANY match in the logfile counts!
# communitywiki.org:443 000.000.000.000 - - [24/Aug/2018:16:59:55 +0200] "GET /wiki/BannedHosts HTTP/1.1" 200 7180 "https://communitywiki.org/wiki/BannedHosts" "Pcore-HTTP/v0.44.0"
failregex = ^[^:]+:[0-9]+ <HOST> 

# Except cgit, css files, images...
# alexschroeder.ch:443 0:0:0:0:0:0:0:0 - - [28/Aug/2018:09:14:39 +0200] "GET /cgit/bitlbee-mastodon/objects/9b/ff0c237ace5569aa348f6b12b3c2f95e07fd0d HTTP/1.1" 200 3308 "-" "git/2.18.0"
ignoreregex = ^[^"]*"GET /(robots\.txt |favicon\.ico |[^/ ]+\.(css|js) |cgit/|css/|fonts/|pics/|1pdc/|gallery/|static/|munin/|osr/|indie/|face/|traveller/|hex-describe/|text-mapper/)
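
Before arming the filter, you can check it against a real log file using fail2ban-regex, which ships with fail2ban. A quick sketch; the log path is whatever %(apache_access_log)s resolves to on your system (on Debian, something like /var/log/apache2/access.log):

# report how many lines match failregex and how many are
# skipped thanks to ignoreregex
fail2ban-regex /var/log/apache2/access.log /etc/fail2ban/filter.d/alex-apache.conf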

alex-gopher.conf

/etc/fail2ban/filter.d/alex-gopher.conf

Yeah, I also make the wiki available via gopher...

# Author: Alex Schroeder <alex@gnu.org>

[Init]
# 2018/08/25-09:08:55 CONNECT TCP Peer: "[000.000.000.000]:56281" Local: "[000.000.000.000]:70"
datepattern = ^%%Y/%%m/%%d-%%H:%%M:%%S

[Definition]
# ANY match in the logfile counts!
failregex = CONNECT TCP Peer: "\[<HOST>\]:\d+"
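
Same idea here. fail2ban-regex also reports whether the datepattern matched the timestamps, which is worth checking since the gopher server writes its own log format:

# test failregex and the custom datepattern against the gopher log
fail2ban-regex /home/alex/farm/gopher-server.log /etc/fail2ban/filter.d/alex-gopher.conf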

alex.conf

/etc/fail2ban/jail.d/alex.conf

Now I need to tell fail2ban which log files to watch and which filters to use.

Note how I assume a human will click a link about every 2s. Bursts are OK, but 20 hits in 40s is the limit.

Notice that the third jail just reuses the filter of the second jail.

[alex-apache]
enabled = true
port    = http,https
logpath = %(apache_access_log)s
findtime = 40
maxretry = 20

[alex-gopher]
enabled = true
port    = 70
logpath = /home/alex/farm/gopher-server.log
findtime = 40
maxretry = 20

[alex-gopher-ssl]
enabled = true
filter  = alex-gopher
port    = 7443
logpath = /home/alex/farm/gopher-server-ssl.log
findtime = 40
maxretry = 20
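
After saving the jail file, reload fail2ban and check that the jails came up. A sketch using fail2ban-client; the IP in the last line is a placeholder for the day I ban myself by accident:

# pick up the new configuration
fail2ban-client reload

# list the active jails, then inspect one of them
fail2ban-client status
fail2ban-client status alex-apache

# lift a ban by hand (placeholder IP)
fail2ban-client set alex-apache unbanip 203.0.113.1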

Comments

Blimey. I don’t remember the last time anyone did anything gophery I noticed.

Blue Tyson 2019-01-30 10:47 UTC


The #gopher discussion is alive and well on Mastodon... 😊

– Alex Schroeder 2019-01-30 13:11 UTC


Something to review when I have a bit of time: Web Server Security by @infosechandbook.

– Alex Schroeder 2019-02-01 18:11 UTC

