Phoebe

Phoebe is my Gemini-first wiki. That is, it’s a wiki, and its primary user interface is Gemini. You can also read the wiki via the web just fine. It reads a config file on startup, allowing you to modify it.

The one instance I run has a huge config file which allows me to serve this wiki (and a bunch of others) using Gemini.

2021-02-21 Perl dependencies for Phoebe

Every now and then I’m getting reports from the CPAN testers that my modules fail tests because they don’t declare all their dependencies in the “Makefile.PL” file. The problem is that on those systems where the modules happen to be installed – such as mine – these problems don’t surface.

The first step to solve this is to write a test that uses “Test::Prereq” to check whether all the modules used are declared as dependencies in the “Makefile.PL”. So far so good. I added many dependencies I had considered unnecessary – I felt they were part of the Perl core since 5.26.0, the starting point I picked for Phoebe. But now I want to make sure, and I want a test to make sure I’m not slipping.
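
The test itself is tiny. A minimal sketch – the file name t/prereq.t is my choice, not necessarily what’s in the repository:

# t/prereq.t: fail if the code uses a module that Makefile.PL doesn’t declare
use Test::Prereq;
prereq_ok();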

So what I did now is use “perlbrew” to get a new Perl version without any modules installed (more or less?), rerun “perl Makefile.PL” to regenerate a Makefile for the new Perl I’m using, run “perlbrew install-cpanm” to get a new “cpanm”, and then use “cpanm .” to attempt an install with all the dependencies.
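
Roughly, these were the steps (the version number is just an example):

$ perlbrew install perl-5.32.0    # a fresh Perl without any modules
$ perlbrew use perl-5.32.0
$ perl Makefile.PL                # regenerate the Makefile for this Perl
$ perlbrew install-cpanm          # get a cpanm for the new Perl
$ cpanm .                         # install Phoebe plus all declared dependencies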

The result? It’s still building the list of dependencies. 😴💤

Time passes.

OK, I can reproduce the problem!

Building and testing App-phoebe-2.1 ... FAIL
! Installing . failed. See /home/alex/.cpanm/work/1613930095.24976/build.log for details. Retry with --force to force install it.
56 distributions installed

Whaaaat! 😠

Well, it seems that somewhere, somehow, “DateTime::Format::ISO8601” is being used, so I added that as a dependency, and that in turn pulls in a ton of other modules… Oh well.

I’ve been chatting with @AFresh1 and he says that dependencies for optional code are a sign that perhaps that optional code should go into a separate distribution. Hm. I guess that makes sense. I’m currently just wondering about the architecture of the thing.

Extensions right now: there is a directory you can create, and any Perl source file you put there gets loaded on startup. Where do these files come from? From my repo. That is, if you install Phoebe using CPAN, these files are unpacked and tested, but then they aren’t installed anywhere.

Extensions could instead be: there is a module you can install, such as App::phoebe::gopher. In your config file, you’d write “use App::phoebe::gopher”. Done?
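
The config file side of it would then be trivial – a sketch, using the hypothetical module name from above:

# config: load the gopher extension like any other Perl module
use App::phoebe::gopher;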


2021-02-16 Perl upgrading woes

I’ve got the first reports from CPAN testers failing with some TLS issues:

Mojo::Reactor::Poll: I/O watcher failed: Client creation failed: SSL connect attempt failed error:14094458:SSL routines:ssl3_read_bytes:tlsv1 unrecognized name

This seems to be related to upgrading IO::Socket::SSL from 2.068 to 2.069, but when I check the commit log, I see nothing suspicious.

When you upgrade Mojolicious from 8.67 to 9.0 things are even worse because they removed the tls_verify option for Mojo::IOLoop::TLS.

In t/test.pl you can change the test client code using Mojo::IOLoop as follows:

@@ -125,13 +125,15 @@ sub query_gemini {
   my ($header, $mimetype, $encoding, $buffer);
 
   # create client
-  Mojo::IOLoop->client({
-    address => "127.0.0.1",
-    port => $port,
-    tls => 1,
-    tls_cert => "t/cert.pem",
-    tls_key => "t/key.pem",
-    tls_verify => 0x00, } => sub {
+  Mojo::IOLoop->client(
+    {
+      address => "127.0.0.1",
+      port => $port,
+      tls => 1,
+      tls_cert => "t/cert.pem",
+      tls_key => "t/key.pem",
+      tls_options => { SSL_verify_mode => 0x00 }
+    } => sub {
       my ($loop, $err, $stream) = @_;
       die "Client creation failed: $err\n" if $err;
       $stream->on(error => sub {

That still doesn’t solve the TLS/SSL error, however. If you upgrade Net::SSLeay from 1.88 to 1.90 that seems to make no difference, and the list of changes appears innocuous.

My system has OpenSSL 1.1.1d installed, if that makes a difference.

I currently have no workaround except downgrading.

Comments on 2021-02-16 Perl upgrading woes

I have at least an inkling of what’s wrong. First, I verified that I can have a simple setup with Mojo::IOLoop acting both as server and as client, and that I can use IO::Socket::SSL as a client as well.

So that’s not where the problem is. The problem is somewhere in the hostnames.

Here’s an example. Start the server serving the hostname “melanobombus” and the IP 127.0.0.1.

$ phoebe --host melanobombus --host 127.0.0.1 --log_level=debug
[2021-02-17 22:42:52.78734] [39528] [info] Running ./wiki/config
[2021-02-17 22:42:52.78762] [39528] [info] PID: 39528
[2021-02-17 22:42:52.78767] [39528] [info] Host: melanobombus 127.0.0.1
[2021-02-17 22:42:52.78769] [39528] [info] Port: 1965
[2021-02-17 22:42:52.78772] [39528] [info] Space: 
[2021-02-17 22:42:52.78778] [39528] [info] Token: hello
[2021-02-17 22:42:52.78781] [39528] [info] Main page: 
[2021-02-17 22:42:52.78784] [39528] [info] Pages: 
[2021-02-17 22:42:52.78786] [39528] [info] MIME types: 
[2021-02-17 22:42:52.78788] [39528] [info] Wiki data directory: ./wiki
[2021-02-17 22:42:52.78807] [39528] [info] Listening on 127.0.1.1:1965
[2021-02-17 22:42:52.78911] [39528] [info] Listening on 127.0.0.1:1965

As you can see, “melanobombus” is translated to 127.0.1.1 because of my “/etc/hosts” file, which contains the following:

127.0.0.1	localhost
127.0.1.1	melanobombus
127.0.1.1	xn--mlanobombus-bbb

::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Phoebe takes host “melanobombus”, figures out the IP number, and listens on the appropriate port.

Now let’s test it, by requesting a few names:

$ script/gemini gemini://melanobombus | head -2
20 text/gemini; charset=UTF-8
# Welcome to Phoebe!
$ script/gemini gemini://127.0.0.1 | head -2
20 text/gemini; charset=UTF-8
# Welcome to Phoebe!
$ script/gemini gemini://localhost | head -2
Mojo::Reactor::Poll: I/O watcher failed: SSL connect attempt failed error:14094458:SSL routines:ssl3_read_bytes:tlsv1 unrecognized name

So I’m guessing the problem has nothing to do with the TLS options, and nothing to do with SSL3 or TLSv1. The problem is that “localhost” isn’t being served, somehow, even though “localhost” translates to 127.0.0.1 and would thus get served if it weren’t for TLS. Somehow, the TLS part now knows that it’s only supposed to serve 127.0.0.1 and “melanobombus”. It doesn’t work for “localhost”, and it’s telling me: “you have requested an unrecognized name”. I got confused by the name of the error location: “routines:ssl3_read_bytes:tlsv1”.
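
Here’s a quick way to see the SNI angle directly – a sketch using IO::Socket::SSL as a client; SSL_hostname is the name sent to the server via SNI:

use IO::Socket::SSL;
my $client = IO::Socket::SSL->new(
  PeerAddr => "127.0.0.1",
  PeerPort => 1965,
  SSL_hostname => "localhost",         # SNI: ask for a name the cert doesn’t cover
  SSL_verify_mode => SSL_VERIFY_NONE,  # self-signed test certificate
) or die "error=$!, ssl_error=$SSL_ERROR";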

All right!


This would be the CN attribute of the TLS cert. It’s only valid for the domain named, unfortunately. As far as TLS is concerned, localhost and 127.0.0.1 are two different domains.

– splatt9990 2021-02-17 23:03 UTC


Yes, this seems to be it. This used to work and now it no longer does. What I did now in my test setup was to remove all the occurrences of 127.0.0.1, replacing them with localhost, plus related changes, and that seems to work. I think I started doing that because some CPAN testers did not have “localhost” set, but I no longer remember. We’ll see what happens.

– Alex 2021-02-20 22:49 UTC


2021-01-01 Oddmuse memory issues

Not really! It’s Phoebe… 😭

I just noticed that three of my server processes seem to consume a lot of memory. The following three wikis were reporting more than 2G used, each:

  • Community Wiki
  • Emacs Wiki
  • Oddmuse Wiki

Strangely, this site and Campaign Wiki seemed not to be affected. I don’t see any performance problems.

Memory issues started on week 51, 2020

I’m suspecting that something else is going on. I restarted the three processes using Hypnotoad, a hot deployment. But htop is still telling me that we’re using a lot of memory. When I sort by memory, however, I see it’s Phoebe at the very top.

Phoebe is consuming 40% of all memory

So now I’m wondering:

  • did I install some memory leaking version of Phoebe in week 51 of 2020?
  • is Phoebe being counted for the three sites instead of counting as Phoebe?

Something to investigate in the upcoming days. For now, I’m going to restart Phoebe.

Memory use is halved, now

I fixed the regular expressions for my Munin setup so that all the command line parameters I’m using to start Phoebe don’t trigger matches for other processes such as Community Wiki, Emacs Wiki, or Oddmuse Wiki.

That still leaves the question: What did I install, what changes did I make around December 14? I have no idea...

Comments on 2021-01-01 Oddmuse memory issues

Yep, Phoebe is at fault.

Memory use increasing for Phoebe only

So now I have to find a way to reproduce this result locally. Ideally we’ll find that there’s a single config file at fault... If I’m unlucky it’s because Phoebe 2 switched from Net::Server to Mojo::IOLoop and I’m somehow not closing the connections correctly, or some reference remains that prevents them from closing 100%. Yikes.

– Alex 2021-01-03 09:57 UTC


OK, I think I have a test setup that works. Let me know if you see an error in my thinking.

The test starts a server in the background, listening to a random port ($port) like all the other tests. I need its process id so that I can determine the memory it uses:

lsof -i:$port -F p

Given the process id ($pid), I can now ask for the memory size of the process:

ps -q $pid -o size

So what I’m doing is this: I get the size after starting the server, then I run 100 requests, and I get the size again. With just Phoebe, no problem. If I install the speed bump extension (see 2020-12-25 Defending against crawlers), the size goes up.

Well, I guess I’m fine with it going up a bit since we need to keep some numbers in memory, right? The problem is only truly a problem if the memory keeps going up. Let’s try.
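
The heart of the test, sketched – it assumes the query_gemini helper and the $port variable from t/test.pl:

use Test::More;
my ($pid) = `lsof -i:$port -F p` =~ /^p(\d+)$/m;  # process id of the server
chomp(my $before = `ps -q $pid -o size=`);        # size before the requests
query_gemini("gemini://localhost:$port/") for 1 .. 100;
chomp(my $after = `ps -q $pid -o size=`);         # size after the requests
is($after, $before, "No memory lost ($after = $before)");
done_testing();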

100 requests:

$ make test TEST_FILES=t/leak.t
PERL_DL_NONLAZY=1 "/home/alex/perl5/perlbrew/perls/perl-5.30.0/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/leak.t
t/leak.t .. 1/?
#   Failed test 'No memory lost (42524 = 43256)'
#   at t/leak.t line 33.
#          got: '42524'
#     expected: '43256'
# Looks like you failed 1 test of 1.
t/leak.t .. Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/1 subtests

Test Summary Report
-------------------
t/leak.t (Wstat: 256 Tests: 1 Failed: 1)
  Failed test:  1
  Non-zero exit status: 1
Files=1, Tests=1,  2 wallclock secs ( 0.02 usr  0.00 sys +  0.71 cusr  0.07 csys =  0.80 CPU)
Result: FAIL
Failed 1/1 test programs. 1/1 subtests failed.
make: *** [Makefile:940: test_dynamic] Error 1

1000:

#   Failed test 'No memory lost (42528 = 43256)'

10000:

#   Failed test 'No memory lost (42524 = 43256)'

OK, clearly it just stays the same. Hm...

Perhaps it’s time to take a look at Devel::MAT. Too bad Devel::MAT::Dumper doesn’t install on the server. Something about xlocale.h missing. Installing libnewlib-dev (which provides /usr/include/newlib/xlocale.h) appears to have no effect:

Building Devel-MAT-Dumper
cc -I/home/alex/perl5/perlbrew/perls/perl-5.26.1/lib/5.26.1/x86_64-linux/CORE -DVERSION="0.42" -DXS_VERSION="0.42" -fPIC -c -fwrapv -fno-strict-aliasing -pipe -fstack-protector-strong -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -D_FORTIFY_SOURCE=2 -O2 -o lib/Devel/MAT/Dumper.o lib/Devel/MAT/Dumper.c
In file included from lib/Devel/MAT/Dumper.xs:8:
/home/alex/perl5/perlbrew/perls/perl-5.26.1/lib/5.26.1/x86_64-linux/CORE/perl.h:738:13: fatal error: xlocale.h: No such file or directory
 #   include <xlocale.h>
             ^~~~~~~~~~~
compilation terminated.

OK. Before doing anything else, I’m going to use Perlbrew to upgrade Perl 5.26 to 5.32. Then we’ll see about installing Devel::MAT.

– Alex 2021-01-03 17:38 UTC


OK, that worked! I wrote a little extension that allows me to write a heap dump to disk on the server.

Install it in your conf.d directory, and set the known fingerprints to the fingerprint of your favourite client certificate in your config file (or edit the heap-dump.pl file):

our @known_fingerprints = qw(
  sha256$54c0b95dd56aebac1432a3665107d3aec0d4e28fef905020ed6762db49e84ee1);

Install the Devel::MAT::Dumper package on your server, and Devel::MAT on your development machine. Restart Phoebe.

Finally, visit the path /do/heap-dump on your Phoebe instance. You’ll be prompted for your client certificate. Point your Gemini client to the one whose fingerprint you added to your config above, and now it should say: “Heap Dump Saved” – copy the file it created to your local machine and examine it, using the user guide linked above as your guide.
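
The crucial part of the extension is small. A sketch, leaving out the Phoebe plumbing (the route and the certificate check), with a dump path of my own choosing:

use Devel::MAT::Dumper ();

sub heap_dump {
  my $file = "/tmp/phoebe.pmat";    # pick your own path
  Devel::MAT::Dumper::dump($file);  # write the entire Perl heap to disk
  return $file;
}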

Here’s the output right after I restarted it:

$ pmat phoebe.pmat
Perl memory dumpfile from perl 5.32.0 non-threaded
Heap contains 217470 objects
pmat> largest
HASH(24948) at 0x558c89875548=strtab: 1.7 MiB
HASH(5672) at 0x558c89b3c450: 261.0 KiB
SCALAR(PV) at 0x558c89beaa18: 215.7 KiB
ARRAY(1,!REAL) at 0x558c89899098: 92.5 KiB
HASH(2231) at 0x558c8aac5240: 84.3 KiB
others: 15.2 MiB

I’m going to let it run over night and we’ll see how it does tomorrow.

– Alex 2021-01-03 21:34 UTC


OK, a few hours later...

$ pmat phoebe.pmat
Perl memory dumpfile from perl 5.32.0 non-threaded
Heap contains 313361 objects
pmat> largest
HASH(39010) at 0x558c89875548=strtab: 2.4 MiB
HASH(8940) at 0x558c89b8f2a8: 337.6 KiB
HASH(5672) at 0x558c89b3c450: 261.0 KiB
SCALAR(PV) at 0x558c98eaf6e8: 215.7 KiB
SCALAR(PV) at 0x558c89beaa18: 215.7 KiB
others: 21.0 MiB

That is not very encouraging. There apparently is no single thing that grew in limitless ways. 😢

pmat> count
  Kind        Count (blessed)        Bytes  (blessed)
  ARRAY       11108         7      1.2 MiB  520 bytes
  CODE        10387                1.3 MiB
  GLOB        14796        16      2.1 MiB    2.4 KiB
  HASH        23305      3182      7.4 MiB  849.6 KiB
  INVLIST       655              674.4 KiB
  IO             33        33      5.2 KiB    5.2 KiB
  PAD          7870                1.5 MiB
  REF         41199         1    965.6 KiB   24 bytes
  REGEXP       2503       253    528.0 KiB   53.4 KiB
  SCALAR     200878        12      8.0 MiB 1022 bytes
  STASH         627              801.6 KiB
  -------    ------ ---------    --------- ----------
  (total)    313361      3504     24.5 MiB  912.1 KiB

What I don’t really understand is that the numbers I’m seeing are all around 20 MiB, and yet Munin is telling me that the minimum memory is 97.31 MiB and the maximum is 266.54 MiB (also the current value). So where are the 170 MiB I’m missing?

Hm. Let’s take a look at the two scalar values we saw up in the list of “largest” SVs. Surprisingly, they are exactly the same size. Let’s examine them.

...
SCALAR(PV) at 0x558c98eaf6e8: 215.7 KiB
SCALAR(PV) at 0x558c89beaa18: 215.7 KiB
...

pmat> show 0x558c98eaf6e8
SCALAR(PV) at 0x558c98eaf6e8 with refcount 1
  size 215.7 KiB (220840 bytes)
  PV="// This Source Code Form is subject to the terms of the Mozilla "...
  PVLEN 220798

pmat> show 0x558c89beaa18
SCALAR(PV) at 0x558c89beaa18 with refcount 1
  size 215.7 KiB (220840 bytes)
  PV="// This Source Code Form is subject to the terms of the Mozilla "...
  PVLEN 220798

OK, that is surprising. I don’t recognize this text. Perhaps there are more like it? After all, the count shows 200878 scalars. What’s with the refcount? Who is keeping those references?

pmat> inrefs --all 0x558c98eaf6e8

pmat> inrefs --all 0x558c89beaa18
s  a constant  CODE(PP) at 0x558c89b8f080

OK, nobody is referring to the first one, which is even more surprising. Perhaps it’s about to be garbage collected? The second value gives us a hint:

pmat> show 0x558c89b8f080
CODE(PP) at 0x558c89b8f080 with refcount 1
  size 128 bytes
  named as &IO::Socket::SSL::PublicSuffix::_builtin_data
  no hekname
  stash=STASH(19) at 0x558c89bd0f98
  glob=GLOB(&*) at 0x558c89bea460
  location=/home/alex/perl5/perlbrew/perls/perl-5.32.0/lib/site_perl/5.32.0/IO/Socket/SSL/PublicSuffix.pm line 348
  scope=CODE() at 0x558c89c4f820
  pad[0]=PAD(1) at 0x558c89b8f188

The SSL stuff. 😭 This reminds me of the only test in gemini-diagnostics that Phoebe is currently failing: “Server should send a close_notify alert before closing the connection.”

Perhaps that’s my problem. The SSL/TLS connections aren’t closed correctly, so they hang around.

I’ll have to think about this some more.

– Alex 2021-01-04 08:12 UTC


Ten hours later, Munin says 418.04 MiB, and the heap dump says:

pmat> largest
HASH(46066) at 0x558c89875548=strtab: 3.3 MiB
HASH(15996) at 0x558c89b8f2a8: 631.0 KiB
SCALAR(PV) at 0x558c8af5d8a0: 305.0 KiB
HASH(5672) at 0x558c89b3c450: 261.0 KiB
SCALAR(PV) at 0x558c98cc4d48: 232.0 KiB
others: 24.5 MiB

pmat> count
  Kind        Count (blessed)        Bytes  (blessed)
  ARRAY       12116         7      1.3 MiB  520 bytes
  CODE        11395                1.4 MiB
  GLOB        14796        16      2.1 MiB    2.4 KiB
  HASH        30361      3182      9.6 MiB  849.6 KiB
  INVLIST       655              674.4 KiB
  IO             33        33      5.2 KiB    5.2 KiB
  PAD          8878                1.8 MiB
  REF         56319         1      1.3 MiB   24 bytes
  REGEXP       2510       253    529.5 KiB   53.4 KiB
  SCALAR     228143        12      9.7 MiB 1022 bytes
  STASH         627              801.6 KiB
  -------    ------ ---------    --------- ----------
  (total)    365833      3504     29.2 MiB  912.1 KiB

– Alex 2021-01-04 17:40 UTC


Ok, let’s look at the ever-growing hash at 0x558c89b8f2a8:

pmat> values 0x558c89b8f2a8
  ...
  {ptr_0x558c8d3d5f40} REF() at 0x558c8d29c850 => HASH(1) at 0x558c8d2ab990
  {ptr_0x558c8d3eabc0} REF() at 0x558c8d2ec190 => HASH(1) at 0x558c8d29fc48
  {ptr_0x558c8d3eb470} REF() at 0x558c8d3bdc18 => HASH(1) at 0x558c8d3bdc30
  {ptr_0x558c8d3f6cd0} REF() at 0x558c8d2f3e00 => HASH(1) at 0x558c8d32a720
  ... (15946 more)

pmat [more]> values 0x558c8d32a720
  {"ssleay_verify_callback!!func"} REF() at 0x558c8d3a4758 => CODE(PP,closure) at 0x558c8d32aac8

pmat> values 0x558c8d3bdc30
  {"ssleay_verify_callback!!func"} REF() at 0x558c8d3bdca8 => CODE(PP,closure) at 0x558c8d3bd8e8

pmat> values 0x558c8d29fc48
  {"ssleay_verify_callback!!func"} REF() at 0x558c8d2ac260 => CODE(PP,closure) at 0x558c8d32aac8

pmat> values 0x558c8d2ab990
  {"ssleay_verify_callback!!func"} REF() at 0x558c8d29d078 => CODE(PP,closure) at 0x558c8d32aac8

Ugh, SSL again! Verify callback – that also rings a bell. Phoebe code:

IO::Socket::SSL::set_defaults(
  SSL_verify_mode => SSL_VERIFY_PEER,
  SSL_verify_callback => \&verify_fingerprint);

Hm. So now I’m thinking: a hash with 16,000 SSL sockets?

– Alex 2021-01-04 20:05 UTC


I’ve made a small change: as the stream closes, I undefine the $data reference. I figured that something close to this loop (“for every socket?”) must be keeping references around.

Mojo::IOLoop->server({
  address => $address,
  port => $port,
  tls => 1,
  tls_cert => $server->{cert_file},
  tls_key  => $server->{key_file},
} => sub {
  my ($loop, $stream) = @_;
  my $data = { buffer => '', handler => \&handle_request };
  $stream->on(read => sub {
    my ($stream, $bytes) = @_;
    $log->debug("Received " . length($bytes) . " bytes");
    $data->{buffer} .= $bytes;
    $data->{handler}->($stream, $data) });
  $stream->on(close => sub {
    my $stream = shift;
    undef($data);
   }) });

Twelve hours later and Munin reports a current RSS of 206 MiB. That’s much better than anticipated! I’ll keep it running for another day or so, see how it develops.

– Alex 2021-01-06 07:55 UTC


A few more hours and memory is down to 174 MiB, so memory actually went down. Amazing! 😄

pmat> largest
HASH(29739) at 0x55cf4fc7c548=strtab: 2.0 MiB
HASH(5672) at 0x55cf4ff43d60: 261.0 KiB
SCALAR(PV) at 0x55cf4fff2238: 215.7 KiB
HASH(4782) at 0x55cf4ff96aa8: 176.1 KiB
ARRAY(1,!REAL) at 0x55cf4fca01d8: 92.5 KiB
others: 17.2 MiB

pmat> count
  Kind        Count (blessed)        Bytes  (blessed)
  ARRAY       10198         7      1.2 MiB  520 bytes
  CODE         9603                1.2 MiB
  GLOB        14374        16      2.1 MiB    2.4 KiB
  HASH        11712      3181      5.0 MiB  884.5 KiB
  INVLIST       583              652.3 KiB
  IO             33        33      5.2 KiB    5.2 KiB
  PAD          7079                1.3 MiB
  REF         24869         1    582.9 KiB   24 bytes
  REGEXP       2303       253    485.8 KiB   53.4 KiB
  SCALAR     173493        12      6.8 MiB 1022 bytes
  STASH         610              776.3 KiB
  -------    ------ ---------    --------- ----------
  (total)    254857      3503     19.9 MiB  946.9 KiB

– Alex 2021-01-06 12:14 UTC


Too bad memory is climbing again. That wasn’t the answer, or not the entire answer, sadly! Back at 325 MiB.

Phoebe rising

When I look at this, at all the complications, and the memory used, and compare it to the implementations based on Net::Server (“old style”) with its 23 MiB (used for the Gopher and Finger server on ports 70 and 79, as well as the SSL enabled Gopher on port 7334) – I don’t have happy thoughts.

pmat> count --blessed
  Kind                                             Count (blessed)        Bytes  (blessed)
  ARRAY                                            11506         7      1.3 MiB  520 bytes
      Math::BigInt::Calc                               0         3      0 bytes  216 bytes
      Net::DNS::RR::OPT                                0         2      0 bytes  160 bytes
      Class::Struct::Tie_ISA                           0         1      0 bytes   64 bytes
      Net::DNS::RR                                     0         1      0 bytes   80 bytes
  CODE                                             10786                1.3 MiB
  GLOB                                             14792        16      2.1 MiB    2.4 KiB
      IO::Socket::IP                                   0        15      0 bytes    2.2 KiB
      IO::Socket::SSL                                  0         1      0 bytes  152 bytes
  HASH                                             26086      3182      8.2 MiB  884.6 KiB
      Specio::DeclaredAt                               0      1537      0 bytes  324.2 KiB
      Specio::Constraint::Simple                       0      1428      0 bytes  483.8 KiB
      DateTime::Format::Builder::Parser::Regex         0        67      0 bytes   15.9 KiB
      Specio::Constraint::Parameterizable              0        60      0 bytes   22.1 KiB
      Mojo::IOLoop::Server                             0        15      0 bytes    3.7 KiB
      Specio::Constraint::ObjectIsa                    0        15      0 bytes    5.5 KiB
      Specio::Constraint::Enum                         0        10      0 bytes    3.8 KiB
      Specio::Constraint::Union                        0         7      0 bytes    2.2 KiB
      Specio::Constraint::AnyCan                       0         6      0 bytes    2.2 KiB
      Specio::Constraint::ObjectCan                    0         5      0 bytes    1.8 KiB
      (others)                                         0        32      0 bytes   19.5 KiB
  INVLIST                                            651              662.6 KiB
  IO                                                  33        33      5.2 KiB    5.2 KiB
      IO::File                                         0        33      0 bytes    5.2 KiB
  PAD                                               8269                1.6 MiB
  REF                                              47156         1      1.1 MiB   24 bytes
      IO::Socket::SSL::SSL_HANDLE                      0         1      0 bytes   24 bytes
  REGEXP                                            2491       253    525.4 KiB   53.4 KiB
      Regexp                                           0       253      0 bytes   53.4 KiB
  SCALAR                                          211663        12      8.2 MiB 1022 bytes
      Encode::XS                                       0         5      0 bytes  360 bytes
      JSON::PP::Boolean                                0         4      0 bytes  288 bytes
      Cpanel::JSON::XS                                 0         2      0 bytes  292 bytes
      Math::BigInt                                     0         1      0 bytes   82 bytes
  STASH                                              627              803.2 KiB
  --------------------------------------------    ------ ---------    --------- ----------
  (total)                                         334060      3504     25.8 MiB  947.1 KiB

I don’t understand what this means… It looks like everything is OK?

Comparing the two heap dumps I have:

$ pmat-counts phoebe-2021-01-06T*
phoebe-2021-01-06T13:14.pmat
  Kind        Count (blessed)        Bytes  (blessed)
  ARRAY       10198         7      1.2 MiB  520 bytes
  CODE         9603                1.2 MiB
  GLOB        14374        16      2.1 MiB    2.4 KiB
  HASH        11712      3181      5.0 MiB  884.5 KiB
  INVLIST       583              652.3 KiB
  IO             33        33      5.2 KiB    5.2 KiB
  PAD          7079                1.3 MiB
  REF         24869         1    582.9 KiB   24 bytes
  REGEXP       2303       253    485.8 KiB   53.4 KiB
  SCALAR     173493        12      6.8 MiB 1022 bytes
  STASH         610              776.3 KiB
  -------    ------ ---------    --------- ----------
  (total)    254857      3503     19.9 MiB  946.9 KiB
phoebe-2021-01-06T22:35.pmat
  Kind        Count         (blessed)            Bytes              (blessed)
  ARRAY       11506(+1308)          7          1.3 MiB(+119.4 KiB)  520 bytes
  CODE        10786(+1183)                     1.3 MiB(+147.9 KiB)
  GLOB        14792(+418)          16          2.1 MiB(+62.0 KiB)     2.4 KiB
  HASH        26086(+14374)      3182(+1)      8.2 MiB(+3.2 MiB)    884.6 KiB(+168 bytes)
  INVLIST       651(+68)                     662.6 KiB(+10.3 KiB)
  IO             33                33          5.2 KiB                5.2 KiB
  PAD          8269(+1190)                     1.6 MiB(+330.7 KiB)
  REF         47156(+22287)         1          1.1 MiB(+522.4 KiB)   24 bytes
  REGEXP       2491(+188)         253        525.4 KiB(+39.7 KiB)    53.4 KiB
  SCALAR     211663(+38170)        12          8.2 MiB(+1.5 MiB)   1022 bytes
  STASH         627(+17)                     803.2 KiB(+26.8 KiB)
  -------    -------------- -------------    --------------------- ----------------------
  (total)    334060(+79203)      3504(+1)     25.8 MiB(+5.9 MiB)    947.1 KiB(+168 bytes)

The difference seems minimal. And yet, ps -e rss reports 325 MiB instead of 174 MiB used.

– Alex 2021-01-06 21:30 UTC


Well… What do we have? I have three heap dump files for the current process:

23M 2021-01-06 13:14 phoebe-2021-01-06T13:14.pmat
29M 2021-01-06 22:35 phoebe-2021-01-06T22:35.pmat
42M 2021-01-07 21:03 phoebe-2021-01-07T21:03.pmat

RSS reported by ps is 174 MiB, 325 MiB, and 822 MiB. It’s not looking good. What amazes me, however, is that the memory usage as reported by Munin using “/proc/meminfo” doesn’t seem to reflect that. Or maybe it does?

Anyway, I’m experimenting with pmat tools. The output of “pmat-leakreport” is extremely long. It’s pages and pages of this:

$ pmat-leakreport phoebe-2021-01-0[67]*
…
LEAK[2] HASH(1) at 0x55cf5e4d3e00
LEAK[2] REF() at 0x55cf5e472f08
LEAK[2] SCALAR(PV) at 0x55cf5e2c7318
LEAK[2] SCALAR(UV) at 0x55cf5e4b8b78
LEAK[2] HASH(1) at 0x55cf5ab96ce0
LEAK[2] HASH(1) at 0x55cf5b69f8b8
LEAK[2] HASH(1) at 0x55cf604236c8
LEAK[2] UNDEF() at 0x55cf62d869c0
LEAK[2] GLOB($*) at 0x55cf5e039088
LEAK[2] HASH(1) at 0x55cf60a59478
LEAK[2] REF() at 0x55cf62887cf0
LEAK[2] SCALAR(UV) at 0x55cf5e49a430
LEAK[2] REF() at 0x55cf63108b28
LEAK[2] UNDEF() at 0x55cf5d57cfb8
LEAK[2] SCALAR(UV) at 0x55cf5e401578
LEAK[2] UNDEF() at 0x55cf6086c7a0
LEAK[2] REF() at 0x55cf5ecf80c0
LEAK[2] REF() at 0x55cf5e369e00
LEAK[2] UNDEF() at 0x55cf5fa091e8
LEAK[2] HASH(1) at 0x55cf5bc07e48
LEAK[2] UNDEF() at 0x55cf62a23f60
LEAK[2] UNDEF() at 0x55cf5c78a818
LEAK[2] SCALAR(UV) at 0x55cf5e48d5c0
LEAK[2] REF() at 0x55cf6137ebf8

I don’t know what to make of this. The documentation says:

A leaking SV is one that appears and then never gets touched again. To detect them, we need to look for SVs that appear between two files, and then don’t disappear again. Any that do disappear were simply temporaries and can be ignored.

So, all of these are new appearances, and they don’t disappear again. 😟

Let’s look at them, starting at the end:

pmat> identify 0x55cf6137ebf8                                                                                                                      
REF() at 0x55cf6137ebf8 is:
└─value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf6137ec58, which is:
  └─(via RV) value {ptr_0x55cf614d66c0} of HASH(35071) at 0x55cf4ff96aa8, which is:
    └─not found

pmat> identify 0x55cf5e48d5c0                                                                                                                      
SCALAR(UV) at 0x55cf5e48d5c0 is:
└─value {"\\0"} of HASH(1) at 0x55cf5e48d5a8, which is:
  └─(via RV) value {rakkestad} of HASH(725) at 0x55cf5e44f090, which is:
    └─(via RV) value {no} of HASH(1526) at 0x55cf4ffdf998, which is:
      └─(via RV) value {tree} of HASH(2)=IO::Socket::SSL::PublicSuffix at 0x55cf5e546aa0, which is:
        └─(via RV) value {"1"} of HASH(1) at 0x55cf4ffd88a8, which is:
          └─the lexical %default at depth 1 of CODE(PP) at 0x55cf4ffd8440, which is:
            └─the symbol '&IO::Socket::SSL::PublicSuffix::default'

pmat> identify 0x55cf5c78a818                                                                                                                      
UNDEF() at 0x55cf5c78a818 is:
└─pad temporary 29 at depth 1 of CODE(PP,closure) at 0x55cf5c719058, which is:
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf5c681250, which is:
  │ └─(via RV) value {ptr_0x55cf5c8999e0} of HASH(35071) at 0x55cf4ff96aa8, which is (*A):
  │   └─not found
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf5c718b00, which is:
  │ └─(via RV) value {ptr_0x55cf5c891a30} of HASH(35071) at 0x55cf4ff96aa8, which is:
  │   └─not found
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf5c86dea8, which is:
  │ └─(via RV) value {ptr_0x55cf5c88cff0} of HASH(35071) at 0x55cf4ff96aa8, which is:
  │   └─not found
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf5c86def0, which is:
  │ └─(via RV) value {ptr_0x55cf5c89a180} of HASH(35071) at 0x55cf4ff96aa8, which is:
  │   └─not found
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf5c7fc768, which is:
  │ └─(via RV) value {ptr_0x55cf5c871bd0} of HASH(35071) at 0x55cf4ff96aa8, which is:
  │   └─not found
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf5c78a7d0, which is:
  │ └─(via RV) value {ptr_0x55cf5c8991c0} of HASH(35071) at 0x55cf4ff96aa8, which is:
  │   └─not found
  └─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf5c7fc7f8, which is:
    └─(via RV) value {ptr_0x55cf5c8995d0} of HASH(35071) at 0x55cf4ff96aa8, which is:
      └─not found

pmat> identify 0x55cf62a23f60                                                                                                                      
UNDEF() at 0x55cf62a23f60 is:
└─the lexical $ctx_store at depth 1 of CODE(PP,closure) at 0x55cf62a24770, which is:
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf62abd3d0, which is:
  │ └─(via RV) value {ptr_0x55cf62ba4960} of HASH(35071) at 0x55cf4ff96aa8, which is (*A):
  │   └─not found
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf62991a00, which is:
  │ └─(via RV) value {ptr_0x55cf62b68370} of HASH(35071) at 0x55cf4ff96aa8, which is:
  │   └─not found
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf62667430, which is:
  │ └─(via RV) value {ptr_0x55cf62b628d0} of HASH(35071) at 0x55cf4ff96aa8, which is:
  │   └─not found
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf62667f70, which is:
  │ └─(via RV) value {ptr_0x55cf62ba5100} of HASH(35071) at 0x55cf4ff96aa8, which is:
  │   └─not found
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf62a24b48, which is:
  │ └─(via RV) value {ptr_0x55cf62b72e10} of HASH(35071) at 0x55cf4ff96aa8, which is:
  │   └─not found
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf62a24398, which is:
  │ └─(via RV) value {ptr_0x55cf62ba4550} of HASH(35071) at 0x55cf4ff96aa8, which is:
  │   └─not found
  └─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf62abd160, which is:
    └─(via RV) value {ptr_0x55cf62ba4140} of HASH(35071) at 0x55cf4ff96aa8, which is:
      └─not found

pmat> values 0x55cf5bc07e48                                                                                                                        
  {"ssleay_verify_callback!!func"} REF() at 0x55cf5bb9e678 => CODE(PP,closure) at 0x55cf5bd3d8d8

OK, I’d say it’s pretty clear by now: the “ssleay_verify_callback” stuff is bonkers!

Verify callback – remember the Phoebe code? Also note how “%default” is mentioned above.

IO::Socket::SSL::set_defaults(
  SSL_verify_mode => SSL_VERIFY_PEER,
  SSL_verify_callback => \&verify_fingerprint);

But I just read the IO::Socket::SSL man page. It says: “If you have a server and it looks like you have a memory leak you might check the size of your session cache. Default for Net::SSLeay seems to be 20480, see the example for SSL_create_ctx_callback for how to limit it.”

OK. I’m going to try that instead. The previous code had no effect. I’m limiting the session cache to 128, like in the example provided. Restarted the server. Memory is down below 100 MiB again:

# pgrep -f -l "phoebe"
29000 phoebe
# ps -q 29000 -eo rss,size
  RSS  SIZE
73392 62360
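
For reference, this is the kind of setup the man page example suggests – a sketch:

use IO::Socket::SSL;
use Net::SSLeay;
IO::Socket::SSL::set_defaults(
  SSL_create_ctx_callback => sub {
    my $ctx = shift;
    # limit the session cache to 128 entries instead of the default 20480
    Net::SSLeay::CTX_sess_set_cache_size($ctx, 128);
  });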

– Alex


I’m surprised. This seems to have helped!


Ignoring the spike of Emacs Wiki and just looking at Phoebe: we seem to be stable at 250 MiB.

Cool!

Hm… Then again, at the same time, I changed the “phoebe.service” file for systemd:

MemoryMax=250M
MemoryHigh=200M

That is… uncannily close! So here’s what I’m doing now: I decreased the SSL session cache from 128 to 64, and I didn’t change the “phoebe.service” file. Let’s see if memory use is halved, i.e. the cache is relevant, or not, i.e. the memory limits are relevant.

– Alex 2021-01-09 15:00 UTC


OK, reducing the cache size didn’t help. We’re back at 250 MiB. This is a memory leak of sorts, but the resource limit I have in the service file is working.

I think I’m going to reduce those limits:

MemoryMax=100M
MemoryHigh=90M

Let’s see whether it works well enough!

– Alex 2021-01-10 09:54 UTC


It works well enough. I added this to the documentation and stopped trying to understand where Mojo::IOLoop::TLS, IO::Socket::SSL, Net::SSLeay, or OpenSSL are failing.

– Alex 2021-01-11 11:26 UTC


2020-12-25 Defending against crawlers

Recently I was writing about my dislike of crawlers. They turned into a kind of necessary evil on the web – but it’s not too late to choose a different future for Gemini. I want to encourage all server authors and crawler authors to think long and hard about alternatives.

One feature I dislike about crawlers is that they follow all the links. Sure, we have a semi-useful “robots.txt” specification but it’s easy to get wrong on both sides. I’ve had bugs in my “robots.txt” file for a long time without noticing them.

Now, if the argument is that I cannot prevent crawlers from leeching my site, then the reply is of course that I will try to defend myself even if it is impossible to get 100% right. The first line of defence is going to be my “robots.txt” file. It’s not perfect, and that’s fine. It’s not perfect because I just need to look at the Apache config file I use to block all the misbehaving bots and user agents.

Ugh, look at the bots hitting my websites:

$ /home/alex/bin/bot-detector < /var/log/apache2/access.log.1
--------------Bandwidth-------Hits-------Actions--Delay
   Everybody      2416M     102520
    All Bots       473M      23063   100%    19%
-------------------------------------------------------
     bingbot    240836K       8157    35%    31%    10s
   YandexBot     36279K       3905    16%     3%    22s
   Googlebot     65808K       3679    15%    34%    23s
      Adsbot     20187K       3115    13%     0%    27s
    Applebot     66607K        908     3%     0%    95s
     Facebot      1611K        390     1%     0%   220s
    PetalBot      1548K        329     1%    12%   257s
         Bot      2101K        308     1%     0%   280s
      robots       525K        231     1%     0%   374s
    Slackbot      1339K        224     0%    96%   382s
  SemrushBot       572K        194     0%     0%   438s

A full 22% of all hits come from user agents with something like “bot” in their name. Just look at them! Let’s take the last one, SemrushBot. The user agent also has a link, and if you want, you can take a look. All the goals it lists are disgusting, or benefit corporations and not me, nor other humans. Barf with me as you read statements such as “the Brand Monitoring tool to index and search for articles” or “the On Page SEO Checker and SEO Content template tools reports”. 🤮

Have a look at your own webserver logs. 22% of my CPU resources, of the CO₂ my server produces, of the electricity it eats, for machines that do not have my best interest in mind. I don’t want a web that’s 20% bots crawling all over my site. I don’t want a Gemini space that’s 20% bots crawling all over my capsules.

OK, so let’s talk about defence.

When I look at my Gemini logs, I see that plenty of requests come from Amazon hosts. I take that as a sign of autonomous agents. I might sound like a fool on the Butlerian Jihad, but if I need to block entire networks, then I will. Looking up WHOIS data also costs resources. It would be better if we could identify these bots by looking at their behaviour.

The first mistake crawlers make is that they are too fast. So here’s what I’m currently doing: for every IP, I’m keeping track of the last 30 requests in the last 60s. If there are more requests, the IP number is blocked. Thus, if your average clicking rate is more than 1 click per 2s over a 1min window, you’re probably a bot and you get blocked. I might have to turn this up. Perhaps 1 click per 5s makes more sense for a human.

But there’s more. I see the crawlers clicking on all the links. All the HTML renderings of the pages are already available via Gemini. It makes no sense to request all of these. All the raw wiki text of the pages are available as well. It makes no sense to request all of these, either. All the links to leave a comment are also on every page. It makes no sense to request all of these either.

Here’s what I’m talking about. I picked an IP number from the logs and checked what they’ve been requesting:

2020-12-25 08:32:37 gemini://transjovian.org:1965/page/Linking/2
2020-12-25 08:32:45 gemini://alexschroeder.ch:1965/history/Perl
2020-12-25 08:32:59 gemini://communitywiki.org:1965/page/CategoryWikiProcess
2020-12-25 08:33:18 gemini://transjovian.org:1965/page/Titan/5
2020-12-25 08:33:30 gemini://communitywiki.org/page/CultureOrganis%C3%A9e
2020-12-25 08:33:57 gemini://transjovian.org:1965/history/Spaces
2020-12-25 08:34:23 gemini://transjovian.org:1965/gemini/page/common%20wiki%20structure/TimurIsmagilov
2020-12-25 08:34:30 gemini://alexschroeder.ch:1965/tag/Hex%20Describe
2020-12-25 08:34:56 gemini://communitywiki.org:1965/page/SoftwareBazaar
2020-12-25 08:35:02 gemini://communitywiki.org:1965/page/DoTank
2020-12-25 08:35:22 gemini://transjovian.org:1965/test/history/Welcome
2020-12-25 08:36:56 gemini://alexschroeder.ch:1965/tag/Gadgets
2020-12-25 08:38:20 gemini://alexschroeder.ch:1965/tag/Games
2020-12-25 08:45:58 gemini://alexschroeder.ch:1965/do/comment/GitHub
2020-12-25 08:46:05 gemini://alexschroeder.ch:1965/html/GitHub
2020-12-25 08:46:12 gemini://alexschroeder.ch:1965/raw/Comments_on_GitHub
2020-12-25 08:46:19 gemini://alexschroeder.ch:1965/raw/GitHub
2020-12-25 08:47:45 gemini://alexschroeder.ch:1965/page/2018-08-24_GitHub
2020-12-25 08:47:51 gemini://alexschroeder.ch:1965/do/comment/Comments_on_2018-08-24_GitHub
2020-12-25 08:47:57 gemini://alexschroeder.ch:1965/html/Comments_on_2018-08-24_GitHub
2020-12-25 09:21:26 gemini://alexschroeder.ch:1965/do/more

See what I mean? This is not a human. This is an unsupervised bot, otherwise the operator would have discovered that this makes no sense.

The solution I’m using for my websites is logging IP numbers and using fail2ban to ban IP numbers that request too many pages. The ban is for 10min, and if you’re a “recidive”, meaning you got banned three times for 10min, then you’re going to be banned for a week. The problem I have is that I would prefer a solution that doesn’t log IP numbers. It’s good for privacy and we should write our software such that privacy comes first.

So I wrote a Phoebe extension called “speed bump”. Here’s what it currently does.

For every IP number, Phoebe records the last 30 requests in the last 60 seconds. If there are more than 30 requests in the last 60 seconds, the IP number is blocked. If somebody is faster on average than two seconds per request, I assume it’s a bot, not a human.

For every IP number, Phoebe records whether the last 30 requests were suspicious or not. A suspicious request is a request that is “disallowed” for bots according to “robots.txt” (more or less). If 10 requests or more of the last 30 requests in the last 60 seconds are suspicious, the IP number is also blocked. That is, even if somebody is as slow as three seconds per request, if they’re all suspicious, I assume it’s a bot, not a human.

When an IP number is blocked, it is blocked for 60s, and there’s a 120s probation time. When you’re blocked, Phoebe responds with a “44” response. This means: slow down!

If the IP number sends another request while it is blocked, or if it gives cause for another block during the probation time, it is blocked again and the blocking time is doubled: the IP is blocked for 120s and probation is extended by 240s. And if it happens again, it is doubled again: blocked for 240s and probation is extended by 480s.
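
In code, the idea looks roughly like this – a sketch, not the actual speed bump extension; the probation bookkeeping is simplified away:

my (%requests, %blocked, %block_time);

sub speed_bump {
  my ($ip, $now, $suspicious) = @_;  # $suspicious: request disallowed by robots.txt
  return "44" if $blocked{$ip} and $now < $blocked{$ip};
  my $log = $requests{$ip} //= [];
  push(@$log, [$now, $suspicious]);
  shift(@$log) while @$log and $log->[0][0] < $now - 60;  # keep a 60s window
  my $warnings = grep { $_->[1] } @$log;
  if (@$log > 30 or $warnings >= 10) {
    $block_time{$ip} = ($block_time{$ip} || 30) * 2;      # 60s, 120s, 240s, …
    $blocked{$ip} = $now + $block_time{$ip};
    return "44";                                          # Gemini: slow down!
  }
  return;                                                 # serve the request
}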

The “/do/speed-bump/debug” URL (which requires a known client certificate) shows you the raw data, and the “/do/speed-bump/status” URL (which also requires a known client certificate) shows you a human readable summary of what’s going on.

Here’s an example:

Speed Bump Status
 From    To Warns Block Until Probation IP
 n/a   n/a   0/ 0   60s  n/a       100m 3.8.145.31
 n/a   n/a   0/ 0   60s    4h       14h 35.176.162.140
 n/a   n/a   0/ 0   60s  n/a         9h 18.134.198.207
-280s   -1s  7/30  n/a   n/a       n/a  3.10.221.60

All four of these numbers belong to “Amazon Data Services UK”.

If there are numbers in the “From” and “To” columns, that means the IP made a request in the last 60s. The “Warns” column says how many of the requests were considered “suspicious”. “Block” is the block time. As you can see, none of the bots managed to increase the block time. Why is that? The “Probation” column offers a glimpse into what happened: as the bots kept making requests while they were blocked, they kept adding to their own block.

A bit later:

Speed Bump Status
 From    To Warns Block Until Probation IP
 n/a   n/a   0/ 0   60s  n/a        83m 3.8.145.31
 n/a   n/a   0/ 0   60s    4h       13h 35.176.162.140
 n/a   n/a   0/ 0   60s  n/a         9h 18.134.198.207
-219s   -7s  3/30  n/a   n/a       n/a  3.10.221.60

It seems that the last IP number is managing to thread the needle.

Clearly, this is all very much in flux. I’m still working on it – and finding bugs in my “robots.txt”, unfortunately. I’ll keep this page updated as I learn more. One idea I’ve been thinking about is the time windows: how many pages would an enthusiastic human read on a new site: 60 pages in an hour, one minute per page? Or maybe twice as much? That would point towards keeping a counter for a long term average: if you’re requesting more than 60 pages in 30min, perhaps a timeout of 30min is appropriate?

The smol net is also a slow net. There’s no need for almost all activity to be crawlers. If at all, crawlers should be the minority! So, if my sites had 95% human activity and 5% robot activity, I’d be more understanding. But right now, it’s crazy. All the CO₂ wasted, for bots.

I’m on The Butlerian Jihad!

Comments on 2020-12-25 Defending against crawlers

Wouldn’t you get most of them by just blocking everything with “[Bb]ot” in the User-Agent?

Adam 2020-12-25 16:15 UTC


It depends on what your goal is, and on the protocol you’re talking about. In the second half of my post I was talking about Gemini. That is a very simple protocol: establish a TCP/IP connection, with TLS, send a URI, get back a status header line + content. That is, the request does not contain any header lines, unlike HTTP.

As for HTTP, which I mention in the first half: if a search engine were to crawl the new pages on my sites, slowly, then I wouldn’t mind so much, as long as the search engine is one intended for humans (these days that would be Google and Bing, I guess). I’d like to block those that misbehave, or that have goals I disagree with, and I’d like not to block the future search engine that is going to dethrone Google and Bing. I need to keep that hope alive, in any case. So if I want a nuanced result, I need a nuanced response. Slow down bots that can take a hint. Block bots that don’t. Block bots from dubious companies. And so on.

– Alex 2020-12-25 21:47 UTC


Here’s the current status of my “speed bump” extension to Phoebe:

Speed Bump Status
 From    To Warns Block Until Probation IP
 -10m   -9m 11/11  365d  364d      729d 3.11.81.100
 -12h  -12h 11/11  365d  364d      729d 18.130.221.176
 -12h  -12h 11/13  365d  364d      729d 3.9.134.250
 -14h  -14h 11/15  365d  364d      729d 3.8.127.24
 -14h  -14h 11/13  365d  364d      729d 167.114.7.65
 -10h  -10h 11/12  365d  364d      729d 18.134.146.76
 -16m  -14m 11/12  365d  364d      729d 3.10.232.193

All of these IP numbers have blocked themselves for over a year (or until I restart the server). Using “whois” to identify the organisation (and verifying my guess for tilde.team using “dig”) we get the following:

3.11.81.100     Amazon Data Services UK
18.130.221.176  Amazon Data Services UK
3.9.134.250     Amazon Data Services UK
3.8.127.24      Amazon Data Services UK
167.114.7.65    Tilde Team
18.134.146.76   Amazon Data Services UK
3.10.232.193    Amazon Data Services UK

Oh well. Every new IP number is going to make 10–20 requests and it’s going to add a line. We could improve upon the model: once an IP is blocked for a year (the maximum), use WHOIS to look up the IP number range. Taking the first number as an example, we find that the “NetRange” is 3.8.0.0 - 3.11.255.255 and the “CIDR” is 3.8.0.0/14. Keep watching: once we have three IP numbers from the entire range blocked, there’s no need to block them all individually; we can just block the whole range. In our example, we would have reacted once we had blocked 3.11.81.100, 3.9.134.250, and 3.8.127.24. At that point, 3.10.232.193 would have been blocked preemptively.
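
A sketch of that escalation, assuming Net::Netmask for the CIDR matching and a WHOIS lookup that yields the range:

use Net::Netmask;

my %blocked_ips_in_range;  # cidr => number of individually blocked IPs
my %blocked_ranges;        # cidr => 1 once the whole range is blocked

sub block_ip {
  my ($ip, $cidr) = @_;    # $cidr comes from WHOIS, e.g. "3.8.0.0/14"
  return if $blocked_ranges{$cidr};
  $blocked_ranges{$cidr} = 1 if ++$blocked_ips_in_range{$cidr} >= 3;
}

sub range_blocked {
  my $ip = shift;
  for my $cidr (keys %blocked_ranges) {
    return 1 if Net::Netmask->new($cidr)->match($ip);
  }
  return 0;
}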

Compare this to how GUS works. Indexing runs are made a few times a month. The IP numbers the requests come from are documented. They don’t change like the crawler (or crawlers?) running on Amazon. I’m tempted to say the bot operators hosting their bot on Amazon look like they are actively trying to evade the block. It feels like trespassing and it makes me angry.

– Alex 2020-12-26


Tilde Team is probably people, not a crawler. I gave more details in a reply to your toot.

petard 2020-12-26 19:21 UTC


For those who don’t follow us on Mastodon… 😁 I replied with a screenshot of more or less the following, saying that the requests made from Tilde Team seem to indicate that this is an unsupervised crawler, not humans. The vast majority of requests is from a bot.

2020-12-27 01:20:31 gemini://alexschroeder.ch:1965/2008-05-09_Ontology_of_Twitter
2020-12-27 01:20:40 gemini://alexschroeder.ch:1965/2011-02-14_The_Value_of_a_Web_Site
2020-12-27 01:20:48 gemini://alexschroeder.ch:1965/2013-01-23_Security_of_Code_Downloaded_from_Online_Sources
2020-12-27 01:20:54 gemini://alexschroeder.ch:1965/2016-05-28_nginx_as_a_caching_proxy
2020-12-27 01:21:01 gemini://alexschroeder.ch:1965/Comments_on_2011-02-14_The_Value_of_a_Web_Site
2020-12-27 01:24:54 gemini://transjovian.org:1965/gemini/diff/common%20wiki%20structure/1
2020-12-27 01:25:01 gemini://transjovian.org:1965/gemini/diff/common%20wiki%20structure/2
2020-12-27 01:25:08 gemini://transjovian.org:1965/gemini/diff/common%20wiki%20structure/3
2020-12-27 01:25:15 gemini://transjovian.org:1965/gemini/do/atom
2020-12-27 01:25:23 gemini://transjovian.org:1965/gemini/do/rss
2020-12-27 01:25:29 gemini://transjovian.org:1965/gemini/page/common%20wiki%20structure/1
2020-12-27 01:25:37 gemini://transjovian.org:1965/gemini/page/common%20wiki%20structure/2
2020-12-27 01:25:43 gemini://transjovian.org:1965/gemini/page/common%20wiki%20structure/3
2020-12-27 01:46:49 gemini://communitywiki.org:1965/do/comment/BestPracticesForWikiTheoryBuilding
2020-12-27 01:46:58 gemini://communitywiki.org:1965/html/BestPracticesForWikiTheoryBuilding
2020-12-27 01:47:04 gemini://communitywiki.org:1965/page/PromptingStatement
2020-12-27 01:47:11 gemini://communitywiki.org:1965/page/WeLoveVolunteers
2020-12-27 01:47:18 gemini://communitywiki.org:1965/raw/BestPracticesForWikiTheoryBuilding
2020-12-27 01:47:26 gemini://communitywiki.org:1965/raw/Comments_on_BestPracticesForWikiTheoryBuilding
2020-12-27 01:47:33 gemini://communitywiki.org:1965/tag/inprogress
2020-12-27 01:47:41 gemini://communitywiki.org:1965/tag/practice
2020-12-27 01:47:48 gemini://communitywiki.org:1965/tag/practices
2020-12-27 01:47:56 gemini://communitywiki.org:1965/tag/prescription
2020-12-27 01:48:02 gemini://communitywiki.org:1965/tag/prescriptions
2020-12-27 01:48:11 gemini://communitywiki.org:1965/tag/recommendation
2020-12-27 01:48:16 gemini://communitywiki.org:1965/tag/recommendations
2020-12-27 01:48:23 gemini://communitywiki.org:1965/tag/theorybuilding
2020-12-27 01:51:05 gemini://communitywiki.org:1965/do/comment/HansWobbe
2020-12-27 01:51:08 gemini://communitywiki.org:1965/html/HansWobbe
2020-12-27 01:57:51 gemini://communitywiki.org:1965/page/BlikiNet
2020-12-27 02:17:04 gemini://communitywiki.org:1965/page/ChainVideo
2020-12-27 02:28:46 gemini://communitywiki.org:1965/page/CwbHwoAg
2020-12-27 02:58:36 gemini://communitywiki.org:1965/page/DfxMapping

Suspicious signs:

  • visiting date pages from all over the place (2008, 2011, 2013, 2016)
  • visiting all the old revisions of a page (/1, /2, /3)
  • visiting all the diffs of a page (/1, /2, /3)
  • visiting the comment prompt and not leaving a comment (do/comment)
  • visiting lots of tags (/tag)
  • visiting HTML copies of pages without looking at the Gemini copies (/html)
  • visiting raw copies of pages without looking at the Gemini copies (/raw)

These are not people. This is a crawler verifying its database. And ignoring robots.txt.

I think the main problem is that I run multiple sites served via Gemini with thousands of pages, and all the pages have links to alternate views (page history, page diff, HTML copy, raw copy, comments prompt), so perhaps mine are the only sites where crawlers might actually get to their limits. If somebody new sets up a Gemini server and serves two score static gemtext files, then these crawlers do little harm. But as it stands, there’s a constant barrage on my servers that stands in no relation to the amount of human activity.

Some of these URIs are violating robots.txt. But it’s not just that. I also feel a moral revulsion: all the CO₂ wasted shows a disregard for resources these people are not paying for. This is exactly the problem our civilisation faces, on a small scale.

Thus, whereas GoogleBot and BingBot might be nominally useful (the wealth concentration we’ve seen as a consequence of their data gathering notwithstanding), the ratio of change to crawl is and remains important. Once a site is crawled, how often and what URLs should you crawl again? The current system is so wasteful.

Anyway, I have a lot of anger in me.

– Alex 2020-12-27


That’s a good summary of our conversation. My suggestion that requests from Tilde Team were probably people was based on the fact that it’s a public shell host that people use to browse gemini. (I have an account there and use it happily. It’s mostly a nice place with people I like to talk to. I am not otherwise affiliated.)

Seeing that log dump makes it clear that someone on that system is behaving badly.

petard 2020-12-27 14:32 UTC


Current status:

Speed Bump Status
 From    To Warns Block Until Probation IP
 -33m  -33m 30/30   28d   27d       55d 78.47.222.156 78.46.0.0/15
 -17h  -17h 11/11   28d   27d       55d 3.9.165.84 3.8.0.0/14
 -46h  -46h 17/17   28d   26d       54d 18.130.170.163 18.130.0.0/16
  -2d   -2d 11/11   28d   26d       54d 18.134.12.41 18.132.0.0/14
 -44h  -44h 11/11   28d   26d       54d 18.132.209.113 18.132.0.0/14
 -22h  -22h 13/13   28d   27d       55d 35.178.128.94 35.178.0.0/15
 -38h  -38h 12/12   28d   26d       54d 3.8.185.90 3.8.0.0/14
 -17h  -17h 12/12   28d   27d       55d 35.177.73.123 35.176.0.0/15
 -42h  -42h 11/11   28d   26d       54d 18.130.151.101 18.130.0.0/16
  -5h   -5h 13/13   28d   27d       55d 167.114.7.65 167.114.0.0/17
 -17h  -17h 14/14   28d   27d       55d 52.56.225.165 52.56.0.0/16
 -42h  -42h 12/12   28d   26d       54d 18.135.104.61 18.132.0.0/14
  -8h   -8h 12/12   28d   27d       55d 35.179.91.110 35.178.0.0/15
  -4h   -4h 11/11   28d   27d       55d 18.130.166.9 18.130.0.0/16
 -20h  -20h 11/11   28d   27d       55d 52.56.232.202 52.56.0.0/16
 -36h  -36h 13/13   28d   26d       54d 35.178.91.123 35.178.0.0/15
 -36h  -36h 11/11   28d   26d       54d 3.8.195.248 3.8.0.0/14

Until CIDR
  27d 18.130.0.0/16
  27d 3.8.0.0/14
  27d 35.178.0.0/15
  26d 18.132.0.0/14
→ menu

Almost all of them Amazon Data Services UK, a few Hetzner, some OVH Hosting.

Seeing whole net ranges being blocked makes me happy. The code seems to work as expected.

– Alex 2020-12-29 16:35 UTC


The list keeps growing. I decided to write a script that would retrieve this page for me, and call WHOIS for all the networks identified.

#!/usr/bin/perl
use Modern::Perl;
use Net::Whois::IP qw(whoisip_query);
say "Requesting data";
my $data = qx(gemini --cert_file=/home/alex/.emacs.d/elpher-certificates/alex.crt --key_file=/home/alex/.emacs.d/elpher-certificates/alex.key gemini://transjovian.org/do/speed-bump/status);
say "Reading blocked networks";
my %seen;
while ($data =~ /(\d+\.\d+\.\d+\.\d+|[0-9a-f]+:[0-9a-f]+:[0-9a-f:]+)\/\d+/g) {
  my $ip = $1;
  next if $seen{$ip};
  $seen{$ip} = 1;
  my $response = whoisip_query($ip);
  my $name = $response->{OrgName} || $response->{netname} || $response->{Organization};
  if ($name) {
    say "$ip $name";
  } else {
    say "$ip";
    for (keys %$response) {
      say "  $_: $response->{$_}";
    }
  }
}

Result:

Requesting data
Reading blocked networks
3.8.0.0 Amazon Data Services UK
35.178.0.0 Amazon Data Services UK
18.130.0.0 Amazon Data Services UK
35.176.0.0 Amazon Data Services UK
52.56.0.0 Amazon Data Services UK
78.46.0.0 HETZNER-nbg1-dc1
167.114.0.0 OVH Hosting, Inc.
18.132.0.0 Amazon Data Services UK
67.205.144.0 DigitalOcean, LLC

Oh hey, Digital Ocean is new.

– Alex 2020-12-30 11:10 UTC


Let’s check the number of requests blocked, relying on the Phoebe log files. “Looking at <some URL>” is an info log message it prints for every request. Let’s count them:

# journalctl --unit phoebe --since 2020-12-29|grep "Looking at"|wc -l
11700

Let’s see how many are caught by network range blocks:

# journalctl --unit phoebe --since 2020-12-29|grep "Net range is blocked"|wc -l
1812

Let’s see how many of them are just lone IP numbers being blocked:

# journalctl --unit phoebe --since 2020-12-29|grep "IP is blocked"|wc -l
2862

And first time offenders:

# journalctl --unit phoebe --since 2020-12-29|grep "Blocked for"|wc -l
8

I guess that makes 4682 blocked bot requests out of 11700 requests, or 40% of all requests.

The good news is that more than half seem to be legit? Or are they? I’m growing more suspicious all the time.

Let’s check HTTP access!

# journalctl --unit phoebe --since 2020-12-29|grep "HTTP headers"|wc -l
320
# journalctl --unit phoebe --since 2020-12-29|grep "HTTP headers"|perl -e 'while(<STDIN>){m/(\w*bot\w*)/i; print "$1\n"}'|sort|uniq --count
      1 
     22 bingbot
      2 Bot
     80 googlebot
     34 Googlebot
     88 MJ12bot
     32 SeznamBot
     61 YandexBot

That is, of the 11700 requests I’m looking at, I’ve had 320 web requests, of which 319 (!) were bots.

I think the next step will be to change the robots.txt served via the web to disallow them all.

– Alex 2020-12-30 11:40 UTC


Hm, but blocking IP addresses the way you mention would e.g. block my hacker space, where I’ve told a bunch of nerds that Gemini is cool, and they should have a look at … your site. And if it isn’t a hacker space, it’s a student’s dorm, or similar, behind NAT.

I understand your anger, but blocking IP addresses in the end isn’t better than Hotmail & Google not accepting mail from my host – they think it’s suspicious because it’s small (it has proper DNS, it’s on no blacklist, and so on); they just ASSUME it could be wrong. The Internet is “everyone can talk to everyone”, and my approach is to make that happen. Every counter approach is breaking the Internet, IMHO. YMMV.

– Götz 2021-01-05 23:40 UTC


How would you defend against bad actors, then? Simply accept it as a fact of life and add better infrastructure, or put the “smol net” behind a login? If all I have is an IP number of a peer connecting to my server, then all the consequences must relate to the IP number, or there must be no consequences. That’s how I understand the situation.

– Alex Schroeder 2021-01-06 11:09 UTC

Add Comment

2020-12-10 International domain names and Phoebe

Recently, I was wondering about international domain names and my Gemini wiki, Phoebe, after reading a post in French about the subject. Today, I decided to give it a try. My laptop is called “melanobombus” because it’s black and I like bumblebees, so I wanted to give it the alias “mélanobombus”.

The first thing to install was a punycode converter. I used “idn”. The punycode for “mélanobombus” is “xn--mlanobombus-bbb”. So this is how my “/etc/hosts” begins:

127.0.0.1       localhost
127.0.1.1       melanobombus
127.0.1.1       xn--mlanobombus-bbb
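
For reference, this is how the “idn” tool does the conversion on the command line (GNU Libidn; the exact flags may vary between versions):

$ idn --quiet mélanobombus
xn--mlanobombus-bbb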

Then I started Phoebe with that hostname:

phoebe --host=xn--mlanobombus-bbb

Since I trust that Firefox knows how to handle international domain names, I started by pointing it at “https://mélanobombus:1965/” – and it worked. 😁

Using my super simple command line client did not work. When I asked it to connect to “gemini://mélanobombus/” it broke with an ugly error message. When I asked Elpher to connect to the same address, it didn’t work either: timeout. Lagrange also reported a network failure.

OK. At least now I know that this is a client problem because Firefox does the right thing.

Comments on 2020-12-10 International domain names and Phoebe

In Perl, I need to do the following to get the same punycode from a URL provided on the command line:

use Modern::Perl;
use Encode qw(decode_utf8);
use IRI;
use Net::IDN::Encode qw(domain_to_ascii);
my ($url) = @ARGV;
my $iri = IRI->new(value => $url);
say domain_to_ascii(decode_utf8 $iri->host);

But then the client still sends the original IRI to the server which then replies that it won’t proxy, unlike the result of using Firefox.

– 2020-12-10 23:25 UTC


Ah, it’s more complicated, of course. HTTP doesn’t actually send a URI! It sends something like this:

GET /some/path HTTP/1.1
Host: xn--mlanobombus-bbb

Well, I’m working on a branch where my simple command line client and Phoebe work together, at least. I feel like I owe this to my last name. In the previous millennium, I started to write Schroeder instead of Schröder because internationalisation was a big problem. This was before I had ever heard of Unicode and UTF-8.

– 2020-12-11 11:23 UTC


Oh my invisible friend... the changes required aren’t trivial. Still on it!

– 2020-12-11 12:25 UTC


Wow, I’ve been looking at the mailing list. Sooo much discussion! Three threads:

– 2020-12-11 16:22 UTC


Talked about it a bit on the Gemini IRC channel until I got angry. 😒

– 2020-12-11 18:02 UTC


OK, so I abandoned the international domain name (IDN) branch where the client sends gemini://東京.jp/ to the server because I thought it was stupid that the client could send gemini://東京.jp/ but not gemini://東京.jp/日本語. So then I went back to RFC 3987 and read the introduction to section 3, “Relationship between IRIs and URIs”:

IRIs are meant to replace URIs in identifying resources for protocols, formats, and software components that use a UCS-based character repertoire. These protocols and components may never need to use URIs directly, especially when the resource identifier is used simply for identification purposes. However, when the resource identifier is used for resource retrieval, it is in many cases necessary to determine the associated URI, because currently most retrieval mechanisms are only defined for URIs. In this case, IRIs can serve as presentation elements for URI protocol elements. An example would be an address bar in a Web user agent.

This seems ideal. Clients are free to use IRIs to communicate with their users: letting them enter an IRI like gemini://東京.jp/日本語 into the address bar and showing them such IRIs. But if these clients communicate with a Gemini server, they need to use URIs. They need to request gemini://xn--1lqs71d.jp:43343/page/%E6%97%A5%E6%9C%AC%E8%AA%9E.
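
Here’s a minimal sketch of that conversion in Perl, using Net::IDN::Encode and URI::Escape (my module choices for illustration, not necessarily what any particular client uses):

#!/usr/bin/perl
use Modern::Perl;
use utf8; # this source file contains UTF-8 literals
use Net::IDN::Encode qw(domain_to_ascii);
use URI::Escape qw(uri_escape_utf8);

# punycode for the host, percent encoding for the path
my $host = "東京.jp";
my $path = "日本語";
say "gemini://" . domain_to_ascii($host) . "/" . uri_escape_utf8($path);
# prints gemini://xn--1lqs71d.jp/%E6%97%A5%E6%9C%AC%E8%AA%9E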

This reduces the problem for Phoebe to a much smaller set of problems.

How does a server administrator start Phoebe such that it serves an international domain name (IDN)? I added code that converts the host name provided from the current locale (which is what the administrator is using in their shell) to Unicode, converts that to punycode, and uses the result to look up the IP addresses using getaddrinfo(3).

When deciding whether to serve a URL, Phoebe checks for the punycode representation of the host names: xn--1lqs71d.jp. Anything else is considered to be a proxy request and is most likely denied.
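
The check itself can be as simple as a hash lookup against the punycoded names being served (a sketch with made-up names, not the actual Phoebe code):

# the punycoded host names this server answers for
my %served = map { $_ => 1 } qw(xn--1lqs71d.jp);

sub known_host {
  my $host = shift;
  return $served{lc $host}; # everything else is a proxy request
}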

The part handling the percent encoded paths already exists and already works.

What remains is a usability problem, of course. When users write their gemtext, they need to use some sort of tool to do the conversions; it’s basically delegated to editor support. I guess I’m fine with that, for the moment.

Allowing users to link to IRIs and transparently translating them to URIs as they get sent to the client would be an easy change to make. I’d still have to solve such problems as how to handle a space character: if the server sees “⇒ One Two Three”, this is a relative link to “One”, so the user would have had to write “⇒ One%20Two Three” or “⇒ One%20Two%20Three”. Then again, perhaps I can just leave it as-is because I often copy and paste weird URIs from elsewhere.

In either case, I can definitely delay this. 😁

So now all I need to do to get some closure is to add some sort of IRI handling to my simple command-line client.

My “/etc/hosts” has the punycode encoding of the new hostname:

127.0.0.1	localhost
127.0.1.1	melanobombus
127.0.1.1	xn--mlanobombus-bbb

I start Phoebe using a non-ASCII hostname and a non-ASCII pagename:

script/phoebe --host=mélanobombus --wiki_page=Schröder

Then I use the “gemini” client:

$ script/gemini --verbose gemini://mélanobombus/page/Schröder
Contacting xn--mlanobombus-bbb:1965
Requesting gemini://xn--mlanobombus-bbb:1965/page/Schr%C3%B6der
20 text/gemini; charset=UTF-8
# Schröder
This page does not yet exist.

More:
=> gemini://xn--mlanobombus-bbb:1965/history/Schr%C3%B6der History
=> gemini://xn--mlanobombus-bbb:1965/raw/Schr%C3%B6der Raw text
=> gemini://xn--mlanobombus-bbb:1965/html/Schr%C3%B6der HTML

Happy! 🥳🚀🚀🎉

And when I use Firefox, it works as well. 🙂

The log. Notice the host header.

[debug] HTTP headers: referer => 'https://xn--mlanobombus-bbb:1965/', dnt => '1', accept-encoding => 'gzip, deflate, br', upgrade-insecure-requests => '1', user-agent => 'Mozilla/5.0 (Windows NT 10.0; rv:78.0) Gecko/20100101 Firefox/78.0', accept-language => 'de-CH,de;q=0.8,en-US;q=0.5,en;q=0.3', accept => 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8', host => 'xn--mlanobombus-bbb:1965', connection => 'keep-alive'
[info] Looking at GET /page/Schr%C3%B6der HTTP/1.1
[info] Serving Schröder as HTML via HTTP

– Alex


OK, I got something!

– 2020-12-14 18:49 UTC


URL Interop, by @bagder. “This document is an attempt to describe where and how RFC 3986 (86), RFC 3987 (87) and the WHATWG URL Specification (TWUS) differ. This might be useful input when trying to interop with URLs on the modern Internet.”

– Alex 2021-01-18 10:25 UTC

Add Comment

2020-12-04 Phoebe 2 with Gemini chat

As Phoebe 2 is based on a streaming server framework, I can now implement a Gemini based chat server!

Here’s a screenshot. Explanation below!

Screenshot

In the top left corner you see the Phoebe server with debug log level. It shows how user kensanata joined the chat, how Alex joined the chat, and how one of the chat members sent a URL carrying a text message.

In the top right corner you see user kensanata connecting with my command line client, using my Astrobotany certificate from the Elpher directory. This user was the first user in the chat and so they see “You are the only one.”

In the bottom left corner you see user Alex connecting with my command line client, using my alex certificate from the Elpher directory. This user was the second user in the chat so they see “Other chat members: kensanata”. At this point, user kensanata in the top right corner sees “Alex joined”.

In the bottom right corner you see user Alex connecting a second time, but this time not to listen but to say something in response to a status 10 (input) prompt: “This is a test”. At this point, user kensanata in the top right corner sees “Alex: This is a test.”

🎉🚀🚀

Comments on 2020-12-04 Phoebe 2 with Gemini chat

It seems like my command-line Gemini client isn’t very good at this: it hangs up after a while. Lagrange, on the other hand, seems to work just fine! Hm... Ah, there it is. It needs an inactivity timeout increase and there are two different timeouts: one for the connection establishment and one for inactivity, and I was setting the wrong one. But now it’s fixed.

Here’s how I start it, from the Phoebe work-directory:

script/gemini \
  --cert=/home/alex/.emacs.d/elpher-certificates/alex.crt \
  --key=/home/alex/.emacs.d/elpher-certificates/alex.key \
  gemini://transjovian.org/do/chat/listen

And then when I want to say something:

script/gemini \
  --cert=/home/alex/.emacs.d/elpher-certificates/alex.crt \
  --key=/home/alex/.emacs.d/elpher-certificates/alex.key \
  gemini://transjovian.org/do/chat/say?Hello

Or point a regular client at the URL:

If you’re a programmer interested in this, you could of course write your own specialized chat client. 🙂
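
For example, here’s a minimal listener in Perl using Mojo::IOLoop, the same framework Phoebe 2 uses (a sketch for Mojolicious 9, using the certificate files from above; no reconnect handling, no error recovery):

#!/usr/bin/perl
use Modern::Perl;
use Mojo::IOLoop;

# connect to the "listen" end of the chat and print whatever arrives
Mojo::IOLoop->client({
  address     => 'transjovian.org',
  port        => 1965,
  tls         => 1,
  tls_cert    => '/home/alex/.emacs.d/elpher-certificates/alex.crt',
  tls_key     => '/home/alex/.emacs.d/elpher-certificates/alex.key',
  tls_options => { SSL_verify_mode => 0x00 },
} => sub {
  my ($loop, $err, $stream) = @_;
  die $err if $err;
  $stream->timeout(0); # a chat connection stays open indefinitely
  $stream->on(read => sub { my ($stream, $bytes) = @_; print $bytes });
  $stream->write("gemini://transjovian.org/do/chat/listen\r\n");
});
Mojo::IOLoop->start;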

– 2020-12-04 15:15 UTC


Now we can reimplement IRC, badly! 😂

I am already wondering: what about spam? Harassment? Moderation? Blocking?

– 2020-12-04 18:19 UTC


There’s now a chat client that does both at the same time: it forks, and the parent connects to the “listen” URL while the child loops a prompt and sends stuff to the “say” URL. Currently the only issue I have is that sometimes the prompt gets messed up because I don’t know my way around the terminal control sequences. ESC 7, ESC 8, ESC [ 1 G, gaaaaah. But it’s good enough to go to bed, now. 😀

Also, I think clients disconnecting aren’t registered by the server. So now there are about ten Alex chat members and they’re all disconnected. Too bad!

– 2020-12-04 22:30 UTC


I think it’s really working! From the manual page of gemini-chat…

First, generate your client certificate for as many or as few days as you like:

openssl req -new -x509 -newkey ec -subj "/CN=Alex" \
  -pkeyopt ec_paramgen_curve:prime256v1 -days 100 \
  -nodes -out alex-cert.pem -keyout alex-key.pem

Then start the program:

gemini-chat --cert=alex-cert.pem --key=alex-key.pem \
  --listen=gemini://transjovian.org/do/chat/listen \
  --say=gemini://transjovian.org/do/chat/say

Or, if you use a client like Lagrange, just open two tabs:

Currently I’m using the client certificate’s common name as your name on the channel. I think this makes more sense than using the fingerprint, because we humans are confused when we see a bunch of users all called Alex, each representing a different person.
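
Getting at the common name is straightforward when the underlying handle is an IO::Socket::SSL, as it is with Mojo::IOLoop (a sketch, not the actual Phoebe code):

# inside the server's accept callback, $stream is a Mojo::IOLoop::Stream
my $handle = $stream->handle; # an IO::Socket::SSL
my $name = $handle->peer_certificate('commonName'); # e.g. "Alex"
my $fingerprint = $handle->get_fingerprint; # still available if we wanted it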

– Alex 2020-12-05


Sending a timestamp in the HH:MM UTC format as a “keep alive” every five minutes.
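
With Mojo::IOLoop this is just a recurring timer (a sketch; $stream stands for the open connection to a listener):

# send a keep-alive line every five minutes
Mojo::IOLoop->recurring(300 => sub {
  my ($sec, $min, $hour) = gmtime;
  $stream->write(sprintf "%02d:%02d UTC\n", $hour, $min);
});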

– Alex 2020-12-05 09:57 UTC


If somebody knows how to write a nice terminal application that prints input to stdout while keeping a readline prompt going at the bottom, let me know. Right now, that doesn’t work.

– Alex 2020-12-05 10:02 UTC

Add Comment

2020-12-01 Phoebe 2 status

I’m still in the process of rewriting Phoebe to use Mojo::IOLoop instead of Net::Server. Things are slowly getting better. So much so that I decided to install the new code base and run it to serve my sites. It’s definitely unfinished, though.

Titan support is still borked when it comes to editing my Oddmuse based sites: when I use Gemini/Titan to edit this blog, the edit is accepted by Phoebe but not forwarded to Oddmuse.

Titan support works if only Phoebe is involved, i.e. on The Transjovian Council. However, the editing via the web on The Transjovian Council doesn’t work!

It’s trade-offs all over. 🙂

Comments on 2020-12-01 Phoebe 2 status

Oooh, I’m very excited!! Of course, I /still/ need to learn Perl so I can hack away .. maybe I can become a Real Contributor in time for Phoebe 3 🙂

– acdw


I’m hoping to collect code examples to make minor tweaks. Should you want anything, let me know.

While the new code isn’t released yet, the “plugins” (not really, not yet) can be found in the contrib directory of the mojo branch of the Phoebe git repository. Hopefully that branch is going to be the new main branch. 🙂

– 2020-12-01 20:54 UTC


Release 2.00 is out! 😍

– 2020-12-03 15:04 UTC


To make tests pass on Windows, I changed the code at the end:

say "This is the client waiting 1s for the server to start on port $port...";
sleep 1;
eval { query_gemini('/'); };
if ($@) { say "One more second..."; sleep 1; eval { query_gemini('/') }}
if ($@) { say "Just one more second..."; sleep 1; eval { query_gemini('/') }}
if ($@) { say "Another second..."; sleep 1; eval { query_gemini('/') }}
if ($@) { say "One last second..."; sleep 1; eval { query_gemini('/') }}

– 2020-12-04 09:33 UTC


Regarding international domain names, @makeworld left me this link:

– Alex 2020-12-10 21:25 UTC

Add Comment

2020-11-13 Phoebe is on CPAN

I think this is my first CPAN contribution. I added the Gemini wiki Phoebe.

Based on the recommendations of @wim_v12e and @e1e0 I tried to set it all up using Dist::Milla, but in the end the approach based on How to upload a script to CPAN by David Farrell (2016) just worked, without befuddling me with magic that I did not comprehend. OK, to be honest, I also forgot to include the tests in my first release, which means that the test results were “unknown” or “not available” – no wonder, right?

Some of the things I didn’t understand about Dist::Milla: it reportedly couldn’t extract the license from the lib, and there was no documentation of how it worked. I finally discovered that it was using Software::LicenseUtils by reading the source, and even then I couldn’t get it to recognise the license unless I put “afford g” in the file. And when one of the tests failed with “milla test”, I couldn’t reproduce it: even if I entered the latest build directory itself and reran the tests, it worked. Too much magic under the hood.

Comments on 2020-11-13 Phoebe is on CPAN

Hm. Something is still wrong. Version 1.1 is considered to be the latest one, and 1.1.1 and 1.1.2 are considered to be lower somehow? I’m going to schedule 1.1 for deletion and then we’ll see whether 1.1.2 ends up being the “current” version.

– 2020-11-14 12:20 UTC


Apparently I made a beginner’s mistake: “That’s obvious, but wrong; version specifiers aren’t numbers. To work correctly, you must quote them …”
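
The classic illustration (mine, not from the advice quoted above):

our $VERSION = 1.1.2;   # oops: a v-string, i.e. the three characters "\x01\x01\x02"
our $VERSION = '1.1.2'; # a string, which the CPAN toolchain sorts as intended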

– 2020-11-14 20:49 UTC


Here we go, 1.20 is released! 😅

– 2020-11-14 21:18 UTC


And the “CPAN Testers matrix” reports a ton of errors because I didn’t declare File::Slurper in Makefile.PL. Oh well, here goes 1.21!

– 2020-11-14 22:47 UTC


I wonder how one would determine the actual minimum versions of dependencies except through bitter experience...

– 2020-11-15 11:11 UTC


Sadly, on some platforms the SSL certificate generation is now the point where tests are failing:

Generating a 2048 bit EC private key
writing new private key to 't/key.pem'
-----
Could not finalize SSL connection with client handle (SSL accept attempt failed error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher)
Cannot construct client socket: SSL connect attempt failed error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure at ./t/test.pl line 114.

I guess the problem is that I need to specify that a certain openssl version must be around, but only when testing?

– 2020-11-15 12:04 UTC


OK, apparently “some platforms” means RHEL7. It uses OpenSSL 1.0 and thus has no access to TLSv1.3. I’m not sure this is the real culprit, though: IO::Socket::SSL seems to share at least some of the ciphers! The OpenSSL wiki makes me think that perhaps there is something else at work. I’ve added more debugging info to release 1.22 and will try again.

– 2020-11-15 19:28 UTC


Yeah, SSL is complicated. But it’s no good to make broad and sweeping statements about complexity. The problem of encryption and certificates is easy to understand at a high level; it’s when you get down to the complexities of the openssl command line, the protocol details (deprecation of some versions), the vague definitions of common name and alternative subjects…

All of this, from a distance, is unnecessary complexity: uncalled-for weight explained by history, legacy, and so on.

It’s not an inherent complexity that necessarily needs to crop up somewhere.

Or perhaps that just goes to show that cryptography is a field with an inherent dynamic that I don’t fully understand, which necessarily leads to this bewildering complexity.

And that’s not even talking about the various implementation details. People say that the Python part is tricky; I don’t know. But the framework I was using in my Perl code had a bug that only appeared when handling client certificates, which forced me to delve into the Perl framework, the OpenSSL bindings, and the OpenSSL documentation, and by the end of it I was ready to burn it all down.


Test results

At last! The automatic testing done by Slaven Rezić is all green. Now I can rest. 🙂

Clicking through to the actual test results, however, it seems that as luck would have it, no RHEL7 was included...

No RHEL7

– 2020-11-16 08:33 UTC


Ah, RHEL7 results are showing up... and the output produced confirms the verdict on the OpenSSL wiki:

If you use a key or certificate without the OPENSSL_EC_NAMED_CURVE flag (i.e., one that looks like the image on the right), then the SSL connection will fail with the following symptoms: … Server … 140339533272744:error:1408A0C1:SSL routines:SSL3_GET_CLIENT_HELLO:no shared cipher:s3_srvr.c:1353:

For reference, what I’m seeing is:

1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher

I think what I’ll do is I’ll either skip the tests if the OpenSSL version is too old, or not use elliptic curves.
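
A sketch of the first option, assuming Net::SSLeay is available (it comes along with IO::Socket::SSL anyway):

use Test::More;
use Net::SSLeay;
# skip everything on OpenSSL 1.0.x and older, e.g. RHEL7
plan skip_all => 'OpenSSL too old: ' . Net::SSLeay::SSLeay_version(0)
  if Net::SSLeay::SSLeay() < 0x10100000;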

– 2020-11-16 18:02 UTC

Add Comment

2020-08-29 Restarting Phoebe

I have trouble restarting Phoebe and other services I run. I guess I don’t understand how these forking processes work. These processes fork for every request they get, and that works. When they get a lot of requests, however, they enter some sort of failed state. And by “a lot” I mean more than a hundred or so. I know that it’s not a lot but I want these services to work for the small net and so I don’t want to spend any effort in trying to make them handle more. I don’t want to start caching HTTP requests, for example. What irks me most of all is that this doesn’t happen because hundreds of visitors want to know about my stuff. No, I share a link on Mastodon, and my post gets federated, and then every single server on the fediverse tries to get a preview image to display.

I’ve solved this issue for all my services behind Apache by blocking all the fediverse user agents, but Phoebe runs without a web server front-end. I’ve tried to abort as soon as possible, using the same regular expression, but that doesn’t seem to work.

But I have a second layer of defence: Monit watches over my processes. Please forgive the huge start program option. Maybe it’s time to move some of that into a config file. Please just scroll down. I also didn’t want to shorten it, because I think it’s an interesting snapshot of a non-trivial Phoebe setup.

Let’s go through this.

First we have a PID file, where the process ID is going to be. This is how Monit identifies the parent process responsible for the service. The --pid_file option is what tells Phoebe to write the same file. So far so good.

check process phoebe with pidfile /home/alex/farm/phoebe.pid
    start program = "/usr/bin/perl -I/home/alex/phoebe/lib /home/alex/farm/phoebe
 --setsid --user=alex --group=alex
 --log_level=3 --log_file=/home/alex/farm/phoebe.log
 --pid_file=/home/alex/farm/phoebe.pid
 --wiki_dir=/home/alex/phoebe
 --host=transjovian.org --cert_file=/var/lib/dehydrated/certs/transjovian.org/fullchain.pem --key_file=/var/lib/dehydrated/certs/transjovian.org/privkey.pem
 --host=toki.transjovian.org --cert_file=/var/lib/dehydrated/certs/transjovian.org/fullchain.pem --key_file=/var/lib/dehydrated/certs/transjovian.org/privkey.pem
 --host=vault.transjovian.org --cert_file=/var/lib/dehydrated/certs/transjovian.org/fullchain.pem --key_file=/var/lib/dehydrated/certs/transjovian.org/privkey.pem
 --host=communitywiki.org --cert_file=/var/lib/dehydrated/certs/communitywiki.org/fullchain.pem --key_file=/var/lib/dehydrated/certs/communitywiki.org/privkey.pem
 --host=alexschroeder.ch --cert_file=/var/lib/dehydrated/certs/alexschroeder.ch/fullchain.pem --key_file=/var/lib/dehydrated/certs/alexschroeder.ch/privkey.pem
 --host=next.oddmuse.org --cert_file=/var/lib/dehydrated/certs/oddmuse.org/fullchain.pem --key_file=/var/lib/dehydrated/certs/oddmuse.org/privkey.pem
 --wiki_main_page=Welcome --wiki_pages=About
 --wiki_mime_type=image/png --wiki_mime_type=image/jpeg
 --wiki_mime_type=audio/mpeg
 --wiki_space=transjovian.org/test
 --wiki_space=transjovian.org/phoebe
 --wiki_space=transjovian.org/gemini"

OK, with that out of the way, let’s talk about the important stuff: stopping and restarting the process, and determining when to restart the process.

    # leave enough time after a stop for the server to recover before starting
    stop program = "/bin/bash -c 'kill -s SIGKILL `cat /home/alex/farm/phoebe.pid`; sleep 120'"
    if failed
        host transjovian.org
        port 1965
        type tcpssl
        send "gemini://transjovian.org:1965/\r\n"
        expect "20 .*"
        for 5 cycles
        then restart
    if totalmem > 100 MB for 5 cycles then restart
    if 6 restarts within 15 cycles then stop

Monit checks the service using a regular request, once every cycle (5min). If it fails five times in a row (25min), it restarts. It also restarts when total memory exceeds 100MB five times in a row. And when it has had to restart six times within 15 cycles (75min), the service gets stopped.

What happens on a restart? First the program is stopped and then it is started. But yesterday for example:

[CEST Aug 29 00:39:05] error    : 'phoebe' total mem amount of 190.0 MB matches resource limit [total mem amount>100 MB]
[CEST Aug 29 00:39:05] info     : 'phoebe' trying to restart
[CEST Aug 29 00:39:05] info     : 'phoebe' stop: '/bin/bash -c kill -s SIGKILL `cat /home/alex/farm/phoebe.pid`; sleep 120'
[CEST Aug 29 00:39:35] info     : 'phoebe' start: '/usr/bin/perl -I/home/alex/phoebe/lib /home/alex/farm/phoebe --setsid --user=alex --group=alex --log_level=3 --log_file=/home/alex/farm/phoebe.log --pid_file=/home/alex/farm/phoebe.pid --wiki_dir=/home/alex/phoebe --host=transjovian.org --cert_file=/va...'
[CEST Aug 29 00:44:42] error    : 'phoebe' process is not running
[CEST Aug 29 00:44:42] info     : 'phoebe' trying to restart
[CEST Aug 29 00:44:42] info     : 'phoebe' start: '/usr/bin/perl -I/home/alex/phoebe/lib /home/alex/farm/phoebe --setsid --user=alex --group=alex --log_level=3 --log_file=/home/alex/farm/phoebe.log --pid_file=/home/alex/farm/phoebe.pid --wiki_dir=/home/alex/phoebe --host=transjovian.org --cert_file=/va...'
[CEST Aug 29 00:49:57] error    : 'phoebe' process is not running
[CEST Aug 29 00:49:57] info     : 'phoebe' trying to restart
[CEST Aug 29 00:49:57] info     : 'phoebe' start: '/usr/bin/perl -I/home/alex/phoebe/lib /home/alex/farm/phoebe --setsid --user=alex --group=alex --log_level=3 --log_file=/home/alex/farm/phoebe.log --pid_file=/home/alex/farm/phoebe.pid --wiki_dir=/home/alex/phoebe --host=transjovian.org --cert_file=/va...'
[CEST Aug 29 00:55:02] error    : 'phoebe' process is not running
[CEST Aug 29 00:55:02] info     : 'phoebe' trying to restart
[CEST Aug 29 00:55:02] info     : 'phoebe' start: '/usr/bin/perl -I/home/alex/phoebe/lib /home/alex/farm/phoebe --setsid --user=alex --group=alex --log_level=3 --log_file=/home/alex/farm/phoebe.log --pid_file=/home/alex/farm/phoebe.pid --wiki_dir=/home/alex/phoebe --host=transjovian.org --cert_file=/va...'
[CEST Aug 29 01:00:12] error    : 'phoebe' process is not running
[CEST Aug 29 01:00:12] info     : 'phoebe' trying to restart
[CEST Aug 29 01:00:12] info     : 'phoebe' start: '/usr/bin/perl -I/home/alex/phoebe/lib /home/alex/farm/phoebe --setsid --user=alex --group=alex --log_level=3 --log_file=/home/alex/farm/phoebe.log --pid_file=/home/alex/farm/phoebe.pid --wiki_dir=/home/alex/phoebe --host=transjovian.org --cert_file=/va...'
[CEST Aug 29 01:05:19] error    : 'phoebe' process is not running
[CEST Aug 29 01:05:19] info     : 'phoebe' trying to restart
[CEST Aug 29 01:05:19] info     : 'phoebe' start: '/usr/bin/perl -I/home/alex/phoebe/lib /home/alex/farm/phoebe --setsid --user=alex --group=alex --log_level=3 --log_file=/home/alex/farm/phoebe.log --pid_file=/home/alex/farm/phoebe.pid --wiki_dir=/home/alex/phoebe --host=transjovian.org --cert_file=/va...'
[CEST Aug 29 01:10:25] error    : 'phoebe' service restarted 6 times within 6 cycles(s) - stop

Why isn’t the process running? Here’s a selection from the other log:

2020/08/29-00:39:38 App::Phoebe (type Net::Server::Fork) starting! pid(19496)
2020/08/29-00:39:38 Cannot connect to SSL port 1965 on 178.209.50.237 [Address already in use]
2020/08/29-00:44:43 App::Phoebe (type Net::Server::Fork) starting! pid(20808)
2020/08/29-00:44:43 Cannot connect to SSL port 1965 on 178.209.50.237 [Address already in use]
2020/08/29-00:49:59 App::Phoebe (type Net::Server::Fork) starting! pid(22125)
2020/08/29-00:49:59 Cannot connect to SSL port 1965 on 178.209.50.237 [Address already in use]
2020/08/29-00:55:05 App::Phoebe (type Net::Server::Fork) starting! pid(23449)
2020/08/29-00:55:05 Cannot connect to SSL port 1965 on 178.209.50.237 [Address already in use]
2020/08/29-01:00:15 App::Phoebe (type Net::Server::Fork) starting! pid(27704)
2020/08/29-01:00:15 Cannot connect to SSL port 1965 on 178.209.50.237 [Address already in use]
2020/08/29-01:05:21 App::Phoebe (type Net::Server::Fork) starting! pid(15897)
2020/08/29-01:05:21 Cannot connect to SSL port 1965 on 178.209.50.237 [Address already in use]

When I returned to the server this morning, that was the state it was in, and when I tried to restart it, same problem. There was a process still running, but the PID file was gone, so Monit couldn’t stop the process – and the process was still holding the port without serving any requests. I had to find it by hand:

ps aux | grep phoebe

So what’s the best solution, here? I’m thinking of a variant of “killall”, perhaps? The example below uses “[p]hoebe” to keep the grep command from listing itself.

ps aux | grep '[p]hoebe' | awk '{print $2}' | xargs kill
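
For what it’s worth, pkill should do the same in one step (untested here):

pkill -f phoebe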

What do you think?

Add Comment
