Comments on 2020-12-22 Crawling

Some more data, now that I’m looking at my logs. These are the top hits on my sites via Phoebe:

  1         Amazon  1062
  2    OVH Hosting   929
  3         Amazon   912
  4         Amazon   730
  5         Amazon   653
  6         Amazon   482
  7         Amazon   284
  8         Amazon   188
  9        Hetzner   171
 10         Amazon   129
 11    OVH Hosting    55

Not a single human in sight, as far as I can tell. Crawlers crawling everywhere.

– Alex 2020-12-23 00:19 UTC


I installed the “surge protection” I’ve been using for Oddmuse, too: if you make more than 20 requests in 20 seconds, you get banned for ever-increasing periods. Hey, I’m using Gemini status 44 at long last!
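
The idea, as a sketch – this is not Phoebe’s actual code; handle_request, LIMIT, WINDOW and the doubling ban durations are all made up for illustration:

#!/usr/bin/env perl
use Modern::Perl;
use constant { LIMIT => 20, WINDOW => 20 };
my (%requests, %banned, %violations);

# return a Gemini response if the client is over the limit, or nothing
sub handle_request {
  my ($ip, $now) = @_;
  # still banned? status 44 tells the client how many seconds to wait
  return '44 ' . ($banned{$ip} - $now) . "\r\n" if ($banned{$ip} // 0) > $now;
  # remember this request and forget everything older than WINDOW seconds
  my $log = $requests{$ip} //= [];
  push @$log, $now;
  shift @$log while @$log and $log->[0] <= $now - WINDOW;
  if (@$log > LIMIT) {
    # every new violation doubles the ban: 60s, 120s, 240s, …
    my $ban = 60 * 2 ** ($violations{$ip} // 0);
    $violations{$ip}++;
    $banned{$ip} = $now + $ban;
    return "44 $ban\r\n";
  }
  return;
}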

I’m thinking about checking whether the last twenty URIs requested are “plausible” – if somebody is requesting a lot of HTML pages, or raw pages, then that’s a sign of a crawler just following all the links and perhaps that deserves to get banned even if it’s slow enough.
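A sketch of what such a check might look like; the /html/ and /raw/ path prefixes are just my assumption of what the suspicious requests would look like:

#!/usr/bin/env perl
use Modern::Perl;

# given the last twenty URIs a client requested, guess whether it is a crawler
sub looks_like_a_crawler {
  my @uris = @_;
  # a human follows one or two of these links; a crawler follows all of them;
  # the /html/ and /raw/ prefixes are assumptions about the URL layout
  my $suspicious = grep { m!/(html|raw)/! } @uris;
  return $suspicious > @uris / 2;
}

say looks_like_a_crawler(map { "gemini://alexschroeder.ch/html/Page_$_" } 1 .. 20)
    ? 'crawler' : 'human';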

– Alex 2020-12-23 00:25 UTC


I don’t want it for Gemini, but Gemini is part of the greater Internet, so I have to deal with autonomous agents. If I didn’t, I wouldn’t have a Gemini server (or a gopher server, or a web server, or ...). Are you familiar with King Canute?

– Sean Conner 2020-12-23 06:22 UTC


Yeah, it’s true: we’re out on the open Internet and therefore we always have to defend against bots and crawlers, and I hate it. As for Cnut, he knew the tide was coming in and knew that he was powerless to command it. Yet he didn’t drown: he didn’t build his house where the tide would wash it away, nor plant his fields where they would drown. In the same way, I don’t feel obligated to welcome the crawling tide, to accommodate the creators of the crawling tide, or to bow respectfully as the crawlers eat my CPU and produce more CO₂. Instead, I will build fences to hold back the crawlers, rebuke their creators, and tell anybody who thinks that building autonomous agents to crawl the net is the solution to their problem that either their problem does not need solving, or that their solution is lazy and they should try harder.

I liked it better when I wrote emails back and forth to the creator of the only crawler.

Perhaps I should write up a different proposal.

To add your site to this new search engine, you provide the URL of your own index. The index is a gzipped Berkeley DB where the keys are words (stemming and all that is optional on the search engine side; the index does not have to do this) and the values are URIs. Furthermore, the URIs themselves are also keys, with the values being the ISO language code. I’d have to check how well that works, since I know nothing of search engines.
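
Here’s a sketch of what I imagine, using DB_File and IO::Compress::Gzip; the pages, URIs and the language code are all made up, and a real indexer would have to normalise and deduplicate far more carefully:

#!/usr/bin/env perl
use Modern::Perl;
use DB_File;
use IO::Compress::Gzip qw(gzip $GzipError);

# made-up pages and words standing in for a real site
my %pages = (
  'gemini://alexschroeder.ch/page/Alex' => [qw(alex wrote some text)],
  'gemini://alexschroeder.ch/page/Bots' => [qw(bots crawl the text)],
);

# collect word → URIs in memory first so we can deduplicate
my %words;
for my $uri (keys %pages) {
  $words{$_}{$uri} = 1 for @{$pages{$uri}};
}

# write the Berkeley DB: every word maps to a space-separated list of URIs,
# and every URI maps to its ISO language code
my %index;
tie %index, 'DB_File', 'index.db' or die "Cannot create index.db: $!";
$index{$_} = join ' ', sort keys %{$words{$_}} for keys %words;
$index{$_} = 'en' for keys %pages;
untie %index;

# gzip the result; this is the file whose URL I would submit
gzip 'index.db' => 'index.db.gz' or die "gzip failed: $GzipError";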

Even if the search engine wants to do trigram search, it can still do so, I think: the trigrams can be derived from the words in the index.
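
A quick sketch of what I mean, using a made-up word list in place of the word keys the search engine would get from a submitted index:

#!/usr/bin/env perl
use Modern::Perl;

# made-up word list standing in for the word keys of a submitted index
my @words = qw(crawl crawler crawling tide);

# map each trigram to the words that contain it
my %trigrams;
for my $word (@words) {
  next if length $word < 3;
  $trigrams{ substr($word, $_, 3) }{$word} = 1 for 0 .. length($word) - 3;
}

# every word sharing a trigram with the query is a candidate match
say join ' ', sort keys %{$trigrams{'raw'}};   # crawl crawler crawling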

If we don’t want to tie ourselves down, we could use a simple gemtext format:

=> URI all the unique words separated by spaces in any order

If the language is very important, we could take it from the header. I still think compression is probably important, so I’d say we use something like “text/gemini+gzip; lang=de-CH; charset=utf-8”.

Let’s give this a quick try:

#!/usr/bin/env perl
use Modern::Perl;
use File::Slurper qw(read_dir read_text);
use URI::Escape;
binmode STDOUT, ":utf8";
my $dir = shift or die "No directory provided\n";
my @files = read_dir($dir);
for my $file (@files) {
  my $data = read_text("$dir/$file");
  my %result;
  # parsing Oddmuse data files like mail or HTTP headers
  while ($data =~ /(\S+?): (.*?)(?=\n[^ \t]|\Z)/gs) {
    my ($key, $value) = ($1, $2);
    $value =~ s/\n\t/\n/g;
    $result{$key} = $value;
  }
  # skip files that have no text
  my $text = $result{text};
  next unless $text;
  # collect the unique words of the page
  my %words;
  $words{$_}++ for $text =~ /\w+/g;
  # the page name is the file name without the .pg extension
  my $id = $file;
  $id =~ s/\.pg$//;
  $id = uri_escape($id);
  say "=> gemini://alexschroeder.ch/page/$id " . join(" ", keys %words);
}

Running it on a backup copy of my site:

index ~/Documents/Sibirocobombus/home/alex/alexschroeder/page \
| gzip > alexschroeder.gmi.gz

“ls -lh alexschroeder.gmi.gz” tells me the resulting file is 149MB in size and “zcat alexschroeder.gmi.gz | wc -l” tells me it has 8441 lines.

I would have to build a proof of concept search engine to check whether this is actually a reasonable format for self-indexing and submitting indexes to search engines.
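
A first step towards such a proof of concept might look like this sketch: it reads the gemtext index from standard input and answers a single-word query.

#!/usr/bin/env perl
use Modern::Perl;
binmode STDIN, ':utf8';
binmode STDOUT, ':utf8';

# read the gemtext index from standard input and build an inverted index
my %index;   # word → URIs of the pages containing it
while (<STDIN>) {
  next unless /^=>\s*(\S+)\s+(.+)/;
  my ($uri, $words) = ($1, $2);
  push @{$index{lc $_}}, $uri for split ' ', $words;
}

# answer a single-word query given on the command line
my $query = lc(shift // '');
say for @{$index{$query} // []};

Assuming the sketch is saved as “search”, something like “zcat alexschroeder.gmi.gz | ./search crawler” should then print the URIs of all the pages containing that word.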

– Alex 2020-12-23 14:44 UTC

