Diary

Welcome! 🙂

This is both a wiki (a website editable by all) and a blog (an online diary about the stuff Alex Schroeder reads and does). If you’re a friend or relative, you might be interested in reading Life instead of this page. If you’ve come here from an RPG blog, you might want to head over to RPG. There are other similar categories to be found on the SiteMap.

For role-players, there is also a dedicated RSP category.

2021-01-16 How to use Gemtext for writing

Here’s something I do quite often: I talk about a thing, I quote a longer passage somebody else wrote, and I link to the source, hopefully naming the author and title, or something like it. How do you do that, given the tools that Gemini markup (also known as gemtext) provides, and how do you render it on your client? Screenshots, HTML, CSS, plain text, it’s all good. How do you write it, and how do you want it to look?

When I write for the web, I know what it looks like:

This is the quote. – _Title of the piece_, by the named author

“Title of the piece” is a link.

If I were to use gemtext, this is what I’d write:

> This is the quote.
=> Title of the piece, by the named author

Unfortunately, viewed via the web, this looks weird:

    This is the quote.
 • _Title of the piece, by the named author_

Perhaps I can improve this using simple CSS?

Sadly, it doesn’t look too good using Elpher, either.

> This is the quote.

→ _Title of the piece, by the named author_

Perhaps the look would be improved if I didn’t treat these links as list items but simply as link “paragraphs”? I don’t know.
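
One way to get the rendering I want on the web would be to special-case this quote-then-link pattern when converting gemtext to HTML. A rough sketch, not the actual wiki code (the HTML it emits is invented for illustration):

use Modern::Perl;

# Sketch only: turn a gemtext quote line followed by a link line into the web
# rendering I’d like (quote, then a dash and the link). The HTML produced here
# is made up for illustration and is not what the wiki actually emits.
sub quote_with_source {
  my ($quote, $url, $text) = @_;
  return qq(<blockquote>$quote – <a href="$url">$text</a></blockquote>\n);
}

my $gemtext = <<'EOT';
> This is the quote.
=> gemini://example.org/piece Title of the piece, by the named author
EOT

if ($gemtext =~ /^> (.*)\n=> (\S+) (.*)$/m) {
  print quote_with_source($1, $2, $3);
}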

Comments on 2021-01-16 How to use Gemtext for writing

Here’s how I do it:

=> protocol://link something by someone:
> Quote excerpt quote excerpt

That colon is important!

TimurIsmagilov 2021-01-16 19:34 UTC


Convert those lines differently. A link line followed by a contiguous set of quote lines: transpose, delete the newline, add the em dash. Other link lines render as li a.

– Sandra Snan 2021-01-16 22:44 UTC

Add Comment

2021-01-16 Facebook and WhatsApp

“Is there a good link to a simple text in German or English that explains why Facebook cannot be trusted, for a layperson?” I asked this question on Mastodon and got back a ton of links. I’ll use the opportunity to start a link collection.

@ColinTheMathmo, @deejoe, and @dredmorbius linked that story where a company is analysing your purchasing history and ends up knowing that a woman is pregnant before she tells her family.

Target Knows You're Pregnant, by Kelly Bourdet, for VICE: “A man walks into a Target store outside Minneapolis and angrily confronts the store manager. He shows him an advertisement that was sent to his high school daughter, filled with maternity clothing and baby items. The perplexed manager apologizes and even calls the man at home the following week to further apologize for the advertising faux pas. However, when he does the man sheepishly admits he’s found out that his daughter is, in fact, pregnant.”

@alienghic linked an opinion piece. I’m not quite sure I agree with the exact wording of the conclusion. I’m not convinced the business model actually works. For the purposes of explaining Facebook’s monopolist power, however, that doesn’t matter. All that matters is that the advertisers think it works.

Facebook’s Ad Scandal Isn’t a ‘Fail,’ It’s a Feature, by Zeynep Tufekci, for The New York Times. “Facebook has become the go-to site for anyone hoping to reach a big audience — whether to sell shoes or to sell politics, and it’s become profitable by doing so. That is because most of its systems are either largely or entirely automated. This lets the site scale up — it is up to two billion monthly users now — and keeps costs down … — ad-targeting through deep surveillance, emaciated work force, automation and the use of algorithms to find and highlight content that entice people to stay on the site or click on ads or share pay-for-play messages — works.”

@deejoe reminded me of this experiment, where somebody tries to live without the “big five” tech giants out of their life. It’s hard.

I Cut the 'Big Five' Tech Giants From My Life. It Was Hell. Week 6: Blocking them all, by Kashmir Hill, for Gizmodo. “The tech giants laid down all the basic infrastructure for our data to be trafficked. They got us to put our information into public profiles, to carry tracking devices in our pockets, and to download apps to those tracking devices that secretly siphon data from them. … The tech giants were long revered for making the world more connected, making information more accessible, and making commerce easier and cheaper. Now, suddenly, they are the targets of anger for assisting the spread of propaganda and misinformation, making us dangerously dependent on their services, and turning our personal information into the currency of a surveillance economy.”

@deejoe also recommended Shoshana Zuboff’s book, The Age of Surveillance Capitalism. I must confess that I haven’t read it. All I’ve read are comments here and there, and I’ve seen her in the movie The Social Dilemma (2020).

@maikek linked the Wikipedia pages collecting the criticism of Facebook: Kritik an Facebook, and Criticism of Facebook. Just look at a selection of the table of contents:

  • Data mining
  • Inability to voluntarily terminate accounts
  • Photo recognition and face tagging
  • Tracking of non-members of Facebook
  • Facebook and Cambridge Analytica data scandal
  • Sharing private messages and contacts’ details without consent
  • Denial of location privacy, regardless of user settings
  • Health data from apps sent to Facebook without user consent
  • International lobbying against privacy protections
  • Unencrypted password storage
  • Promotion of service as “free”
  • Providing ads through “snooping”

Uuuuugh. 🤮

@jamesmullarkey linked a report from Amnesty International. Facebook and Google’s pervasive surveillance poses an unprecedented danger to human rights. “As a first step, governments must enact laws to ensure companies including Google and Facebook are prevented from making access to their service conditional on individuals ‘consenting’ to the collection, processing or sharing of their personal data for marketing or advertising. Companies including Google and Facebook also have a responsibility to respect human rights wherever and however they operate.”

The report is available for download at the end of the article. In the executive summary: “But despite the real value of the services they provide, Google and Facebook’s platforms come at a systemic cost. The companies’ surveillance-based business model forces people to make a Faustian bargain, whereby they are only able to enjoy their human rights online by submitting to a system predicated on human rights abuse. Firstly, an assault on the right to privacy on an unprecedented scale, and then a series of knock-on effects that pose a serious risk to a range of other rights, from freedom of expression and opinion, to freedom of thought and the right to non-discrimination.”

Indeed: “This isn’t the internet people signed up for.” 🤮

@jamesmullarkey also reminded me of something that I had practically forgotten. A Genocide Incited on Facebook, With Posts From Myanmar’s Military, by Paul Mozur, for The New York Times. “The Facebook posts were not from everyday internet users. Instead, they were from Myanmar military personnel who turned the social network into a tool for ethnic cleansing, according to former military officials, researchers and civilian officials in the country. Members of the Myanmar military were the prime operatives behind a systematic campaign on Facebook that stretched back half a decade and that targeted the country’s mostly Muslim Rohingya minority group, the people said.”

@ashwinvis linked me Richard Stallman’s Reasons not to be used by Facebook page which led me to Get your loved ones off Facebook. “Through its labyrinth of re-definitions of words like ‘information’, ‘content’ and ‘data’, you’re allowing Facebook to collect all kinds of information about you and expose that to advertisers. With your permission only they say, but the definition of ‘permission’ includes using apps and who knows what else.”

Facebook Is a Doomsday Machine, by Adrienne LaFrance, for The Atlantic. “In the days after the 2020 presidential election, Zuckerberg authorized a tweak to the Facebook algorithm so that high-accuracy news sources … would receive preferential visibility in people’s feeds, and hyper-partisan pages … would be buried, … offering proof that Facebook could, if it wanted to, turn a dial to reduce disinformation—and offering a reminder that Facebook has the power to flip a switch and change what billions of people see online.”

@trianderror linked resources in German, collected by Netzcourage.

@dredmorbius linked Why Zuckerberg’s 14-Year Apology Tour Hasn’t Fixed Facebook, by Zeynep Tufekci, for Wired. “By 2008, Zuckerberg had written only four posts on Facebook’s blog: Every single one of them was an apology or an attempt to explain a decision that had upset users. … I’m going to run out of space here, so let’s jump to 2018 and skip over all the other mishaps and apologies and promises to do better … in the intervening years.”

@michel_slm mentioned Does Facebook Use Sensitive Data for Advertising Purposes? by José González Cabañas, Àngel Cuevas, Aritz Arrate, Rubén Cuevas, in Communications of the ACM. “we demonstrated that Facebook (FB) labels 73% of users within the EU with potentially sensitive interests (referred to as ad preferences as well), which may contravene the GDPR. FB assigns user’s different ad preferences based on their online activity within this social network. Advertisers running ad campaigns can target groups of users that have been assigned a particular ad preference (for example, target FB users interested in Starbucks). Some of these ad preferences may suggest political opinions (for example, Socialist party), sexual orientation (for example, homosexuality), personal health issues (for example, breast cancer awareness), and other potentially sensitive attributes. In the vast majority of the cases, the referred sensitive ad preferences are inferred from the user behavior in FB without obtaining explicit consent from the user. Then advertisers may reach FB users based on ad preferences tightly linked to sensitive information.”

I might be adding to this list in the future.

Add Comment

2021-01-15 Traveller Madness

Yikes, in a moment of madness I bought the two Bundles of Holding for Classic Traveller. I’d say I’m set for life when it comes to Traveller, now. 😅

Maybe I should think about running a game. Slowly my itch to run games is coming back. I’m thinking that perhaps I’d like to try smaller groups, and something other than Halberds and Helmets for a bit.

Option 1, of course, is Classic Traveller. I still have an old Subsector lying around. The benefit of reusing old setting material is that you get an in-game setting history and plenty of non-player characters for free, buried in the campaign wiki.

Option 2, as is often the case, would be something like Burning Wheel, mostly because Judd Karlman keeps mentioning it in his podcast. There are many clunky bits I don’t particularly look forward to, but sometimes I remember elements of that one mini-campaign I ran, and I really liked those: rolling for contacts using the circles attribute; rolling to buy stuff using the resources attribute; one-roll conflict resolution, also known as Bloody Versus.

Then again, when I look at the list of things I enjoyed, perhaps it’s actually not in the rules per se, but in the approach to the game… I keep thinking that a way to play it using Just Halberds should be possible.

Perhaps the first thing to do is to find two people willing to play a game with me, via a conference call, and then get excited about something through mutual inspiration.

Comments on 2021-01-15 Traveller Madness

Have you listened to the podcasts with Sean Nittner and Judd as they play Burning Wheel? It’s been wonderful listening to how they use the rules of BW to facilitate the conversation of the game.

In the first four sessions there are two duels of wits and one circles test. Most of it is conversations at the table.

Jeremy Friesen 2021-01-15 21:59 UTC


No, do you have a link?

– Alex 2021-01-16 00:19 UTC


It’s at: https://anchor.fm/actualplay

The Shoeless Peasant is the BW campaign. There’s also a Youtube or Twitch stream for this.

Jeremy Friesen 2021-01-16 15:01 UTC

Add Comment

2021-01-14 Player Types

I posted a link to a blog post about player types on Mastodon and added: “Thinking about play styles… I dunno, I used to be a lot more interested in these things. These days I guess everybody should read such a typology at least once so that we can all at least use the same words. But then my take is that enjoyment is a multifaceted thing: I enjoy all the things, some more so, some less so, some more when I do them, some more when others do them. And I think casual players are important for cohesion (like at the workplace, actually).”

Happy surprise, @jaranta is a game scholar! “Post-doc at JYU & Coe GameCult. Specialized in philosophy, game studies and digital culture.” Some of my favourite subjects. Jonne told me that actual studies do exist. We don’t need to rely on our own individual observations (valuable as they may be). So now I’m reading a study. 🙂

Add Comment

2021-01-11 Hex Describe

I’ve been at it again... Text Mapper can generate Gridmapper-compatible maps. Sadly, that means I need an algorithm to create the maps, and that’s not easy. Recently, I needed some caves. What I did was take the existing mini-dungeon generator and just turn all the walls into rock walls. It doesn’t look bad, actually!

I’m using them for the post-apocalyptic mini-settings I’ve been working on, off and on, with Josh Johnston. Visit Hex Describe → click “random Apocalypse” → pick “Josh Johnston (best for Apocalypse maps)” → click submit. If you’re lucky, you’ll spot some vat people caves, now!

I’m hoping the vat people caves are easy to extend with the structure in place, which I lifted from the structure that ktrey parker and I had developed years ago to generate random dungeons. And it’s still so good!

For the post-apocalypse I had developed a book name generator which I then used to populate libraries, and to create little library-related book quests: finding book thieves, for example. I liked this so much that I wanted it for my Alpine mini-settings, too. So recently I’ve been working on fantasy book titles as well, and missions to go along with them.
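
Just to illustrate the idea (the real thing is a set of Hex Describe tables, not this bit of Perl, and the patterns and topics below are made up):

use Modern::Perl;

# Toy book title generator: the title patterns and topics here are invented
# for this example and are not the actual Hex Describe tables.
my @forms  = ('A Treatise on %s', 'The Incoherence of %s', 'Songs of %s',
              'On the Nature of %s', 'A History of %s');
my @topics = ('the Vat People', 'the Teleporting Monoliths',
              'the School of Fire Magic', 'the Lost Empire');

for (1 .. 3) {
  say sprintf $forms[rand @forms], $topics[rand @topics];
}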

I need to improve those book missions, though.

Next up: more secret agents. 😀

Comments on 2021-01-11 Hex Describe

You may be interested in looking at Caves of Qud for book generation. It’s been a while since I played but while I remember the contents of the books being barely legible (whatever algorithm they were using was not convincing at all) the titles were all very evocative for the post-apocalyptic setting the game features. Dwarf Fortress is similarly a good source to mine for the fantasy generator.

Also, is there a possibility of the Hex Describe books referring to things generated in the mini-setting - perhaps even other books? One of my favourite tidbits of history is the anti-philosophy book “Incoherence of the Philosophers”, which prompted one philosopher to write a book rebutting it titled “Incoherence of the Incoherence”. Generating the context in which books were written could provide plenty of adventure seeds.

– Malcolm 2021-01-13 09:29 UTC


Indeed, referring to things in the setting is important! So currently I’m doing the following:

  • referring to magic schools and elements (that one was easy)
  • golem types that exist in the setting
  • referring to implied past wars based on magic items
  • referring to past empires of the implied setting, which are also used for ruins, wights, and the like
  • using the legend generator that is also used by bards and elsewhere
  • referring to the existing secret societies in the setting

Things I don’t do, but which I should be doing, perhaps:

  • generate a dozen scholars beforehand and have them write on topics (and refute each other) – an excellent idea!
  • instead of generating random elf and dwarf names for magic items, pick them from a pregenerated list of maybe a dozen each and then use the same names to generate books about them
  • refer to the warring factions in the setting

– Alex 2021-01-13 11:32 UTC


As for adventure seeds... perhaps I need some unique magic items, and legends about them, and thus songs and book titles could be a good place to “tell their story” (tiny as it may be). Perhaps other rare elements could be hinted at via book titles, like the passages to other realms. The teleporting monoliths, or the alien space ships come to mind.

– Alex 2021-01-14 13:07 UTC

Add Comment

2021-01-01 Oddmuse memory issues

Not really! It’s Phoebe… 😭

I just noticed that three of my server processes seem to consume a lot of memory. The following three wikis were reporting more than 2G used, each:

  • Community Wiki
  • Emacs Wiki
  • Oddmuse Wiki

Strangely, neither this site nor Campaign Wiki seemed to be affected. I don’t see any performance problems.

Memory issues started in week 51, 2020

I’m suspecting that something else is going on. I restarted the three processes using Hypnotoad, a hot deployment. But htop is still telling me that we’re using a lot of memory. When I sort by memory, however, I see it’s Phoebe at the very top.

Phoebe is consuming 40% of all memory

So now I’m wondering:

  • did I install some memory leaking version of Phoebe in week 51 of 2020?
  • is Phoebe being counted for the three sites instead of counting as Phoebe?

Something to investigate in the upcoming days. For now, I’m going to restart Phoebe.

Memory use is halved, now

I fixed the regular expressions for my Munin setup so that all the command line parameters I’m using to start Phoebe don’t trigger matches for other processes such as Community Wiki, Emacs Wiki, or Oddmuse Wiki.

That still leaves the question: What did I install, what changes did I make around December 14? I have no idea...

Comments on 2021-01-01 Oddmuse memory issues

Yep, Phoebe is at fault.

Memory use increasing for Phoebe only

So now I have to find a way to reproduce this result locally. Ideally we’ll find that there’s a single config file at fault... If I’m unlucky it’s because Phoebe 2 switched from Net::Server to Mojo::IOLoop and I’m somehow not closing the connections correctly, or some reference remains that prevents them from closing 100%. Yikes.

– Alex 2021-01-03 09:57 UTC


OK, I think I have a test setup that works. Let me know if you see an error in my thinking.

The test starts a server in the background, listening to a random port ($port) like all the other tests. I need its process id so that I can determine the memory it uses:

lsof -i:$port -F p

Given the process id ($pid), I can now ask for the memory size of the process:

ps -q $pid -o size

So what I’m doing is this: I get this size after starting the server, then I run 100 requests, and I get the size again. If I run just Phoebe, no problem. If I install the speed bump extension (see 2020-12-25 Defending against crawlers), the size goes up.
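
The shape of the test is roughly this, a sketch rather than the actual t/leak.t (the real test also starts the server itself and differs in the details):

use Modern::Perl;
use Test::More;
use IO::Socket::SSL;

# Sketch of the idea behind t/leak.t, not the real file: compare the process
# size before and after a batch of requests. Assumes a Phoebe server is
# already listening on $port.
my $port = $ENV{PORT} || 1965;

sub process_size {
  my $pid = shift;
  my $size = qx(ps -q $pid -o size=);   # data size in KiB, header suppressed
  $size =~ s/\s+//g;                    # strip whitespace and newline
  return $size;
}

# find the process id of the server listening on the port
my ($pid) = qx(lsof -i:$port -F p) =~ /^p(\d+)/m;

my $before = process_size($pid);

for (1 .. 100) {
  my $socket = IO::Socket::SSL->new(
    PeerHost => 'localhost', PeerPort => $port,
    SSL_verify_mode => SSL_VERIFY_NONE);
  print $socket "gemini://localhost:$port/\r\n";
  local $/ = undef;                     # slurp the whole response
  my $response = <$socket>;
  close($socket);
}

my $after = process_size($pid);
is($after, $before, "No memory lost ($after = $before)");
done_testing();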

Well, I guess I’m fine with it going up a bit since we need to keep some numbers in memory, right? The problem is only truly a problem if the memory keeps going up. Let’s try.

100 requests:

$ make test TEST_FILES=t/leak.t
PERL_DL_NONLAZY=1 "/home/alex/perl5/perlbrew/perls/perl-5.30.0/bin/perl" "-MExtUtils::Command::MM" "-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')" t/leak.t
t/leak.t .. 1/?
#   Failed test 'No memory lost (42524 = 43256)'
#   at t/leak.t line 33.
#          got: '42524'
#     expected: '43256'
# Looks like you failed 1 test of 1.
t/leak.t .. Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/1 subtests

Test Summary Report
-------------------
t/leak.t (Wstat: 256 Tests: 1 Failed: 1)
  Failed test:  1
  Non-zero exit status: 1
Files=1, Tests=1,  2 wallclock secs ( 0.02 usr  0.00 sys +  0.71 cusr  0.07 csys =  0.80 CPU)
Result: FAIL
Failed 1/1 test programs. 1/1 subtests failed.
make: *** [Makefile:940: test_dynamic] Error 1

1000:

#   Failed test 'No memory lost (42528 = 43256)'

10000:

#   Failed test 'No memory lost (42524 = 43256)'

OK, clearly it just stays the same. Hm...

Perhaps it’s time to take a look at Devel::MAT. Too bad Devel::MAT::Dumper doesn’t install on the server. Something about xlocale.h missing. Installing libnewlib-dev (which provides /usr/include/newlib/xlocale.h) appears to have no effect:

Building Devel-MAT-Dumper
cc -I/home/alex/perl5/perlbrew/perls/perl-5.26.1/lib/5.26.1/x86_64-linux/CORE -DVERSION="0.42" -DXS_VERSION="0.42" -fPIC -c -fwrapv -fno-strict-aliasing -pipe -fstack-protector-strong -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -D_FORTIFY_SOURCE=2 -O2 -o lib/Devel/MAT/Dumper.o lib/Devel/MAT/Dumper.c
In file included from lib/Devel/MAT/Dumper.xs:8:
/home/alex/perl5/perlbrew/perls/perl-5.26.1/lib/5.26.1/x86_64-linux/CORE/perl.h:738:13: fatal error: xlocale.h: No such file or directory
 #   include <xlocale.h>
             ^~~~~~~~~~~
compilation terminated.

OK. Before doing anything else, I’m going to use Perlbrew to upgrade Perl 5.26 to 5.32. Then we’ll see about installing Devel::MAT.

– Alex 2021-01-03 17:38 UTC


OK, that worked! I wrote a little extension that allows me to write a heap dump to disk on the server.

Install it in your conf.d directory and set the known fingerprints to the fingerprint of your favourite client certificate in your config file (or edit the heap-dump.pl file):

our @known_fingerprints = qw(
  sha256$54c0b95dd56aebac1432a3665107d3aec0d4e28fef905020ed6762db49e84ee1);

Install the Devel::MAT::Dumper package on your server, and Devel::MAT on your development machine. Restart Phoebe.

Finally, visit the path /do/heap-dump on your Phoebe instance. You’ll be prompted for your client certificate. Point your Gemini client to the one whose fingerprint you added to your config above, and now it should say: “Heap Dump Saved” – copy the file it created to your local machine and examine it, using the user guide linked above as your guide.
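
At its heart, the extension just calls Devel::MAT::Dumper::dump. Here’s a stripped-down sketch, not the actual heap-dump.pl (which also registers the /do/heap-dump route and checks the client certificate against @known_fingerprints):

use Modern::Perl;
use Devel::MAT::Dumper;

# Stripped-down sketch, not the actual heap-dump.pl: write a heap dump of the
# running process and tell the client where it went. The file name is made up.
sub heap_dump {
  my $file = '/tmp/phoebe-' . time . '.pmat';
  Devel::MAT::Dumper::dump($file);
  return "20 text/gemini\r\n" . "Heap Dump Saved to $file\n";
}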

Here’s the output right after I restarted it:

$ pmat phoebe.pmat
Perl memory dumpfile from perl 5.32.0 non-threaded
Heap contains 217470 objects
pmat> largest
HASH(24948) at 0x558c89875548=strtab: 1.7 MiB
HASH(5672) at 0x558c89b3c450: 261.0 KiB
SCALAR(PV) at 0x558c89beaa18: 215.7 KiB
ARRAY(1,!REAL) at 0x558c89899098: 92.5 KiB
HASH(2231) at 0x558c8aac5240: 84.3 KiB
others: 15.2 MiB

I’m going to let it run over night and we’ll see how it does tomorrow.

– Alex 2021-01-03 21:34 UTC


OK, a few hours later...

$ pmat phoebe.pmat
Perl memory dumpfile from perl 5.32.0 non-threaded
Heap contains 313361 objects
pmat> largest
HASH(39010) at 0x558c89875548=strtab: 2.4 MiB
HASH(8940) at 0x558c89b8f2a8: 337.6 KiB
HASH(5672) at 0x558c89b3c450: 261.0 KiB
SCALAR(PV) at 0x558c98eaf6e8: 215.7 KiB
SCALAR(PV) at 0x558c89beaa18: 215.7 KiB
others: 21.0 MiB

That is not very encouraging. There apparently is no single thing that grew in limitless ways. 😢

pmat> count
  Kind        Count (blessed)        Bytes  (blessed)
  ARRAY       11108         7      1.2 MiB  520 bytes
  CODE        10387                1.3 MiB
  GLOB        14796        16      2.1 MiB    2.4 KiB
  HASH        23305      3182      7.4 MiB  849.6 KiB
  INVLIST       655              674.4 KiB
  IO             33        33      5.2 KiB    5.2 KiB
  PAD          7870                1.5 MiB
  REF         41199         1    965.6 KiB   24 bytes
  REGEXP       2503       253    528.0 KiB   53.4 KiB
  SCALAR     200878        12      8.0 MiB 1022 bytes
  STASH         627              801.6 KiB
  -------    ------ ---------    --------- ----------
  (total)    313361      3504     24.5 MiB  912.1 KiB

What I don’t really understand is that the numbers I’m seeing are all around 20 MiB, and yet Munin is telling me that the minimum memory is 97.31 MiB and the maximum is 266.54 MiB (also the current value). So where are the 170 MiB I’m missing?

Hm. Let’s take a look at the two scalar values we saw up in the list of “largest” SVs. Surprisingly, they are exactly the same size. Let’s examine them.

...
SCALAR(PV) at 0x558c98eaf6e8: 215.7 KiB
SCALAR(PV) at 0x558c89beaa18: 215.7 KiB
...

pmat> show 0x558c98eaf6e8
SCALAR(PV) at 0x558c98eaf6e8 with refcount 1
  size 215.7 KiB (220840 bytes)
  PV="// This Source Code Form is subject to the terms of the Mozilla "...
  PVLEN 220798

pmat> show 0x558c89beaa18
SCALAR(PV) at 0x558c89beaa18 with refcount 1
  size 215.7 KiB (220840 bytes)
  PV="// This Source Code Form is subject to the terms of the Mozilla "...
  PVLEN 220798

OK, that is surprising. I don’t recognize this text. Perhaps there are more like it? After all, the count shows 200878 scalars. What’s with the refcount? Who is keeping those references?

pmat> inrefs --all 0x558c98eaf6e8

pmat> inrefs --all 0x558c89beaa18
s  a constant  CODE(PP) at 0x558c89b8f080

OK, nobody is referring to the first one, which is even more surprising. Perhaps it’s about to be garbage collected? The second value gives us a hint:

pmat> show 0x558c89b8f080
CODE(PP) at 0x558c89b8f080 with refcount 1
  size 128 bytes
  named as &IO::Socket::SSL::PublicSuffix::_builtin_data
  no hekname
  stash=STASH(19) at 0x558c89bd0f98
  glob=GLOB(&*) at 0x558c89bea460
  location=/home/alex/perl5/perlbrew/perls/perl-5.32.0/lib/site_perl/5.32.0/IO/Socket/SSL/PublicSuffix.pm line 348
  scope=CODE() at 0x558c89c4f820
  pad[0]=PAD(1) at 0x558c89b8f188

The SSL stuff. 😭 This reminds me of the only test in gemini-diagnostics Phoebe is currently failing: “Server should send a close_notify alert before closing the connection.”

Perhaps that’s my problem. The SSL/TLS connections aren’t closed correctly, so they hang around.

I’ll have to think about this some more.

– Alex 2021-01-04 08:12 UTC


Ten hours later, Munin says 418.04 MiB, and the heap dump says:

pmat> largest
HASH(46066) at 0x558c89875548=strtab: 3.3 MiB
HASH(15996) at 0x558c89b8f2a8: 631.0 KiB
SCALAR(PV) at 0x558c8af5d8a0: 305.0 KiB
HASH(5672) at 0x558c89b3c450: 261.0 KiB
SCALAR(PV) at 0x558c98cc4d48: 232.0 KiB
others: 24.5 MiB

pmat> count
  Kind        Count (blessed)        Bytes  (blessed)
  ARRAY       12116         7      1.3 MiB  520 bytes
  CODE        11395                1.4 MiB
  GLOB        14796        16      2.1 MiB    2.4 KiB
  HASH        30361      3182      9.6 MiB  849.6 KiB
  INVLIST       655              674.4 KiB
  IO             33        33      5.2 KiB    5.2 KiB
  PAD          8878                1.8 MiB
  REF         56319         1      1.3 MiB   24 bytes
  REGEXP       2510       253    529.5 KiB   53.4 KiB
  SCALAR     228143        12      9.7 MiB 1022 bytes
  STASH         627              801.6 KiB
  -------    ------ ---------    --------- ----------
  (total)    365833      3504     29.2 MiB  912.1 KiB

– Alex 2021-01-04 17:40 UTC


OK, let’s look at the ever-growing hash at 0x558c89b8f2a8:

pmat> values 0x558c89b8f2a8
  ...
  {ptr_0x558c8d3d5f40} REF() at 0x558c8d29c850 => HASH(1) at 0x558c8d2ab990
  {ptr_0x558c8d3eabc0} REF() at 0x558c8d2ec190 => HASH(1) at 0x558c8d29fc48
  {ptr_0x558c8d3eb470} REF() at 0x558c8d3bdc18 => HASH(1) at 0x558c8d3bdc30
  {ptr_0x558c8d3f6cd0} REF() at 0x558c8d2f3e00 => HASH(1) at 0x558c8d32a720
  ... (15946 more)

pmat [more]> values 0x558c8d32a720
  {"ssleay_verify_callback!!func"} REF() at 0x558c8d3a4758 => CODE(PP,closure) at 0x558c8d32aac8

pmat> values 0x558c8d3bdc30
  {"ssleay_verify_callback!!func"} REF() at 0x558c8d3bdca8 => CODE(PP,closure) at 0x558c8d3bd8e8

pmat> values 0x558c8d29fc48
  {"ssleay_verify_callback!!func"} REF() at 0x558c8d2ac260 => CODE(PP,closure) at 0x558c8d32aac8

pmat> values 0x558c8d2ab990
  {"ssleay_verify_callback!!func"} REF() at 0x558c8d29d078 => CODE(PP,closure) at 0x558c8d32aac8

Ugh, SSL again! Verify callback – that also rings a bell. Phoebe code:

IO::Socket::SSL::set_defaults(
  SSL_verify_mode => SSL_VERIFY_PEER,
  SSL_verify_callback => \&verify_fingerprint);

Hm. So now I’m thinking: a hash with 16,000 SSL sockets?

– Alex 2021-01-04 20:05 UTC


I’ve made a small change: as the stream closes, I undefine the $data reference. I figured that something close to this loop (“for every socket?”) must be keeping references around.

Mojo::IOLoop->server({
  address => $address,
  port => $port,
  tls => 1,
  tls_cert => $server->{cert_file},
  tls_key  => $server->{key_file},
} => sub {
  my ($loop, $stream) = @_;
  my $data = { buffer => '', handler => \&handle_request };
  $stream->on(read => sub {
    my ($stream, $bytes) = @_;
    $log->debug("Received " . length($bytes) . " bytes");
    $data->{buffer} .= $bytes;
    $data->{handler}->($stream, $data) });
  $stream->on(close => sub {
    my $stream = shift;
    undef($data);
   }) });

Twelve hours later and Munin reports a current RSS of 206 MiB. That’s much better than anticipated! I’ll keep it running for another day or so, see how it develops.

– Alex 2021-01-06 07:55 UTC


A few more hours and memory is down to 174 MiB, so memory actually went down. Amazing! 😄

pmat> largest
HASH(29739) at 0x55cf4fc7c548=strtab: 2.0 MiB
HASH(5672) at 0x55cf4ff43d60: 261.0 KiB
SCALAR(PV) at 0x55cf4fff2238: 215.7 KiB
HASH(4782) at 0x55cf4ff96aa8: 176.1 KiB
ARRAY(1,!REAL) at 0x55cf4fca01d8: 92.5 KiB
others: 17.2 MiB

pmat> count
  Kind        Count (blessed)        Bytes  (blessed)
  ARRAY       10198         7      1.2 MiB  520 bytes
  CODE         9603                1.2 MiB
  GLOB        14374        16      2.1 MiB    2.4 KiB
  HASH        11712      3181      5.0 MiB  884.5 KiB
  INVLIST       583              652.3 KiB
  IO             33        33      5.2 KiB    5.2 KiB
  PAD          7079                1.3 MiB
  REF         24869         1    582.9 KiB   24 bytes
  REGEXP       2303       253    485.8 KiB   53.4 KiB
  SCALAR     173493        12      6.8 MiB 1022 bytes
  STASH         610              776.3 KiB
  -------    ------ ---------    --------- ----------
  (total)    254857      3503     19.9 MiB  946.9 KiB

– Alex 2021-01-06 12:14 UTC


Too bad, memory is climbing again. That wasn’t the answer, or not the entire answer, sadly! Back at 325 MiB.

Phoebe rising

When I look at this, at all the complications, and the memory used, and compare it to the implementations based on Net::Server (“old style”) with their 23 MiB (used for the Gopher and Finger server on ports 70 and 79, as well as the SSL-enabled Gopher on port 7334) – I don’t have happy thoughts.

pmat> count --blessed
  Kind                                             Count (blessed)        Bytes  (blessed)
  ARRAY                                            11506         7      1.3 MiB  520 bytes
      Math::BigInt::Calc                               0         3      0 bytes  216 bytes
      Net::DNS::RR::OPT                                0         2      0 bytes  160 bytes
      Class::Struct::Tie_ISA                           0         1      0 bytes   64 bytes
      Net::DNS::RR                                     0         1      0 bytes   80 bytes
  CODE                                             10786                1.3 MiB
  GLOB                                             14792        16      2.1 MiB    2.4 KiB
      IO::Socket::IP                                   0        15      0 bytes    2.2 KiB
      IO::Socket::SSL                                  0         1      0 bytes  152 bytes
  HASH                                             26086      3182      8.2 MiB  884.6 KiB
      Specio::DeclaredAt                               0      1537      0 bytes  324.2 KiB
      Specio::Constraint::Simple                       0      1428      0 bytes  483.8 KiB
      DateTime::Format::Builder::Parser::Regex         0        67      0 bytes   15.9 KiB
      Specio::Constraint::Parameterizable              0        60      0 bytes   22.1 KiB
      Mojo::IOLoop::Server                             0        15      0 bytes    3.7 KiB
      Specio::Constraint::ObjectIsa                    0        15      0 bytes    5.5 KiB
      Specio::Constraint::Enum                         0        10      0 bytes    3.8 KiB
      Specio::Constraint::Union                        0         7      0 bytes    2.2 KiB
      Specio::Constraint::AnyCan                       0         6      0 bytes    2.2 KiB
      Specio::Constraint::ObjectCan                    0         5      0 bytes    1.8 KiB
      (others)                                         0        32      0 bytes   19.5 KiB
  INVLIST                                            651              662.6 KiB
  IO                                                  33        33      5.2 KiB    5.2 KiB
      IO::File                                         0        33      0 bytes    5.2 KiB
  PAD                                               8269                1.6 MiB
  REF                                              47156         1      1.1 MiB   24 bytes
      IO::Socket::SSL::SSL_HANDLE                      0         1      0 bytes   24 bytes
  REGEXP                                            2491       253    525.4 KiB   53.4 KiB
      Regexp                                           0       253      0 bytes   53.4 KiB
  SCALAR                                          211663        12      8.2 MiB 1022 bytes
      Encode::XS                                       0         5      0 bytes  360 bytes
      JSON::PP::Boolean                                0         4      0 bytes  288 bytes
      Cpanel::JSON::XS                                 0         2      0 bytes  292 bytes
      Math::BigInt                                     0         1      0 bytes   82 bytes
  STASH                                              627              803.2 KiB
  --------------------------------------------    ------ ---------    --------- ----------
  (total)                                         334060      3504     25.8 MiB  947.1 KiB

I don’t understand what this means… It looks like everything is OK?

Comparing the two heap dumps I have:

$ pmat-counts phoebe-2021-01-06T*
phoebe-2021-01-06T13:14.pmat
  Kind        Count (blessed)        Bytes  (blessed)
  ARRAY       10198         7      1.2 MiB  520 bytes
  CODE         9603                1.2 MiB
  GLOB        14374        16      2.1 MiB    2.4 KiB
  HASH        11712      3181      5.0 MiB  884.5 KiB
  INVLIST       583              652.3 KiB
  IO             33        33      5.2 KiB    5.2 KiB
  PAD          7079                1.3 MiB
  REF         24869         1    582.9 KiB   24 bytes
  REGEXP       2303       253    485.8 KiB   53.4 KiB
  SCALAR     173493        12      6.8 MiB 1022 bytes
  STASH         610              776.3 KiB
  -------    ------ ---------    --------- ----------
  (total)    254857      3503     19.9 MiB  946.9 KiB
phoebe-2021-01-06T22:35.pmat
  Kind        Count         (blessed)            Bytes              (blessed)
  ARRAY       11506(+1308)          7          1.3 MiB(+119.4 KiB)  520 bytes
  CODE        10786(+1183)                     1.3 MiB(+147.9 KiB)
  GLOB        14792(+418)          16          2.1 MiB(+62.0 KiB)     2.4 KiB
  HASH        26086(+14374)      3182(+1)      8.2 MiB(+3.2 MiB)    884.6 KiB(+168 bytes)
  INVLIST       651(+68)                     662.6 KiB(+10.3 KiB)
  IO             33                33          5.2 KiB                5.2 KiB
  PAD          8269(+1190)                     1.6 MiB(+330.7 KiB)
  REF         47156(+22287)         1          1.1 MiB(+522.4 KiB)   24 bytes
  REGEXP       2491(+188)         253        525.4 KiB(+39.7 KiB)    53.4 KiB
  SCALAR     211663(+38170)        12          8.2 MiB(+1.5 MiB)   1022 bytes
  STASH         627(+17)                     803.2 KiB(+26.8 KiB)
  -------    -------------- -------------    --------------------- ----------------------
  (total)    334060(+79203)      3504(+1)     25.8 MiB(+5.9 MiB)    947.1 KiB(+168 bytes)

The difference seems minimal. And yet, ps -e rss reports 325 MiB instead of 174 MiB used.

– Alex 2021-01-06 21:30 UTC


Well… What do we have? I have three heap dump files for the current process:

23M 2021-01-06 13:14 phoebe-2021-01-06T13:14.pmat
29M 2021-01-06 22:35 phoebe-2021-01-06T22:35.pmat
42M 2021-01-07 21:03 phoebe-2021-01-07T21:03.pmat

RSS reported by ps is 174 MiB, 325 MiB, and 822 MiB. It’s not looking good. What amazes me, however, is that the memory usage as reported by Munin using “/proc/meminfo” doesn’t seem to reflect that. Or maybe it does?

Anyway, I’m experimenting with the pmat tools. The output of “pmat-leakreport” is extremely long. It’s pages and pages of this:

$ pmat-leakreport phoebe-2021-01-0[67]*
…
LEAK[2] HASH(1) at 0x55cf5e4d3e00
LEAK[2] REF() at 0x55cf5e472f08
LEAK[2] SCALAR(PV) at 0x55cf5e2c7318
LEAK[2] SCALAR(UV) at 0x55cf5e4b8b78
LEAK[2] HASH(1) at 0x55cf5ab96ce0
LEAK[2] HASH(1) at 0x55cf5b69f8b8
LEAK[2] HASH(1) at 0x55cf604236c8
LEAK[2] UNDEF() at 0x55cf62d869c0
LEAK[2] GLOB($*) at 0x55cf5e039088
LEAK[2] HASH(1) at 0x55cf60a59478
LEAK[2] REF() at 0x55cf62887cf0
LEAK[2] SCALAR(UV) at 0x55cf5e49a430
LEAK[2] REF() at 0x55cf63108b28
LEAK[2] UNDEF() at 0x55cf5d57cfb8
LEAK[2] SCALAR(UV) at 0x55cf5e401578
LEAK[2] UNDEF() at 0x55cf6086c7a0
LEAK[2] REF() at 0x55cf5ecf80c0
LEAK[2] REF() at 0x55cf5e369e00
LEAK[2] UNDEF() at 0x55cf5fa091e8
LEAK[2] HASH(1) at 0x55cf5bc07e48
LEAK[2] UNDEF() at 0x55cf62a23f60
LEAK[2] UNDEF() at 0x55cf5c78a818
LEAK[2] SCALAR(UV) at 0x55cf5e48d5c0
LEAK[2] REF() at 0x55cf6137ebf8

I don’t know what to make of this. The documentation says:

A leaking SV is one that appears and then never gets touched again. To detect them, we need to look for SVs that appear between two files, and then don’t disappear again. Any that do disappear were simply temporaries and can be ignored.

So, all of these are new appearances, and they don’t disappear again. 😟

Let’s look at them, starting at the end:

pmat> identify 0x55cf6137ebf8                                                                                                                      
REF() at 0x55cf6137ebf8 is:
└─value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf6137ec58, which is:
  └─(via RV) value {ptr_0x55cf614d66c0} of HASH(35071) at 0x55cf4ff96aa8, which is:
    └─not found

pmat> identify 0x55cf5e48d5c0                                                                                                                      
SCALAR(UV) at 0x55cf5e48d5c0 is:
└─value {"\\0"} of HASH(1) at 0x55cf5e48d5a8, which is:
  └─(via RV) value {rakkestad} of HASH(725) at 0x55cf5e44f090, which is:
    └─(via RV) value {no} of HASH(1526) at 0x55cf4ffdf998, which is:
      └─(via RV) value {tree} of HASH(2)=IO::Socket::SSL::PublicSuffix at 0x55cf5e546aa0, which is:
        └─(via RV) value {"1"} of HASH(1) at 0x55cf4ffd88a8, which is:
          └─the lexical %default at depth 1 of CODE(PP) at 0x55cf4ffd8440, which is:
            └─the symbol '&IO::Socket::SSL::PublicSuffix::default'

pmat> identify 0x55cf5c78a818                                                                                                                      
UNDEF() at 0x55cf5c78a818 is:
└─pad temporary 29 at depth 1 of CODE(PP,closure) at 0x55cf5c719058, which is:
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf5c681250, which is:
  │ └─(via RV) value {ptr_0x55cf5c8999e0} of HASH(35071) at 0x55cf4ff96aa8, which is (*A):
  │   └─not found
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf5c718b00, which is:
  │ └─(via RV) value {ptr_0x55cf5c891a30} of HASH(35071) at 0x55cf4ff96aa8, which is:
  │   └─not found
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf5c86dea8, which is:
  │ └─(via RV) value {ptr_0x55cf5c88cff0} of HASH(35071) at 0x55cf4ff96aa8, which is:
  │   └─not found
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf5c86def0, which is:
  │ └─(via RV) value {ptr_0x55cf5c89a180} of HASH(35071) at 0x55cf4ff96aa8, which is:
  │   └─not found
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf5c7fc768, which is:
  │ └─(via RV) value {ptr_0x55cf5c871bd0} of HASH(35071) at 0x55cf4ff96aa8, which is:
  │   └─not found
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf5c78a7d0, which is:
  │ └─(via RV) value {ptr_0x55cf5c8991c0} of HASH(35071) at 0x55cf4ff96aa8, which is:
  │   └─not found
  └─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf5c7fc7f8, which is:
    └─(via RV) value {ptr_0x55cf5c8995d0} of HASH(35071) at 0x55cf4ff96aa8, which is:
      └─not found

pmat> identify 0x55cf62a23f60                                                                                                                      
UNDEF() at 0x55cf62a23f60 is:
└─the lexical $ctx_store at depth 1 of CODE(PP,closure) at 0x55cf62a24770, which is:
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf62abd3d0, which is:
  │ └─(via RV) value {ptr_0x55cf62ba4960} of HASH(35071) at 0x55cf4ff96aa8, which is (*A):
  │   └─not found
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf62991a00, which is:
  │ └─(via RV) value {ptr_0x55cf62b68370} of HASH(35071) at 0x55cf4ff96aa8, which is:
  │   └─not found
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf62667430, which is:
  │ └─(via RV) value {ptr_0x55cf62b628d0} of HASH(35071) at 0x55cf4ff96aa8, which is:
  │   └─not found
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf62667f70, which is:
  │ └─(via RV) value {ptr_0x55cf62ba5100} of HASH(35071) at 0x55cf4ff96aa8, which is:
  │   └─not found
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf62a24b48, which is:
  │ └─(via RV) value {ptr_0x55cf62b72e10} of HASH(35071) at 0x55cf4ff96aa8, which is:
  │   └─not found
  ├─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf62a24398, which is:
  │ └─(via RV) value {ptr_0x55cf62ba4550} of HASH(35071) at 0x55cf4ff96aa8, which is:
  │   └─not found
  └─(via RV) value {"ssleay_verify_callback!!func"} of HASH(1) at 0x55cf62abd160, which is:
    └─(via RV) value {ptr_0x55cf62ba4140} of HASH(35071) at 0x55cf4ff96aa8, which is:
      └─not found

pmat> values 0x55cf5bc07e48                                                                                                                        
  {"ssleay_verify_callback!!func"} REF() at 0x55cf5bb9e678 => CODE(PP,closure) at 0x55cf5bd3d8d8

OK, I’d say it’s pretty clear by now: the “ssleay_verify_callback” stuff is bonkers!

Verify callback – remember the Phoebe code? Also note how “%default” is mentioned above.

IO::Socket::SSL::set_defaults(
  SSL_verify_mode => SSL_VERIFY_PEER,
  SSL_verify_callback => \&verify_fingerprint);

But I just read the IO::Socket::SSL man page. It says: “If you have a server and it looks like you have a memory leak you might check the size of your session cache. Default for Net::SSLeay seems to be 20480, see the example for SSL_create_ctx_callback for how to limit it.”
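
That example boils down to something like this (a sketch; the actual change in Phoebe may look a little different):

use IO::Socket::SSL;
use Net::SSLeay;

# Limit the TLS session cache so it cannot grow without bound; based on the
# SSL_create_ctx_callback example in the IO::Socket::SSL documentation.
IO::Socket::SSL::set_defaults(
  SSL_create_ctx_callback => sub {
    my $ctx = shift;
    Net::SSLeay::CTX_sess_set_cache_size($ctx, 128);
  });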

OK. I’m going to try that instead. The previous code had no effect. I’m limiting the session cache to 128, like in the example provided. Restarted the server. Memory is down below 100 MiB again:

# pgrep -f -l "phoebe"
29000 phoebe
# ps -q 29000 -eo rss,size
  RSS  SIZE
73392 62360

– Alex


I’m surprised. This seems to have helped!

Ignoring the spike of Emacs Wiki and just looking at Phoebe: we seem to be stable at 250 MiB.

Cool!

Hm… Then again, at the same time, I changed the “phoebe.service” file for systemd:

MemoryMax=250M
MemoryHigh=200M

That is… uncannily close! So here’s what I’m doing now: I decreased the SSL session cache from 128 to 64, and I didn’t change the “phoebe.service” file. Let’s see if memory use is halved (i.e. the cache is relevant) or not (i.e. the memory limits are relevant).

– Alex 2021-01-09 15:00 UTC


OK, reducing the cache size didn’t help. We’re back at 250 MiB. This is a memory leak of sorts, but the resource limit I have in the service file is working.

I think I’m going to reduce those limits:

MemoryMax=100M
MemoryHigh=90M

Let’s see whether it works well enough!

– Alex 2021-01-10 09:54 UTC


It works well enough. I added this to the documentation and stopped trying to understand where Mojo::IOLoop::TLS, IO::Socket::SSL, Net::SSLeay, or OpenSSL are failing.

– Alex 2021-01-11 11:26 UTC

Add Comment

2020-12-31 My feed aggregators

@solderpunk recently announced an interesting change to CAPCOM:

“On the first day of each month, a cron job runs the … pipeline to randomly select 100 unique feeds … Those feeds are then aggregated by CAPCOM for the remainder of the month. This way the scope of the CAPCOM instance can grow without limit over time, but the actual amount of new content to be found each day should remain fairly constant over time.”

I like it!

As for my own feed aggregators, I actually wrote two of them in 2020.

For the traditional web, I wrote a replacement for Planet Venus, which was powering the RPG Planet, because Planet Venus is written in Python 2 and I was unable to port it to Python 3. Writing my own was more fun!

It powers all three of the RPG Planets I run:

I also use it for a list of other blogs I’m following that aren’t directly RPG related:

As you can see I didn’t bother to change the theme, haha! 😄

Since I also started getting interested in Gemini, I rewrote moku pona, the Gopher “aggregator” I had. The reason I’m using quotes here is that moku pona 1 was just a URL watcher: given a list of URLs, it would check them for changes and move changed resources to the top of the list. When I added support for Gemini feeds, I realised that this was not very different from supporting all sorts of feeds, so now moku pona supports URL watching, as well as RSS and Atom feeds, via Gopher, Gemini, and HTTP. The fact that it works amazes me every day. 😆

The main difference in terms of user interface is the granularity. Jupiter is a “traditional” aggregator that shows an excerpt of every post: site name, post title, post excerpt. Moku pona, on the other hand, simply shows the date of the last change per site, with the nickname you provided.

Add Comment

2020-12-31 Best of 2020

OK, this is a year that definitely needs a best-of list! 😃

Best of 2020, what do you have?

For me: we spent two weeks in the Galápagos. I did not expect to ever be able to do this. Of course, we had to leave two days early because of the pandemic onset, but by that time we had already left the ship. It was the most fantastic thing. Then again, my wife and I love plants and animals.

Music: I found a way to download all the MP3s from The David W. Niven Collection of Early Jazz Legends, 1921–1991. Thank you Internet Archive! If you want a copy of those 84G and aren’t technically minded, leave a comment and I’m sure we can work something out.

Work: I hate open-plan offices, like ours. I hate working on premises with clients, like for my current project. I hate commuting. I hate the stupidity revealed by the pandemic. But with the pandemic I had the excuse I needed to work from home. It was weird, and I gained a bit of weight, and I worked a bit more than was necessary, but I enjoyed it much, much more. I love working in the same flat as my wife. I love eating lunch with her. I love going on walks with her. So happy. 😍

Add Comment

2020-12-30 Influence Industry

While the European Union has not witnessed the full-blown, American-scale ‘datafication’ of its politics, there are clear signs that politics across the bloc are trending in that direction. … Elections in Europe are … gravitating even more towards becoming competitions of digital strategy and farther away from the democratic ideal of a pure contest of ideas.

Reflections on the European Democracy Action Plan: An Easily-Overlooked Concern, by Varoon Bashyakarla, for the Heinrich Böll Stiftung

Comments on 2020-12-30 Influence Industry

As a USAian, I want to say that the impact of campaign datafication is easy to overstate – the 2016 Clinton campaign was the most data-driven campaign in history, and lost to a reality TV star. The real impact of datafication in the US is fine-tuning the process of gerrymandering.

– elpherHbgARa 2020-12-31 15:04 UTC


Good point!

– Alex 2020-12-31 20:43 UTC

Add Comment

2020-12-25 Defending against crawlers

Recently I was writing about my dislike of crawlers. They turned into a kind of necessary evil on the web – but it’s not too late to choose a different future for Gemini. I want to encourage all server authors and crawler authors to think long and hard about alternatives.

One feature I dislike about crawlers is that they follow all the links. Sure, we have a semi-useful “robots.txt” specification but it’s easy to get wrong on both sides. I’ve had bugs in my “robots.txt” file for a long time without noticing them.

Now, if the argument is that I cannot prevent crawlers from leeching my site, then the reply is of course that I will try to defend myself even if it is impossible to get 100% right. The first line of defence is going to be my “robots.txt” file. It’s not perfect, and that’s fine. I know it’s not perfect because I just need to look at the Apache config file I use to block all the misbehaving bots and user agents.

Ugh, look at the bots hitting my websites:

$ /home/alex/bin/bot-detector < /var/log/apache2/access.log.1
--------------Bandwidth-------Hits-------Actions--Delay
   Everybody      2416M     102520
    All Bots       473M      23063   100%    19%
-------------------------------------------------------
     bingbot    240836K       8157    35%    31%    10s
   YandexBot     36279K       3905    16%     3%    22s
   Googlebot     65808K       3679    15%    34%    23s
      Adsbot     20187K       3115    13%     0%    27s
    Applebot     66607K        908     3%     0%    95s
     Facebot      1611K        390     1%     0%   220s
    PetalBot      1548K        329     1%    12%   257s
         Bot      2101K        308     1%     0%   280s
      robots       525K        231     1%     0%   374s
    Slackbot      1339K        224     0%    96%   382s
  SemrushBot       572K        194     0%     0%   438s

A full 22% of all hits come from user agents with something like “bot” in their name. Just look at them! Let’s take the last one, SemrushBot. The user agent also has a link, and if you want, you can take a look. All the goals it lists are disgusting, or benefit corporations and not me, nor other humans. Barf with me as you read statements such as “the Brand Monitoring tool to index and search for articles” or “the On Page SEO Checker and SEO Content template tools reports”. 🤮

Have a look at your own webserver logs. 22% of my CPU resources, of the CO₂ my server produces, of the electricity it eats, for machines that do not have my best interest in mind. I don’t want a web that’s 20% bots crawling all over my site. I don’t want a Gemini space that’s 20% bots crawling all over my capsules.
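
If you want a rough number for your own logs, something like this will do (a sketch, not the actual bot-detector script):

use Modern::Perl;

# Count hits whose user agent looks like a bot in an Apache “combined” log
# read from standard input. Rough sketch; the real bot-detector does more.
my ($hits, $bot_hits) = (0, 0);
while (<>) {
  my ($agent) = /"([^"]*)"$/;   # the user agent is the last quoted field
  $hits++;
  $bot_hits++ if $agent and $agent =~ /bot|crawler|spider/i;
}
printf "%d of %d hits (%d%%) look like bots\n",
  $bot_hits, $hits, 100 * $bot_hits / ($hits || 1);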

OK, so let’s talk about defence.

When I look at my Gemini logs, I see that plenty of requests come from Amazon hosts. I take that as a sign of autonomous agents. I might sound like a fool on the Butlerian Jihad, but if I need to block entire networks, then I will. Looking up WHOIS data also costs resources. It would be better if we could identify these bots by looking at their behaviour.

The first mistake crawlers make is that they are too fast. So here’s what I’m currently doing: for every IP, I’m keeping track of the last 30 requests in the last 60s. If there are more requests, the IP number is blocked. Thus, if your average clicking rate is more than 1 click per 2s over a 1min window, you’re probably a bot and you get blocked. I might have to turn this up. Perhaps 1 click per 5s makes more sense for a human.

But there’s more. I see the crawlers clicking on all the links. All the HTML renderings of the pages are already available via Gemini. It makes no sense to request all of these. All the raw wiki text of the pages are available as well. It makes no sense to request all of these, either. All the links to leave a comment are also on every page. It makes no sense to request all of these either.

Here’s what I’m talking about. I picked an IP number from the logs and checked what they’ve been requesting:

2020-12-25 08:32:37 gemini://transjovian.org:1965/page/Linking/2
2020-12-25 08:32:45 gemini://alexschroeder.ch:1965/history/Perl
2020-12-25 08:32:59 gemini://communitywiki.org:1965/page/CategoryWikiProcess
2020-12-25 08:33:18 gemini://transjovian.org:1965/page/Titan/5
2020-12-25 08:33:30 gemini://communitywiki.org/page/CultureOrganis%C3%A9e
2020-12-25 08:33:57 gemini://transjovian.org:1965/history/Spaces
2020-12-25 08:34:23 gemini://transjovian.org:1965/gemini/page/common%20wiki%20structure/TimurIsmagilov
2020-12-25 08:34:30 gemini://alexschroeder.ch:1965/tag/Hex%20Describe
2020-12-25 08:34:56 gemini://communitywiki.org:1965/page/SoftwareBazaar
2020-12-25 08:35:02 gemini://communitywiki.org:1965/page/DoTank
2020-12-25 08:35:22 gemini://transjovian.org:1965/test/history/Welcome
2020-12-25 08:36:56 gemini://alexschroeder.ch:1965/tag/Gadgets
2020-12-25 08:38:20 gemini://alexschroeder.ch:1965/tag/Games
2020-12-25 08:45:58 gemini://alexschroeder.ch:1965/do/comment/GitHub
2020-12-25 08:46:05 gemini://alexschroeder.ch:1965/html/GitHub
2020-12-25 08:46:12 gemini://alexschroeder.ch:1965/raw/Comments_on_GitHub
2020-12-25 08:46:19 gemini://alexschroeder.ch:1965/raw/GitHub
2020-12-25 08:47:45 gemini://alexschroeder.ch:1965/page/2018-08-24_GitHub
2020-12-25 08:47:51 gemini://alexschroeder.ch:1965/do/comment/Comments_on_2018-08-24_GitHub
2020-12-25 08:47:57 gemini://alexschroeder.ch:1965/html/Comments_on_2018-08-24_GitHub
2020-12-25 09:21:26 gemini://alexschroeder.ch:1965/do/more

See what I mean? This is not a human. This is an unsupervised bot, otherwise the operator would have discovered that this makes no sense.

The solution I’m using for my websites is logging IP numbers and using fail2ban to ban IP numbers that request too many pages. The ban is for 10min, and if you’re a “recidive”, meaning you got banned three times for 10min, then you’re going to be banned for a week. The problem I have is that I would prefer a solution that doesn’t log IP numbers. It’s good for privacy and we should write our software such that privacy comes first.

So I wrote a Phoebe extension called “speed bump”. Here’s what it currently does.

For every IP number, Phoebe records the last 30 requests in the last 60 seconds. If there are more than 30 requests in the last 60 seconds, the IP number is blocked. If somebody is faster on average than two seconds per request, I assume it’s a bot, not a human.

For every IP number, Phoebe records whether the last 30 requests were suspicious or not. A suspicious request is a request that is “disallowed” for bots according to “robots.txt” (more or less). If 10 requests or more of the last 30 requests in the last 60 seconds are suspicious, the IP number is also blocked. That is, even if somebody is as slow as three seconds per request, if they’re all suspicious, I assume it’s a bot, not a human.

When an IP number is blocked, it is blocked for 60s, and there’s a 120s probation time. When you’re blocked, Phoebe responds with a “44” response. This means: slow down!

If the IP number sends another request while it is blocked, or if it gives cause for another block in the probation time, it is blocked again and the blocking time is doubled: the IP is blocked for 120s and probation is extended by 240s. And if it happens again, it is doubled again: blocked for 240s and probation is extended by 480s.
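
In code, the bookkeeping looks roughly like this. It’s a sketch to illustrate the idea, not the actual speed bump extension; the data structures and the suspicious-path test are made up:

use Modern::Perl;

my %requests;   # IP → list of [timestamp, suspicious?] pairs
my %blocked;    # IP → { seconds => ..., until => ..., probation => ... }

# a request is “suspicious” if robots.txt disallows it for bots
# (the paths here are just a guess)
sub is_suspicious {
  my $url = shift;
  return $url =~ m!/(?:raw|html|history|do/comment)/! ? 1 : 0;
}

# returns a “44” response if the client must slow down, nothing otherwise
sub speed_bump {
  my ($ip, $url) = @_;
  my $now = time;
  if (my $block = $blocked{$ip}) {
    if ($now < $block->{until} or $now < $block->{probation}) {
      # another request while blocked or on probation doubles the block
      $block->{seconds} *= 2;
      $block->{until} = $now + $block->{seconds};
      $block->{probation} = $block->{until} + 2 * $block->{seconds};
      return "44 $block->{seconds}\r\n";
    }
    delete $blocked{$ip};
  }
  # remember this request and forget everything older than 60 seconds
  push @{$requests{$ip}}, [$now, is_suspicious($url)];
  @{$requests{$ip}} = grep { $_->[0] > $now - 60 } @{$requests{$ip}};
  my $total = @{$requests{$ip}};
  my $suspicious = grep { $_->[1] } @{$requests{$ip}};
  if ($total > 30 or $suspicious >= 10) {
    $blocked{$ip} = { seconds => 60, until => $now + 60, probation => $now + 180 };
    return "44 60\r\n";
  }
  return;
}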

The “/do/speed-bump/debug” URL (which requires a known client certificate) shows you the raw data, and the “/do/speed-bump/status” URL (which also requires a known client certificate) shows you a human readable summary of what’s going on.

Here’s an example:

Speed Bump Status
 From    To Warns Block Until Probation IP
 n/a   n/a   0/ 0   60s  n/a       100m 3.8.145.31
 n/a   n/a   0/ 0   60s    4h       14h 35.176.162.140
 n/a   n/a   0/ 0   60s  n/a         9h 18.134.198.207
-280s   -1s  7/30  n/a   n/a       n/a  3.10.221.60

All four of these numbers belong to “Amazon Data Services UK”.

If there are numbers in the “From” and “To” columns, that means the IP made a request in the last 60s. The “Warns” column says how many of the requests were considered “suspicious”. “Block” is the block time. As you can see, none of the bots managed to increase the block time. Why is that? The “Probation” column offers a glimpse into what happened: as the bots kept making requests while they were blocked, they kept adding to their own block.

A bit later:

Speed Bump Status
 From    To Warns Block Until Probation IP
 n/a   n/a   0/ 0   60s  n/a        83m 3.8.145.31
 n/a   n/a   0/ 0   60s    4h       13h 35.176.162.140
 n/a   n/a   0/ 0   60s  n/a         9h 18.134.198.207
-219s   -7s  3/30  n/a   n/a       n/a  3.10.221.60

It seems that the last IP number is managing to stay just below the limits.

Clearly, this is all very much in flux. I’m still working on it – and finding bugs in my “robots.txt”, unfortunately. I’ll keep this page updated as I learn more. One idea I’ve been thinking about is the time window: how many pages would an enthusiastic human read on a new site? 60 pages in an hour, one minute per page? Or maybe twice that? That would point towards keeping a counter for a long-term average: if you’re requesting more than 60 pages in 30min, perhaps a timeout of 30min is appropriate.
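
A rough sketch of what such a long-term counter could look like, using the numbers from the paragraph above; none of this is implemented yet.

use Modern::Perl;

my %history;   # IP → list of request timestamps
sub too_eager {
  my $ip = shift;
  my $now = time;
  push @{$history{$ip}}, $now;
  # forget everything older than 30 minutes
  @{$history{$ip}} = grep { $_ > $now - 30 * 60 } @{$history{$ip}};
  # more than 60 pages in 30 minutes suggests a 30 minute timeout
  return @{$history{$ip}} > 60;
}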

The smol net is also a slow net. There’s no need for almost all activity to be crawlers. If anything, crawlers should be the minority! So, if my sites had 95% human activity and 5% robot activity, I’d be more understanding. But right now, it’s crazy. All the CO₂ wasted, for bots.

I’m on The Butlerian Jihad!

Comments on 2020-12-25 Defending against crawlers

Wouldn’t you get most of them by just blocking everything with “[Bb]ot” in the User-Agent?

Adam 2020-12-25 16:15 UTC


It depends on what your goal is, and on the protocol you’re talking about. In the second half of my post I was talking about Gemini. That is a very simple protocol: establish a TCP/IP connection with TLS, send a URI, and get back a status header line plus the content. That is, the request does not contain any header lines, unlike HTTP.
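
To illustrate how simple it is, here’s a minimal Gemini request in Perl. This is just a sketch, not Phoebe code, and it skips certificate verification entirely.

use Modern::Perl;
use IO::Socket::SSL;

my $socket = IO::Socket::SSL->new(
  PeerHost => 'alexschroeder.ch',
  PeerPort => 1965,
  SSL_verify_mode => SSL_VERIFY_NONE,   # good enough for a demonstration
) or die "Cannot connect: $!, $SSL_ERROR";
print $socket "gemini://alexschroeder.ch/\r\n";   # the request is just the URI
my $header = <$socket>;   # e.g. "20 text/gemini; charset=UTF-8"
print $header;
print while <$socket>;    # the content follows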

As for HTTP, which I mention in the first half: if a search engine were to crawl the new pages on my sites, slowly, then I wouldn’t mind so much, as long as the search engine is one intended for humans (these days that would be Google and Bing, I guess). I’d like to block those that misbehave, or that have goals I disagree with, and I’d like not to block the future search engine that is going to dethrone Google and Bing. I need to keep that hope alive, in any case. So if I want a nuanced result, I need a nuanced response. Slow down bots that can take a hint. Block bots that don’t. Block bots from dubious companies. And so on.

– Alex 2020-12-25 21:47 UTC


Here’s the current status of my “speed bump” extension to Phoebe:

Speed Bump Status
 From    To Warns Block Until Probation IP
 -10m   -9m 11/11  365d  364d      729d 3.11.81.100
 -12h  -12h 11/11  365d  364d      729d 18.130.221.176
 -12h  -12h 11/13  365d  364d      729d 3.9.134.250
 -14h  -14h 11/15  365d  364d      729d 3.8.127.24
 -14h  -14h 11/13  365d  364d      729d 167.114.7.65
 -10h  -10h 11/12  365d  364d      729d 18.134.146.76
 -16m  -14m 11/12  365d  364d      729d 3.10.232.193

All of these IP numbers have blocked themselves for over a year (or until I restart the server). Using “whois” to identify the organisation (and verifying my guess for tilde.team using “dig”), we get the following:

3.11.81.100     Amazon Data Services UK
18.130.221.176  Amazon Data Services UK
3.9.134.250     Amazon Data Services UK
3.8.127.24      Amazon Data Services UK
167.114.7.65    Tilde Team
18.134.146.76   Amazon Data Services UK
3.10.232.193    Amazon Data Services UK

Oh well. Every new IP number is going to make 10–20 requests and add a line. We could improve upon the model: once an IP is blocked for a year (the maximum), use WHOIS to look up the IP number range. Taking the first number as an example, we find that the “NetRange” is 3.8.0.0 - 3.11.255.255 and the “CIDR” is 3.8.0.0/14. Keep watching: once we have blocked three IP numbers from that range, there’s no need to block them individually any longer; we can just block the whole range. In our example, we would have reacted once we had blocked 3.11.81.100, 3.9.134.250, and 3.8.127.24. At that point, 3.10.232.193 would have been blocked preemptively.
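
A rough sketch of that idea, using Net::Whois::IP and Net::Netmask; it assumes the WHOIS response has a usable “CIDR” field, which depends on the registry.

use Modern::Perl;
use Net::Whois::IP qw(whoisip_query);
use Net::Netmask;

my %blocked_count;    # CIDR → number of IP numbers blocked from that range
my %blocked_ranges;   # CIDR → Net::Netmask object

sub note_blocked_ip {
  my $ip = shift;
  my $response = whoisip_query($ip);
  my $cidr = $response->{CIDR} or return;   # e.g. "3.8.0.0/14"
  if (++$blocked_count{$cidr} >= 3) {
    # three blocked IP numbers from the same range: block the whole range
    $blocked_ranges{$cidr} ||= Net::Netmask->new($cidr);
  }
}

sub range_is_blocked {
  my $ip = shift;
  for my $range (values %blocked_ranges) {
    return 1 if $range->match($ip);
  }
  return 0;
}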

Compare this to how GUS works. Indexing runs are made a few times a month. The IP numbers the requests come from are documented. They don’t keep changing like the crawler (or crawlers?) running on Amazon. I’m tempted to say the bot operators hosting their bots on Amazon look like they are actively trying to evade the block. It feels like trespassing and it makes me angry.

– Alex 2020-12-26


Tilde Team is probably people, not a crawler. I gave more details in a reply to your toot.

petard 2020-12-26 19:21 UTC


For those who don’t follow us on Mastodon… 😁 I replied with a screenshot of more or less the following, saying that the requests made from Tilde Team seem to indicate that this is an unsupervised crawler, not humans. The vast majority of requests are from a bot.

2020-12-27 01:20:31 gemini://alexschroeder.ch:1965/2008-05-09_Ontology_of_Twitter
2020-12-27 01:20:40 gemini://alexschroeder.ch:1965/2011-02-14_The_Value_of_a_Web_Site
2020-12-27 01:20:48 gemini://alexschroeder.ch:1965/2013-01-23_Security_of_Code_Downloaded_from_Online_Sources
2020-12-27 01:20:54 gemini://alexschroeder.ch:1965/2016-05-28_nginx_as_a_caching_proxy
2020-12-27 01:21:01 gemini://alexschroeder.ch:1965/Comments_on_2011-02-14_The_Value_of_a_Web_Site
2020-12-27 01:24:54 gemini://transjovian.org:1965/gemini/diff/common%20wiki%20structure/1
2020-12-27 01:25:01 gemini://transjovian.org:1965/gemini/diff/common%20wiki%20structure/2
2020-12-27 01:25:08 gemini://transjovian.org:1965/gemini/diff/common%20wiki%20structure/3
2020-12-27 01:25:15 gemini://transjovian.org:1965/gemini/do/atom
2020-12-27 01:25:23 gemini://transjovian.org:1965/gemini/do/rss
2020-12-27 01:25:29 gemini://transjovian.org:1965/gemini/page/common%20wiki%20structure/1
2020-12-27 01:25:37 gemini://transjovian.org:1965/gemini/page/common%20wiki%20structure/2
2020-12-27 01:25:43 gemini://transjovian.org:1965/gemini/page/common%20wiki%20structure/3
2020-12-27 01:46:49 gemini://communitywiki.org:1965/do/comment/BestPracticesForWikiTheoryBuilding
2020-12-27 01:46:58 gemini://communitywiki.org:1965/html/BestPracticesForWikiTheoryBuilding
2020-12-27 01:47:04 gemini://communitywiki.org:1965/page/PromptingStatement
2020-12-27 01:47:11 gemini://communitywiki.org:1965/page/WeLoveVolunteers
2020-12-27 01:47:18 gemini://communitywiki.org:1965/raw/BestPracticesForWikiTheoryBuilding
2020-12-27 01:47:26 gemini://communitywiki.org:1965/raw/Comments_on_BestPracticesForWikiTheoryBuilding
2020-12-27 01:47:33 gemini://communitywiki.org:1965/tag/inprogress
2020-12-27 01:47:41 gemini://communitywiki.org:1965/tag/practice
2020-12-27 01:47:48 gemini://communitywiki.org:1965/tag/practices
2020-12-27 01:47:56 gemini://communitywiki.org:1965/tag/prescription
2020-12-27 01:48:02 gemini://communitywiki.org:1965/tag/prescriptions
2020-12-27 01:48:11 gemini://communitywiki.org:1965/tag/recommendation
2020-12-27 01:48:16 gemini://communitywiki.org:1965/tag/recommendations
2020-12-27 01:48:23 gemini://communitywiki.org:1965/tag/theorybuilding
2020-12-27 01:51:05 gemini://communitywiki.org:1965/do/comment/HansWobbe
2020-12-27 01:51:08 gemini://communitywiki.org:1965/html/HansWobbe
2020-12-27 01:57:51 gemini://communitywiki.org:1965/page/BlikiNet
2020-12-27 02:17:04 gemini://communitywiki.org:1965/page/ChainVideo
2020-12-27 02:28:46 gemini://communitywiki.org:1965/page/CwbHwoAg
2020-12-27 02:58:36 gemini://communitywiki.org:1965/page/DfxMapping

Suspicious signs:

  • visiting date pages from all over the place (2008, 2011, 2013, 2016)
  • visiting all the old revisions of a page (/1, /2, /3)
  • visiting all the diffs of a page (/1, /2, /3)
  • visiting the comment prompt and not leaving a comment (do/comment)
  • visiting lots of tags (/tag)
  • visiting HTML copies of pages without looking at the Gemini copies (/html)
  • visiting raw copies of pages without looking at the Gemini copies (/raw)

These are not people. This is a crawler verifying its database. And ignoring robots.txt.

I think the main problem is that I run multiple sites served via Gemini with thousands of pages, and all the pages have links to alternate views (page history, page diff, HTML copy, raw copy, comments prompt), so perhaps mine are the only sites where crawlers might actually get to their limits. If somebody new sets up a Gemini server and serves two score static gemtext files, then these crawlers do little harm. But as it stands, there’s a constant barrage on my servers that stands in no relation to the amount of human activity.

Some of these URIs are violating robots.txt. But it’s not just that. I also feel a moral revulsion: all the CO₂ wasted shows a disregard for resources these people are not paying for. This is exactly the problem our civilisation faces, on a small scale.

Thus, whereas GoogleBot and BingBot might be nominally useful (the wealth concentration we’ve seen as a consequence of their data gathering notwithstanding), the ratio of change to crawl is and remains important. Once a site has been crawled, how often and which URLs should you crawl again? The current system is so wasteful.

Anyway, I have a lot of anger in me.

– Alex 2020-12-27


That’s a good summary of our conversation. My suggestion that requests from Tilde Team were probably people was based on the fact that it’s a public shell host that people use to browse gemini. (I have an account there and use it happily. It’s mostly a nice place with people I like to talk to. I am not otherwise affiliated.)

Seeing that log dump makes it clear that someone on that system is behaving badly.

petard 2020-12-27 14:32 UTC


Current status:

Speed Bump Status
 From    To Warns Block Until Probation IP
 -33m  -33m 30/30   28d   27d       55d 78.47.222.156 78.46.0.0/15
 -17h  -17h 11/11   28d   27d       55d 3.9.165.84 3.8.0.0/14
 -46h  -46h 17/17   28d   26d       54d 18.130.170.163 18.130.0.0/16
  -2d   -2d 11/11   28d   26d       54d 18.134.12.41 18.132.0.0/14
 -44h  -44h 11/11   28d   26d       54d 18.132.209.113 18.132.0.0/14
 -22h  -22h 13/13   28d   27d       55d 35.178.128.94 35.178.0.0/15
 -38h  -38h 12/12   28d   26d       54d 3.8.185.90 3.8.0.0/14
 -17h  -17h 12/12   28d   27d       55d 35.177.73.123 35.176.0.0/15
 -42h  -42h 11/11   28d   26d       54d 18.130.151.101 18.130.0.0/16
  -5h   -5h 13/13   28d   27d       55d 167.114.7.65 167.114.0.0/17
 -17h  -17h 14/14   28d   27d       55d 52.56.225.165 52.56.0.0/16
 -42h  -42h 12/12   28d   26d       54d 18.135.104.61 18.132.0.0/14
  -8h   -8h 12/12   28d   27d       55d 35.179.91.110 35.178.0.0/15
  -4h   -4h 11/11   28d   27d       55d 18.130.166.9 18.130.0.0/16
 -20h  -20h 11/11   28d   27d       55d 52.56.232.202 52.56.0.0/16
 -36h  -36h 13/13   28d   26d       54d 35.178.91.123 35.178.0.0/15
 -36h  -36h 11/11   28d   26d       54d 3.8.195.248 3.8.0.0/14

Until CIDR
  27d 18.130.0.0/16
  27d 3.8.0.0/14
  27d 35.178.0.0/15
  26d 18.132.0.0/14

Almost all of them are Amazon Data Services UK, a few are Hetzner, and some are OVH Hosting. The last column shows the network range each blocked IP number belongs to.

Seeing whole net ranges being blocked makes me happy. The code seems to work as expected.

– Alex 2020-12-29 16:35 UTC


The list keeps growing. I decided to write a script that would retrieve this page for me, and call WHOIS for all the networks identified.

#!/usr/bin/perl
use Modern::Perl;
use Net::Whois::IP qw(whoisip_query);
say "Requesting data";
# fetch the status page using the "gemini" command line client and my client certificate
my $data = qx(gemini --cert_file=/home/alex/.emacs.d/elpher-certificates/alex.crt --key_file=/home/alex/.emacs.d/elpher-certificates/alex.key gemini://transjovian.org/do/speed-bump/status);
say "Reading blocked networks";
my %seen;
# find every IPv4 or IPv6 network with a prefix length, e.g. 3.8.0.0/14
while ($data =~ /(\d+\.\d+\.\d+\.\d+|[0-9a-f]+:[0-9a-f]+:[0-9a-f:]+)\/\d+/g) {
  my $ip = $1;
  next if $seen{$ip};   # only ask WHOIS once per network
  $seen{$ip} = 1;
  my $response = whoisip_query($ip);
  # different registries use different field names for the organisation
  my $name = $response->{OrgName} || $response->{netname} || $response->{Organization};
  if ($name) {
    say "$ip $name";
  } else {
    # no organisation found: dump the whole WHOIS response
    say "$ip";
    for (keys %$response) {
      say "  $_: $response->{$_}";
    }
  }
}

Result:

Requesting data
Reading blocked networks
3.8.0.0 Amazon Data Services UK
35.178.0.0 Amazon Data Services UK
18.130.0.0 Amazon Data Services UK
35.176.0.0 Amazon Data Services UK
52.56.0.0 Amazon Data Services UK
78.46.0.0 HETZNER-nbg1-dc1
167.114.0.0 OVH Hosting, Inc.
18.132.0.0 Amazon Data Services UK
67.205.144.0 DigitalOcean, LLC

Oh hey, Digital Ocean is new.

– Alex 2020-12-30 11:10 UTC


Let’s check the number of requests blocked, relying on the Phoebe log files. “Looking at <some URL>” is an info log message it prints for every request. Let’s count them:

# journalctl --unit phoebe --since 2020-12-29|grep "Looking at"|wc -l
11700

Let’s see how many are caught by network range blocks:

# journalctl --unit phoebe --since 2020-12-29|grep "Net range is blocked"|wc -l
1812

Let’s see how many of them are just lone IP numbers being blocked:

# journalctl --unit phoebe --since 2020-12-29|grep "IP is blocked"|wc -l
2862

And first time offenders:

# journalctl --unit phoebe --since 2020-12-29|grep "Blocked for"|wc -l
8

I guess that makes 4682 blocked bot requests out of 11700 requests, or 40% of all requests.

The good news is that more than half seem to be legit? Or are they? I’m growing more suspicious all the time.

Let’s check HTTP access!

# journalctl --unit phoebe --since 2020-12-29|grep "HTTP headers"|wc -l
320
# journalctl --unit phoebe --since 2020-12-29|grep "HTTP headers"|perl -e 'while(<STDIN>){m/(\w*bot\w*)/i; print "$1\n"}'|sort|uniq --count
      1 
     22 bingbot
      2 Bot
     80 googlebot
     34 Googlebot
     88 MJ12bot
     32 SeznamBot
     61 YandexBot

That is, of the 11700 requests I’m looking at, I’ve had 320 web requests, of which 319 (!) were made by bots.

I think the next step will be to change the robots.txt served via the web to disallow them all.
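
That would presumably be the catch-all robots.txt:

User-agent: *
Disallow: /

Of course, that only helps against the bots that actually read it.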

– Alex 2020-12-30 11:40 UTC


Hm, but blocking IP addresses the way you describe would e.g. block my hacker space, where I’ve told a bunch of nerds that Gemini is cool and that they should have a look at … your site. And if it isn’t a hacker space, it’s a student dorm, or something similar, behind NAT.

I understand your anger, but blocking IP addresses in the end isn’t better than Hotmail & Google not accepting mail from my host - they think it’s suspicious because it’s small (it has proper DNS, it’s on no blacklist, and so on); they just ASSUME it could be wrong. The Internet is “everyone can talk to everyone”, and my approach is to make that happen. Every counter-approach is breaking the Internet, IMHO. YMMV.

– Götz 2021-01-05 23:40 UTC


How would you defend against bad actors, then? Simply accept it as a fact of life and add better infrastructure, or put the “smol net” behind a login? If all I have is an IP number of a peer connecting to my server, then all the consequences must relate to the IP number, or there must be no consequences. That’s how I understand the situation.

– Alex Schroeder 2021-01-06 11:09 UTC

Add Comment


Comments

You probably want to contact me via one of the means listed on the Contact page. This is probably the wrong place to do it. 😄

– Alex Schroeder 2020-05-22 12:19 UTC
