Gemini

Gemini is a light-weight text protocol, a spiritual successor to Gopher.

Phoebe is a wiki that serves its pages via Gemini and the web and that can be edited via Titan (and via the web, with some extra code in your config file). It powers The Transjovian Council.

Moku Pona is a feed aggregator and change monitor that collects data from feeds and detects page changes via Gopher, Gemini, and the web.

Lupa Pona is a simple static Gemini file server. You can use it to serve the Moku Pona output, for example.

If you’re an Emacs user, I have an Elpher setup that allows me to edit pages via Gemini and Titan!

This site is available via Gemini, too. The wiki raw text is transformed into Gemini format ("Gemtext") on a best-effort basis.

2021-01-16 How to use Gemtext for writing

Here’s something I do quite often: I talk about a thing, I quote a longer passage somebody else wrote, and I link to the source, hopefully naming the author and title, or something like it. How do you do that, given the tools that Gemini markup (also known as gemtext) provides, and how do you render it on your client? Screenshots, HTML, CSS, plain text, it’s all good. How do you write it, and how do you want it to look?

When I write for the web, I know what it looks like:

This is the quote. – _Title of the piece_, by the named author

“Title of the piece” is a link.

If I were to use gemtext, this is what I’d write:

> This is the quote.
=> Title of the piece, by the named author

Unfortunately, viewed via the web, this looks weird:

    This is the quote.
 • _Title of the piece, by the named author_

Perhaps I can improve this using simple CSS?

Sadly, it doesn’t look too good using Elpher, either.

> This is the quote.

→ _Title of the piece, by the named author_

Perhaps the look would be improved if I didn’t treat these links as list items but simply as link “paragraphs”? I don’t know.

Comments on 2021-01-16 How to use Gemtext for writing

Here’s how I do it:

=> protocol://link something by someone:
> Quote excerpt quote excerpt

That colon is important!

TimurIsmagilov 2021-01-16 19:34 UTC


Convert those lines differently. Linkline followed by contiguous set of quote lines: transpose, delete the newline, add the emdash. Other linklines render as li a.

– Sandra Snan 2021-01-16 22:44 UTC


2020-12-25 Defending against crawlers

Recently I was writing about my dislike of crawlers. They turned into a kind of necessary evil on the web – but it’s not too late to choose a different future for Gemini. I want to encourage all server authors and crawler authors to think long and hard about alternatives.

One feature I dislike about crawlers is that they follow all the links. Sure, we have a semi-useful “robots.txt” specification but it’s easy to get wrong on both sides. I’ve had bugs in my “robots.txt” file for a long time without noticing them.

Now, if the argument is that I cannot prevent crawlers from leeching my site, then my reply is that I will try to defend myself anyway, even if it is impossible to get it 100% right. The first line of defence is going to be my “robots.txt” file. It’s not perfect, and that’s fine: one look at the Apache config file I use to block all the misbehaving bots and user agents tells me that “robots.txt” alone was never going to be enough.

Ugh, look at the bots hitting my websites:

$ /home/alex/bin/bot-detector < /var/log/apache2/access.log.1
--------------Bandwidth-------Hits-------Actions--Delay
   Everybody      2416M     102520
    All Bots       473M      23063   100%    19%
-------------------------------------------------------
     bingbot    240836K       8157    35%    31%    10s
   YandexBot     36279K       3905    16%     3%    22s
   Googlebot     65808K       3679    15%    34%    23s
      Adsbot     20187K       3115    13%     0%    27s
    Applebot     66607K        908     3%     0%    95s
     Facebot      1611K        390     1%     0%   220s
    PetalBot      1548K        329     1%    12%   257s
         Bot      2101K        308     1%     0%   280s
      robots       525K        231     1%     0%   374s
    Slackbot      1339K        224     0%    96%   382s
  SemrushBot       572K        194     0%     0%   438s

A full 22% of all user agents have something like “bot” in their name. Just look at them! Let’s take the last one, SemrushBot. The user agent also has a link, and if you want, you can take a look. All the goals it lists are disgusting, or benefit corporations rather than me or other humans. Barf with me as you read statements such as “the Brand Monitoring tool to index and search for articles” or “the On Page SEO Checker and SEO Content template tools reports”. 🤮

Have a look at your own webserver logs. 22% of my CPU resources, of the CO₂ my server produces, of the electricity it eats, for machines that do not have my best interest in mind. I don’t want a web that’s 20% bots crawling all over my site. I don’t want a Gemini space that’s 20% bots crawling all over my capsules.

OK, so let’s talk about defence.

When I look at my Gemini logs, I see that plenty of requests come from Amazon hosts. I take that as a sign of autonomous agents. I might sound like a fool on the Butlerian Jihad, but if I need to block entire networks, then I will. Looking up WHOIS data also costs resources. It would be better if we could identify these bots by looking at their behaviour.

The first mistake crawlers make is that they are too fast. So here’s what I’m currently doing: for every IP, I’m keeping track of the last 30 requests in the last 60s. If there are more requests, the IP number is blocked. Thus, if your average clicking rate is more than 1 click per 2s over a 1min window, you’re probably a bot and you get blocked. I might have to turn this up. Perhaps 1 click per 5s makes more sense for a human.

But there’s more. I see the crawlers clicking on all the links. All the HTML renderings of the pages are already available via Gemini. It makes no sense to request all of these. All the raw wiki text of the pages is available as well. It makes no sense to request all of these, either. All the links to leave a comment are also on every page. It makes no sense to request all of these, either.

Here’s what I’m talking about. I picked an IP number from the logs and checked what they’ve been requesting:

2020-12-25 08:32:37 gemini://transjovian.org:1965/page/Linking/2
2020-12-25 08:32:45 gemini://alexschroeder.ch:1965/history/Perl
2020-12-25 08:32:59 gemini://communitywiki.org:1965/page/CategoryWikiProcess
2020-12-25 08:33:18 gemini://transjovian.org:1965/page/Titan/5
2020-12-25 08:33:30 gemini://communitywiki.org/page/CultureOrganis%C3%A9e
2020-12-25 08:33:57 gemini://transjovian.org:1965/history/Spaces
2020-12-25 08:34:23 gemini://transjovian.org:1965/gemini/page/common%20wiki%20structure/TimurIsmagilov
2020-12-25 08:34:30 gemini://alexschroeder.ch:1965/tag/Hex%20Describe
2020-12-25 08:34:56 gemini://communitywiki.org:1965/page/SoftwareBazaar
2020-12-25 08:35:02 gemini://communitywiki.org:1965/page/DoTank
2020-12-25 08:35:22 gemini://transjovian.org:1965/test/history/Welcome
2020-12-25 08:36:56 gemini://alexschroeder.ch:1965/tag/Gadgets
2020-12-25 08:38:20 gemini://alexschroeder.ch:1965/tag/Games
2020-12-25 08:45:58 gemini://alexschroeder.ch:1965/do/comment/GitHub
2020-12-25 08:46:05 gemini://alexschroeder.ch:1965/html/GitHub
2020-12-25 08:46:12 gemini://alexschroeder.ch:1965/raw/Comments_on_GitHub
2020-12-25 08:46:19 gemini://alexschroeder.ch:1965/raw/GitHub
2020-12-25 08:47:45 gemini://alexschroeder.ch:1965/page/2018-08-24_GitHub
2020-12-25 08:47:51 gemini://alexschroeder.ch:1965/do/comment/Comments_on_2018-08-24_GitHub
2020-12-25 08:47:57 gemini://alexschroeder.ch:1965/html/Comments_on_2018-08-24_GitHub
2020-12-25 09:21:26 gemini://alexschroeder.ch:1965/do/more

See what I mean? This is not a human. This is an unsupervised bot, otherwise the operator would have discovered that this makes no sense.

The solution I’m using for my websites is logging IP numbers and using fail2ban to ban IP numbers that request too many pages. The ban is for 10min, and if you’re a “recidive”, meaning you got banned three times for 10min, then you’re going to be banned for a week. The problem I have is that I would prefer a solution that doesn’t log IP numbers. It’s good for privacy and we should write our software such that privacy comes first.

So I wrote a Phoebe extension called “speed bump”. Here’s what it currently does.

For every IP number, Phoebe records the last 30 requests in the last 60 seconds. If there are more than 30 requests in the last 60 seconds, the IP number is blocked. If somebody is faster on average than two seconds per request, I assume it’s a bot, not a human.

For every IP number, Phoebe records whether the last 30 requests were suspicious or not. A suspicious request is a request that is “disallowed” for bots according to “robots.txt” (more or less). If 10 requests or more of the last 30 requests in the last 60 seconds are suspicious, the IP number is also blocked. That is, even if somebody is as slow as three seconds per request, if they’re all suspicious, I assume it’s a bot, not a human.

When an IP number is blocked, it is blocked for 60s, and there’s a 120s probation time. When you’re blocked, Phoebe responds with a “44” response. This means: slow down!

If the IP number sends another request while it is blocked, or if it gives cause for another block during the probation time, it is blocked again and the blocking time is doubled: the IP is blocked for 120s and probation is extended by 240s. And if it happens again, it is doubled again: blocked for 240s and probation is extended by 480s.
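To make the mechanism concrete, here’s a minimal sketch in Perl of the rate-limiting and doubling idea described above – not Phoebe’s actual speed bump code, and it leaves out the “suspicious request” counting and the probation bookkeeping; the names are illustrative:

use Modern::Perl;

my %requests; # ip => list of recent request timestamps
my %blocked;  # ip => { until => epoch seconds, seconds => current block length }

sub speed_bump {
  my ($ip, $now) = @_;
  # already blocked? block again and double the blocking time
  if (my $block = $blocked{$ip}) {
    if ($now < $block->{until}) {
      $block->{seconds} *= 2;
      $block->{until} = $now + $block->{seconds};
      return "44 $block->{seconds}\r\n";
    }
  }
  # record the request and drop anything older than 60s
  my $log = $requests{$ip} //= [];
  push @$log, $now;
  @$log = grep { $now - $_ <= 60 } @$log;
  # more than 30 requests in 60s: faster than one request every two seconds
  if (@$log > 30) {
    $blocked{$ip} = { seconds => 60, until => $now + 60 };
    return "44 60\r\n";
  }
  return; # not blocked: serve the request
}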

The “/do/speed-bump/debug” URL (which requires a known client certificate) shows you the raw data, and the “/do/speed-bump/status” URL (which also requires a known client certificate) shows you a human readable summary of what’s going on.

Here’s an example:

Speed Bump Status
 From    To Warns Block Until Probation IP
 n/a   n/a   0/ 0   60s  n/a       100m 3.8.145.31
 n/a   n/a   0/ 0   60s    4h       14h 35.176.162.140
 n/a   n/a   0/ 0   60s  n/a         9h 18.134.198.207
-280s   -1s  7/30  n/a   n/a       n/a  3.10.221.60

All four of these numbers belong to “Amazon Data Services UK”.

If there are numbers in the “From” and “To” columns, that means the IP made a request in the last 60s. The “Warns” column says how many of the requests were considered “suspicious”. “Block” is the block time. As you can see, none of the bots managed to increase the block time. Why is that? The “Probation” column offers a glimpse into what happened: as the bots kept making requests while they were blocked, they kept adding to their own block.

A bit later:

Speed Bump Status
 From    To Warns Block Until Probation IP
 n/a   n/a   0/ 0   60s  n/a        83m 3.8.145.31
 n/a   n/a   0/ 0   60s    4h       13h 35.176.162.140
 n/a   n/a   0/ 0   60s  n/a         9h 18.134.198.207
-219s   -7s  3/30  n/a   n/a       n/a  3.10.221.60

It seems that the last IP number is managing to thread the needle.

Clearly, this is all very much in flux. I’m still working on it – and finding bugs in my “robots.txt”, unfortunately. I’ll keep this page updated as I learn more. One idea I’ve been thinking about is the time windows: how many pages would an enthusiastic human read on a new site: 60 pages in an hour, one minute per page? Or maybe twice as much? That would point towards keeping a counter for a long term average: if you’re requesting more than 60 pages in 30min, perhaps a timeout of 30min is appropriate?

The smol net is also a slow net. There’s no need for almost all activity to be crawlers. If at all, crawlers should be the minority! So, if my sites had 95% human activity and 5% robot activity, I’d be more understanding. But right now, it’s crazy. All the CO₂ wasted, for bots.

I’m on The Butlerian Jihad!

Comments on 2020-12-25 Defending against crawlers

Wouldn’t you get most of them by just blocking everything with “[Bb]ot” in the User-Agent?

Adam 2020-12-25 16:15 UTC


It depends on what your goal is, and on the protocol you’re talking about. In the second half of my post I was talking about Gemini. That is a very simple protocol: establish a TCP/IP connection, with TLS, send a URI, get back a status header line + content. That is, the request does not contain any header lines, unlike HTTP.
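For illustration, a complete Gemini request in Perl might look like this – a sketch using IO::Socket::SSL, not any particular client:

use Modern::Perl;
use IO::Socket::SSL;

my $socket = IO::Socket::SSL->new(
  PeerHost => "transjovian.org",
  PeerPort => 1965,
  SSL_verify_mode => SSL_VERIFY_NONE, # self-signed certificates are the norm
) or die "Cannot connect: $!,$SSL_ERROR";
print $socket "gemini://transjovian.org/\r\n"; # the entire request is one URI line
print while <$socket>;                         # status header line, then the content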

As for HTTP, which I mention in the first half: if a search engine were to crawl the new pages on my sites, slowly, then I wouldn’t mind so much, as long as the search engine is one intended for humans (these days that would be Google and Bing, I guess). I’d like to block those that misbehave, or that have goals I disagree with, and I’d like not to block the future search engine that is going to dethrone Google and Bing. I need to keep that hope alive, in any case. So if I want a nuanced result, I need a nuanced response. Slow down bots that can take a hint. Block bots that don’t. Block bots from dubious companies. And so on.

– Alex 2020-12-25 21:47 UTC


Here’s the current status of my “speed bump” extension to Phoebe:

Speed Bump Status
 From    To Warns Block Until Probation IP
 -10m   -9m 11/11  365d  364d      729d 3.11.81.100
 -12h  -12h 11/11  365d  364d      729d 18.130.221.176
 -12h  -12h 11/13  365d  364d      729d 3.9.134.250
 -14h  -14h 11/15  365d  364d      729d 3.8.127.24
 -14h  -14h 11/13  365d  364d      729d 167.114.7.65
 -10h  -10h 11/12  365d  364d      729d 18.134.146.76
 -16m  -14m 11/12  365d  364d      729d 3.10.232.193

All of these IP numbers have blocked themselves for over a year (or until I restart the server). Using “whois” to identify the organisation (and verifying my guess for tilde.team using “dig”) we get the following:

3.11.81.100     Amazon Data Services UK
18.130.221.176  Amazon Data Services UK
3.9.134.250     Amazon Data Services UK
3.8.127.24      Amazon Data Services UK
167.114.7.65    Tilde Team
18.134.146.76   Amazon Data Services UK
3.10.232.193    Amazon Data Services UK

Oh well. Every new IP number is going to make 10–20 requests and it’s going to add a line. We could improve upon the model: once an IP is blocked for a year (the maximum), use WHOIS to look up the IP number range. Taking the first number as an example, we find that the “NetRange” is 3.8.0.0 - 3.11.255.255 and the “CIDR” is 3.8.0.0/14. Keep watching: once we have three IP numbers from the entire range blocked, there’s no need to block them all individually, we can just block the whole range. In our example, we would have reacted once we had blocked 3.11.81.100, 3.9.134.250, and 3.8.127.24. At that point, 3.10.232.193 would have been blocked preemptively.
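A sketch of that range-blocking idea in Perl, assuming the Net::Netmask module; the data structures are illustrative and this is not the actual extension code:

use Modern::Perl;
use Net::Netmask;

my %blocked_ips;    # ip => 1, filled in by the speed bump
my @blocked_ranges; # Net::Netmask objects for whole net ranges

# Call this with the CIDR from the WHOIS lookup of a newly blocked IP,
# e.g. "3.8.0.0/14": once three blocked IPs fall into the same range,
# block the entire range.
sub maybe_block_range {
  my $cidr = shift;
  my $range = Net::Netmask->new($cidr);
  my $hits = grep { $range->match($_) } keys %blocked_ips;
  push @blocked_ranges, $range if $hits >= 3;
}

sub range_is_blocked {
  my $ip = shift;
  return scalar grep { $_->match($ip) } @blocked_ranges;
}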

Compare this to how GUS works. Indexing runs are made a few times a month. The IP numbers the requests come from are documented. They don’t change like the crawler (or crawlers?) running on Amazon. I’m tempted to say the bot operators hosting their bot on Amazon look like they are actively trying to evade the block. It feels like trespassing and it makes me angry.

– Alex 2020-12-26


Tilde Team is probably people, not a crawler. I gave more details in a reply to your toot.

petard 2020-12-26 19:21 UTC


For those who don’t follow us on Mastodon… 😁 I replied with a screenshot of more or less the following, saying that the requests made from Tilde Team seem to indicate that this is an unsupervised crawler, not humans. The vast majority of requests is from a bot.

2020-12-27 01:20:31 gemini://alexschroeder.ch:1965/2008-05-09_Ontology_of_Twitter
2020-12-27 01:20:40 gemini://alexschroeder.ch:1965/2011-02-14_The_Value_of_a_Web_Site
2020-12-27 01:20:48 gemini://alexschroeder.ch:1965/2013-01-23_Security_of_Code_Downloaded_from_Online_Sources
2020-12-27 01:20:54 gemini://alexschroeder.ch:1965/2016-05-28_nginx_as_a_caching_proxy
2020-12-27 01:21:01 gemini://alexschroeder.ch:1965/Comments_on_2011-02-14_The_Value_of_a_Web_Site
2020-12-27 01:24:54 gemini://transjovian.org:1965/gemini/diff/common%20wiki%20structure/1
2020-12-27 01:25:01 gemini://transjovian.org:1965/gemini/diff/common%20wiki%20structure/2
2020-12-27 01:25:08 gemini://transjovian.org:1965/gemini/diff/common%20wiki%20structure/3
2020-12-27 01:25:15 gemini://transjovian.org:1965/gemini/do/atom
2020-12-27 01:25:23 gemini://transjovian.org:1965/gemini/do/rss
2020-12-27 01:25:29 gemini://transjovian.org:1965/gemini/page/common%20wiki%20structure/1
2020-12-27 01:25:37 gemini://transjovian.org:1965/gemini/page/common%20wiki%20structure/2
2020-12-27 01:25:43 gemini://transjovian.org:1965/gemini/page/common%20wiki%20structure/3
2020-12-27 01:46:49 gemini://communitywiki.org:1965/do/comment/BestPracticesForWikiTheoryBuilding
2020-12-27 01:46:58 gemini://communitywiki.org:1965/html/BestPracticesForWikiTheoryBuilding
2020-12-27 01:47:04 gemini://communitywiki.org:1965/page/PromptingStatement
2020-12-27 01:47:11 gemini://communitywiki.org:1965/page/WeLoveVolunteers
2020-12-27 01:47:18 gemini://communitywiki.org:1965/raw/BestPracticesForWikiTheoryBuilding
2020-12-27 01:47:26 gemini://communitywiki.org:1965/raw/Comments_on_BestPracticesForWikiTheoryBuilding
2020-12-27 01:47:33 gemini://communitywiki.org:1965/tag/inprogress
2020-12-27 01:47:41 gemini://communitywiki.org:1965/tag/practice
2020-12-27 01:47:48 gemini://communitywiki.org:1965/tag/practices
2020-12-27 01:47:56 gemini://communitywiki.org:1965/tag/prescription
2020-12-27 01:48:02 gemini://communitywiki.org:1965/tag/prescriptions
2020-12-27 01:48:11 gemini://communitywiki.org:1965/tag/recommendation
2020-12-27 01:48:16 gemini://communitywiki.org:1965/tag/recommendations
2020-12-27 01:48:23 gemini://communitywiki.org:1965/tag/theorybuilding
2020-12-27 01:51:05 gemini://communitywiki.org:1965/do/comment/HansWobbe
2020-12-27 01:51:08 gemini://communitywiki.org:1965/html/HansWobbe
2020-12-27 01:57:51 gemini://communitywiki.org:1965/page/BlikiNet
2020-12-27 02:17:04 gemini://communitywiki.org:1965/page/ChainVideo
2020-12-27 02:28:46 gemini://communitywiki.org:1965/page/CwbHwoAg
2020-12-27 02:58:36 gemini://communitywiki.org:1965/page/DfxMapping

Suspicious signs:

  • visiting date pages from all over the place (2008, 2011, 2013, 2016)
  • visiting all the old revisions of a page (/1, /2, /3)
  • visiting all the diffs of a page (/1, /2, /3)
  • visiting the comment prompt and not leaving a comment (do/comment)
  • visiting lots of tags (/tag)
  • visiting HTML copies of pages without looking at the Gemini copies (/html)
  • visiting raw copies of pages without looking at the Gemini copies (/raw)

These are not people. This is a crawler verifying its database. And ignoring robots.txt.

I think the main problem is that I run multiple sites served via Gemini with thousands of pages, and all the pages have links to alternate views (page history, page diff, HTML copy, raw copy, comments prompt), so perhaps mine are the only sites where crawlers might actually get to their limits. If somebody new sets up a Gemini server and serves two score static gemtext files, then these crawlers do little harm. But as it stands, there’s a constant barrage on my servers that stands in no relation to the amount of human activity.

Some of these URIs are violating robots.txt. But it’s not just that. I also feel a moral revulsion: all the CO₂ wasted shows a disregard for resources these people are not paying for. This is exactly the problem our civilisation faces, on a small scale.

Thus, whereas GoogleBot and BingBot might be nominally useful (the wealth concentration we’ve seen as a consequence of their data gathering notwithstanding), the ratio of change to crawl is and remains important. Once a site is crawled, how often and what URLs should you crawl again? The current system is so wasteful.

Anyway, I have a lot of anger in me.

– Alex 2020-12-27


That’s a good summary of our conversation. My suggestion that requests from Tilde Team were probably people was based on the fact that it’s a public shell host that people use to browse gemini. (I have an account there and use it happily. It’s mostly a nice place with people I like to talk to. I am not otherwise affiliated.)

Seeing that log dump makes it clear that someone on that system is behaving badly.

petard 2020-12-27 14:32 UTC


Current status:

Speed Bump Status
 From    To Warns Block Until Probation IP
 -33m  -33m 30/30   28d   27d       55d 78.47.222.156 78.46.0.0/15
 -17h  -17h 11/11   28d   27d       55d 3.9.165.84 3.8.0.0/14
 -46h  -46h 17/17   28d   26d       54d 18.130.170.163 18.130.0.0/16
  -2d   -2d 11/11   28d   26d       54d 18.134.12.41 18.132.0.0/14
 -44h  -44h 11/11   28d   26d       54d 18.132.209.113 18.132.0.0/14
 -22h  -22h 13/13   28d   27d       55d 35.178.128.94 35.178.0.0/15
 -38h  -38h 12/12   28d   26d       54d 3.8.185.90 3.8.0.0/14
 -17h  -17h 12/12   28d   27d       55d 35.177.73.123 35.176.0.0/15
 -42h  -42h 11/11   28d   26d       54d 18.130.151.101 18.130.0.0/16
  -5h   -5h 13/13   28d   27d       55d 167.114.7.65 167.114.0.0/17
 -17h  -17h 14/14   28d   27d       55d 52.56.225.165 52.56.0.0/16
 -42h  -42h 12/12   28d   26d       54d 18.135.104.61 18.132.0.0/14
  -8h   -8h 12/12   28d   27d       55d 35.179.91.110 35.178.0.0/15
  -4h   -4h 11/11   28d   27d       55d 18.130.166.9 18.130.0.0/16
 -20h  -20h 11/11   28d   27d       55d 52.56.232.202 52.56.0.0/16
 -36h  -36h 13/13   28d   26d       54d 35.178.91.123 35.178.0.0/15
 -36h  -36h 11/11   28d   26d       54d 3.8.195.248 3.8.0.0/14

Until CIDR
  27d 18.130.0.0/16
  27d 3.8.0.0/14
  27d 35.178.0.0/15
  26d 18.132.0.0/14
→ menu

Almost all of them are Amazon Data Services UK, a few Hetzner, some OVH Hosting.

Seeing whole net ranges being blocked makes me happy. The code seems to work as expected.

– Alex 2020-12-29 16:35 UTC


The list keeps growing. I decided to write a script that would retrieve this page for me, and call WHOIS for all the networks identified.

#!/usr/bin/perl
use Modern::Perl;
use Net::Whois::IP qw(whoisip_query);
say "Requesting data";
my $data = qx(gemini --cert_file=/home/alex/.emacs.d/elpher-certificates/alex.crt --key_file=/home/alex/.emacs.d/elpher-certificates/alex.key gemini://transjovian.org/do/speed-bump/status);
say "Reading blocked networks";
my %seen;
while ($data =~ /(\d+\.\d+\.\d+\.\d+|[0-9a-f]+:[0-9a-f]+:[0-9a-f:]+)\/\d+/g) {
  my $ip = $1;
  next if $seen{$ip};
  $seen{$ip} = 1;
  my $response = whoisip_query($ip);
  my $name = $response->{OrgName} || $response->{netname} || $response->{Organization};
  if ($name) {
    say "$ip $name";
  } else {
    say "$ip";
    for (keys %$response) {
      say "  $_: $response->{$_}";
    }
  }
}

Result:

Requesting data
Reading blocked networks
3.8.0.0 Amazon Data Services UK
35.178.0.0 Amazon Data Services UK
18.130.0.0 Amazon Data Services UK
35.176.0.0 Amazon Data Services UK
52.56.0.0 Amazon Data Services UK
78.46.0.0 HETZNER-nbg1-dc1
167.114.0.0 OVH Hosting, Inc.
18.132.0.0 Amazon Data Services UK
67.205.144.0 DigitalOcean, LLC

Oh hey, Digital Ocean is new.

– Alex 2020-12-30 11:10 UTC


Let’s check the number of requests blocked, relying on the Phoebe log files. “Looking at <some URL>” is an info log message it prints for every request. Let’s count them:

# journalctl --unit phoebe --since 2020-12-29|grep "Looking at"|wc -l
11700

Let’s see how many are caught by network range blocks:

# journalctl --unit phoebe --since 2020-12-29|grep "Net range is blocked"|wc -l
1812

Let’s see how many of them are just lone IP numbers being blocked:

# journalctl --unit phoebe --since 2020-12-29|grep "IP is blocked"|wc -l
2862

And first time offenders:

# journalctl --unit phoebe --since 2020-12-29|grep "Blocked for"|wc -l
8

I guess that makes 4682 blocked bot requests out of 11700 requests, or 40% of all requests.

The good news is that more than half seem to be legit? Or are they? I’m growing more suspicious all the time.

Let’s check HTTP access!

# journalctl --unit phoebe --since 2020-12-29|grep "HTTP headers"|wc -l
320
# journalctl --unit phoebe --since 2020-12-29|grep "HTTP headers"|perl -e 'while(<STDIN>){m/(\w*bot\w*)/i; print "$1\n"}'|sort|uniq --count
      1 
     22 bingbot
      2 Bot
     80 googlebot
     34 Googlebot
     88 MJ12bot
     32 SeznamBot
     61 YandexBot

That is, of the 11700 requests I’m looking at, I’ve had 320 web requests, of which 319 (!) were bots.

I think the next step will be to change the robots.txt served via the web to disallow them all.

– Alex 2020-12-30 11:40 UTC


Hm, but blocking IPs the way you mention would e.g. block my hacker space, where I’ve told a bunch of nerds that Gemini is cool, and that they should have a look at … your site. And if it isn’t a hacker space, it’s a student’s dorm, or similar, behind NAT.

I understand your anger, but blocking IPs in the end isn’t better than Hotmail & Google not accepting mail from my host – they think it’s suspicious because it’s small (it has proper DNS, is on no blacklist, and so on); they just ASSUME it could be wrong. The Internet is “everyone can talk to everyone”, and my approach is to make that happen. Every counter approach is breaking the Internet, IMHO. YMMV.

– Götz 2021-01-05 23:40 UTC


How would you defend against bad actors, then? Simply accept it as a fact of life and add better infrastructure, or put the “smol net” behind a login? If all I have is an IP number of a peer connecting to my server, then all the consequences must relate to the IP number, or there must be no consequences. That’s how I understand the situation.

– Alex Schroeder 2021-01-06 11:09 UTC


2020-12-22 Crawling

Today I added some numbers to my firewall block lists again. I feel somewhat bad about them, because I guess my robots.txt was not set up correctly. At the same time, I feel like I don’t owe anything to unwatched crawlers.

So for the moment I banned:

  • three Amazon numbers
  • two OVH numbers from France
  • an OVH number from Canada
  • a Hetzner number from Germany

What’s the country for global companies that don’t pay taxes? “From the Internet‽”

I’m still not quite sure what to do now. I guess I just don’t know how I feel about crawling in general. What would a network look like that doesn’t crawl? Crawling means that somebody is accumulating data. Valuable data. Toxic data. Haven’t we been through all this? I got along well with the operator of GUS when we exchanged a few emails. And yet, the crawling makes me uneasy.

Data parsimony demands that we don’t collect the data we don’t need; that we don’t store the data we collect; that we don’t keep the data we store. Delete that shit! One day somebody inherits, steals, leaks, or buys that data store and does things with it that we don’t want. I hate it that defending against leeches (eager crawlers I feel are misbehaving) means I need to start tracking visitors. Logging IP numbers. Seeing what pages the active IP numbers are looking at. Are they too fast for a human? Is the sequence of links they are following a natural reading sequence? I hate that I’m being forced to do this every now and then. And what if I don’t? Perhaps somebody is going to use Soweli Lukin to index Gopherspace? Perhaps somebody is going to use The Transjovian Vault to index Wikipedia via Gemini? Unsupervised crawlers will do anything.

There’s something about the whole situation that’s struggling to come out. I’m having trouble putting it into words.

Like… There’s a certain lack of imagination out there.

People say: that’s the only way a search engine can work. Maybe? Maybe not? What if sites sent updates, or compiled databases? A bit like the Sitemap format? A sort of compiled and compressed word/URI index? And if very few people actually sent in those indexes, would that not be a statement in itself? Right now people don’t object because objecting takes effort. But perhaps they wouldn’t opt in either!

People say: anything you published is there for the taking. Well, maybe if you’re a machine. But if there is a group of people sitting around a cookie jar, you wouldn’t say “nobody is stopping me from taking them all.” Human behaviour can be nuanced, and if we cannot imagine technical solutions that are nuanced, then I don’t feel like it’s on me to reduce my expectations. Perhaps it’s on implementors to design more nuanced solutions! And yes, those solutions are going to be more complicated. Obviously so! We’ll have to design ways to negotiate consent, privacy, and data ownership.

It’s a failure of design if “anything you publish is there for the taking” is the only option. Since I don’t want this, I think it’s on me and others who dislike this attitude to confidently set boundaries. I use fail2ban to ban user agents who make too many requests, for example. Somebody might say: “why don’t you use a caching proxy?” The answer is that I don’t feel like it is on me to build a technical solution that scales to the corpocaca net; I should be free to run a site built for the smol net. If you don’t behave like a human on the smol net, I feel free to defend my vision of the net as I see fit – and I encourage you to do the same.

People say: ah, I understand – you’re using a tiny computer. I like tiny computers. That’s why you want us to treat your server like it was smol. No. I want you to treat my server like it was smol because we’re on the smol net.

For my websites, I took a look at my log files and saw that at the very least (!) 21% of my hits are bots (18253 / 88862). Of these, 20% are by the Google bot, 19% are by the Bing bot, 10% are by the Yandex bot, 5% are by the Apple bot, and so on. And that is considering a long robots.txt, and a huge Apache config file to block a gazillion more user agents! Is this what you want for Gemini? The corpocaca Gemini? Not me!

Comments on 2020-12-22 Crawling

Some more data, now that I’m looking at my logs. These are the top hits on my sites via Phoebe:

  1         Amazon  1062
  2    OVH Hosting   929
  3         Amazon   912
  4         Amazon   730
  5         Amazon   653
  6         Amazon   482
  7         Amazon   284
  8         Amazon   188
  9        Hetzner   171
 10         Amazon   129
 11    OVH Hosting    55

Not a single human in sight, as far as I can tell. Crawlers crawling everywhere.

– Alex 2020-12-23 00:19 UTC


I installed the “surge protection” I’ve been using for Oddmuse, too: if you make more than 20 requests in 20s, you get banned for ever increasing periods. Hey, I’m using Gemini status 44 at long last!

I’m thinking about checking whether the last twenty URIs requested are “plausible” – if somebody is requesting a lot of HTML pages, or raw pages, then that’s a sign of a crawler just following all the links and perhaps that deserves to get banned even if it’s slow enough.
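A sketch of what that plausibility check might look like – the path patterns are just examples taken from this post, not the rules I actually ended up using:

use Modern::Perl;

# Given the last twenty request paths of a peer, return the share of
# requests that only a crawler following every link would make.
sub suspicious_share {
  my @paths = @_;
  return 0 unless @paths;
  my $suspicious = grep {
    m!^/(?:html|raw|history|diff)/! or m!^/do/comment/!
  } @paths;
  return $suspicious / @paths;
}

# Ban candidates: more than half of the recent requests are suspicious.
say "looks like a crawler" if suspicious_share(@ARGV) > 0.5;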

– Alex 2020-12-23 00:25 UTC


I don’t want it for Gemini, but Gemini is part of the greater Internet, so I have to deal with autonomous agents. If I didn’t, I wouldn’t have a Gemini server (or a gopher server, or a web server, or ...). Are you familiar with King Canute?

Sean Conner 2020-12-23 06:22 UTC


Yeah, it’s true: we’re out in the open Internet and therefore we always have to defend against bots and crawlers, and I hate it. As for Cnut, he knew of the incoming tide and knew that he was powerless to command it. Yet, he didn’t drown, he didn’t build his house where the tide would wash it away, nor plant his fields where they would drown, and neither do I feel obligated to welcome the crawling tide, or to accommodate the creators of the crawling tide, or bow respectfully as the crawlers eat my CPU and produce more CO₂. Instead, I will build fences to hold back the crawlers, and rebuke their creators, and tell anybody who thinks that building autonomous agents to crawl the net is a solution for a problem that either their problem does not need solving or that their solution is lazy and that they should try harder.

I liked it better when I wrote emails back and forth to the creator of the only crawler.

Perhaps I should write up a different proposal.

To add your site to this new search engine, you provide the URL of your own index. The index is a gzipped Berkeley DB where the keys are words (stemming and all that is optional on the search engine side; the index does not have to do this) and the values are URIs; furthermore, the URIs themselves are also keys, with the values being the ISO language code. I’d have to check how well that works, since I know nothing of search engines.

Even if the search engine wants to do trigram search, they can still do it, I think.

If we don’t want to tie ourselves down, we could use a simple gemtext format:

=> URI all the unique words separated by spaces in any order

If the language is very important, we could put it in the header. I still think compression is probably important, so I’d say we use something like “text/gemini+gzip; lang=de-CH; charset=utf-8”.

Let’s give this a quick try:

#!/usr/bin/env perl
use Modern::Perl;
use File::Slurper qw(read_dir read_text);
use URI::Escape;
binmode STDOUT, ":utf8";
my $dir = shift or die "No directory provided\n";
my @files = read_dir($dir);
for my $file (@files) {
  my $data = read_text("$dir/$file");
  my %result;
  # parsing Oddmuse data files like mail or HTTP headers
  while ($data =~ /(\S+?): (.*?)(?=\n[^ \t]|\Z)/gs) {
    my ($key, $value) = ($1, $2);
    $value =~ s/\n\t/\n/g;
    $result{$key} = $value;
  }
  my $text = $result{text};
  next unless $text;
  my %words;
  $words{$_}++ for $text =~ /\w+/g;
  my $id = $file;
  $id =~ s/\.pg$//;
  $id = uri_escape($id);
  say "=> gemini://alexschroeder.ch/page/$id " . join(" ", keys %words);
}

Running it on a backup copy of my site:

index ~/Documents/Sibirocobombus/home/alex/alexschroeder/page \
| gzip > alexschroeder.gmi.gz

“ls -lh alexschroeder.gmi.gz” tells me the resulting file is 149MB in size and “zcat alexschroeder.gmi.gz | wc -l” tells me it has 8441 lines.

I would have to build a proof of concept search engine to check whether this is actually a reasonable format for self-indexing and submitting indexes to search engines.
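A proof of concept for the search side could be tiny. Something like this sketch, which assumes the gzipped gemtext index produced above (the file name and the exact matching are just examples):

use Modern::Perl;
use IO::Uncompress::Gunzip qw($GunzipError);

my $query = lc(shift // die "Usage: search <word>\n");
my $z = IO::Uncompress::Gunzip->new("alexschroeder.gmi.gz")
  or die "gunzip failed: $GunzipError\n";
while (my $line = <$z>) {
  # every line is "=> URI word word word ..."
  next unless $line =~ /^=> (\S+) (.*)/;
  my ($uri, $words) = ($1, $2);
  say $uri if grep { lc($_) eq $query } split ' ', $words;
}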

– Alex 2020-12-23 14:44 UTC


2020-12-16 Unban Tilde Team

@tinyrabbit recently contacted me and said they had noticed how my site appeared to be blocked from tilde.team (getting timeouts). And indeed, so it was! I had added the IP number to the firewall’s list after noticing a misbehaving Gemini crawler. Let’s hope the crawler is no longer misbehaving.

I also noticed that the Gemini mailing list archive has a very long discussion about robots.txt and all that. Oh my. It was literally too long and I didn’t read it.


2020-12-10 International domain names and Phoebe

Recently, I was wondering about international domain names and my Gemini wiki, Phoebe, after reading a post in French about international domain names. Today, I decided to give it a try. My laptop is called “melanobombus” because it’s black and I like bumblebees, so I wanted to give it the alias “mélanobombus” and see what happens.

The first thing to install was a punycode converter. I used “idn”. The punycode for “mélanobombus” is “xn--mlanobombus-bbb”. So this is how my “/etc/hosts” begins:

127.0.0.1       localhost
127.0.1.1       melanobombus
127.0.1.1       xn--mlanobombus-bbb

Then I started Phoebe with that hostname:

phoebe --host=xn--mlanobombus-bbb

Since I trust that Firefox knows how to handle international domain names, I started by pointing it at “https://mélanobombus:1965/” – and it worked. 😁

Using my super simple command line client did not work. When I asked it to connect to “gemini://mélanobombus/” it broke with an ugly error message. When I asked Elpher to connect to the same address, it didn’t work either: timeout. Lagrange also reported a network failure.

OK. At least now I know that this is a client problem because Firefox does the right thing.

Comments on 2020-12-10 International domain names and Phoebe

In Perl, I need to do the following to get the same punycode from a URL provided on the command line:

use Modern::Perl;
use IRI;                                  # IRI parsing
use Encode qw(decode_utf8);
use Net::IDN::Encode qw(domain_to_ascii); # punycode conversion
my ($url) = @ARGV;
my $iri = IRI->new(value => $url);
say domain_to_ascii(decode_utf8 $iri->host);

But then the client still sends the original IRI to the server, which then replies that it won’t proxy – unlike what happens with Firefox.

– 2020-12-10 23:25 UTC


Ah, it’s more complicated, of course. HTTP doesn’t actually send a URI! It sends something like this:

GET /some/path HTTP/1.1
Host: xn--mlanobombus-bbb

Well, I’m working on a branch where my simple command line client and Phoebe work together, at least. I feel like I owe this to my last name. In the previous millennium, I started to write Schroeder instead of Schröder because internationalisation was a big problem. This was before I had ever heard of Unicode and UTF-8.

– 2020-12-11 11:23 UTC


Oh my invisible friend... the changes required aren’t trivial. Still on it!

– 2020-12-11 12:25 UTC


Wow, I’ve been looking at the mailing list. Sooo much discussion! Three threads:

– 2020-12-11 16:22 UTC


Talked about it a bit on the Gemini IRC channel until I got angry. 😒

– 2020-12-11 18:02 UTC


OK, so I abandoned the international domain name (IDN) branch where the client sends gemini://東京.jp/ to the server because I thought it was stupid that the client could send gemini://東京.jp/ but not gemini://東京.jp/日本語. So then I went back to RFC 3987 and read the introduction to section 3, “Relationship between IRIs and URIs”:

IRIs are meant to replace URIs in identifying resources for protocols, formats, and software components that use a UCS-based character repertoire. These protocols and components may never need to use URIs directly, especially when the resource identifier is used simply for identification purposes. However, when the resource identifier is used for resource retrieval, it is in many cases necessary to determine the associated URI, because currently most retrieval mechanisms are only defined for URIs. In this case, IRIs can serve as presentation elements for URI protocol elements. An example would be an address bar in a Web user agent.

This seems ideal. Clients are free to use IRIs to communicate with their users: letting them enter an IRI like gemini://東京.jp/日本語 into the address bar and showing them such IRIs. But if these clients communicate with a Gemini server, they need to use URIs. They need to request gemini://xn--1lqs71d.jp:43343/page/%E6%97%A5%E6%9C%AC%E8%AA%9E.
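In Perl, the client-side conversion might look like this – a sketch assuming Net::IDN::Encode and URI::Escape, using the example IRI from above (without the port):

use Modern::Perl;
use utf8;
use Net::IDN::Encode qw(domain_to_ascii);
use URI::Escape qw(uri_escape_utf8);

my $host = "東京.jp";   # the IRI's host, punycoded for the wire
my $path = "日本語";    # the IRI's path, percent-encoded as UTF-8
say "gemini://" . domain_to_ascii($host) . "/page/" . uri_escape_utf8($path);
# prints gemini://xn--1lqs71d.jp/page/%E6%97%A5%E6%9C%AC%E8%AA%9E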

This reduces the problem for Phoebe to a much smaller set of problems.

How does a server administrator start Phoebe such that it serves an international domain name (IDN)? I added code that converts the host name provided from the current locale (which is what the administrator is using in their shell) to Unicode, converts that to punycode, and uses the punycode to look up the IP addresses using getaddrinfo(3).
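A sketch of that startup conversion, assuming the Encode::Locale and Net::IDN::Encode modules – not Phoebe’s actual code:

use Modern::Perl;
use Encode qw(decode);
use Encode::Locale;                       # registers the "locale" encoding
use Net::IDN::Encode qw(domain_to_ascii);
use Socket qw(getaddrinfo);

my $host = decode(locale => shift);       # --host as typed in the shell
my $punycode = domain_to_ascii($host);    # mélanobombus → xn--mlanobombus-bbb
my ($err, @addresses) = getaddrinfo($punycode, 1965);
die "Cannot resolve $punycode: $err\n" if $err;
say "Serving $punycode on " . scalar(@addresses) . " address(es)";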

When deciding whether to serve a URL, Phoebe checks for the punycode representation of the host names: xn--1lqs71d.jp. Anything else is considered to be a proxy request and is most likely denied.

The part handling the percent encoded paths already exists and already works.

What remains is a usability problem, of course. When users write their gemtext, they need to use some sort of tool to do the conversions. It’s basically delegated to Editor support. I guess I’m fine with that, for the moment. Allowing users to link to IRIs and transparently translating them to URIs as they get sent to the client would be an easy change to make. I’d still have to solve such problems as how to handle a space character. If the server sees “⇒ One Two Three” this is a relative link to “One”. Otherwise the user would have had to write “⇒ One%20Two Three” or “⇒ One%20Two%20Three”. Then again, perhaps I can just leave it as-is because I often copy and paste weird URIs from elsewhere.

In either case, I can definitely delay this. 😁

So now all I need to do to get some closure is to add some sort of IRI handling to my simple command-line client.

My “/etc/hosts” has the punycode encoding of the new hostname:

127.0.0.1	localhost
127.0.1.1	melanobombus
127.0.1.1	xn--mlanobombus-bbb

I start Phoebe using a non-ASCII hostname and a non-ASCII pagename:

script/phoebe --host=mélanobombus --wiki_page=Schröder

Then I use the “gemini” client:

$ script/gemini --verbose gemini://mélanobombus/page/Schröder
Contacting xn--mlanobombus-bbb:1965
Requesting gemini://xn--mlanobombus-bbb:1965/page/Schr%C3%83%C2%B6der
20 text/gemini; charset=UTF-8
# Schröder
This page does not yet exist.

More:
=> gemini://xn--mlanobombus-bbb:1965/history/Schr%C3%83%C2%B6der History
=> gemini://xn--mlanobombus-bbb:1965/raw/Schr%C3%83%C2%B6der Raw text
=> gemini://xn--mlanobombus-bbb:1965/html/Schr%C3%83%C2%B6der HTML

Happy! 🥳🚀🚀🎉

And when I use Firefox, it works as well. 🙂

The log. Notice the host header.

[debug] HTTP headers: referer => 'https://xn--mlanobombus-bbb:1965/', dnt => '1', accept-encoding => 'gzip, deflate, br', upgrade-insecure-requests => '1', user-agent => 'Mozilla/5.0 (Windows NT 10.0; rv:78.0) Gecko/20100101 Firefox/78.0', accept-language => 'de-CH,de;q=0.8,en-US;q=0.5,en;q=0.3', accept => 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8', host => 'xn--mlanobombus-bbb:1965', connection => 'keep-alive'
[info] Looking at GET /page/Schr%C3%B6der HTTP/1.1
[info] Serving Schröder as HTML via HTTP

– Alex


OK, I got something!

– 2020-12-14 18:49 UTC


URL Interop, by @bagder. “This document is an attempt to describe where and how RFC 3986 (86), RFC 3987 (87) and the WHATWG URL Specification (TWUS) differ. This might be useful input when trying to interop with URLs on the modern Internet.”

– Alex 2021-01-18 10:25 UTC


2020-12-04 Phoebe 2 with Gemini chat

As Phoebe 2 is based on a streaming server framework, I can now implement a Gemini based chat server!

Here’s a screenshot. Explanation below!

Screenshot

In the top left corner you see the Phoebe server with debug log level. It shows user kensanata joined the chat, how Alex joined the chat, and how one of the chat members sent a URL with a text message.

In the top right corner you see user kensanata connecting with my command line client, using my Astrobotany certificate from the Elpher directory. This user was the first user in the chat and so they see “You are the only one.”

In the bottom left corner you see user Alex connecting with my command line client, using my alex certificate from the Elpher directory. This user was the second user in the chat so they see “Other chat members: kensanata”. At this point, user kensanata in the top right corner sees “Alex joined”.

In the bottom right corner you see user Alex connecting a second time, but this time not to listen but to say something in response to a 10 (input) prompt: “This is a test”. At this point, user kensanata in the top right corner sees “Alex: This is a test.”

🎉🚀🚀

Comments on 2020-12-04 Phoebe 2 with Gemini chat

It seems like my command-line Gemini client isn’t very good at this: it hangs up after a while. Lagrange, on the other hand, seems to work just fine! Hm... Ah, there it is. It needs an inactivity timeout increase and there are two different timeouts: one for the connection establishment and one for inactivity, and I was setting the wrong one. But now it’s fixed.

Here’s how I start it, from the Phoebe work-directory:

script/gemini \
  --cert=/home/alex/.emacs.d/elpher-certificates/alex.crt \
  --key=/home/alex/.emacs.d/elpher-certificates/alex.key \
  gemini://transjovian.org/do/chat/listen

And then when I want to say something:

script/gemini \
  --cert=/home/alex/.emacs.d/elpher-certificates/alex.crt \
  --key=/home/alex/.emacs.d/elpher-certificates/alex.key \
  gemini://transjovian.org/do/chat/say?Hello

Or point a regular client at the URL:

If you’re a programmer interested in this, you could of course write your own specialized chat client. 🙂

– 2020-12-04 15:15 UTC


Now we can reimplement IRC, badly! 😂

I am already wondering: what about spam? Harassment? Moderation? Blocking?

– 2020-12-04 18:19 UTC


There’s now a chat client that does both at the same time: forks and the parent connects to the “listen” URL and the child loops a prompt and sends stuff to the “say” URL. Currently the only issue I have is that sometimes the prompt gets messed up because I don’t know my way around the terminal control sequences. ESC 7, ESC 8, ESC [ 1 G, gaaaaah. But it’s good enough to go to bed, now. 😀

Also, I think clients disconnecting aren’t registered by the server. So now there are about ten Alex chat members and they’re all disconnected. Too bad!

– 2020-12-04 22:30 UTC


I think it’s really working! From the manual page of gemini-chat…

First, generate your client certificate for as many or as few days as you like:

openssl req -new -x509 -newkey ec -subj "/CN=Alex" \
  -pkeyopt ec_paramgen_curve:prime256v1 -days 100 \
  -nodes -out alex-cert.pem -keyout alex-key.pem

Then start the program:

gemini-chat --cert=alex-cert.pem --key=alex-key.pem \
  --listen=gemini://transjovian.org/do/chat/listen \
  --say=gemini://transjovian.org/do/chat/say

Or, if you use a client like Lagrange, just open two tabs:

Currently I’m using the client certificate’s common name as your name on the channel. I think this makes more sense than using the fingerprint because we humans are confused when we see a bunch of users called Alex, each representing a different person.
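On the server side, getting that common name is a one-liner if the connection is an IO::Socket::SSL socket – a sketch, not necessarily how Phoebe does it:

use Modern::Perl;
use IO::Socket::SSL;

sub chat_name {
  my $socket = shift;                           # an accepted TLS connection
  my $name = $socket->peer_certificate('cn');   # common name of the client certificate
  return $name || 'anonymous';                  # no client certificate, no name
}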

– Alex 2020-12-05


Sending a timestamp in the HH:MM UTC format as a “keep alive” every five minutes.

– Alex 2020-12-05 09:57 UTC


If somebody knows how to write a nice terminal application that prints input to stdout while keeping a prompt with readline going at the bottom, let me know. Right now, that doesn’t work.

– Alex 2020-12-05 10:02 UTC


2020-11-30 Gemini specification changes in a disagreeable way

For a moment I thought that the scheme is now mandatory in gemtext links. As it turns out, that was not the intent. In any case I’m happy to see many people on the mailing list liking scheme-less links as well. These are “network-path references” and they are cool for multi-hosting things.

Leaving the scheme off allows for a nice feature. Let’s say we have three resources: A is only reachable via Gemini, B is only reachable via HTTPS, and C is reachable by both. We’re writing gemtext that is going to be served to Gemini clients as-is and to web browsers via a simple transformation to HTML.

We can write these three links using the following:

=> gemini://A  A
=> https://B   B
=> //C         C

All URL processing clients know how to handle relative links, based on the current URL the client is looking at.

A Gemini client encountering these three links on a Gemini page links C to a Gemini resource, generating the following three links:

gemini://A
https://B
gemini://C

A browser encountering these three links on an HTTPS page links C to an HTTPS resource:

gemini://A
https://B
https://C

The problem is that I cannot assign a scheme to C when I write the gemtext, because if I do, one of the results is going to be borked: either the Gemini client sees a link to C using HTTPS, or the web browser sees a link to C using Gemini. I can’t handle it on the server side either: if I use the gemini scheme for C and the server is serving an HTTPS request, then the server has to somehow know that A cannot be reached via HTTPS but C can. There is nothing in the gemtext to help the server make that decision. The only agent qualified to make it is the client, based on the base URL.

Now, as gemtext already allows relative URLs this functionality comes “for free” (once you realize that your client has to use a decent URL parsing library).
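To see the resolution at work, here’s a tiny example using Perl’s URI module (an assumption on my part; any RFC 3986 compliant parser behaves the same way):

use Modern::Perl;
use URI;

my $link = "//C/page";
say URI->new_abs($link, "gemini://A/some/page");  # gemini://C/page
say URI->new_abs($link, "https://B/some/page");   # https://C/page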

Phoebe uses this.

For reference, see RFC 3986, “Uniform Resource Identifier (URI): Generic Syntax”.

  • Section 4.2, Relative Reference: “A relative reference that begins with two slash characters is termed a network-path reference; such references are rarely used.”
  • Section 5.1, Establishing a Base URI: “The term “relative” implies that a “base URI” exists against which the relative reference is applied.”

Comments on 2020-11-30 Gemini specification changes in a disagreeable way

The page used to say “Sadly, the scheme is now mandatory in gemtext. … This is unfortunate, because … I don’t see the benefit of the simplification. We’re simplifying part of the relative URL resolution and losing interesting functionality.”

– Alex


Strong agree that this is a step in the wrong dir.

– Sandra Snan 2020-11-30 17:21 UTC


Let me get this clear. So if I write “⇒ tldr.txt tldr text file” is that a valid link or not?

– Sandra Snan 2020-11-30 18:42 UTC


I was under the impression that the spec update /only/ referred to the over-the-wire requests and responses? I.e., an author could still link to //example.com, but the browser would have to resolve that to gemini://example.com (or http://example.com) before requesting it from the server. Also, the regular rules regarding URI references would apply (so Sandra’s ⇒ tldr.txt tldr would resolve to the tldr.txt in the current directory).

I hope I’m not wrong, because the way you’re writing about it is a bad direction – but I think the way I’m writing about it is the accurate picture.

– Case D (acdw), 2020-11-30 13:01 CST (UTC-6)


Let’s hope it was just people misunderstanding the email!

And yes, Sandra, that would be legal. Example:

– 2020-11-30 19:49 UTC


In that case I don’t mind. My apologies to Solderpunk.

– Sandra Snan 2020-11-30 21:38 UTC


Yeah, looks like this is going to change.

That’s what I get for uncharitable reading. I’ll make some changes to the blog post itself.

– Alex 2020-12-01 06:01 UTC


2020-11-25 The UI

I was talking to @Sandra about user interfaces. It all started with a link by @yhancik to an issue on GitHub for the CSS working group:

“Our primary goal with this document is to draw the attention of the developers of all major browser engines, Blink, WebKit and Gecko and invite them and anyone else interested to share their opinions about this proposal and the idea of a new, unified open web standard built around JavaScript.”

I think the proposal does touch on something meaningful (otherwise it would be much less interesting, of course): why do we need HTML, and CSS, and JS? And, given that this appears to meet a need, can this need be satisfied with a simpler solution, one that is easier to teach, easier to learn, and less error prone? If I arrived on the web today, I would also wonder: what is this shit of garbled intertwined same same but different tentacularity‽ JS & JSON-über-alles‽ Maybe, maybe not. But an interesting question nonetheless.

I mean, replacing all the various layers and their specific syntax with a unified JS+JSON representation is an interesting idea. And yes, I also share in the big nerd dream of a protocol based only on semantics. The first problem that arises is where the semantics end and the presentation begins. Sometimes the presentation also carries meaning. But more importantly, if anything, print has shown us that there is an intense need for design: fonts, layouts, images, colours; these are all important to the newspaper and magazine industry, to the pamphlet makers and information visualisation designers, to the governments, the public relations officers, the clubs and associations, and to book authors, and so on. For them, I suspect semantics-first is a bit like lyrics-first for a music sharing site: it’s not going to happen.

So between the dream of a semantic first web of a global information sharing network and the interest of everybody else in rich media for communication, I don’t know what to say.

But in my heart, I carry my own dream: to have a client that offers a written natural language UI: there is no need for forms because information is negotiated between client and server in a back-and-forth, natural-language, text-mediated exchange until the system has the information it needs. UIs are designed Inform 7 style. And for that kind of setup, the Gemini protocol and markup would be good enough. We’d just need more interesting clients and servers:

  • clients need a presentation area where the last result is shown (Gemini servers already do this)
  • clients need a small presentation area where the last exchange is shown (a Gemini streaming server could provide this on a special URL, within a session established via client certificate)
  • clients need a small input area (this is the single text area input Gemini already provides, except that some of the clients I know will stop showing the current page as they ask for input)
  • servers need good parsing capabilities for text input

Maybe somebody somewhere knows how to hook up Inform 7... I want it!

Perhaps you think people like to pick things from a list, but realistically, how often is this true?

Perhaps it’s only true for short lists in the four to ten items range, though.

  • How should we address you in letters? Mr. Even if the list is really short, pointing at specific areas on the screen with the mouse is difficult for computer-illiterate people and for nerds who like their keyboard-driven user interfaces.
  • What city do you live in? Zürich. The number of named localities with a postal code in Switzerland is several thousand items long.
  • In what country? Switzerland. Even the list of countries is long and I hate scrolling through lists to find that one country near Sweden and Swaziland.

For all these questions, picking an answer from a list radio-button or drop-down style is painful. A drop-down with autocomplete might work. And now we’re not far from a simple text interface! Or should I call it prompt interface? We could autocomplete on the client, and we usually do in text editors that offer completion of all sorts of words.

I guess what I’m also implying is that no site can satisfy the need for an answer to a really specific problem where you just want to talk to a human. At least my hypothetical site would say: “I’m sorry but that sounds really specific and I have no idea what to say. If you call 123456789 you can talk to a human who can hopefully help. What do you think?” / “OK, will do.” / “Cheers, good luck.” Or something like that.

Comments on 2020-11-25 The UI

A little later, and there’s an interesting defence of the tentacularity of the web: it basically says we need structure (HTML), presentation (CSS), and interactivity (JS) all separated, and goes on to list all the ways in which these needs make sense, and I agree. That, to me, is not the problem. The problem is that we have a mess of three different languages and many more standards to learn. The proposed solution in the article involves adding more layers to the mountain, mentioning XSLT, Less, and many more. All allow you to generate the three we need from some other kind of format, except it isn’t standardized and the tech stack is even more byzantine than before. This is the kind of quick fix that doesn’t address the fundamental problem: the current technologies (HTML, CSS, and JS) are hard to teach and hard to learn, therefore adding to the stack is not going to make the problem go away.

– 2020-11-26 09:13 UTC


2020-11-20 Moku pona is on CPAN

Now that I’m slowly learning how to package code for CPAN – all of that after realizing that I could in fact upload tools and scripts without regular Perl modules for others to use – I’ve packaged and uploaded moku pona as well.

Thus, if you’re a Perl user, here’s how you’d install it:

cpan install App::mokupona

I hope it works for you. 🙂

Moku pona is a Gemini based feed reader. It can monitor URLs to feeds or regular pages for changes and keeps an updated list of these in a Gemini list. Moku pona knows how to fetch Gopher URLs, Gemini URLs, and regular web URLs.

You manage your subscriptions using the command-line, with moku pona.

You serve the resulting file using a Gemini server.

You read it all using your Gemini client.

Comments on 2020-11-20 Moku pona is on CPAN

I added lupa pona to CPAN as well (App::lupapona), which should help you self-host a directory of files, such as the moku pona directory.

– Alex 2020-11-21 23:29 UTC


2020-11-18 Feed reader, page watcher, via Gopher, Gemini, or the Web

Yeah, it’s moku pona 2! It’s not quite ready for release, yet. But if you want to try it, the repo is still the same. And if you only care about Gopher, you shouldn’t upgrade. Just stay on release 1.1 and you’ll be fine. There haven’t been any updates in a very long time and I don’t think there’s anything that needs changing. But for Gemini, I needed some changes:

  • I want to use my Gemini client to browse the moku pona updates
  • I want to subscribe to Gopher and Gemini pages
  • I want to subscribe to Atom and RSS feeds
  • I want to subscribe to feeds served via HTTP, HTTPS, and Gemini

Moku pona was close, but not quite there yet. If you look at the current repository, you’ll see that the main branch is still struggling with producing the correct output. It seems hard to believe, but my problem is that the page with the updates has bugs. For a while now the code can’t seem to drop older updates of the same feed and it drives me nuts.
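The dedup itself ought to be simple. A toy sketch, assuming update lines of the form “=> URL YYYY-MM-DD title” (which may not match moku pona’s actual format), keeping only the newest line per URL:

use Modern::Perl;

my (%newest, @order);
while (my $line = <STDIN>) {
  next unless $line =~ /^=> (\S+) (\d\d\d\d-\d\d-\d\d)/;
  my ($url, $date) = ($1, $2);
  push @order, $url unless $newest{$url};
  $newest{$url} = [$date, $line]
    if !$newest{$url} or $date gt $newest{$url}[0];
}
print $newest{$_}[1] for @order;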

Well, I’m still working on it. But if you’re interested, give it a look.

Eventually, I’m hoping to get it onto CPAN…

