Diary

Welcome to my Wall of Text! 🙂

This is both a wiki (a website editable by all) and a blog (an online diary about the stuff Alex Schroeder reads and does). If you’re a friend or relative, you might be interested in reading Life instead of this page. If you’ve come here from an RPG blog, you might want to head over to RPG. There are other similar categories to be found on the SiteMap.

For role-players, there is also a separate German RSP category.

2021-07-30 The web server locking me out

How strange. Today, once again, I suddenly find myself locked out of my websites. I cannot connect from my phone or my laptop. Both are on the same wifi, so both have the same IP number, and temporary 10min lockouts are a sign of fail2ban adding the IP number to the firewall rules. This usually happens when a single IP overloads the server. Hm… This does sound familiar.

Oh, here it is:

Recently I have noticed that I’m sometimes banned from my own websites. That is, the site is not reachable, but when I check any of the “is the site down for everybody or is it just me?” sites, it’s always just me. I also cannot SSH to the machine unless I use the IPv6 address directly. – 2020-01-31 Banning myself with fail2ban

While I wrote the above, access came back. Let’s investigate!

As it turns out, my website is reachable again, but ssh still refuses to connect. How weird!

But IPv6 works:

ssh -p 882 root@2a02:418:6a04:178:209:50:237:1

Strangely enough, my IPv4 address isn’t mentioned in the fail2ban logfile:

grep 178.209.50.237 /var/log/fail2ban.log

My IP number is also not listed in any of the firewall rules:

fail2ban-client status
fail2ban-client status alex-apache
fail2ban-client status recidive
fail2ban-client status sshd

Nothing suspicious.
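For the record, had my IP number shown up in one of those jails, removing it again would have been a single command (using the sshd jail as an example):

fail2ban-client set sshd unbanip 178.209.50.237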

But this is still the situation: I can now visit my sites using the web (ports 80, 443), Gemini (port 1965), but not SSH (port 882). I’m guessing that the tools I use (curl, Firefox, Elpher) all use IPv6?
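In hindsight, forcing the address family would have been a quick way to test that guess; curl has flags for it:

curl -4 --head https://alexschroeder.ch/
curl -6 --head https://alexschroeder.ch/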

alex@melanobombus ~> ssh sibirocobombus
ssh: connect to host alexschroeder.ch port 882: Connection refused
alex@melanobombus ~> ssh -p 882 178.209.50.237
ssh: connect to host 178.209.50.237 port 882: Connection refused
alex@melanobombus ~> ssh -p 882 2a02:418:6a04:178:209:50:237:1
Linux sibirocobombus 4.9.0-3-amd64 #1 SMP Debian 4.9.30-2+deb9u5 (2017-09-19) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
You have new mail.
Last login: Fri Jul 30 00:06:04 2021 from 85.195.244.167

Ah! I checked my “~/.ssh/config” file:

 Host sibirocobombus
   HostName alexschroeder.ch
   Port 882
   User alex
   AddressFamily inet

That “AddressFamily inet” is part of the answer: it mandates IPv4. The default is “any”. So now ssh works again.
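The obvious change, assuming I’d rather have ssh fall back to IPv6 on its own the next time IPv4 misbehaves, is to relax that line:

 Host sibirocobombus
   HostName alexschroeder.ch
   Port 882
   User alex
   AddressFamily any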

Back to check the phone. Yes, I can connect again. I guess… I guess there was an IPv4 traffic hiccup somewhere?

Comments on 2021-07-30 The web server locking me out

I like to trace the route in such situations, to at least see at which point along the way the problem lies.

– deshipu 2021-07-30 09:16 UTC


Good point! Must remember that for next time.
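For the record, something like this, once per address family, ought to show where along the path the packets disappear (assuming the traceroute on my system knows the -4 and -6 flags):

traceroute -4 alexschroeder.ch
traceroute -6 alexschroeder.ch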

– Alex 2021-07-30 09:23 UTC


2021-07-30 Forged in the Dark

I’m liking the Forged in the Dark games I’ve recently encountered. I’m two sessions into a game of John Harper’s Blades in the Dark with my wife and GM Jörg (we did our first score), and I must have listened to thirty hours of Sean Nittner’s Actual Play podcast, based on the YouTube channel, based on the Twitch stream, where he plays with GM Stras Acimovic, Jenn Martin, Misha B, and Jahmal Brown, and they play Stras Acimovic and John LeBoeuf-Little’s Band of Blades.

I don’t think the setting is all that important. We all know that rules and setting can be separate, and we all know that some books have rules and setting tightly intertwined. In this case, Blades in the Dark is about a gang in a sort of magic, steam & electricity haunted London, and Band of Blades is about a group of soldiers in a sort of magic guns & undead military fantasy campaign (think Glen Cook’s The Black Company). Both use the mechanics first introduced by John Harper. I feel that, as in the brilliantly short Lady Blackbird, Harper manages to pull together the strands that are all out there in the various games being played right now and transform them into something new but vaguely recognisable.

In short, the game uses d6 dice pools. Your traits determine how many dice you use, add one if somebody is helping you, add one if you’re pushing yourself, multiple people roll their own dice if it’s a group action. All these extra dice are paid for with “stress”: helping another character costs 1 stress, pushing yourself costs 2 stress, group actions cost the leader 1 stress for each failed roll by a group member. The highest roll in the dice pool determines the result. 1–3 is a failure, 4–5 is a success with a consequence, 6 is a success, two sixes is a critical success. Consequences can be resisted with more rolls, and if you don’t get a success in the resistance roll, that causes even more stress. And when you’re stressed out, you start acquiring permanent “trauma”: changes to your character that end up putting your character out of the game.
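To make that tangible, here’s a toy simulation of the core roll in bash; the stress bookkeeping and the various edge cases are deliberately left out:

# roll a Forged in the Dark dice pool: the pool size is the first argument
pool=${1:-2}
best=0
sixes=0
for ((i = 0; i < pool; i++)); do
  die=$(( RANDOM % 6 + 1 ))
  (( die > best )) && best=$die
  (( die == 6 )) && (( ++sixes ))
done
# read the highest die: 6 is a success, 4–5 a success with a consequence
if (( sixes >= 2 )); then echo "critical success"
elif (( best == 6 )); then echo "success"
elif (( best >= 4 )); then echo "success with a consequence"
else echo "failure"
fi

Run it a few thousand times with different pool sizes and it becomes obvious why paying stress for an extra die is so tempting.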

The three outcomes are not new: succeed, fail, and mixed results are something we’ve seen in Powered by the Apocalypse games. Instead of being tied to the very specific Moves listed on a character sheet, the consequences here are tied to rulings made by the game master (GM). I seem to remember Chris McDowall writing about the simplicity of his Into the Odd system, where a die providing a simple 50:50 chance was good enough, since all you needed to make it work was to adjudicate the consequences accordingly. The Forged in the Dark games do just that: there is some necessarily vague guidance on how to adjudicate based on “Position” (controlled, risky, desperate – i.e. how bad is it going to get when you fail) and “Effect” (limited, standard, great – i.e. how good is it going to get when you succeed). Since you also get that dice pool instead of a single die, the simplicity McDowall was talking about is lost, but the core idea remains: since the GM is going to adjudicate anyway, why not entrust them with it and just provide some guidance so that the whole table knows what’s going on.

Other fun elements are the encounter roll and flashbacks. The goal here is to skip all the planning of a heist or an operation. Start playing immediately! Make a single encounter roll and there’s your starting position (controlled, risky, desperate). The GM improvises the first scene of the adventure based on the starting position rolled and some info the players provided beforehand. If the players would have benefited from some planning, they can always call for a flashback scene where they did just that, and depending on how obvious or improbable it seems, all they pay for it is more stress. Stress is the measure of all things.

I think that basically covers it. The above should also make it obvious that most encounters on an adventure are handled by a very small number of rolls. Sometimes a single roll is enough for the scene: you explain how you overcome the obstacle, roll some dice, maybe take some damage (“harm”) or face some other consequence, perhaps the fiction changes, and the game moves on. If you want to resist the consequences, there’s a second roll to be made. For big bad bosses or more complicated activities, multiple rolls might be necessary, but there’s practically no difference between those rolls happening in the same scene or in another scene. It reminds me of the simple “Bloody Vs” tests in Luke Crane’s Burning Wheel.

I used to be fond of saying that I aim for the fights in my classic D&D games to be short: two rounds is ideal. I hate endless slog fests. It’s why I think that Save or Die effects are so important in D&D. You don’t actually die if you run these fights a certain way, you just start using up “neutralise poison” and “raise dead” spells, and you force players to avoid simple hack & slash fights. When their hit points are low, they fear the sword; when they have more hit points, they fear the level drain; when they have more levels, they fear the poison; and on it goes. The simple “Bloody Vs” tests in Burning Wheel, or these rolls in Powered by the Apocalypse and Forged in the Dark games, are even better: they mandate a resolution in one or two rolls, and then you just move on. Next scene!

My enthusiasm is perhaps not entirely surprising given that I just finished a D&D 5E Humblewood campaign with plenty of casual players who didn’t know the rules like the back of their hands (and who also struggled with no translation of the rules being available, I think), resulting in drawn-out fights where everything just slows to a crawl. I’m easily bored as a player, and this doesn’t work well for me. I start volunteering for session report write-ups that I do while the game is still ongoing; that’s how many spare cycles I have.

More interesting combat rules don’t make a slow fight more interesting. The fix for a slow fight is fewer rolls and moving on to the next scene, I feel.

Another thing I’m trying right now is to have fewer people in the game. We’re just three people now: a GM and two players. Perhaps we had too many people in that D&D game and that exacerbated the problem.

Comments on 2021-07-30 Forged in the Dark

Great read, thanks!

– digitalsin 2021-07-30 15:14 UTC


@Sandra wrote a long blog post, starting out by describing how Blades in the Dark can be super slow, as every enemy gets a clock and every roll invites a lot of discussion, and how D&D 5E fights can be super fast and over in a few minutes, with examples for all her points; then she pivots and discusses how this is not necessarily so. It depends on the presence of those aforementioned clocks, on the number of characters per player, on playing online or at the table, and on the ceremony one upholds around initiative, rolls, and keeping information secret (which in all my D&D 5E games has been optimized for long and boring fights, for my taste, unfortunately).

All of the above needs to be nuanced by two hard facts: 1. As the aforementioned Blades campaign went on, fights became way quicker. 2. In our own D&D game, fights are now super slow. – Why fights take a long time

– Alex 2021-07-31 10:02 UTC


Yeah, combats are a bit of a slog, especially from the player’s perspective, although I still find 5E better than 3E in that regard, with all of 3E’s rules complications. It also influences pacing if each fight takes an hour or two (or sometimes three). As a DM, I often skip planned fights when we’ve already had one, because otherwise nothing would get accomplished story-wise.

I am looking forward to going back to in-person gaming. I remember combats being a bit faster, or at least they felt more exciting than staring at a screen all the time. There was also more room for side talk.

– Peter 2021-07-31 13:27 UTC


2021-07-29 Creative projects, perpetually work in progress

Sometimes I get a bit overwhelmed because many of my creative projects are things that I can’t ever “finish.” Detailing a fantasy world, mapping the real world, anything of importance, probably: such projects are forever unfinished. It’s us humans who decide to add “Fin” at the end of a track, but we all know the work continues. It is perpetually a work in progress. Sometimes we just decide to no longer work on it.

Face Generator generates faces from elements like eyes, ears, noses, mouths, hair, and so on. I could add new collections, try to find artists who want to contribute, or add to my own collection. I sometimes load a gallery of random faces and wonder. I’d like to add more elements to generate recognizably black men and women; I would like to add more good-looking hair styles for women; I would like to make elves better in some way; I’d like to add more intelligent beasts. The spider faces are particularly cool, but there are probably other types that would make more sense. On the technical side, I also know that there’s a problem: the code tries to figure out whether an element has transparent pixels or whether “white” is actually transparent. For my images, the latter is the case: white is transparent. Therefore, there’s no way to add “white hair”, because the underlying lines would show through (head silhouettes, ears). So I could also spend some time drawing “white hair” with actual white pixels and real transparency, and make sure the code knows how to mix these two modes. At the same time, I feel there’s a problem of diminishing returns here. I always wonder: is it worth the extra effort? Perhaps the most enjoyable job would be to just add a new type of image and start drawing again, right from the beginning.
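For what it’s worth, telling the two kinds of elements apart is easy enough from the command line using ImageMagick (the file name here is made up):

identify -format '%[channels]\n' hair.png
# “srgb” means no alpha channel: white is meant to be transparent
# “srgba” means the image has real transparency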

Text Mapper generates maps for tabletop role-playing games. I’m happy with the Alpine algorithm. It just works. I’d love it if it could generate steeper valleys, but I’m not quite sure what I’m looking for, so I’m happy enough as it is. I’m happy with Erin D. Smale’s algorithm, but it doesn’t have rivers; one could add them. That would require an altitude model, which I currently don’t have. I’m happy with the Gridmapper algorithm, but I’d also love to generate two or three levels for a dungeon so that the stairs line up. I have code that does it, but it isn’t quite done. It was surprisingly tricky and I’ve lost steam. The Apocalypse algorithm is simple and cool. Perhaps I could think of more details to add? The Traveller algorithm generates beautiful curved routes. The Island algorithm is new and interesting, but I ran out of steam. What now? What next? The Archipelago algorithm was supposed to offer something new, but I forgot what it was, and now I just see an unfinished project.

Hex Describe is the worst. It contains many hundreds of random tables, big and small. It takes a Text Mapper map and uses the random tables to generate a mini-setting for tabletop role-playing games. When it works well, reading the result makes me want to run games in these worlds. I love that. I love the Alex Schroeder tables together with the Alpine maps. I should get back to these tables and add more details. A while ago, I wanted to detail all sorts of extra-planar explorations, but I ran out of steam. The combination of Traveller tables with Traveller maps works well enough, but I’m thinking the level of detail is uneven. The crews of a type M subsidized liner have a bit too much detail, for example. I’m not sure about the skill selections. Many ship types are missing. I’m not sure about the ship type distribution. It needs more missions. In actual play, I’m finding some of the missions to be too boring; I never use them. But I also don’t delete them from the generator, so what’s the deal? I don’t know.

Gridmapper is an SVG+XHTML+JavaScript single-file application. It could use an overhaul: compressing and base64-encoding the maps, organizing the saved dungeon maps on the wiki in some way, rewriting the code in a style that I like better. This is still my first and only JavaScript project, but I know that I don’t like how the code is organized.

Traveller Subsector Generator is a separate web app that also generates a Traveller subsector or sector map. That’s right. The algorithm from Text Mapper, rewritten. I guess I’m happy it’s on CPAN, at least. I’m not sure what to think of the traveller name generation, of the “realms” and the color codes. Was it really worth it?

Character Sheet Generator is a web application to fill SVG files with variables. In a way, better than fillable PDFs because a computer can easily churn out a ton of character sheets, and I do it for Halberds & Helmets. The promise was, however, to do this for more rule sets, to have more templates, and I never went back to it; and I never made it easy for other people to contribute their rules, either.

The Halberds & Helmets podcast is always in need of new episodes, I guess. But contemplating how many episodes it will take for the podcast to be “finished?” Impossible.

Sigh! Unfinished business wherever I look. I mean, it’s cool. Forever projects take forever. It’s like practicing any art, you just play and paint and write and do … and then you die, I guess. But it’s probably also why we like to take slices of our art and label them “finished”. There are songs, albums, paintings, essays, short stories, books, blog posts, and they’re “done” at some point.

Even this post will be “done,” promised!

Perhaps I’m setting myself up for these things. I can always go back to wiki pages and edit them. Leave comments on them. Perhaps that’s why I like the blog-like structure of this wiki. I like the idea of a wiki page, but a wiki page is “forever in progress.” A blog post is more or less “finished.” Sure, I can add comments, and I do, but those too are “finished.”

Anyway. Thoughts on projects and writing and art.

Comments on 2021-07-29 Creative projects, perpetually work in progress

It would’ve been two seconds of work for me to change tsp to white if I knew you wanted that; I was trying to more or less match random.png

– Sandra Snan 2021-07-30 06:42 UTC


The problem is rather that I need all the PNGs with a transparent background and the opaque parts filled in with white pixels. None of the existing assets have that. Currently, all are either blue and white, or blue and transparent.

– Alex 2021-07-30 07:05 UTC


I reckon you’ll probably need real transparency for some other case, so it’s probably better to implement it now rather than building hacks on top of hacks. Then again, if it’s just a toy project, just do whatever’s more interesting instead ¯\_(ツ)_/¯

– Anonymous 2021-07-30 07:27 UTC


Yes. Technically, it’s all there. I just have hundreds of tiny PNG files where I would have to either add the opaque pixels or make the surroundings transparent. It’s not a programming task. It’s the non-automatable task of fiddling with hair assets, and I’m not motivated to do it. So everybody has dark hair.

– Alex 2021-07-30 09:22 UTC


2021-07-28 The underbelly of university

I was recently talking about university with @susannah, remembering…

I remember finishing my master’s degree with great grades and an informal PhD offer, and yet sitting on my bed, crying, feeling like a terrible loser, my girlfriend confused: why this breakdown? It was hard to explain that I could not be proud of work done in an environment full of cheating, lying, looking the other way, and creative statistics. I had realized in my last year that way too much of it all was fake. I wanted no part in it. That’s how I experienced it, in any case. I’ve heard different experiences, later. But mine was not cool, in a weird way.

I remember being part of a “gifted students” program. A professor involved in the program suggested that I apply, and so I did. I had to bring two or three recommendation letters. The professor making the suggestion wrote the first one. I remember asking one of the quiet and unassuming professors, a real nerd, with an office at the far end of a long corridor. I looked up to them because they seemed entirely unconcerned about the publish-or-perish craze that was starting to pick up at the end of the last millennium. In the end, they did what I felt was right: they told me they weren’t going to write that letter because they felt that whereas I was a good student, I wasn’t exceptionally talented. I knew they were right. The professor who had made the suggestion, however, decided to ask the department head, who had spoken to me maybe once or twice. And without actually knowing me, they wrote that letter. Yay? I definitely felt like a gifted-student impostor.

I remember writing an excellent introduction for my master’s thesis, then doing my research, failing and not finding anything convincing, and my advisor telling me that no, this wouldn’t do. I would have to rewrite it. And so I did. Same results, but now it was a success story. I had proven the trivial thing that somebody had already postulated, under a name that obfuscates the fact that still nobody knows what’s going on. I so desperately wanted it to be over. I wrote day and night, eyeing the deadline I had set myself. I got an excellent grade. Perhaps my advisor was right? It didn’t feel right.

I remember the final oral exam with the department head. I had answered a bunch of questions but was struggling with the last one when time was up. I waited outside. After a while, they called me back in. I got the best grade: “With a little more time, I’m sure you would have answered that last question as well,” they said, smiling. Yay! I might have deserved a good grade, but this strange admission that they were giving me the best grade because they believed in me, not because I had actually answered all the questions? I guess their point of view makes sense if you’re a professor and don’t believe in grades, either. But back then, I believed in grades. I believed in the impartiality of tests. Stupid, but impartial. But that’s how it was. Some of the professors believed in me, but I did not believe in me.

Would it have been possible to accept a PhD position without losing my integrity? It might have been. It’s not the easy way, that’s for sure. Perhaps the difference is just the illusion that academia would be different from everything else. But there, as everywhere: narcissism, careerism, abuse of power, failures swept under the rug, sweet talking. Sure, the ideals are shinier. Sure, a good life is possible, with a small circle of friends, mutual support, a network of trust, by treading carefully. I guess I wasn’t up to it.

Now that I’m older, I often think that perhaps I shouldn’t have dropped the ball. Perhaps I might have stuck around and found a niche where I could have set up shop. Now work is about churning out code for clients, and none of it is going to save the planet, either. So I guess if you’re in a position where you’re feeling like an impostor in academia, I wish you the best of luck. May you succeed where I failed.

Think of that PhD comic where one of them stands before the group, saying: “Someone in our group … is an impostor!” And then they all confess to being one.

May you avoid it! Live long and prosper. 🖖

Comments on 2021-07-28 The underbelly of university

It’s depressing; I hear sentiments very similar to what you’ve written a lot from people in academia. But like you say towards the end, at least in academia there is the potential for self-directed work, meaning that in theory, at some point, you can work on what you believe in instead of what the market demands of you.

– Karim 2021-07-28 20:02 UTC


I can relate to your experience. I was already sick of academia after 2 or 3 years. The bureaucracy, the intrigues, and the all-important fight for tenure... all these frustrated PhDs teaching as assistant professors, some of them very talented... and the tenured professors without any clue or merit who were obviously good at marketing themselves or networking. There was also a lot of statistical cheating and dressing up of results in my field, psychology. So it became very clear to me that academia is not my world, and it was quite a struggle to bring up the energy to finally get my master’s degree. I never had any regrets about leaving academia behind. I very much prefer a client- and results-oriented field of work in a small business in the free market to the totally self-contained world of academia (or big corporations)... not to speak of all the political correctness and thought control that was already rearing its ugly head in the late 90s... I still remember one seminar in German literature where we talked about a text by Derrida and its claim that human beauty is a pure cultural construct. I made the counterargument that there might be a biological component to what humans find beautiful, with reasons anchored in reality and survival. And the professor became totally mad at me and silenced me with the notion that this remark was “borderline fascism.”

– Peter 2021-07-29 15:03 UTC


I guess the professor was thinking of Max Simon Nordau’s Degeneration (1892, German: Entartung) → Entartete Kunst.

I remember a conflict during a philosophy seminar on life after death where the professor postulated that the spirits of cows were obviously less developed than the spirits of humans, and I challenged that, asking him how he proposed to support that claim, suggesting that maybe we could use the complexity of behaviour as a proxy. But he didn’t want to spend time on that; it was simply a given. And me being a zoology student and a natural scientist with relentless enthusiasm for Karl Popper, I couldn’t listen any more. I ended up dropping philosophy as a minor and instead wrote up the missing lab reports I needed for physical chemistry to count as a minor. Too young to struggle for philosophy, I was.

– Alex 2021-07-29 16:40 UTC


I think it was more the usual premise of academics in humanities like sociology, literature, and philosophy: that everything is learned and cultural. From that point of view, arguing for biological factors in behavior or preferences is “biologism” or “fascism”.

Norbert Bischof, the best professor I had, taught evolutionary psychology and was a former pupil of Konrad Lorenz. He was obviously sick of all these discussions, but nevertheless always emphasized that many people in the humanities fall victim to the moralistic fallacy: “Men and women have to be equal, so there can’t be any biological differences from birth.” Whereas many biologists (and obviously some true fascists) might fall victim to the naturalistic fallacy: “Men are biologically better made for fighting and women for child care, so men have the obligation to make war and women have to stay home and raise children.”

– Peter 2021-07-29 17:48 UTC


2021-07-28 Browsers

The browsers I use…

Most of the time I use Firefox. I’m happy Mozilla exists. I’m happy there’s an alternative to Google Chrome. I’m super concerned that they are basically being propped up by Google as a shield and a tool: Google can claim that they’re not a monopoly, and Mozilla cannot be too outspoken about them. I think Aral Balkan is right: the big companies are everywhere, even sponsoring their competition, thus keeping them weak and malleable.

So, in light of the overwhelming support for surveillance capitalism by generally well-respected organisations that say they work to protect our human rights, privacy, and democracy, I have decided that I must be the one who’s wrong. – Aral Balkan criticising Google and Facebook and their corrupting influence

So yes, I use Firefox. I also change many of the defaults and install a handful of extensions to make it work. There are a gazillion defaults, and they change every now and then, which helps keep us all confused and our defences weak. I hate that. Having to install extensions is also problematic, as they increase our attack surface and make us more vulnerable. How easy it must be to bully, bribe, or buy the small-fry extension authors. I dislike this very much.

Basically I search for telemetry*enabled and switch it all to false, because I don’t understand why we need a bewildering plethora of options when almost all people will be either “I don’t care” or “fuck this shit”. This is false choice designed to confuse the unwary, if you ask me. – My Firefox Setup page keeps growing and growing

So what else should I be using, if I can? Not any of the Firefox-derivatives, that’s for sure.

I have qutebrowser installed but hardly ever use it. The lack of a menu makes it very difficult to discover; you’re immediately faced with a terrible wall.

qutebrowser is a keyboard-focused browser with a minimal GUI – Qutebrowser

I recently compiled and installed netsurf. We’ll see whether that gets used more often.

Small as a mouse, fast as a cheetah and available for free. NetSurf is a multi-platform web browser for RISC OS, UNIX-like platforms (including Linux), Mac OS X, and more. – NetSurf

I’ve used Eww a lot: the browser inside Emacs. It’s text only, no JavaScript, so you can’t use it for many links, but it definitely works well when following links from mail or social media, where you’re not sure where they’re taking you. Having a text-only browser without JavaScript at your fingertips is great. And it has an integrated reader mode that I keep forgetting about because the default rendering is already close to ideal. Press R, for “readable.”

Actually, I have started writing a little something that allows me to use Elpher for the web. Elpher is a Gopher and Gemini browser for Emacs. I mean, I’m just hooking existing stuff up: url-retrieve to fetch stuff via HTTP; HTTP header parsing using code I copied from Eww; HTML rendering using shr, the same library that Eww is using. The benefit I’m hoping to see is that it’s easier to follow HTTP links from Elpher without accidentally switching from Elpher to Eww. When that happens, things still look OK, but the keybindings are different, and you can’t go “back” to Elpher because now you have both Elpher and Eww buffers, so you have to quit or bury Eww in order to get back to Elpher. It confused me for a while.

One interesting side effect: Emacs has its own bookmarks, mostly for positions in files, but with a framework that lets us extend them to other uses. Eww has its own bookmarks. Elpher used to have its own bookmarks. But I recently managed to switch Elpher bookmarks to the Emacs bookmarks backend, so now I can visit websites using Elpher, and bookmark them, and they end up in Emacs bookmarks where I can edit and annotate them, bypassing the Eww bookmarks. It’s going to be part of the next official Elpher release, apparently. 😁

Fighting the urge to add Emacs bookmarks as a back-end to Eww bookmarks… Nooo!

Comments on 2021-07-28 Browsers

Fighting the urge to add Emacs bookmarks as a back-end to Eww bookmarks… Nooo!

Don’t fight. I would love to see Emacs native bookmarks in eww.

OTOH, I’m moving away from bookmarks towards using orgmode links.

– Martin 2021-07-28 14:07 UTC


Org Mode is eating the world!

– Alex 2021-07-28 15:09 UTC


I also mostly use Firefox, but I feel about it the same way that you do, I wish they were stronger in defending the Open Web and not dependent on Google. Maybe Firefox users should be encouraged to donate more to Mozilla so they can become independent, but I don’t know if they could ever match Google’s cash. If there could be an awareness of what is going on, similar to what happened at the beginning of the year with WhatsApp, leading people to embrace Signal and Threema more, that might help.

I actually never used Chrome day-to-day. I have only used it occasionally for work (webdev stuff for testing and so on). Originally this was because I was dependent on a lot of Firefox extensions that were not available for Chrome at the time, but now I still stick to Firefox for privacy reasons. Though I am depressed that sometimes sites won’t work properly unless you use Chrome. Just as I’m depressed that nowadays some sites won’t display at all unless you have JavaScript enabled.

Have you tried nyxt? I am busy slowly getting into it at the moment, I think it is similar to qutebrowser, also a keyboard-driven browser, based on GTK-webkit. I used to use Conkeror, which has been discontinued for some time now (RIP XUL) and that was really the best web browsing experience I ever had: a fully functioning keyboard-driven browser like Emacs based on Firefox.

– kahas 2021-07-28 19:50 UTC


Hm, nyxt rings a bell. Perhaps I installed it and forgot about it? I have to check. Thanks for the reminder.

– Alex 2021-07-28 21:25 UTC


I’m always surprised when people forget not only that WebKit exists, but that it runs laps around everything else. People switch from Chrome to another Chromium-based browser without realising Blink is just a Google-crippled fork of WebKit! It’s crazy. But luckily there are plenty of cool WebKit-backed browsers: luakit, surf, vimb, and more.

rnkn 2021-07-29 05:24 UTC


Ah, very cool. “luakit” and “surf” are packages I can install with apt. Thanks!

Sadly, surf won’t start. No desktop file installed, and when started from a terminal, errors… Luakit looks similar to qutebrowser: ‘o’ to open a new URL, ‘:’ for extended commands, with ‘TAB’ to show completions. I was quite confused, however: ‘:tablast’ and ‘:bookmark’ both didn’t do what I expected them to do. More experimenting will be required.

– Alex 2021-07-29 07:13 UTC


Re surf, it’s surely the least user-friendly of the bunch. And if you’re on Wayland, I seem to recall it requiring an env variable hack to work.

Yeah both Luakit and vimb share a design approach with qutebrowser, although using WebKit vs WebEngine (Blink) will greatly improve performance.

rnkn 2021-07-30 09:35 UTC


Emacs should link against libwebkit2gtk.so!

– Martin 2021-07-31 08:16 UTC


Today I saw falkon mentioned. It’s Qt WebEngine based and I’m not a Qt user…

I ended up not installing it.

alex@melanobombus ~> sudo apt install falkon
[sudo] password for alex: 
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  kwayland-data kwayland-integration libdbusmenu-qt5-2 libfam0 libkf5archive5
  libkf5auth-data libkf5auth5 libkf5codecs-data libkf5codecs5 libkf5config-bin
  libkf5config-data libkf5configcore5 libkf5configgui5 libkf5configwidgets-data
  libkf5configwidgets5 libkf5coreaddons-data libkf5coreaddons5 libkf5crash5
  libkf5dbusaddons-bin libkf5dbusaddons-data libkf5dbusaddons5 libkf5guiaddons5
  libkf5i18n-data libkf5i18n5 libkf5iconthemes-bin libkf5iconthemes-data
  libkf5iconthemes5 libkf5idletime5 libkf5itemviews-data libkf5itemviews5
  libkf5notifications-data libkf5notifications5 libkf5service-bin
  libkf5service-data libkf5service5 libkf5wallet-bin libkf5wallet-data
  libkf5wallet5 libkf5waylandclient5 libkf5widgetsaddons-data libkf5widgetsaddons5
  libkf5windowsystem-data libkf5windowsystem5 libkwalletbackend5-5 libphonon4qt5-4
  libpolkit-qt5-1-1 libqt5texttospeech5 libqt5waylandclient5
  libqt5waylandcompositor5 phonon4qt5 phonon4qt5-backend-vlc qtwayland5

– Alex 2021-07-31 22:19 UTC


I think it’s worth the time to learn qutebrowser. I love it. It renders all web pages correctly (being WebEngine; I can’t say the same for WebKit) and I have it highly integrated with Emacs (one key to save page as bookmark in my Org-based bookmarks). Using the keyboard for things allows me to do things that I could not do without it, because of mouse-hovering.

– jtgd 2021-08-02 08:53 UTC


2021-07-27 Podcasts I listen to

It’s been a while. What am I listening to? Apparently I’m set in my ways. The new additions got dropped again. Currently I’m listening to a second RPG podcast:

  • Actual Play is people playing role-playing games on Twitch and creating podcasts of the episodes; surprisingly, it works quite well

Old favourites:

  • In Our Time is about history, science and philosophy, one of the best podcasts out there ❤️
  • Thinking Allowed is about sociology, perfect smalltalk material ❤️
  • What Trump Can Teach Us About Con Law is about US politics and the US constitution ❤️
  • 99% Invisible is about design, architecture and the like ❤️
  • Dan Carlin’s Hardcore History is about history, with very rare episodes, but each episode is about 4h long ❤️
  • The History of Byzantium, which picked up where the History of Rome left off
  • The British History Podcast, excellent history stuff as well
  • Daydreaming about Dragons is about tabletop techniques, my favourite RPG podcast
  • History of Philosophy Without Any Gaps, calming, excellent for walking to the office in the morning
  • History of the Crusades, I just haven’t had the time to listen to any episodes in a very long time
  • Revolutions, by the guy who, before that, did The History of Rome; I listened to all the episodes
  • The China History Podcast, more meandering history stuff, jumping all over the place but China is important and I love his super positive attitude


2021-07-26 Want to write a Gemini-based game with me?

I run a MUSH already: Ijirait. But what if we wanted to run a real game, with challenges, items to pick up, enemies to fight, lands to explore? Somebody would have to write it up. I’d be happy to turn it into a game. But I need collaborators who write up a cool story. Interested?

Comments on 2021-07-26 Want to write a Gemini-based game with me?

There’s something going on here: gemini://numbersstation.info

I got the link from a now defunct Gemini based zine.

Sean Conner 2021-07-27 04:55 UTC


Thanks for the link!

– Alex 2021-07-27 07:52 UTC


I’d like to chat about it, if you want. My contact info is on my capsule at gem.antipod.de. Responses might be slow though 😉

– test 2021-07-27 08:59 UTC


Ariane+uses+++instead+of+%20.+duh

– test 2021-07-27 09:00 UTC


Thanks for the heads up, hopefully fixed! 🙂

– Alex 2021-07-27 09:56 UTC


I’m interested, please clue me in! 🙂 I wrote something about an idea I’d had here, a while back (see inu.red):

– panda-roux 2021-07-27 19:56 UTC


pasted a line break by accident; here’s the gemlog post: gemini://gemini.panda-roux.dev/log/entry?7

– panda-roux 2021-07-27 19:57 UTC


2021-07-25 Gemini, CAPCOM, Antenna, and Tower

A few days ago I noticed that CAPCOM has become practically useless to me. Remember when @solderpunk reorganized CAPCOM a while ago?

“Each month, CAPCOM randomly selects 100 distinct URLs from its list of known feeds, and includes their content in its output. This makes it a nice way to discover new content in Geminispace.” – CAPCOM

At the time I liked the idea. But of the 64 links visible right now, 20 are links to my blog. This doesn’t feel right. I sent Solderpunk an email and suggested he switch to a different feed of mine. I have multiple feeds, and I think the one CAPCOM currently uses shows all the regular changes to wiki pages and to comment pages. This is not good. There’s a different feed that shows just new blog posts. I think that’s the one people should be subscribing to, if at all. Or just use my front page as a gemlog. 😁

Anyway, I’m not sure the wrong feed is the only reason I seem to be dominating CAPCOM right now. It’s true, I’m enjoying summer break these days and that means I’m not working – I have plenty of time to write blog posts. Like this one. It’s also true that some of the links would disappear if Solderpunk were to change my feed: comment pages would disappear, book club pages would disappear… I still fear that I might be taking up 10% of CAPCOM.

I have written two feed aggregators. Perhaps their implementations are simply better than what CAPCOM is doing at the moment.

Moku pona produces one line per feed it watches; when it works without bugs, that means it links to each feed exactly once, simply reordering them when updates come in. Active feeds are at the top. You don’t know whether you’ll like what’s new on the feed, but after a while you start remembering the names. I like it.

Jupiter takes the four latest entries of every feed, sorts them all by date, and keeps the 100 most recent ones. Again, nobody can really overwhelm the feed. You can’t have more than 4/100 entries.

@tinyrabbit said that they suspected many CAPCOM feeds to be abandoned. Thus, if CAPCOM picks a subset of the feeds it knows, many of them have no updates, so the ones that do (like mine) start dominating.

We started talking about Antenna. Antenna is different in that it doesn’t watch pages for changes like my feed aggregators do. Feed authors have to ping Antenna if they have an update. Antenna updates the feeds it is pinged with, once every 10 minutes.
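Such a ping is just another Gemini request naming the updated feed. A sketch using echo and nc, with the submit URL being my guess rather than something I checked against Antenna’s documentation:

echo -ne "gemini://warmedal.se/~antenna/submit?gemini%3A%2F%2Falexschroeder.ch%2F\r\n" \
  | nc --ssl warmedal.se 1965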

Let’s quickly compare numbers. Moku pona checks all the URLs for changes (gemlog front pages or actual feeds). I subscribe to 55 sites and check them twice a day, making 110 requests per day. But look at the updates I’m actually showing right now:

awk '{ print $3 }' updates.txt | uniq --count
      5 2021-07-25
      6 2021-07-24
      1 2021-07-23
      1 2021-07-21
      2 2021-07-20
      2 2021-07-19
      1 2021-07-18

That’s not a lot of changes. It seems that I should be able to make do with about 5 requests per day! So if all the services knew to ping a personal Antenna installation of mine, they would make 5 pings, my Antenna installation would make 5 requests, and we’d be done for the day. 10 requests instead of 110.

Now it looks as if Antenna is built to be a central hub. But think about it. Doesn’t ActivityPub allow just this? Don’t email newsletters do just this? You “sign up” to get updates from a server, and when they have updates, they send you a ping, and you have some software that knows what to do with it. What Antenna does is get pings, and it knows what to do with them: it requests the URL mentioned in the ping and adds it to the updates it knows about. What we’re missing is the sign-up part.

Currently, the one Antenna installation asks authors to ping it, using tools of their own devising. It works surprisingly well and shows that Gemini makes it easy to write clients and servers and services. I love it!

But obviously, not every reader can ask all the authors to ping them. Or I guess they could, like all the newsletters you can subscribe to, like all the fediverse accounts you can follow using ActivityPub. What we’re missing is the right tools.

Let’s imagine this missing tool and call it Tower.

I have a blog, and it has a Tower installation. You want to get pinged when my blog updates, and you have an Antenna installation. You visit my blog, find the Tower link, and submit your Antenna URL. When my blog updates, it calls Tower, and Tower goes through all the Antenna URLs it has listed, and pings them. Then each Antenna installation can fetch the updated front page of my blog, or my feed, and add new entries to their list.
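Since Tower doesn’t exist, everything in the following sketch is invented: the subscribers file, the submit paths, all of it. The core loop would be tiny, though:

# subscribers.txt lists one Antenna per line: host and submit path
while read host path; do
  echo -ne "gemini://$host$path?gemini%3A%2F%2Falexschroeder.ch%2F\r\n" \
    | nc --ssl "$host" 1965
done < subscribers.txt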

Now, we all have different Gemini hosting software, so perhaps it’s not as easy to implement. We could implement Tower as a service on the net! This Tower installation doesn’t just get called when my blog updates. Instead, it watches a bunch of URLs.

“Huh?” you ask. And you’d be right. Where’s the benefit? It’s true, if we don’t all implement a Tower convention, the use is limited.

Let’s use some imaginary numbers again. Let’s say we’re 100 people, each of us following 50 gemlogs. Let’s say we all have moku pona installed, and all our installations update twice a day, like mine does. Each of us makes 2×50=100 requests per day, and 100 people thus make 10,000 requests per day. This is when we all use self-hosted feed readers.

Let’s assume somebody wrote the ominous Tower service and all 100 people signed up. They don’t actually add Tower to their blogs; they tell the one centralised Tower service: this is the URL of my personal Antenna installation, and these are the 50 URLs I want to follow (a big privacy issue, for sure – let’s just continue with the thought experiment to get an idea of the number of requests this would result in). The Tower service checks all 100 URLs twice a day, making 2×100=200 requests per day.

It then pings all the signed up Antenna installations with the URLs they are following. We need to make some more assumptions. Let’s say that about 10% of all the blogs update on a given day: 10 of them. Since we’re all following 50 gemlogs, let’s assume that this numerical thought experiment is pretty egalitarian: there are no gemlog superstars, no long tail, instead we have a wonderful, uniform distribution. That means every person’s Antenna gets about 10×½=5 pings every day. All the Antenna installations then fetch each of these URLs, so 100×5=500 requests to update their data. Let’s hope the pings are spread out or else those poor servers are hit by all the Antenna installations at once!

Let’s add this all up: Tower makes 200 requests per day and sends out 500 pings per day, and the Antenna installations then make an additional 500 requests per day. That’s 200+500+500=1,200 requests per day instead of the 10,000 requests per day I mentioned above.

So clearly, something is good about this. Perhaps the numbers would be worse with multiple Tower service installations, but it’s still better than everybody running their own moku pona. There’s also the privacy issue, as I said: the Tower service operator would know which Antenna installation subscribed to which URLs. At least with private moku pona installations, that’s hard to know.


2021-07-24 An anatomy of the request

Early Internet protocols were simple. In order to make a request, you contact a machine on a particular port, and send some bytes, terminated by a carriage return (CR or \r) and a line feed (LF or \n). We can simulate this using echo and nc (netcat) to make these requests ourselves.

This is a request for user information using the finger protocol. Here, we’re sending a name to the server. The response is plain text.

echo -ne "alex\r\n" | nc alexschroeder.ch 79

Gopher, at the protocol level, is the same thing. Here, we’re sending a selector to the server. The response is plain text.

echo -ne "page/Alex_Schroeder\r\n" | nc alexschroeder.ch 70

HTTP/1.0, at the protocol level, is already a lot more complicated. We’re sending the “method” (in our case, “GET”), the resource we’re interested in, the protocol version, HTTP headers (in our case, none), and an empty line, i.e. another CRLF:

echo -ne "GET /wiki/Alex_Schroeder HTTP/1.0\r\n\r\n" | nc alexschroeder.ch 80

The response in this case is actually a redirect to the HTTPS URL. We’ll talk about this down below. For now, let’s just note that we got a response that we can act upon.

All the protocols mentioned above cannot do virtual hosting. Virtual hosting is when multiple domains are served by the same machine. We’re used to this, on the web. We know that “http://alexschroeder.ch/” and “http://campaignwiki.org/” result in different responses, even though they are both hosted on the same server. How is this possible? It is not possible using HTTP/1.0, that’s for sure: the server doesn’t know what host you think you’re talking to. If we try it, both requests get redirected to the page on “alexschroeder.ch”, the default domain.

echo -ne "GET /wiki/Alex_Schroeder HTTP/1.0\r\n\r\n" | nc alexschroeder.ch 80
echo -ne "GET /wiki/Alex_Schroeder HTTP/1.0\r\n\r\n" | nc campaignwiki.org 80

You could argue that this is a bit of a security concern. After all, somebody making this request for “campaignwiki.org” now knows that it’s being hosted on the same server as “alexschroeder.ch”. I’m going to ignore that, however.

In any case, the technical reason for all of this is that at the TCP/IP level, domain names do not exist. We tell “nc” to send the request to “alexschroeder.ch” port 80, but what it actually does is look the name up using the domain name system (DNS) and then use the IP number instead.

We can do our own lookup using “dig”:

dig alexschroeder.ch

The answer is “178.209.50.237”, the IPv4 address. That’s the answer you get by default, because record type “A” is the default. To get the IPv6 answer, you need to specify type “AAAA”.

dig -t AAAA alexschroeder.ch

The answer is “2a02:418:6a04:178:209:50:237:1”, the IPv6 address.

We can verify the results above by substituting the IP numbers ourselves:

echo -ne "GET /wiki/Alex_Schroeder HTTP/1.0\r\n\r\n" | nc 178.209.50.237 80

The solution, as far as HTTP was concerned, was the use of additional headers. HTTP has a ton of headers to tell servers whether they already have a resource cached and how old it is, what sort of languages they’d prefer to get back, what sort of MIME types they’d like to get back, and so on. One of these headers tells the server what host we think we’re talking to.

Here’s how to do the requests with a host header:

echo -ne "GET /wiki/Alex_Schroeder HTTP/1.0\r\nhost: alexschroeder.ch\r\n\r\n" | nc alexschroeder.ch 80
echo -ne "GET /wiki/Alex_Schroeder HTTP/1.0\r\nhost: campaignwiki.org\r\n\r\n" | nc campaignwiki.org 80

The request for “campaignwiki.org” now gets a redirect to the same resource on “campaignwiki.org”. You can double-check by using the IP number:

echo -ne "GET /wiki/Alex_Schroeder HTTP/1.0\r\nhost: campaignwiki.org\r\n\r\n" | nc 2a02:418:6a04:178:209:50:237:1 80

What about Gemini? It doesn’t have headers to send along; instead of just naming the path of the resource on the server like Gopher does, it names the entire URL it is requesting. Sadly, we can’t illustrate this unless we use TLS. If you are lucky, your “netcat” or “nc” has the “--ssl” option and we can keep using it.

First, let’s just quickly show that HTTPS is HTTP over SSL or TLS, on a different port:

echo -ne "GET /wiki/Alex_Schroeder HTTP/1.0\r\nhost: alexschroeder.ch\r\n\r\n" | nc --ssl alexschroeder.ch 443
echo -ne "GET /wiki/Alex_Schroeder HTTP/1.0\r\nhost: campaignwiki.org\r\n\r\n" | nc --ssl campaignwiki.org 443

And so we get to Gemini. As you can see, we got rid of the complication of HTTP headers: the URL in the request already names the host.

echo -ne "gemini://alexschroeder.ch/page/Alex_Schroeder\r\n" | nc --ssl alexschroeder.ch 1965

All right! 🚀🚀😃🎉

I hope you can see how using the command line tools helped me understand these things.

Now you know how a server like Phoebe can look at the first line of the request it gets and determine whether to serve a Finger response, a Gopher response, a Web response, or a Gemini response.
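Lined up, the first lines of the four kinds of requests from above look like this, and everything the server needs for that decision is right there:

alex                                            ← finger
page/Alex_Schroeder                             ← gopher
GET /wiki/Alex_Schroeder HTTP/1.0               ← web
gemini://alexschroeder.ch/page/Alex_Schroeder   ← gemini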

References, linking to the older RFCs because they’re often simpler to read:

  • RFC 1288: The Finger User Information Protocol
  • RFC 1436: The Internet Gopher Protocol
  • RFC 1945: Hypertext Transfer Protocol – HTTP/1.0

RFCs, yo! I heard Gemini might get one, eventually?

Comments on 2021-07-24 An anatomy of the request

If you don’t have an --ssl option on nc you can try:

echo -ne "gemini://alexschroeder.ch/page/Alex_Schroeder\r\n" \
  | openssl s_client alexschroeder.ch:1965

Fails needing a client certificate, but then I assume nc --ssl would fail that way too. I also assume an option on openssl would allow adding one but haven’t looked into it.

Ed Davies 2021-07-24 19:37 UTC


I think it works if you add the -quiet option and ignore stderr:

echo -ne "gemini://alexschroeder.ch/page/Alex_Schroeder\r\n" \
  | openssl s_client -quiet alexschroeder.ch:1965 2>/dev/null

At the time I was experimenting with bash functions as clients, but I have since abandoned it.

At least I think there’s nothing in my personal setup, nothing in my “~/.ssh/config” file that modifies s_client.

– Alex 2021-07-24 20:58 UTC


Actually, /b wrote in to let me know that the key part is not -quiet itself: -quiet also turns on -ign_eof, and that’s the important one.

– Alex 2021-07-28 07:50 UTC


2021-07-23 Phoebe 4 released

I finally finished the rewrite of Phoebe I had mentioned on The Transjovian Council (where most of my Gemini activity is). I think I managed to achieve some goals that had always eluded me with Oddmuse, my longest running project (a Usemod fork I started back in 2003!).

I wanted tests for all the extensions, and I managed to do that. And Alex Daniel taught me the value of meta tests, so I also have a test that checks whether all the necessary tests exist. In a few cases, I’m even testing the examples provided in the documentation.

This leads me to the next issue I finally fixed. I really wanted better documentation. I’m not happy with how the documentation for the Oddmuse code ended up on the wiki. I keep thinking about that old adage: “wiki wiki is short for can’t find shit.” For this code, I wanted the documentation to be part of the source files such that sites like metacpan display it correctly. I also wanted to concatenate all the documentation into a single README document to use as a manual, and to have that show up on code hosting services like cgit. This required some improvements to my ‘update-readme’ script, but I think it was worth it.
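My actual ‘update-readme’ script surely does more, but the gist of such a concatenation, assuming the documentation is POD and Pod::Markdown is installed, fits in a few lines:

# collect the documentation of the main module and all extensions
for f in lib/App/Phoebe.pm lib/App/Phoebe/*.pm; do
  pod2markdown "$f"
done > README.md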

I also wanted the extensions to be formal Perl modules that you ‘use’ in your wiki configuration, not source files you link into a config directory, where maybe they are up to date and maybe they are not. If you update Phoebe via CPAN, you get the new code, all its dependencies, and all your extensions are updated as well.

There is a bit of a grey area here. Many of the extensions I have are probably of limited use to other people. Therefore, I have declared the tests of these extensions to be “author tests”. That is, neither CPAN testers nor users installing Phoebe run those tests. Which then raises the question: should the dependencies for these optional extensions be installed as well? Perhaps the correct answer is that these extensions belong in separate distributions? I don’t know. Maybe.

I managed to put the web serving into a separate extension. So now, if you install Phoebe, you really are getting just a Gemini wiki. Phoebe is once again the Gemini-first, web-second sort of wiki I wanted it to be.

In any case: I now have better code structure such that it looks good on metacpan and in git repositories, I have better tests, I have tests for the tests, I have better documentation, and I even managed to delete the documentation I had on The Transjovian Council because it is now integrated into Phoebe proper. I love it! 🚀🚀😃🎉

Of course, I already found more things to change. But slowly I’m approaching the point where I feel safe letting it lie for a few years once again, just chugging along and doing its job.




Comments

You probably want to contact me via one of the means listed on the Contact page. This is probably the wrong place to do it. 😄

– Alex Schroeder 2020-05-22 12:19 UTC