Administration

These are the pages for all my system administration woes. I rarely post happy things in this section. You have been warned.

2020-02-16 Firefox syncserver

OK, so I’m trying to run my own syncserver. It’s separate from the account server, so you still need a Firefox Account. If you do things in the wrong order, your browser syncs immediately – to their own sync server, which is not what I wanted. 😭

Anyway, let’s start from the beginning.

First, check out the project, syncserver. It uses Python 2. Oh well!

The Sync Server software runs using python 2.7, and the build process requires make and virtualenv.

I looked at this blog post by Matthias Dietel.

apt install python-dev git-core python-virtualenv g++

Easy enough! Build the project:

cd ~/src
git clone https://github.com/mozilla-services/syncserver.git
cd syncserver
make build
make test

Edit the syncserver.ini file. I’ve made the following changes:

diff --git a/syncserver.ini b/syncserver.ini
index ccf1ae0..21d0586 100644
--- a/syncserver.ini
+++ b/syncserver.ini
@@ -1,6 +1,6 @@
 [server:main]
 use = egg:gunicorn
-host = 0.0.0.0
+host = localhost
 port = 5000
 workers = 1
 timeout = 30
@@ -11,7 +11,7 @@ use = egg:syncserver
 [syncserver]
 # This must be edited to point to the public URL of your server,
 # i.e. the URL as seen by Firefox.
-public_url = http://localhost:5000/
+public_url = https://alexschroeder.ch/sync
 
 # By default, syncserver will accept identity assertions issued by
 # any BrowserID issuer.  The line below restricts it to accept assertions
@@ -20,7 +20,7 @@ public_url = http://localhost:5000/
 identity_provider = https://accounts.firefox.com/
 
 # This defines the database in which to store all server data.
-#sqluri = sqlite:////tmp/syncserver.db
+sqluri = sqlite:////home/alex/syncserver.db
 #sqluri = pymysql://sample_user:sample_password@127.0.0.1/syncstorage
 
 # This is a secret key used for signing authentication tokens.
@@ -34,7 +34,7 @@ identity_provider = https://accounts.firefox.com/
 
 # Set this to "false" to disable new-user signups on the server.
 # Only requests by existing accounts will be honoured.
-# allow_new_users = false
+allow_new_users = true
 
 # Set this to "true" to work around a mismatch between public_url and
 # the application URL as seen by python, which can happen in certain reverse-
@@ -55,4 +55,4 @@ force_wsgi_environ = false
 # MySQL based syncstorage-rs 1.5 server hosted at http://localhost:8000/1.5
 
 # "{node}/1.5/{uid}"
-# sync-1.5 = "http://localhost:8000/1.5/{uid}"
+sync-1.5 = "https://alexschroeder.ch/sync/token/{node}/sync/1.5/{uid}"

That is to say:

  1. the server only listens on localhost:5000
  2. the server can be reached at https://alexschroeder.ch/sync
  3. it uses a SQLite db at /home/alex/syncserver.db
  4. it allows new users to sync their stuff
  5. and I had to change that endpoint at the end... 🤷

So now I need to set up my website. I use Apache and I already have proxy stuff set up. All I had to add was this:

ProxyPass /sync	    http://localhost:5000/

Thus, any request starting with /sync to my existing website gets sent to port 5000 on localhost, which is where the sync server is listening.
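
A quick sanity check for the proxy, assuming the setup above: fetch the backend directly and through Apache and compare. I’d expect matching status lines, at least.

curl -i http://localhost:5000/
curl -i https://alexschroeder.ch/sync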

Next, open about:config in the browser and change identity.sync.tokenserver.uri to https://alexschroeder.ch/sync/token/1.0/sync/1.5 – then I’m ready to open the browser preferences and sign in.

If everything worked, you should see your display name in the top spot of the hamburger menu. 🍔 😁

If you see a warning sign after a second or two, stuff isn’t working.

It all sounds so easy now, but it took me a while to get everything working, believe me. 😭

Now I’m seeing the following errors in the log output of make serve:

ERROR:syncserver:The public_url setting doesn't match the application url.
This will almost certainly cause authentication failures!
    public_url setting is: https://alexschroeder.ch/sync
    application url is:    http://localhost:5000/sync
You can disable this check by setting the force_wsgi_environ
option in your config file, but do so at your own risk.

I have no idea what it means. Presumably the application sees http://localhost:5000/sync because that’s what Apache forwards to it, and that doesn’t match the public_url setting. It seems to work anyway? I’m making an additional change to syncserver.ini just to be sure.

force_wsgi_environ = true

The comments in the config file say this option is only needed in certain reverse-proxy setups, and since this is a reverse-proxy setup, I think I should be fine?
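
An alternative I did not try: leave force_wsgi_environ alone and tell the backend about the original request via mod_headers instead. A sketch, untested – and whether syncserver actually picks these headers up is an assumption on my part:

ProxyPreserveHost On
RequestHeader set X-Forwarded-Proto "https"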

Now I wonder how to leave the sync server running... Perhaps it’s OK to just do this every now and then?
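
The obvious answer is probably a systemd unit like the ones I use for other services. A minimal sketch, assuming the checkout in /home/alex/src/syncserver and that make serve keeps running in the foreground (untested):

[Unit]
Description=Firefox syncserver
After=network.target

[Service]
User=alex
WorkingDirectory=/home/alex/src/syncserver
ExecStart=/usr/bin/make serve
Restart=on-failure

[Install]
WantedBy=multi-user.target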

Now that I have it running I wonder about the utility of it all:

  1. there’s no need for me to mix my private Firefox and my work Firefox bookmarks and logins
  2. I currently cannot make Firefox on my iPad and my iPhone use a different sync address

Oh Apple, the golden prison, where I’m safe, but also where I’m locked in. Sigh. 😔

But wait… This issue is closed: Add ability to set custom identity.sync.tokenserver.uri for self-hosted Sync #5006?

One of the last comments on that issue, by user fireglow:

Just to chime in a little: Python 2.7, the flavor of Python the sync software is written in, will go End-of-Life at the end of this month, year, and decade. Mozilla already has indicated that there will be no rewrite for Python 3. I gather there’s a rewrite of these services in Rust in the works, at https://github.com/mozilla-services/syncstorage-rs It’s as of now unclear to me how all these parts will fit together in a way so us self-hosters will be able migrate over.

Oh wow.

I think I’m going to stop all of this. It’s making my head hurt. I checked the Rust rewrite project. Apparently you need to use either MySQL or Spanner as your DB. And with that, I’ve decided that not running this service and not syncing my stuff is probably for the best.

I undid all the changes, stopped the server, uninstalled the software, and deleted the Firefox account again.

Comments on 2020-02-16 Firefox syncserver

Hi Alex,

You can set a custom sync server and token server in Firefox on iOS.

To access it:

  • Enter the settings menu
  • Hit the version number 5 times; this opens debug mode
  • Under the sync option, there is a new menu item, Advanced Sync
  • The two settings are now available

Note that I haven’t done this, as I don’t (yet) have a sync server.

dgold 2020-02-16 19:25 UTC


I get the feeling that this doesn’t work on the iPad... I get a bunch of extra options but no place where I can set the sync server URL.

– Alex Schroeder 2020-02-16 19:28 UTC


I don’t know what to say to you. I set up my server in the way you described (except I didn’t set the last two options in the file).

I accessed the Advanced Settings in FFox on iPadOS. All works perfectly.

dgold 2020-02-16 20:24 UTC


How weird. I just updated Firefox on the iPad and it does look different, but I still don’t know where to start. Do you feel like pasting a screenshot, or can you explain it like I’m super confused? Because I feel like I am!

  1. open Firefox on the iPad
  2. click hamburger menu
  3. scroll down until you see “Version 22.0 (17157)”
  4. tap this section five times until the list of options changes
  5. now what?

At the top, I see three items:

  1. my email address and a note saying I need to enter my password (but since I deleted my account, clicking on this item just takes me to accounts.firefox.com where I could probably recreate my account and start syncing immediately, but with their servers)
  2. error diagnosis: ask for an upgrade (translating from German)
  3. error diagnosis: forgot sync-status (translating from German)

No other item in these settings seems related to sync. What am I not seeing?

– Alex Schroeder 2020-02-16 22:08 UTC

2020-01-31 Banning myself with fail2ban

Recently I have noticed that I’m sometimes banned from my own websites. That is, the site is not reachable, but when I check any of the “is the site down for everybody or is it just me?” sites, it’s always just me. I also cannot SSH to the machine unless I use the IPv6 address directly.

OK, so fail2ban is banning the IPv4 of my home network. Why? Is some app I’m using bombarding the site with requests? Let’s check.

Visiting ip4.me tells me my IPv4. Grepping /var/log/fail2ban.log I see that it has been banned at 11:03 and 13:03 today, by the alex-apache rule.
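
The grep was something like this, with 203.0.113.1 standing in for my actual IP:

grep 'Ban 203.0.113.1' /var/log/fail2ban.log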

This rule counts every access as a potential fail and ignores some URLs that I deem harmless (static files such as CSS files, fonts, pictures, podcast episodes, PDF files, and so on):

failregex = ^(www\.)?(alexschroeder\.ch|arabisch-lernen\.org|campaignwiki\.org|communitywiki\.org|emacswiki\.org|flying-carpet\.ch|korero\.org|oddmuse\.org|orientalisch\.info):[0-9]+ <HOST>

ignoreregex = ^[^"]*"GET /(robots\.txt |favicon\.ico |[^/ ]+.(css|js) |cgit/|css/|fonts/|pics/|export/|podcast/|1pdc/|gallery/|static/|munin/|osr/|indie/|rpg/|face/|traveller/|hex-describe/|text-mapper/|contrib/pics/)

This is from /etc/fail2ban/filter.d/alex-apache.conf. The actual limits are defined in /etc/fail2ban/jail.d/alex.conf:

[alex-apache]
enabled = true
port    = http,https
logpath = %(apache_access_log)s
findtime = 40
maxretry = 20

So basically you’re allowed 20 requests in 40s, not counting the requests matching ignoreregex.
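
To see the current state of the jail, including the list of banned IPs, fail2ban-client can report on it:

fail2ban-client status alex-apache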

OK, let’s see the requests. Here’s a little Perl script I wrote:

#!/usr/bin/env perl
use strict;
use warnings;

# Tally identical requests from an Apache access log on stdin and
# print them sorted by frequency, with a percentage of the total.
my (%requests, $total);
while (<STDIN>) {
  m/^(\S+:\d+) ([0-9.]+) - - \[(.*?)\] "(.*?)" (\d+) (\d+|-) "(.*?)" "(.*?)"/
    or warn "Cannot parse:\n$_" and next;
  my ($host, $ip4, $date, $request, $code, $size, $referrer, $agent) =
    ($1, $2, $3, $4, $5, $6, $7, $8);
  $requests{$request}++;
  $total++;
}
my @result = sort { $requests{$b} <=> $requests{$a} } keys %requests;
foreach my $label (@result) {
  printf "%70s %10d   %3d%%\n", $label, $requests{$label},
    100 * $requests{$label} / $total;
}

Let’s use it to report on today’s log entries from my IP:

    GET /pdfs/spellcasters/ HTTP/1.1         12    19%
                 GET /pdfs/ HTTP/1.1         12    19%
               GET /jewelry HTTP/1.1          8    13%
             GET /wiki/Apps HTTP/1.1          8    13%

Hm, who the hell is requesting /pdfs/spellcasters/ and /pdfs/‽ Not me, that’s for sure! At least not that I know of. Then again, I’d say that the /pdfs directory contains just static files so adding that to the ignoreregex should be no problem.
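
Concretely, that means adding pdfs/ to the alternation in the ignoreregex shown above. A sketch of the new line (I haven’t verified this is exactly what I ended up with):

ignoreregex = ^[^"]*"GET /(robots\.txt |favicon\.ico |[^/ ]+.(css|js) |cgit/|css/|fonts/|pics/|pdfs/|export/|podcast/|1pdc/|gallery/|static/|munin/|osr/|indie/|rpg/|face/|traveller/|hex-describe/|text-mapper/|contrib/pics/)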

I still wonder who does this, though. Is it the Firefox app on iOS?

2019-12-19 Oddmuse 6 memory use

Process RSS summed

I have a Munin module that runs ps -eo rss,command | grep $prog for various regular expressions.

Well, actually I sum it all up so I run this:

ps -eo rss,command | gawk '
BEGIN              { total = "U"; } # U = Unknown.
/grep/             { next; }
'"/perl6/"'        { total = total + $1; }
END                { print total; }'

For Oddmuse 6 running on Cro and Perl 6 the result is 358120, i.e. 358 MB. That doesn’t quite match the data in the image (about 350MB) but it’s close enough. Compare that to the regular Oddmuse instances running on Perl 5 with Mojolicious...
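
For context, here is roughly what that one-liner looks like wrapped up as a complete Munin plugin. The graph and field names are made up for this sketch, and it reports bytes instead of KB:

#!/bin/sh
# Report the summed RSS of all perl6 processes to Munin.
case $1 in
    config)
        echo 'graph_title Oddmuse 6 memory'
        echo 'graph_vlabel bytes'
        echo 'oddmuse6.label oddmuse6'
        exit 0;;
esac
printf 'oddmuse6.value '
ps -eo rss,command | gawk '
BEGIN   { total = "U"; }               # U = Unknown.
/grep/  { next; }
/perl6/ { total = total + $1 * 1024; } # ps reports RSS in KB.
END     { print total; }'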

Am I comparing apples to oranges? Let’s see the ps output for this stuff and compare it to the output of a regular Oddmuse wiki:

alex@sibirocobombus:~$ ps -eo rss,command | grep perl6
  892 grep perl6
358120 /home/alex/rakudo/bin/moar --execname=/home/alex/rakudo/bin/perl6 --libpath=/home/alex/rakudo/share/nqp/lib --libpath=/home/alex/rakudo/share/nqp/lib --libpath=/home/alex/rakudo/share/perl6/lib --libpath=/home/alex/rakudo/share/perl6/runtime /home/alex/rakudo/share/perl6/runtime/perl6.moarvm service.p6
alex@sibirocobombus:~$ ps -eo rss,command | grep campaignwiki
 5104 /home/alex/campaignwiki.org/gridmapper-server.pl
  892 grep campaignwiki
32924 /home/alex/farm/campaignwiki2.pl
 7784 /home/alex/farm/campaignwiki2.pl
32460 /home/alex/farm/campaignwiki2.pl
17712 /home/alex/campaignwiki.org/gridmapper-server.pl
alex@sibirocobombus:~$ ps -eo rss,command | gawk '
BEGIN              { total = "U"; } # U = Unknown.
/grep/             { next; }
'"/campaignwiki2/"'        { total = total + $1; }
END                { print total; }'
73168

So, 358120 for Oddmuse 6 looks correct, and the 73168 for Campaign Wiki looks correct. Both act as web servers behind Apache, both run a wiki.

2019-11-16 Pleroma

OK, so I tried Epicyon and it used a lot of CPU cycles doing not much. Would Pleroma fare any better?

I am ready to give it a try: Installing on Debian Based Distributions.

apt install postgresql postgresql-contrib elixir erlang erlang-dev erlang-tools erlang-parsetools erlang-ssh erlang-xmerl

Well, I was following the instructions until I got to this step:

root@sibirocobombus:/opt/pleroma# sudo -Hu pleroma mix deps.get
!!! RUNNING IN LOCALHOST DEV MODE! !!!
FEDERATION WON'T WORK UNTIL YOU CONFIGURE A dev.secret.exs
Could not find Hex, which is needed to build dependency :phoenix
Shall I install Hex? (if running non-interactively, use "mix local.hex --force") [Yn] 
** (UndefinedFunctionError) function :inets.stop/2 is undefined (module :inets is not available)
    :inets.stop(:httpc, :mix)
    (mix) lib/mix/utils.ex:560: Mix.Utils.read_httpc/1
    (mix) lib/mix/utils.ex:501: Mix.Utils.read_path/2
    (mix) lib/mix/local.ex:149: Mix.Local.read_path!/2
    (mix) lib/mix/local.ex:126: Mix.Local.find_matching_versions_from_signed_csv!/2
    (mix) lib/mix/tasks/local.hex.ex:56: Mix.Tasks.Local.Hex.run_install/1
    (mix) lib/mix/dep/loader.ex:168: Mix.Dep.Loader.with_scm_and_app/4
    (mix) lib/mix/dep/loader.ex:121: Mix.Dep.Loader.to_dep/3

This helped:

apt install erlang-inets

🙂

I also changed registrations_open to false in config/prod.secret.exs.
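
For reference, that is a key in the :instance section of the config. From memory, so treat it as a sketch:

config :pleroma, :instance,
  registrations_open: false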

Comments on 2019-11-16 Pleroma

And I wrote my first Elixir code with the help of @wim_v12e:

diff --git a/lib/mix/tasks/pleroma/user.ex b/lib/mix/tasks/pleroma/user.ex
index a3f8bc945..964209c96 100644
--- a/lib/mix/tasks/pleroma/user.ex
+++ b/lib/mix/tasks/pleroma/user.ex
@@ -442,6 +442,20 @@ defmodule Mix.Tasks.Pleroma.User do
     end
   end
 
+  def run(["list"]) do
+    start_pleroma()
+
+    Pleroma.User.Query.build(%{local: true})
+    |> Pleroma.RepoStreamer.chunk_stream(500)
+    |> Stream.each(fn users ->
+      users
+      |> Enum.each(fn user ->
+        shell_info("#{user.nickname} moderator: #{user.info.is_moderator}, admin: #{user.info.is_admin}, locked: #{user.info.locked}, deactivated: #{user.info.deactivated}")
+      end)
+    end)
+    |> Stream.run()
+  end
+
   defp set_moderator(user, value) do
     info_cng = User.Info.admin_api_update(user.info, %{is_moderator: value})

This allows me to run:

root@sibirocobombus:/opt/pleroma# sudo -Hu pleroma MIX_ENV=prod mix pleroma.user list
Compiling 1 file (.ex)
internal.fetch moderator: false, admin: false, locked: false, deactivated: false
alex moderator: false, admin: true, locked: false, deactivated: false
admin moderator: false, admin: true, locked: false, deactivated: false
kensanata moderator: false, admin: false, locked: false, deactivated: false

– Alex Schroeder 2019-11-17 13:29 UTC


For now I’m happy with Pleroma. Notice how load came back down again after switching from Epicyon to Pleroma and PostgreSQL. I wasn’t running a database before the Pleroma installation.

Load

– Alex Schroeder 2019-11-17 21:11 UTC


It’s still running on my server without acting up. I’m happy even though I had to install a database for it. Should I ever want a lightweight server, another project to look at would be Kibou. It’s written in Rust instead of Erlang+Elixir. Not sure whether I like that better but there it is.

– Alex Schroeder 2019-12-10 21:58 UTC


Upgrade:

sudo -Hu pleroma git pull
sudo -Hu pleroma mix deps.get
sudo systemctl stop pleroma
sudo -Hu pleroma MIX_ENV=prod mix ecto.migrate
sudo systemctl start pleroma

– Alex Schroeder 2020-01-02 14:23 UTC


Actually, now that I think about it, I don’t really use it for anything. I’ll simply uninstall it again. It worked. It was nice. Time to do something else.

– Alex Schroeder 2020-01-07 21:39 UTC

2019-11-09 Upgrading Debian from Stretch to Buster

I did it! I upgraded the server. No restart required. I was a bit intimidated by the extensive documentation. But I was fine.

First, I improved my backup. I currently use the following:

#!/bin/bash

# Using sudo rsync --archive to preserve ownership.
# Using --fake-super to avoid changes to groups and owners

echo Backing up Sibirocobombus
echo home directory
rsync --archive --fake-super --verbose --compress --delete --delete-excluded \
      --itemize-changes \
      --exclude '/home/alex/logs' \
      --exclude '/home/alex/alexschroeder.ch/share' \
      --exclude '/home/alex/planet/osr/' \
      --exclude '/home/alex/planet/indie/' \
      --exclude '.cpan/build' \
      --exclude '.cpan/sources' \
      --exclude '.cpanplus' \
      --exclude '.cpanm' \
      --exclude '.cache' \
      --exclude '.Trash' \
      --exclude '.local/share/Trash' \
      --exclude 'temp/' \
      --exclude 'pids/' \
      --exclude 'visitors.log' \
      --exclude 'referer/' \
      --exclude '.git/' \
      --rsh="ssh -p 882" \
      root@alexschroeder.ch:/home \
      /home/alex/Documents/Sibirocobombus

# https://www.debian.org/releases/buster/amd64/release-notes/ch-upgrading.en.html#data-backup
echo etc directory
rsync --archive --fake-super --verbose --compress --delete --delete-excluded \
      --itemize-changes \
      --rsh="ssh -p 882" \
      root@alexschroeder.ch:/etc \
      root@alexschroeder.ch:/var/lib/dpkg \
      root@alexschroeder.ch:/var/lib/apt/extended_states \
      root@alexschroeder.ch:/var/lib/dehydrated \
      root@alexschroeder.ch:/usr/lib/cgit/filters \
      /home/alex/Documents/Sibirocobombus

I also installed and ran apt-forktracer:

apt install apt-forktracer
apt-forktracer | sort > packages-not-from-debian

I also checked my system using dpkg --audit.

I ran everything inside script -t 2>~/upgrade-busterstep.time -a ~/upgrade-busterstep.script.

I was very afraid!

But eventually I just changed my /etc/apt/sources.list to the following:

deb http://deb.debian.org/debian buster main non-free contrib
deb http://security.debian.org/debian-security buster/updates main contrib non-free

# deb  http://ftp.debian.org/debian stretch main non-free contrib
# deb-src  http://ftp.debian.org/debian stretch main non-free contrib

# deb  http://ftp.debian.org/debian stretch-updates main non-free contrib
# deb-src  http://ftp.debian.org/debian stretch-updates main non-free contrib

# deb http://security.debian.org/ stretch/updates main non-free contrib
# deb-src http://security.debian.org/ stretch/updates main non-free contrib

I guess I should think about updates and security? Or perhaps they aren’t used anymore? I have no idea. I’ll look at this some other time.
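
If I ever do, I believe the buster-updates line would look like this (untested; the security line is already in the list above):

deb http://deb.debian.org/debian buster-updates main contrib non-free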

Finally I ran the upgrade itself:

apt update
apt upgrade
apt dist-upgrade

I had two changes to system config files:

  1. in /etc/logrotate.conf I changed weekly to daily (keeping 4 days worth of logs instead of 4 weeks)
  2. in /etc/ssh/sshd_config I merged my changes with what upstream had provided

I am amazed to see that the server is still up. It didn’t even reboot. Let me sing the praise of Debian and all the things it includes and all the volunteers and maintainers and testers—thank you all. ❤️

I think I’m going to be fine. 😃

I noticed that I had to reinstall all my pip3 stuff.

For the Mastodon bots:

pip3 install Mastodon.py cairosvg html2text

For Epicyon:

pip3 install requests commentjson beautifulsoup4 pycryptodome

My monitoring also broke down.

Monit stopped monitoring a bunch of stuff because the checksums failed.

Munin isn’t running at all...

Well, time to look at this tomorrow!

Comments on 2019-11-09 Upgrading Debian from Stretch to Buster

Cleaning up non-default config files... I installed debsums and use debsums -ce to find config files that I have changed.

Then I use a little script I got from StackExchange to find the diff.

#!/bin/bash

# Usage: debdiffconf.sh FILE
# Produce on stdout diff of FILE against the first installed Debian package
# found that provides it.
# Returns the exit code of diff if everything worked, 3 or 4 otherwise.

command -v apt-get >/dev/null 2>&1 || {
  echo "apt-get not found, this is probably not a Debian system. Aborting." >&2;
  exit 4; }
command -v apt-file >/dev/null 2>&1 || {
  echo "Please install apt-file: sudo apt-get install apt-file. Aborting." >&2;
  exit 4; }

FILE=$(readlink -f "$1")
while read PACKAGE
do
  # only consider the first installed package that provides the file
  if dpkg-query -W --showformat='${Status}\n' "$PACKAGE" 2>/dev/null | grep installed > /dev/null
  then
    DIR=$(mktemp -d)
    cd "$DIR"
    echo "Trying $PACKAGE..." >&2
    apt-get download "$PACKAGE" >&2
    # downloaded archive is the only file present...
    ARCHIVE=$(ls)
    mkdir contents
    # extract entire archive
    dpkg-deb -x "$ARCHIVE" contents/ >&2
    if [ -f "contents$FILE" ]
    then
      # package contained required file
      diff "contents$FILE" "$FILE"
      RET=$?
      # cleanup
      cd
      rm -Rf "$DIR"
      # exit entire script as this is the main shell
      # with the return code from diff
      exit $RET
    else
      # cleanup
      cd
      rm -Rf "$DIR"
    fi
  fi
done < <(apt-file -l search "$FILE")
# if we are here, it means we have found no suitable package
echo "Could not find original package for $FILE" >&2
exit 3

And if I think the changes I made should be reverted, I run patch --reverse $FILE on the file and paste in the diff seen above to revert it.
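
For example, to revert my logrotate change (patch reads the diff from standard input, so I paste it and end with Ctrl-D):

patch --reverse /etc/logrotate.conf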

– Alex Schroeder 2019-11-10 11:38 UTC


The problem I had seems to be due to changes I made to the Apache configuration. Apparently I created a special account with a password to allow access to Munin from remote systems. This makes sense since I’m running Munin on the server...

*** apache24.conf~	2019-05-16 01:21:08.000000000 +0200
--- apache24.conf	2019-11-10 12:47:00.533056622 +0100
***************
*** 14,25 ****
  Alias /munin/static/ /var/cache/munin/www/static/
  
  <Directory /var/cache/munin/www>
!     Require local
      Options None
  </Directory>
  
  <Directory /usr/lib/munin/cgi>
!     Require local
      <IfModule mod_fcgid.c>
          SetHandler fcgid-script
      </IfModule>
--- 14,36 ----
  Alias /munin/static/ /var/cache/munin/www/static/
  
  <Directory /var/cache/munin/www>
!     Order allow,deny
!     Allow from all
      Options None
+     AuthUserFile /etc/munin/munin-htpasswd
+     AuthName "Munin"
+     AuthType Basic
+     require valid-user
  </Directory>
  
  <Directory /usr/lib/munin/cgi>
!     Order allow,deny
!     Allow from all
!     Options None
!     AuthUserFile /etc/munin/munin-htpasswd
!     AuthName "Munin"
!     AuthType Basic
!     require valid-user
      <IfModule mod_fcgid.c>
          SetHandler fcgid-script
      </IfModule>

– Alex Schroeder 2019-11-10 11:51 UTC

2019-11-09 Epicyon needs Python 3.6 which needs a Debian upgrade

I’ve tried to install Epicyon on my server but I’m running into a Python problem. I think I have everything set up correctly (my notes below) but I’m getting a syntax error in webfinger.py:

python3 webfinger.py 
  File "webfinger.py", line 81
    return f"data:application/magic-public-key,RSA.{mod}.{pubexp}"
                                                                 ^
SyntaxError: invalid syntax

I’m using Debian stable (9.11) on my server which means I have Python 3.5.3 installed. f-strings joined Python in 3.6, apparently.
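
A quick way to check whether an interpreter supports f-strings; on 3.5 this fails with the same SyntaxError:

python3 -c 'print(f"f-strings work on {3.6} and later")'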

So... Upgrade my server? Make Epicyon work with Python 3.5? Postpone the entire thing?

I don’t know...

Epicyon

I was more or less following the official installation instructions.

Some decisions:

  • I’m not using certbot because I am using dehydrated.
  • I don’t want to install in /etc/epicyon. Let me use /home/epicyon instead.

Install the code as root:

apt-get -y install tor python3-pip python3-socks imagemagick python3-numpy python3-setuptools python3-crypto python3-dateutil python3-pil.imagetk
pip3 install commentjson beautifulsoup4 pycryptodome
adduser --system --home=/home/epicyon --group epicyon
cd /home
git clone https://gitlab.com/bashrc2/epicyon
cd epicyon
./theme purple
chown -R epicyon:epicyon /home/epicyon

Create the daemon by creating the file /etc/systemd/system/epicyon.service as follows:

[Unit]
Description=epicyon
After=syslog.target
After=network.target

[Service]
Type=simple
User=epicyon
Group=epicyon
WorkingDirectory=/home/epicyon
ExecStart=/usr/bin/python3 /home/epicyon/epicyon.py --port 443 --proxy 7156 --domain fedi.alexschroeder.ch --registration open --debug
Environment=USER=epicyon
Restart=always
StandardError=syslog
CPUQuota=30%

[Install]
WantedBy=multi-user.target

Activate the daemon:

systemctl enable epicyon
systemctl start epicyon

Check how it is doing:

journalctl -u epicyon

On my first attempt I had forgotten to install the Python modules using pip3 and saw the error messages in the journal alerting me to the fact.

As for the website, I’m using Apache. I’m on my own, now. I edited /etc/apache2/sites-enabled/100-alexschroeder.ch.conf because that’s where my site is configured and added the following:

  • a redirect from HTTP on port 80 to HTTPS on port 443
  • the right server name
  • SSL files generated by dehydrated
  • proxying to port 7156

I’m basically trusting my remaining SSL and security settings.

Here’s what I added to the site configuration:

<VirtualHost *:80>
    ServerName fedi.alexschroeder.ch
    Redirect permanent / https://fedi.alexschroeder.ch/
</VirtualHost>
<VirtualHost *:443>
    ServerName fedi.alexschroeder.ch
    SSLEngine on
    SSLCertificateFile      /var/lib/dehydrated/certs/alexschroeder.ch/cert.pem
    SSLCertificateKeyFile   /var/lib/dehydrated/certs/alexschroeder.ch/privkey.pem
    SSLCertificateChainFile /var/lib/dehydrated/certs/alexschroeder.ch/chain.pem
    SSLVerifyClient None
    ProxyPass /             http://alexschroeder.ch:7156/
</VirtualHost>

Next I had to tell my domain name service provider (Gandi) about the new subdomain. In the DNS zone file, I had to add a line for IPv4 and IPv6:

fedi 10800 IN A 178.209.50.237
fedi 10800 IN AAAA 2a02:418:6a04:178:209:50:237:1
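
Once the zone change has propagated, dig should show the new records:

dig +short A fedi.alexschroeder.ch
dig +short AAAA fedi.alexschroeder.ch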

Then I had to tell dehydrated about the new subdomain by editing /etc/dehydrated/domains.txt and changing the line for my site to:

alexschroeder.ch www.alexschroeder.ch fedi.alexschroeder.ch

With that, I was ready to regenerate the certificates (still doing everything as root). The version of dehydrated in Debian isn’t working for me so I have to use a copy of the master branch I checked out.

/home/alex/src/dehydrated/dehydrated -c

And now I should be ready!

  • the domain name server points to the right machine
  • on that machine, the web server knows how to redirect from HTTP to HTTPS for this domain
  • the web server also knows how to proxy HTTPS to port 7156 locally
  • the application is running and listening on the port

Or is it? I’m getting an error! journalctl -u epicyon shows the following:

Nov 09 14:56:01 sibirocobombus python3[27300]: Traceback (most recent call last):
Nov 09 14:56:01 sibirocobombus python3[27300]:   File "/home/epicyon/epicyon.py", line 9, in <module>
Nov 09 14:56:01 sibirocobombus python3[27300]:     from person import createPerson
Nov 09 14:56:01 sibirocobombus python3[27300]:   File "/home/epicyon/person.py", line 19, in <module>
Nov 09 14:56:01 sibirocobombus python3[27300]:     from webfinger import createWebfingerEndpoint
Nov 09 14:56:01 sibirocobombus python3[27300]:   File "/home/epicyon/webfinger.py", line 81
Nov 09 14:56:01 sibirocobombus python3[27300]:     return f"data:application/magic-public-key,RSA.{mod}.{pubexp}"
Nov 09 14:56:01 sibirocobombus python3[27300]:                                                                  ^
Nov 09 14:56:01 sibirocobombus python3[27300]: SyntaxError: invalid syntax

Comments on 2019-11-09 Epicyon needs Python 3.6 which needs a Debian upgrade

I decided to stop Epicyon for now. It was basically using up all the CPU resources I had granted it via systemd. With no actual activity, I find that unacceptable.

Load goes up

Sometime between November 10 and 11 I managed to get it up and running; load went up, and stayed up.

– Alex Schroeder 2019-11-14 21:30 UTC


A lot of back and forth on this issue. Increase the CPU quota? Work on debugging this? What’s the problem?

Or should I just try Pleroma instead?

– Alex Schroeder 2019-11-16 22:09


I think I’ll give Pleroma a try...

systemctl stop epicyon
systemctl disable epicyon
apt remove tor python3-socks python3-numpy python3-setuptools \
  python3-crypto python3-dateutil python3-pil.imagetk

– Alex Schroeder 2019-11-17 11:09

2019-09-25 RAM doubled on this VM

A few days ago I received an email: if I power cycled the VM all my sites are running on, I’d get my RAM doubled at no extra cost. Ohhhh, cool! But...

Power cycling my server always scares me – but they promised to double my RAM if I did it… What to do! What to do? What to do‽ 😨

I did it.

Monit shows all the processes restarting

Well, actually I did a soft reset first and I think that didn’t do it! They really want me to power cycle the thing. So here we go, again! 😱

Or did it? Anyway, I did it again, but this time I really did a power toggle.

And it seems to have worked.

Munin shows RAM going from 3GB to 6GB

I don't understand all the lines on that graph, to be honest. What’s that green “committed” line doing, for example? On ServerFault somebody wrote:

Committed memory is, essentially, all the memory which has been allocated by applications, whether it’s used or not. In contrast, the “apps” is memory that is allocated AND used.

OK! Lots of unused memory allocated to apps... Looks like I have a ton of dead apps hanging around or something?
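
As far as I know, Munin takes these numbers from /proc/meminfo, where the committed figure is the Committed_AS line, so it can be checked directly:

grep Commit /proc/meminfo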

The price we pay is always interesting, of course. I’m paying 15€/month for this kernel-based virtual machine (KVM). And apparently that now gets me one core at 2.3GHz, 6GB RAM and around 75GB of disk space.

alex@sibirocobombus:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.0G     0  3.0G   0% /dev
tmpfs           598M  7.9M  590M   2% /run
/dev/vda1        73G   32G   39G  46% /
tmpfs           3.0G     0  3.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           3.0G     0  3.0G   0% /sys/fs/cgroup
tmpfs           598M     0  598M   0% /run/user/1000

No idea how that compares to others.

2019-07-24 Alternative multips plugins for Munin

I learned about multips and multips_memory today but found that I couldn’t use them for my purposes.

The main problem is that both of them depend on the process name. Let me illustrate the problem:

# ps -eo rss,comm
...
29908 /home/alex/farm
30836 /home/alex/farm
 7596 /home/alex/farm
24156 /home/alex/farm
...

But I want to match on the command name:

# ps -eo rss,command
...
29908 /home/alex/farm/communitywiki2.pl
30836 /home/alex/farm/campaignwiki2.pl
 7596 /home/alex/farm/emacswiki2.pl
24156 /home/alex/farm/oddmuse2.pl

Furthermore, I can’t rely on awk splitting and looking at $2 because I want to match on the arguments as well. These three processes are distinct:

# ps -eo rss,command|grep gopher|grep -v grep
 4008 /home/alex/perl5/perlbrew/perls/perl-5.26.1/bin/perl /home/alex/farm/gopher-server.pl --setsid --user=alex --group=alex --host=alexschroeder.ch --port=79 --log_level=3 --log_file=/home/alex/farm/finger-server.log --pid_file=/home/alex/farm/finger-server.pid --wiki=/home/alex/farm/wiki.pl --wiki_dir=/home/alex/alexschroeder --wiki_pages=alex --wiki_pages=About --wiki_pages=Contact
26240 /home/alex/perl5/perlbrew/perls/perl-5.26.1/bin/perl /home/alex/farm/gopher-server.pl --setsid --user=alex --group=alex --host=alexschroeder.ch --port=7443 --log_level=3 --log_file=/home/alex/farm/gopher-server-ssl.log --pid_file=/home/alex/farm/gopher-server-ssl.pid --wiki=/home/alex/farm/wiki.pl --wiki_dir=/home/alex/alexschroeder --wiki_pages=SiteMap --wiki_pages=About --wiki_pages=Gopher --menu=Moku_Pona_Updates --menu_file=/home/alex/.moku-pona/updates.txt --menu=Moku_Pona_Sites --menu_file=/home/alex/.moku-pona/sites.txt --wiki_cert_file=/var/lib/dehydrated/certs/alexschroeder.ch/fullchain.pem --wiki_key_file=/var/lib/dehydrated/certs/alexschroeder.ch/privkey.pem
19776 /home/alex/perl5/perlbrew/perls/perl-5.26.1/bin/perl /home/alex/farm/gopher-server.pl --setsid --user=alex --group=alex --host=alexschroeder.ch --port=70 --log_level=3 --log_file=/home/alex/farm/gopher-server.log --pid_file=/home/alex/farm/gopher-server.pid --wiki=/home/alex/farm/wiki.pl --wiki_dir=/home/alex/alexschroeder --wiki_pages=SiteMap --wiki_pages=About --wiki_pages=Gopher --menu=Moku_Pona_Updates --menu_file=/home/alex/.moku-pona/updates.txt --menu=Moku_Pona_Sites --menu_file=/home/alex/.moku-pona/sites.txt

That’s a Gopher server listening on ports 70 (Gopher), 79 (Finger), and 7443 (TLS Gopher).

Anyway, I made some changes.

For multips I got rid of the overly restrictive regular expressions:

# diff /usr/share/munin/plugins/multips /etc/munin/plugins/alex_multips
8,9c8,9
< multips - Munin plugin to monitor number of processes. Which processes
< are configured in client-conf.d
---
> alex_multips - Munin plugin to monitor number of processes. Which
> processes are configured in client-conf.d
19,25c19,20
<   [multips]
<      env.names pop3d imapd sslwrap
<      env.regex_imapd ^[0-9]* imapd:
<      env.regex_pop3d ^[0-9]* pop3d:
< 
< The regex parts are not needed if the name given in "names" can be
< used to grep with directly.
---
>   [alex_multips]
>      env.names perl6
26a22,23
> The regexp environment variables have been removed.
>      
30,31c27
< configured regular expressions.  The regular expressions are
< interpreted by "grep" (and not egrep or perl).
---
> command name.
80d75
< 		eval REGEX='"${regex_'$name'-\<'$name'\>}"'
84c79
< 		echo "$fieldname.info Processes matching this regular expression: /$REGEX/"
---
> 		echo "$fieldname.info Processes matching $name"
95d89
< 	eval REGEX='"${regex_'$name'-\<'$name'\>}"'
98c92
< 		$PGREP -f -l "$name" | grep "$REGEX" | wc -l
---
> 		$PGREP -f -l "$name" | wc -l
101c95
< 		/usr/ucb/ps auxwww | grep "$REGEX" | grep -v grep | wc -l
---
> 		/usr/ucb/ps auxwww | grep -v grep | wc -l
103c97
< 		ps auxwww | grep "$REGEX" | grep -v grep | wc -l
---
> 		ps auxwww | grep -v grep | wc -l

For multips_memory I got rid of the overly restrictive regular expressions as well, but this time in the awk script at the end, and I use command instead of comm in the ps format.

# diff /usr/share/munin/plugins/multips_memory /etc/munin/plugins/alex_multips_memory
8c8
< multips_memory - Munin plugin to monitor memory usage of processes. Which
---
> alex_multips_memory - Munin plugin to monitor memory usage of processes. Which
15c15
<   ps -eo rss,comm
---
>   ps -eo rss,command
19c19
< You must specify what process names to monitor:
---
> You must specify what commands to monitor:
21c21
<   [multips_memory]
---
>   [alex_multips_memory]
28d27
< 
42,43c41,43
< /etc/munin/plugins or /etc/opt/munin/plugins), eg. multips_memory_rss and
< multips_memory_vsize as symlinks to multips_memory and configure them thus:
---
> /etc/munin/plugins or /etc/opt/munin/plugins), eg. multips_memory_rss
> and multips_memory_vsize as symlinks to alex_multips_memory and
> configure them thus:
58d57
< 
77,81d75
< Only the executable name is matched against (ps -eo comm)1, and it must
< be a full string match to the executable base name, not substring,
< unless you enter a name such as ".*apache" since RE meta characters in
< the names are active.
< 
95a90,91
> Switching from comm to command by Alex Schroeder <alex@gnu.org>
> 
129,130d124
< 		eval REGEX='^$name$';
< 
132c126
< 		echo "$fieldname.info For /$REGEX/"
---
> 		echo "$fieldname.info For /$name/"
140c134
< 	ps -eo $monitor,comm | gawk '
---
> 	ps -eo $monitor,command | gawk '
143c137
< $2 ~ /^'"$name"'$/ { total = total + ($1*1024); }
---
> '"/$name/"'        { total = total + ($1*1024); }

For configuration, I use the same regular expressions for both plugins:

[alex_multips*]
env.names gridmapper-server alexschroeder campaignwiki communitywiki emacswiki face food halberdsnhelmets helmut hex-describe hug linearb mark megadungeon monones names oddmuse paper small-sites software soweli-lukin tarballs text-mapper traveller trunk gopher-server.*port=79 gopher-server.*port=70 gopher-server.*port=7443 perl6
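
To check the numbers by hand, munin-run executes a plugin with its configuration applied:

munin-run alex_multips
munin-run alex_multips_memory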

2019-07-21 PureOS, Debian, and updates

I use PureOS, which is derived from Debian. Debian recently had a new release, so I was interested in learning how PureOS handled it. Apparently not too well.

I had already added the following line to my /etc/apt/sources.list because I needed the GNU manuals:

deb http://http.us.debian.org/debian testing non-free

That’s why I wasn’t surprised when apt told me that something or other had changed from testing to something else. Whatever, I accept. For the purposes of this blog post I commented out that line.

But something still isn’t right:

$ sudo apt update
Hit:1 https://repo.puri.sm/pureos green InRelease
Reading package lists... Done                               
Building dependency tree       
Reading state information... Done
All packages are up to date.
N: Skipping acquire of configured file 'main/binary-i386/Packages' as repository 'https://repo.puri.sm/pureos green InRelease' doesn't support architecture 'i386'

I wonder what that means.

Comments on 2019-07-21 PureOS, Debian, and updates

The problem remains unsolved. I deleted /etc/apt/sources.list and recreated it using Software & Updates.

Software & Updates

New content:

deb https://repo.pureos.net/pureos/ green main

I used to have a line saying puri.sm instead of pureos.net but it appears to make no difference:

$ sudo apt update
Hit:1 https://repo.pureos.net/pureos green InRelease
Reading package lists... Done
Building dependency tree       
Reading state information... Done
All packages are up to date.
N: Skipping acquire of configured file 'main/binary-i386/Packages' as repository 'https://repo.pureos.net/pureos green InRelease' doesn't support architecture 'i386'

– Alex Schroeder 2019-07-27 20:53 UTC


What architecture am I using?

$ dpkg --print-architecture
amd64
$ dpkg --print-foreign-architectures
i386

Perhaps that is the problem? I think I don’t have any i386 stuff installed:

$ dpkg --get-selections | grep :i386

No output.

Let’s remove it!

$ sudo dpkg --remove-architecture i386

Problem solved!

$ sudo apt update
Hit:1 https://repo.pureos.net/pureos green InRelease
Reading package lists... Done
Building dependency tree       
Reading state information... Done
All packages are up to date.

– Alex Schroeder 2019-07-27 20:59 UTC


I added this to the Purism forum, just in case you find more info in the replies over there.

– Alex Schroeder 2019-07-27 21:05 UTC


For future reference, you shouldn’t have a line like

deb http://http.us.debian.org/debian testing non-free

because you might accidentally upgrade to the next testing release when you don’t want to. Instead you should refer to releases by codename, i.e.:

deb http://http.us.debian.org/debian buster non-free

That way, you can upgrade to the next release at your own discretion.

– matthew 2019-07-30 05:38 UTC


Thanks!

– Alex Schroeder 2019-07-30 06:02 UTC

2019-07-21 Not trusting a Mac

Here I am, sitting next to my wife’s unattended Mac. Suddenly the Mac’s fan is spinning up. What the hell?

I open a terminal and run top. Apparently load was up to 6, slowly going down but still around 3. How strange. I use the o key to change the sort order and use cpu as the primary key. The process using about 50% of the CPU is photoanalys. The name is truncated; I assume it’s photoanalysisd. Other people reported something like it back in 2016.

Today, I had loaded some pictures onto the external disk. That would explain it? So I’m trying to eject the external disk, but that doesn’t work because it’s “being used by another process”. I still have top running. So now I’m force-ejecting the external drive. In the top window I see ReportCrash for a moment. What the hell is it doing?

This blog post says “it’s designed to saves the application state to aid developers in working out why the app crashed” (ReportCrash High CPU & How to Disable reportcrash in Mac OSX). OK, I guess?

But I think my main problem is that I don’t trust systems that have a ton of processes starting up, doing stuff, and shutting down. Maybe all of that is required, and perhaps it’s “modular design”, but I also get a vague feeling of dread as the design of our machines grows ever more complicated.

Comments on 2019-07-21 Not trusting a Mac

Just so you know I dislike this complexity in all systems, here’s what happened to me today with my GNU/Linux laptop: I pulled out the SD card including the adapter from my camera, marvelled at it, pulled the micro (?) card out of the adapter, showed it to my wife, put it back, and plugged it into my laptop. I heard the typical beep-bop sound, but the drive didn’t mount. I removed the card and heard the typical bop-beep sound. Repeated it a few times, but it didn’t mount. I asked for help on Mastodon, got some good replies, with ideas ranging from lsblk to mount to fdisk. But the thing that fixed it was trying the same thing on the Mac, seeing that it failed, pulling the micro card out of the adapter and inserting it again, and then trying it all over again. And then it worked.

So, the problem was that something about the contacts in the card adapter was good enough for the laptop to recognize a card but not good enough to recognize a filesystem on it? Or something? And the suggestions revealed the abyss of layers upon layers of architecture required to make external drives and plug-and-play and USB all work. And for a short moment, I wanted it all to just go away. What have we done?

– Alex Schroeder 2019-07-21 21:22 UTC


Every problem in computing can be solved by adding another layer of indirection, except the problem of having too many layers of indirection.

Ed Davies 2019-07-22 10:49 UTC


Hahaha!

– Alex Schroeder 2019-07-22 14:10 UTC
