Oddmuse

Oddmuse is the wiki engine running all the wikis at emacswiki.org – including EmacsWiki itself.

See OddmuseRoadmap for my thoughts about the design of Oddmuse.

2020-01-01 What about Oddmuse?

Yeah. What about Oddmuse? Am I going to reinvent everything and implement it in Raku (i.e. Perl 6) – or do I keep working on what we currently use and stick with Perl (i.e. Perl 5)?

  • There’s the question of the web server framework: Raku has Cro, but somehow I never quite got to grips with it, and I have slowly warmed to Mojolicious.
  • There’s the question of markup rendering. Oddmuse 6 has a few Markdown modules, but the ones I looked at didn’t use Raku Grammars, and I was unable to implement one myself. I never quite figured out how I would compose Grammars in order to allow users to have per-site text formatting rules. Then again, the state-machine-based parser I’m using is still slow. Too slow?
  • Sadly, Oddmuse 6 takes a lot more memory.

Just now I’ve skimmed the list of Revolutionary Changes we could attempt for Oddmuse. Perhaps this choice between Raku and Perl has been paralysing me as well?

I keep getting back to this because, basically, I feel Oddmuse could move towards being a typical static site generator with an online editor.

I would like to separate each page into three files: the raw text file, for easy editing, checking into a version control system, backups, and so on; the static HTML file, for quick serving by a dedicated web server; and the metadata for the current version of the page.
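Something like this, as a first sketch (WriteStringToFile is the existing Oddmuse helper, if I remember correctly; ToHtml, EncodeMeta and the directory names are made up):

sub SavePage {
  my ($id, $text, %meta) = @_;
  # raw text: easy to edit, diff, check into version control, back up
  WriteStringToFile("page/$id.txt", $text);
  # static HTML: a dedicated web server can serve this directly
  WriteStringToFile("html/$id.html", ToHtml($text));
  # metadata for the current version: author, timestamp, and the like
  WriteStringToFile("meta/$id.pl", EncodeMeta(%meta));
}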

This means that I also need to get rid of “dirty” text formatting rules. The original issue was that a link to a non-existing page would render differently (remember that question mark after a WikiWord?), but these days I think we might simply ignore whether a target page exists or not. I’ve been using such a setup on this site (my homepage) for quite a while and have noticed no adverse effects.
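A “clean” rule in this sense depends on nothing but the text itself. A sketch, ignoring URL escaping and the actual rule pipeline:

# turn [[2020-01-01 Foo|Foo]] into a link whether or not the target page
# exists; since the output never changes, the HTML can be cached statically
$html =~ s!\[\[([^]|]+)\|([^]]+)\]\]!<a href="/view/$1">$2</a>!g;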

We would also lose things that didn’t turn out to be too popular, anyway: transcluding pages from elsewhere; transcluding RSS feeds. Those features would simply disappear and it would be hard to get them back. We could, if we wanted to, move their code into a separate action and replace the inlined HTML with a link to the new action. I think it would be just as well.

I’m pasting images into the blog more often. The problem is that the move to HTTPS made it really hard to hot-link external content. Sure, if you know how, you can set up your HTTP headers to do the right thing. But whoa! Isn’t it better to disallow that anyway? No more bandwidth stealing from other people. No more spying on you via images loaded from servers that might be tracking you.

That’s why better uploads are on my mind.

If we have better uploads, we might also get rid of the “upload file on a page” feature. I still love the idea. But in practice, I fear it failed. I never met anybody who loved the idea. I know, sad! But once we decide to no longer have images as page files, or as a special kind of page file, we can think about moving comments into a special kind of page file (instead of relying on a visible prefix).

And once we’ve gone this far, perhaps we can do the same with day pages? I know that I almost always write [[2020-01-01 Foo|Foo]] when I link to other pages on this blog.

Et tu, CGI.pm. Yes, you. What should we do? With Oddmuse 6 I tried to use templates for all the actions and it worked surprisingly well. Perhaps we can start doing that for Oddmuse? Then we’d abandon CGI.pm both for the generation of HTML and for running Oddmuse as a CGI script. In fact, we could move to Mojolicious 100%. You can still use a Mojolicious app as a CGI script, I hear. It’s probably slow. But then again, at least it still works.
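A minimal sketch of what that could look like, using Mojolicious::Lite (the route and the rendering are illustrations, not actual Oddmuse code):

#!/usr/bin/env perl
use Mojolicious::Lite;

# a page view action that renders text directly instead of printing
# HTML through CGI.pm
get '/view/:id' => sub {
  my $c = shift;
  $c->render(text => 'This would be the HTML for ' . $c->param('id'));
};

app->start;

If I remember correctly, perl wiki.pl cgi would then still run the whole thing as a plain CGI script.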

So much to do, it’s true. But I know myself. What shall I do when summer break 2020 comes up? It pays to be prepared. I loved that energy when Alex Daniel was totally into Oddmuse. We made plans. We got shit done! 🙂

Comments on 2020-01-01 What about Oddmuse?

For what it’s worth, I found the OddMuse way of dealing with images much easier than PmWiki’s attachment system, and used it accordingly. But yeah, comments and day pages could be better handled. Not that mixing a blog and a wiki worked all that well for me. The two are fundamentally different media. And the promise of reworking old blog posts into more permanent articles is moot when you have to copy-paste manually anyway. Assuming you’d even want to destroy your own historical context.

Felix 2020-01-05 12:36 UTC


I guess you could say that it didn’t really work for me either: I ended up having a blog built on wiki tech, but not on wiki culture. 😒

– Alex Schroeder 2020-01-05 12:52 UTC


Perhaps because a blog built on wiki culture is an oxymoron. 😉 It’s also kind of pointless. What exactly stops anyone from going back and editing old blog posts on an ordinary blog if the need arises? What advantage does a wiki provide here? OddMuse was just what I needed to rescue my old website, for a number of reasons... except for those parts that were better off turned into static pages, like the game library and blog archive. And more content might end up having the same fate. It’s not like I have any collaborators left to worry about.

Felix 2020-01-05 18:59 UTC


Absolutely true. And for me and my RPG interest I have a related problem: I sometimes revise my ideas and so I could in theory write a wiki and keep it in the Wiki Now, forgetting the past. But no, I keep posting blog posts, linking to old stuff, never quite assembling a document collection for a snapshot in time. Like time and dust, things accrete and sink deeper and deeper into the past, and new layers cover everything.

I guess in the end it is a question of affordances. The wiki allows us to pivot and change precisely because it has so little built-in blog-supporting structure.

– Alex Schroeder 2020-01-05 20:22 UTC


Which reminds me. Before the big change, No Time To Play was all blog, and that meant content that should have been timeless ended up buried. Even tags and categories weren’t enough of a fix. But in the middle of the big migration I found that, conversely, some of the content needed to stay chronological. Even if parts of it also ended up in the wiki now.

Well, “wiki now”. Even on the wiki, all articles have their original publication date at the top, unless I forgot to add it. No matter what, we need the historical context.

Which, I suppose, is also why in practice most wiki pages remain in the form of a conversation, with little if anything moved to the top. Go figure.

Felix 2020-01-06 07:15 UTC


Like this conversation...

– Alex Schroeder 2020-01-06 07:42 UTC


Via a web mention:

In the spheres I follow there’s been a big push lately for protocols like gopher and gemini, a form of expression that crushes the entire possibility space into the pre-HTML basics: text, hyperlinks, and files. I understand the sentiment. The web has become so overgrown with weeds and tangles of an ever-growing and always-”innovating” spec that it’s tempting to just want to scrap it all and start tabula rasa.

I dunno, I appreciate the minimalism, but for me, plain text is an over-correction. It feels too stifling. I want my page to look at least a little unique (and I’m not that into ASCII art.)

2019-12-19 Oddmuse 6 memory use

[Munin graph: Process RSS summed]

I have a Munin module that runs ps -eo rss,command | grep $prog for various regular expressions.

Well, actually I sum it all up so I run this:

ps -eo rss,command | gawk '
BEGIN              { total = "U"; }        # U = Unknown, the Munin convention
/grep/             { next; }               # skip the grep process itself
'"/perl6/"'        { total = total + $1; } # sum the RSS column (in KB)
END                { print total; }'

For Oddmuse 6 running on Cro and Perl 6 the result is 385120, i.e. 385 MB. That doesn’t quite match the data in the image (about 350MB) but it’s close enough. Compare that to the regular Oddmuse instances running on Perl 5 with Mojolicious...

Am I comparing apples to oranges? Let’s see the ps output for this stuff and compare it to the output of a regular Oddmuse wiki:

alex@sibirocobombus:~$ ps -eo rss,command | grep perl6
  892 grep perl6
358120 /home/alex/rakudo/bin/moar --execname=/home/alex/rakudo/bin/perl6 --libpath=/home/alex/rakudo/share/nqp/lib --libpath=/home/alex/rakudo/share/nqp/lib --libpath=/home/alex/rakudo/share/perl6/lib --libpath=/home/alex/rakudo/share/perl6/runtime /home/alex/rakudo/share/perl6/runtime/perl6.moarvm service.p6
alex@sibirocobombus:~$ ps -eo rss,command | grep campaignwiki
 5104 /home/alex/campaignwiki.org/gridmapper-server.pl
  892 grep campaignwiki
32924 /home/alex/farm/campaignwiki2.pl
 7784 /home/alex/farm/campaignwiki2.pl
32460 /home/alex/farm/campaignwiki2.pl
17712 /home/alex/campaignwiki.org/gridmapper-server.pl
alex@sibirocobombus:~$ ps -eo rss,command | gawk '
BEGIN              { total = "U"; } # U = Unknown.
/grep/             { next; }
'"/campaignwiki2/"'        { total = total + $1; }
END                { print total; }'
73168

So, 358120 for Oddmuse 6 looks correct, and the 73168 for Campaign Wiki looks correct. Both act as web servers behind Apache, both run a wiki.

2019-10-26 Trying to add ActivityPub to Oddmuse

OK, so I want to add ActivityPub to my wiki. I started the endeavour by looking at the blog post How to implement a basic ActivityPub server as it uses static files to get started.

My goals:

  1. be able to search for my account ✔
  2. see all my blog posts on the profile page ✘

I’ve compared my responses to the responses from my Octodon Social account and can’t figure out what I’m doing wrong.

Let’s start at the beginning: We need WebFinger support. Compare the two:

curl https://octodon.social/.well-known/webfinger?resource=acct:kensanata@octodon.social
curl https://alexschroeder.ch/.well-known/webfinger?resource=acct:alex@alexschroeder.ch

Looking good!

Use the self URL from the result above and request it with the application/json MIME type (if you don’t, you’ll get HTML) to get the actor:

curl -H Accept:application/json https://octodon.social/users/kensanata
curl -H Accept:application/json https://alexschroeder.ch/wiki?action=actor

Find the outbox in the result above and request it:

curl -H Accept:application/json https://octodon.social/users/kensanata/outbox
curl -H Accept:application/json https://alexschroeder.ch/wiki?action=outbox

This is where the good times end. I can search for the profile of @alex in my Mastodon client, and I can click on the profile, and it lists the number of items in my outbox (4.9K) – but it doesn’t actually list the items themselves. In order to do this, the Mastodon server would have to follow the link to the “first” page of items and my webserver logs show that it does not. 😟

As far as I can tell, the only difference is that my outbox doesn’t list a “last” property.
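For comparison, this is roughly the collection I think we should be returning; the structure follows the ActivityStreams vocabulary, but the offset values and the item count are just illustrations:

my $outbox = {
    '@context' => 'https://www.w3.org/ns/activitystreams',
    id         => 'https://alexschroeder.ch/wiki?action=outbox',
    type       => 'OrderedCollection',
    totalItems => 4900,
    first      => 'https://alexschroeder.ch/wiki?action=outbox&offset=0',
    # this is the property my outbox is missing
    last       => 'https://alexschroeder.ch/wiki?action=outbox&offset=4880',
};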

This is what we would be getting:

curl -H Accept:application/json https://octodon.social/users/kensanata/outbox?page=true
curl -H Accept:application/json "https://alexschroeder.ch/wiki?action=outbox&offset=0"

I’m not sure how to debug this, and since Mastodon aggressively caches responses, it’s also hard to retry things.

2019-10-13 ActivityPub and Oddmuse?

I wonder whether I should write an extension to a basic ActivityPub server for Oddmuse. What would it do? Allow people to comment? And it would also allow people to delete their comments? And offer a moderating interface so that any user could remove any comment from the wiki? After all, we want peer review.

We also want to edit each other’s wiki pages. How would you edit a wiki page that is based on ActivityPub posts and comments? What would it mean for the original posts and comments? Say you left a comment and I fix a typo in your comment, but then you delete your comment. Does my edit disappear? What if my contribution was more than just a typo fix? Does it still disappear?

Would it be possible to create new posts using a post shared with the wiki? What would we get: a wiki that is also an archive of a conversation? As long as you mention the wiki, new stories and comments on the story get posted.

Thinking about this makes my head hurt.

For now it seems to me that a trivial implementation makes no sense. These are our options:

  1. Just post edits to the fediverse. You can already do this by plugging an RSS feed into a bot. Example: @kensanata@bots.tinysubversions.com. This has been done.
  2. Allow wiki editing and posting with weird restrictions as described above. I think this concept needs a lot more thought.
  3. Use a new ActivityPub vocabulary that allows us to talk about page edits. This would work, but it would also require clients that can offer the right UI. It would need servers that offer a new API. It would be very, very similar to simply replicating the database in the back end via git, actually. The benefit is unclear to me.

2019-01-26 Pingback

@Halfjack mentioned pingback and I realised that Brock had contributed a Pingback Server Extension to Oddmuse back in 2004. I decided to take another look because I’m unhappy with how the Automatic Link Back currently works: it looks at referrer headers and thus it adds links for all the visitors coming to my site via blog rolls, web crawlers, etc.

Pingback would at least indicate the intention of letting me know about a link. I appreciate that, even if I’m 15 years late to the party. Hah!

Work in progress: pingback-server.pl
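For reference, a pingback is a single XML-RPC call to the endpoint the target page advertises; the endpoint URL below is made up, but pingback.ping with source and target as its two parameters is what the spec defines:

curl -H 'Content-Type: text/xml' https://alexschroeder.ch/pingback \
  --data '<?xml version="1.0"?>
<methodCall>
  <methodName>pingback.ping</methodName>
  <params>
    <param><value><string>https://example.com/post-that-links-here</string></value></param>
    <param><value><string>https://alexschroeder.ch/linked-page</string></value></param>
  </params>
</methodCall>'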

Comments on 2019-01-26 Pingback

Should work now, actually. I even wrote a test that starts a source and a target server in the background. So proud. 🙂

This blog is now officially pingback enabled. Waiting for the spam!

– Alex Schroeder 2019-01-27 19:09 UTC


Now that I have a pingback-client implementation, I’m noticing that most of the blogs I’m interested in are Blogspot blogs which don’t have Pingback. 😢

– Alex Schroeder 2019-01-29 13:01 UTC


Testing Jamie’s site.

– Alex Schroeder 2019-03-06


Hah, today I read about Webmention.

The Webmention spec began as a simplified version of the Pingback spec. Where Pingback required sending the source and target URLs in an XML-RPC payload, Webmention simplified that to a form-encoded payload, which meant it could easily be used in HTML forms, was easier to work with since more tools exist for form-encoded payloads, and was not vulnerable to accidentally exposing other parts of a system’s code via XML-RPC.

Excellent! I implemented a Webmention Extension for Oddmuse.
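And the whole exchange really is just one form-encoded POST; the endpoint URL here is made up, but source and target are the two parameters the spec defines:

curl https://alexschroeder.ch/webmention \
  -d source=https://example.com/post-that-links-here \
  -d target=https://alexschroeder.ch/wiki/linked-page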

– Alex Schroeder 2019-05-24 19:46 UTC

2018-12-01 Thinking about the real RSS 3.0

I’m still sad about RSS 2.0 and Atom. Why didn’t we go the other way? Aaron Swartz had the right ideas and called it RSS 3.0! He was so ahead of his time.

Follow those links and read it. The introduction in particular is still gold! 😂

Quote:

  1. Remove XML. XML is just too complicated and is against the spirit of RSS, which is Really Simple Syndication. [...] Instead, we’ll go back to RFC822-style fields. [...]
  2. Remove namespaces. Namespaces are just a waste of time. [...]
  3. HTML forbidden. No one needs HTML. Email has been just fine for years before Microsoft introduce their stupid rich HTML extensions. HTML is for those loser newbies. Any intelligent Internet user deals in plain text.

Clearly, it’s a joke. But the reason it burns is because it is so attractive.

And to be sure, it is a joke! Aaron Swartz was a member of the RSS-DEV Working Group which had developed RSS 1.0. He didn’t actually espouse the values expressed in the list above. But they still speak to me in an irrational way, like Gopher speaks to me.

Wouldn’t this make sense, in a retro kind of way? When some people start seeing a point in Gopher in an age of HTML 5 and rich multi-media hypertext, then perhaps we can also go “back” to a syndication format that never was. I don’t actually prefer Gopher to the Web. I prefer the spirit of Gopher. I like being able to write a client and a server in a few lines of code. I like the absence of Javascript and Cookies. I like the lack of surveillance. That’s because Gopher is simple. I want things to be simple. And I want feed parsing to be simple. Sure, there are libraries to help parse XML (like there are for HTTP, HTML, caching, headers, content negotiation, Javascript, and so on). But nothing beats a few lines of code.

sub ParseData {
  my $data = shift;
  my %result;
  # match "key: value" pairs; a value runs until the next line that does
  # not start with a space or tab, or until the end of the data
  while ($data =~ /(\S+?): (.*?)(?=\n[^ \t]|\Z)/gs) {
    my ($key, $value) = ($1, $2);
    $value =~ s/\n\t/\n/g; # undo the tab that marks continuation lines
    $result{$key} = $value;
  }
  return wantarray ? %result : \%result; # return list sometimes for compatibility
}

Oddmuse uses this format to save data to disk. 🙃
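For example, with a hypothetical record in that format (a continuation line starts with a tab):

my %page = ParseData("title: RSS 3.0\ntext: some text\n\tover two lines\n");
# now $page{title} is "RSS 3.0"
# and $page{text} is "some text\nover two lines"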

Anyway, if you want to go for a deep dive, there is a lot more history and examples in the long, multi-page article The Evolution of RSS. The history section on Wikipedia is much shorter.

2018-10-25 Using a Mastodon Bot

Perhaps if I had a dedicated bot acting on behalf of the blog, like a newsbot: it toots what I post on the blog, and it posts any replies it gets as comments to the blog. Hm. 🤔

I’ll have to think about that. 🧠

I had half of a Blog/Mastodon bridge going for a while until I decided that people on Mastodon weren’t expecting their toots to show up as comments on the blog (including caching and all that), especially as I myself am trying to expire all my old toots. But a dedicated bot would (perhaps?) solve that problem.

Comments on 2018-10-25 Using a Mastodon Bot

That should be relatively easy. I have a few Twitter bots that crawl RSS feeds and post them to Twitter, it’d just be a matter of changing the output API to Mastodon.

Neither costs me anything at the moment.

Wes Baker 2018-10-25 17:08 UTC


Very cool, thanks for the links!

– Alex Schroeder 2018-10-25 20:13 UTC

2018-10-22 Monit and Oddmuse 6

I noticed that my Monit setup for Oddmuse 6 isn’t working.

check process oddmuse6 with pidfile /home/alex/oddmuse6/pid
    start program = "/home/alex/bin/oddmuse6" as uid alex gid alex
    restart program = "/home/alex/bin/oddmuse6" as uid alex gid alex
    stop program = "/bin/kill `cat /home/alex/oddmuse6/pid`" as uid alex gid alex
    if failed host next.oddmuse.org port 443 protocol https
      and request "/view/Monit" for 5 cycles then restart
    if totalmem > 1500 MB for 5 cycles then restart
    if 6 restarts within 15 cycles then timeout

monit status oddmuse6 gives me:

FAILED to [next.oddmuse.org]:443/view/Monit type TCP/IP using SSL/TLS protocol HTTP

Everything seems to be working, however. Finally I decided to try curl and look at the headers.

$ curl -I https://next.oddmuse.org:443/view/Monit
HTTP/1.1 405 Method Not Allowed

Then I realized: this is HEAD vs. GET requests!

Confirmation in the Apache log. There’s the 405 status code again.

next.oddmuse.org:443 178.209.50.237 - - [22/Oct/2018:09:36:52 +0200] "HEAD /view/Monit HTTP/1.1" 405 4561 "-" "Monit/5.20.0"

Clearly, I need to start working on caching and handling HEAD requests in Oddmuse 6!

2018-10-21 Cro as a Service?

I’ve been running Oddmuse 6 from a little shell script for a while now. The script sets up the environment variables, checks whether the process is already running, uses nohup to run perl6 service.p6, saves the PID in a file, and so on. But it doesn’t use cro run. And I still don’t know how to do it.

I would like to use cro run because I feel that’s what I need to do in the future when I run multiple sites in parallel. So let’s try this:

#!/bin/bash
export ODDMUSE_MENU="Home, Changes, About"
export ODDMUSE_QUESTION="Name a colour of the rainbow."
export ODDMUSE_ANSWER="red, orange, yellow, green, blue, indigo, violet"
export ODDMUSE_SECRET="rainbow-unicorn"
export ODDMUSE_HOST="next.oddmuse.org"
export ODDMUSE_PORT=20000

cd $HOME/oddmuse6
cro run

This script doesn’t work. All I see is this:

▶ Starting oddmuse6 (oddmuse6)
🔌 Endpoint HTTP will be at http://localhost:20000/
♻ Restarting oddmuse6 (oddmuse6)
♻ Restarting oddmuse6 (oddmuse6)
♻ Restarting oddmuse6 (oddmuse6)
♻ Restarting oddmuse6 (oddmuse6)
♻ Restarting oddmuse6 (oddmuse6)

If I replace cro run with perl6 service.p6, it works.

So now I’m back to this:

#!/bin/bash
export ODDMUSE_MENU="Home, Changes, About"
export ODDMUSE_QUESTION="Name a colour of the rainbow."
export ODDMUSE_ANSWER="red, orange, yellow, green, blue, indigo, violet"
export ODDMUSE_SECRET="rainbow-unicorn"
export ODDMUSE_HOST="next.oddmuse.org"
export ODDMUSE_PORT=20000

cd $HOME/oddmuse6
test -f pid && kill $(cat pid)
perl6 service.p6 &
echo $! > pid

Any ideas?

The service.p6 file is simple:

use Cro::HTTP::Log::File;
use Cro::HTTP::Server;
use Oddmuse::Routes;

my $logs   = open 'access.log'.IO, :w;
my $errors = open 'error.log'.IO,  :w;

my Cro::Service $http = Cro::HTTP::Server.new(
    http => <1.1>,
    host => %*ENV<ODDMUSE_HOST> ||
        die("Missing ODDMUSE_HOST in environment"),
    port => %*ENV<ODDMUSE_PORT> ||
        die("Missing ODDMUSE_PORT in environment"),
    application => routes(),
    after => [Cro::HTTP::Log::File.new(logs => $logs, errors => $errors)]
);
$http.start;
$logs.say: "Listening at http://%*ENV<ODDMUSE_HOST>:%*ENV<ODDMUSE_PORT>";
react {
    whenever signal(SIGINT) {
        $logs.say: "Shutting down...";
        $http.stop;
        done;
    }
    whenever signal(SIGHUP) {
        $logs.say: "Ignoring SIGHUP...";
    }
}

Oddmuse::Routes is available from the Oddmuse 6 repo.

Hm, what could be the problem? Let’s look at the files:

  total 80
  drwxr-xr-x  6 alex alex 4096 Oct 21 11:40 .
  drwxr-xr-x 57 alex alex 4096 Oct 21 11:57 ..
  -rw-r--r--  1 alex alex  205 Oct  7 18:42 .cro.yml~
  -rw-r--r--  1 alex alex  479 Oct  7 19:23 .cro.yml
  -rw-r--r--  1 alex alex   10 Oct  7 18:42 .dockerignore
  -rw-r--r--  1 alex alex   52 Oct  7 18:42 .gitignore
  -rw-r--r--  1 alex alex  221 Oct  7 18:42 Dockerfile
  -rw-r--r--  1 alex alex  471 Oct  7 18:42 META6.json
  -rw-r--r--  1 alex alex  473 Oct  7 18:42 README.md
  -rw-r--r--  1 alex alex   43 Oct 21 11:49 access.log
  drwxr-xr-x  2 alex alex 4096 Oct 17 10:59 css
  -rw-r--r--  1 alex alex    0 Oct 21 11:49 error.log
  drwxr-xr-x  3 alex alex 4096 Oct  7 19:24 lib
  -rw-------  1 alex alex  445 Oct 21 11:46 nohup.out
  -rw-r--r--  1 alex alex    6 Oct 21 11:49 pid
  -rwxr-xr-x  1 alex alex  149 Oct 15 10:43 run.sh~
  -rwxr-xr-x  1 alex alex  318 Oct 15 10:46 run.sh
  -rw-r--r--  1 alex alex  683 Oct 21 11:19 service.p6~
  -rw-r--r--  1 alex alex  779 Oct 21 11:49 service.p6
  drwxr-xr-x  2 alex alex 4096 Oct 10 20:40 templates
  drwxr-xr-x  4 alex alex 4096 Oct 11 08:32 wiki

error.log and access.log are the obvious culprits! Changes in the log files will cause cro to restart. So what I’m going to do is move the log files into a logs subdirectory and add that to the ignore section in .cro.yml.
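That is, something like this at the end of .cro.yml (a sketch; check the Cro documentation for the exact syntax):

ignore:
  - logs/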

OK, so that works. I still feel strange using nohup, though.

Shell script:

#!/bin/bash
export ODDMUSE_MENU="Home, Changes, About"
export ODDMUSE_QUESTION="Name a colour of the rainbow."
export ODDMUSE_ANSWER="red, orange, yellow, green, blue, indigo, violet"
export ODDMUSE_SECRET="rainbow-unicorn"
export ODDMUSE_HOST="next.oddmuse.org"
export ODDMUSE_PORT=20000

cd $HOME/oddmuse6

# test -f pid && kill $(cat pid)
# perl6 service.p6 &
# echo $! > pid

nohup cro run &

Service:

use Cro::HTTP::Log::File;
use Cro::HTTP::Server;
use Oddmuse::Routes;

my $logs   = open 'logs/access.log'.IO, :w;
my $errors = open 'logs/error.log'.IO,  :w;

my Cro::Service $http = Cro::HTTP::Server.new(
    http => <1.1>,
    host => %*ENV<ODDMUSE_HOST> ||
        die("Missing ODDMUSE_HOST in environment"),
    port => %*ENV<ODDMUSE_PORT> ||
        die("Missing ODDMUSE_PORT in environment"),
    application => routes(),
    after => [Cro::HTTP::Log::File.new(logs => $logs, errors => $errors)]
);
$http.start;
$logs.say: "Listening at http://%*ENV<ODDMUSE_HOST>:%*ENV<ODDMUSE_PORT>";
react {
    whenever signal(SIGINT) {
        $logs.say: "Shutting down...";
        $http.stop;
        done;
    }
    whenever signal(SIGHUP) {
        $logs.say: "Ignoring SIGHUP...";
    }
}

My ignoring SIGHUP in the service now has no effect, because cro itself doesn’t ignore SIGHUP, which is why the nohup wrapper is needed.

OK, so now my shell script starts nohup cro run & because cro doesn’t daemonize itself. I guess I still find that surprising. And I wonder about the trade-offs. So the benefit is automatic restarts? I also see now that using cro run gives me two moar processes (one runs cro, the other runs my process, moar being the virtual machine this Perl 6 runs on).

If there’s nothing else that I’m missing, perhaps running the service using perl6 directly instead of via cro isn’t such a bad idea after all. I think I’ll do that.

Comments on 2018-10-21 Cro as a Service?

As a user I’d expect a working systemd unit file to be provided. Systemd can solve some of these problems and more.

– AlexDaniel 2018-10-22 12:29 UTC


If you have more information for what you’d need, I’d love to hear it. Sadly, the only systemd thing I ever did was my backup solution...

– Alex Schroeder 2018-10-22 12:35 UTC


Thanks for pointing me at this gist.

NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=read-only
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectControlGroups=yes
RestrictRealtime=yes
PrivateTmp=yes
PrivateDevices=yes
PrivateUsers=yes

Other things you mentioned in IRC:

MemoryMax and TasksMax should be used. Restart=always and RestartSec=2 is fine. Watchdog needs to be implemented.
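Putting the pieces together, such a unit file might look something like this; the paths and limits are guesses, and I’ve only included a selection of the hardening options from the gist:

[Unit]
Description=Oddmuse 6 wiki
After=network.target

[Service]
User=alex
WorkingDirectory=/home/alex/oddmuse6
ExecStart=/home/alex/rakudo/bin/perl6 service.p6
Restart=always
RestartSec=2
MemoryMax=1500M
TasksMax=50
NoNewPrivileges=yes
ProtectSystem=strict
# the wiki needs to write its own pages and logs
ReadWritePaths=/home/alex/oddmuse6
PrivateTmp=yes

[Install]
WantedBy=multi-user.target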

– Alex Schroeder 2018-10-22 12:47 UTC

2018-07-19 Paginated Feeds

Recently, @jamey was talking about feed pagination (RFC 5005). I finally got around to adding the necessary code to Oddmuse and deploying it on all my sites. So if you check out Recent Changes and click on “RSS with pages”, you should get a feed of all the pages changed in the last month (ignoring minor changes), and if your feed reader supported it, you could fetch previous pages of the same feed and thus scroll through the backlog until somewhere around 2003, where the edits dry up and there’s nothing left but empty pages.

I’m currently not implementing the “first” page link, so your feed reader won’t realize that it should stop going back in time. It will follow links forever, but eventually things will get strange. I just tried to see how far back in “negative time” we can go: if you try it with Recent Changes, you’ll get a page saying “Updates since 1900-01-00 00:00 UTC up to 1970-01-01 00:00 UTC”. I guess 1900 is the limit, then. 😂
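The mechanism itself is simple: each page of the feed carries typed links to its neighbours. Something like this, I think, using the link relations from RFC 5005 (the hrefs are made up, and rel="first" is exactly the link I’m not generating):

<atom:link rel="first" href="https://alexschroeder.ch/wiki?action=rss"/>
<atom:link rel="previous" href="https://alexschroeder.ch/wiki?action=rss;page=2"/>
<atom:link rel="next" href="https://alexschroeder.ch/wiki?action=rss;page=4"/>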

Comments on 2018-07-19 Paginated Feeds

Sadly, the Journal RSS Extension – which I use a lot – doesn’t fit into the existing RC/RSS framework.

– Alex Schroeder 2018-07-19 12:03 UTC
