I’ve just implemented a new non-optional Oddmuse feature: I’m removing all hostnames and IP numbers from older log entries. Log entries older than 90 days are stored in a separate log file in order to speed up the generation of RecentChanges. During maintenance, these log entries are copied from one file to the other, and I’m now taking advantage of this copying to remove the hostname or IP number.
Basically, as a person, I dislike invasions of privacy, and I feel that in some small way software engineers invite them, because keeping everything is often the easier thing to do. We often model things to never forget, e.g. version control.
One of the important pages on Meatball was ForgiveAndForget. Forgetting is human.
At the same time, with Snowden and the NSA in the news, I feel that as a host I’m more comfortable if I cannot provide the logs an agency might be looking for.
Furthermore, I’ve had a very small number of emails from users asking me to remove their hostnames from the log files because they had accidentally edited the wiki from work. Pages containing their hostname would eventually be deleted, but the log entries were not. Now they’re anonymized, and people can feel safer knowing that the traces will eventually disappear.
The idea is that you only need hostnames or IP numbers to fight spam and vandalism: add regular expressions matching the hostname or IP number of spammers or vandals to your list of banned hosts and prevent the attack from continuing. After a few days, however, this information is no longer required. In this day and age of privacy invasion, I think software should take a pro-active stance: the log entries must be anonymized.
The existing log file for the older entries is not changed. If you want to do the right thing, there’s a script called anonymize.pl in the contrib directory to do just that.
Just call it in your data directory. Example:
```
alex@psithyrus:~/oddmuse$ perl ~/src/oddmuse/contrib/anonymize.pl
Wrote anonymized 'oldrc.log'.
Saved a backup as 'oldrc.log~'
```
See Oddmuse:Upgrading Issues for a more technical explanation of what’s going on.
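In essence, the script just blanks one field per log entry. Here’s a minimal sketch of the idea, assuming a tab-separated log with the hostname or IP number in the third field (the real Oddmuse log format differs, so treat this as illustration only):

```perl
#!/usr/bin/env perl
# Minimal sketch of the idea behind anonymize.pl -- NOT the real
# script. It assumes a tab-separated log where the third field
# holds the hostname or IP number; the actual log format differs.
use strict;
use warnings;

my $file = shift || 'oldrc.log';
open my $in, '<', $file or die "Cannot read $file: $!\n";
my @entries;
while (my $line = <$in>) {
  chomp $line;
  my @fields = split /\t/, $line, -1;
  $fields[2] = '' if @fields > 2;    # blank the host field
  push @entries, join "\t", @fields;
}
close $in;
rename $file, "$file~" or die "Cannot back up $file: $!\n";
open my $out, '>', $file or die "Cannot write $file: $!\n";
print $out "$_\n" for @entries;
close $out;
print "Wrote anonymized '$file'. Saved a backup as '$file~'\n";
```

If you roll your own, run it on a copy of your data directory first.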
I’ve made a few changes. Let’s see whether it works out.
- when viewing ordinary pages, previous comments and the comment form are shown ← this is what I wanted to add
- people who follow the RSS feeds will therefore easily find the obvious comment form when following the link ← this is what I hope to achieve
- when leaving a comment, you end up on the comment page, which might be confusing
- when looking at RecentChanges, which has some extra magic associated with its name, the comments are not shown
- when looking at older revisions, the page history, and many other variants, the comments are not shown
- when looking at a journal page such as Diary, the comments are not shown
- when looking at a journal page such as Diary, you can still see links to inline the comment page
- inlined comment pages don’t come with an automatic comment form—you need to click on the “Add comment” link at the end
- comment pages are still excluded from the usual feeds (I wonder whether I should change this)
I think the Wiki + Blog combo still works. I’m just trying to make it less weird.
There’s source code for your config file, if you’re an Oddmuse user. My code also includes my Google +1 setup (Oddmuse:Google Plus One Module) because it needs to avoid the situation where a page shows two +1 buttons. As for comments within journals, I use Oddmuse:Dynamic Comments Extension.
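My actual config is longer, but the core of it is Oddmuse’s usual function-wrapping trick. A minimal sketch, assuming the standard $CommentsPrefix and $RCName variables (PrintInlinedComments is a hypothetical helper, and the +1 handling is omitted):

```perl
# A sketch of the approach, not my actual config: wrap PrintFooter
# so that browsing an ordinary page also prints its comment page
# and a comment form. PrintInlinedComments is a hypothetical
# helper; $CommentsPrefix and $RCName are standard Oddmuse
# variables.
*OldCommentPrintFooter = \&PrintFooter;
*PrintFooter = \&NewCommentPrintFooter;

sub NewCommentPrintFooter {
  my ($id, @rest) = @_;
  # Skip comment pages themselves and RecentChanges.
  if ($id and $id !~ /^$CommentsPrefix/ and $id ne $RCName) {
    PrintInlinedComments($CommentsPrefix . $id);  # hypothetical
  }
  return OldCommentPrintFooter($id, @rest);
}
```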
In the anonymous rant The Wikemacs Experiment: 300 Days Later, the author claims “The biggest problem is that it is insecure. […] Anyone can edit any of the pages that contain Elisp code.” The same sentiment was expressed by Alex Bennée in a comment on Google+: “What is really needed is a way to be sure that the source for the emacs extension your updating hasn’t been subverted by someone else with ill intent.”
Experiences and ideas of “what is really necessary” vary. As for myself, I’ve installed code from all over the Internet without reviewing the source. Installing it from a gist or git repo is hardly a different experience. If you want to figure out whether a source is trustworthy, you do the usual things: do people link to the code, how long has it been around, what about recent checkins, that sort of thing. Or you get into the crypto business of signing releases.
You could of course say that every day that passes without a problem increases our false sense of security… I have no answer to that. All I can say is that if security is your problem, using gists and github is not the solution (as you say yourself). The source of the insecurity is our habits, our culture of downloading and installing anything and everything. I’m not sure how you’ll ever make sure “that the source for the emacs extension your updating hasn’t been subverted by someone else with ill intent.” That seems pretty impossible to me unless you limit yourself to the core Emacs distribution (and even that’s not a guarantee).
People on the #emacs channel keep asking “is there a way to do X” and thus my impression is that finding stuff is a more pressing problem. I feel that encouraging people to create a page on the wiki saying “here is code to help you do something” is the solution to that problem.
But then again, I guess we all differ in what we consider to be the most pressing problem.
Alex Bennée then correctly points out that using “a user locked solution like a gist or git repo you can at least be assured what you’re installing has come through one person who you’ve trusted to a degree before.” I guess that’s true. We’ll see whether people start switching over to using gists instead of editing wiki pages. I said in an earlier comment:
I added gist support […] because it was easy to do, not because it will encourage existing authors to move their elisp code from wiki pages to github. If anything, it might encourage future elisp authors to transclude a gist… But then again, there’s nothing preventing them from linking to a gist right now. Perhaps it’s also a generational thing. People that have been living without github and gists don’t feel a particular need to start using them.
I just read a rant about Emacs Wiki and its alternative: The Wikemacs Experiment: 300 Days Later. Check out How Emacs Wiki Works for some context from my point of view. Anyway, the anonymous author says: “Maybe someone could work with Alex to add gist-style code snippets to Oddmuse, and make it so that code can be cited inline on Wiki pages, so that anyone visiting the page is automatically looking at the most up to date version of the code.”
```elisp
(setq abg-elisp-external-dir
      (expand-file-name "external" abg-elisp-dir))
; ...
; Add external projects to load path
(dolist (project (directory-files abg-elisp-external-dir t "\\w+"))
  (when (file-directory-p project)
    (add-to-list 'load-path project)))
```
Actually, I added an Emacs Wiki feature using two lines of code that add support for fancy inclusion:
```
<include gist "https://gist.github.com/1236665">
```
It only works over there, however. See EmacsWiki:Gists.
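If you’re curious what such a rule looks like, here’s an illustrative sketch in the style of an Oddmuse config snippet (not the actual two lines; gists can be embedded by loading their .js variant):

```perl
# Illustrative sketch of an Oddmuse markup rule for
# <include gist "...">; the actual EmacsWiki code differs (see
# EmacsWiki:Gists).
push(@MyRules, \&GistRule);

sub GistRule {
  if (m!\G<include gist "(https?://gist\.github\.com/\S+?)">!gc) {
    # embed the gist by loading its .js variant
    return CloseHtmlEnvironments()
      . qq{<script src="$1.js"></script>}
      . AddHtmlEnvironment('p');
  }
  return;
}
```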
Anyway, the same also works for Lisppaste:
```
<include text "http://paste.lisp.org/display/134703/raw">
```
```elisp
;; Set XTERM resources as so
;;
;; metaSendsEscape: false
;; altSendsEscape: false
;; eightBitInput: true
;;
;; Verify with cat > /dev/null command that pressing alt-a
;; alt-b and so on produces single >128bit char (will look
;; like a with a hat
;;
;; once above is working in emacs do
;;
;; Prevent pressing esc O from triggering binding
(define-key (get-input-decode-map) "\eO" nil)
;; tell emacs Meta is 8th bit
(cond ((fboundp 'set-input-meta-mode)
       (set-input-meta-mode t))
      (t (set-input-mode t nil t)))
```
I don’t think there’s a nice way to include the colored version, unfortunately.
Update: I added support and minimal Lisp highlighting for the following:
```
<include lisppaste "http://paste.lisp.org/display/134703">
```
It only works over there, of course.
By chance, I run my leech-detector script and find the following:
```
aschroeder@thinkmo:~$ leech-detector < logs/access.log | head
      IP Number   hits bandw. hits% interv. status code distrib.
  126.96.36.199  14368   118K   17%    4.8s 301 (49%), 200 (49%), 404 (0%), 302 (0%), 403 (0%)
   188.8.131.52   3419    11K    4%   20.2s 200 (52%), 404 (43%), 302 (2%), 400 (1%), 301 (0%), 304 (0%)
...
```
What the hell is this guy doing causing 17% of all my hits?
```
aschroeder@thinkmo:~$ tail -f logs/access.log | grep 184.108.40.206
220.127.116.11 - - [13/May/2012:01:44:12 +0200] "GET /emacs?action=browse;id=icicles.el;revision=835 HTTP/1.0" 301 447 "http://www.emacswiki.org/emacs/?action=rc&all=1&showedit=1&from=1&rcuseronly=DrewAdams" "Wget/1.12 (linux-gnu)"
18.104.22.168 - - [13/May/2012:01:44:16 +0200] "GET /emacs/?action=browse;id=icicles.el;revision=835 HTTP/1.0" 200 127350 "http://www.emacswiki.org/emacs/?action=rc&all=1&showedit=1&from=1&rcuseronly=DrewAdams" "Wget/1.12 (linux-gnu)"
22.214.171.124 - - [13/May/2012:01:44:21 +0200] "GET /emacs?action=browse;diff=2;id=icicles-cmd2.el;diffrevision=55 HTTP/1.0" 301 462 "http://www.emacswiki.org/emacs/?action=rc&all=1&showedit=1&from=1&rcuseronly=DrewAdams" "Wget/1.12 (linux-gnu)"
126.96.36.199 - - [13/May/2012:01:44:25 +0200] "GET /emacs/?action=browse;diff=2;id=icicles-cmd2.el;diffrevision=55 HTTP/1.0" 200 482243 "http://www.emacswiki.org/emacs/?action=rc&all=1&showedit=1&from=1&rcuseronly=DrewAdams" "Wget/1.12 (linux-gnu)"
```
Ahhh! A stupid leech using wget to pull the entire site, following all the links, ignoring the rel=“nofollow” rules… Maybe a dude that didn’t read the WikiDownload page. It also looks to me as if these links are disallowed in the site’s robots.txt file.
Oh well. The solution, unfortunately, seems to involve editing cgi-bin/.htaccess and adding the following:

```
# using wget to get everything including actions, old stuff, etc.
Deny from 188.8.131.52
```
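For the curious, the essence of leech-detector is just counting hits per client and sorting. A rough sketch (the real script also computes bandwidth, intervals and status code distributions):

```perl
#!/usr/bin/env perl
# Rough sketch of a leech detector: count hits per client in an
# Apache access log and print the top ten. Not the actual script.
use strict;
use warnings;

my (%hits, $total);
while (<>) {
  my ($host) = split ' ', $_, 2;  # first field is the client
  next unless $host;
  $hits{$host}++;
  $total++;
}
my $limit = 10;
for my $host (sort { $hits{$b} <=> $hits{$a} } keys %hits) {
  printf "%-15s %6d %3.0f%%\n", $host, $hits{$host},
    100 * $hits{$host} / $total;
  last unless --$limit;
}
```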
(TL;DR: People that don’t like the wiki as it is ought to look at the official Emacs documentation instead. I wrote this so that I’d have something to link to in the future. This post was inspired by EmacsWiki:2012-03-20.)
Every year or so, I read about suggested changes to the Emacs Wiki. The complaints are the same, year after year.
- The pages are confusing.
- The code snippets are wrong.
- The site is badly organized.
- The information is out of date.
The solutions invariably have nothing to do with the problem.
- Switch to Mediawiki, the software used to run Wikipedia.
- Use a database or a distributed version control system as the backend.
- Change the text formatting rules to Markdown, Mediawiki markup, or something else that is better known.
- Separate discussion from the main page.
- Delete stuff that is outdated.
- Fix errors.
Why are these suggestions not helpful?
The first problem is the mistaken belief that technology can substitute for social change. Yes, the wiki is badly organized and many of the pages are outdated. Changing the wiki engine, the backend or the formatting rules will not change this, however.
The backend used by the wiki engine can influence performance and resource use, and it can make the software harder or easier to maintain and back up – but it will not induce somebody to edit a messy page and fix it.
The second problem is the mistaken belief that moderation can be commanded. You can complain about bad editing and a lack of moderation all day. But since nobody is paying people to do a boring job, we must rely on obsessive compulsive people to fix typos and tag pages.
Maybe we could attract more people by gamifying the experience—offer rewards, badges, scores. But Stack Overflow already does this. It’s the best social question answering machine currently known. The wiki doesn’t need to imitate something better. The wiki needs to do what it does best. We’ll come to that.
The third problem is the mistaken belief that quality control and volunteers go well together. Just compare Wikipedia and Citizendium and consider the animosity generated by Deletionism on Wikipedia. How will you encourage authors to contribute if you are telling them that their contributions are lacking the quality you are looking for instead of simply accepting their text and working on it?
You fight spam, you rework text occasionally, you encourage others, you welcome newbies, you lead by example. That’s how you lead.
An abrasive personality, radical change involving a lot of work—those are not the tools you are looking for.
Let me return to the issue of commanding change. Things people have said:
- “the content editing should be one with the goal of creating a comprehensive, coherent, article that gives readers info or tutorial about the subject.” – Xah Lee (2008)
- “I favor a major reorganization of the wiki material.” – Neil Smithline (2011)
- “The articles are littered with crappy advice confusing beginners, have little structure and are filled with ridiculous questions” – Bozhidar Batsov (2012)
The critics can be unhappy about it all they want, and they can complain about it all they want—but in the end, one needs to understand the forces at work, here. There is no chain of command.
It works just like a free software project. If it doesn’t scratch someone’s itch, nobody is going to add it. I think it’s a fundamental issue with our business model: there is no pay for boring stuff. Plus, documentation is of no direct use for anything—unlike code. Thus, people are mostly motivated to keep their own code and its documentation up to date. I don’t think there is anything we can do about that. That’s why the Emacs Wiki Mission Statement does not mention organization and quality. It cannot be commanded.
Once we accept that this is the sand upon which we are building our house, we necessarily need to scale down our expectations. Personally, I think the wiki exists somewhere between the official documentation, Stack Overflow, the FAQ, the newsgroups, the mailing lists, and IRC. It’s certainly nowhere near the quality of organization and writing that the Emacs documentation has—and I don’t think this is the right medium to aim for this level of quality. I think the people willing to invest that amount of energy to write quality stuff ought to be writing the real Emacs documentation—and they probably are.
What remains are the people using Emacs Wiki for their own pet projects, questions asked, answers given, sometimes organized, sometimes rewritten, sometimes linked to the rest of the site.
Wikipedia works because of its universal appeal. When I added an image to the page of an obscure Indian temple we visited while I was staying in Mysore, the photo was terrible. But it was a start, enough people cared about the page, it grew, it found people to tend it, and now it’s big and beautiful.
There just aren’t enough Emacs users and authors out there and the best of us will be contributing to the official Emacs documentation. The wiki exists somewhere between the official documentation and the mailing lists. Lower your expectations.
Given all that, why does the wiki exist at all?
When I started it, I had several reasons:
- The wikis I knew, C2 and Meatball Wiki, had attracted a particular community and they had created a particular subculture I liked. We talked about the Wiki Now and many other things that made wikis work. The medium itself was interesting.
- I had been posting on the newsgroups for a long time, and slowly I realized that the same questions kept being asked again and again. The newsgroups and mailing lists were failing as a medium because they were ephemeral. Sure, we kept telling people to search the archives. But the medium afforded asking questions instead of searching.
- When I looked for Frequently Asked Questions, I found a document online, maintained by a single person. This person was a bottleneck. The FAQ updated slowly.
- At the time I was getting into Internet Relay Chat. On IRC, conversation is even more ephemeral than on the mailing list. This time, however, “searching the archives” was out of the question. We needed our own archive. And thus I started answering questions on IRC and posting the answers on the wiki.
I think this last point bears consideration: I was creating pages or adding information to pages because it was pertinent on IRC. An index, linking to the page, categorization, returning to the page later and reworking it, all these quality related tasks were not pertinent on IRC. All I needed was a pastebin that I could go back to and rewrite if I felt like it. Often I did not—and I still don’t.
The wiki being on the web, updated every now and then, with pertinent answers to specialized questions, unorganized and raw, ended up being a good resource for the search engines out there. These search engines bring new people to the site. People that don’t understand how wikis work in general and how this wiki grew to be where it is in particular. They are shocked. So many pages outdated! Such a mess in style and quality!
I think those people are better served reading the official documentation. They don’t want this mess, they don’t benefit from its loose rules, they don’t understand how cool it is to have a site with no login required. They are better served elsewhere.
I’m sure that one day the Emacs Wiki will have become irrelevant. But just like the old newsgroups never disappeared entirely, so will the wiki transform into something else and remain part of our information landscape.
Perhaps one of the Emacs Wiki critics will one day set up an alternate site, pull all the pages (more than 8500 pages last time I checked), extract the quality content—or rewrite it from scratch—and produce something better. Perhaps they will build an organization that can keep the quality up, encourage new authors to join, provide more value to their readers. But I don’t think complaining about the existing Emacs Wiki is a step in the right direction. Build it, and they will come—elsewhere.
- Ymir’s Call – I play every other Monday evening in a Barbarians of Lemuria campaign with DM Florian. Last session we switched to Crypts & Things. German campaign wiki.
- Hagfish Tavern was the D&D 3.5 adventure path Rise of the Runelords we played before Ymir’s Call.
- Kurobano And The Dragons was the M20 game that switched to D&D 3.5 before we started Hagfish Tavern.
- The Alder King – I run a Solar System game set in the Wilderlands of High Fantasy on two Sunday afternoons a month, in German. This campaign used D&D 3.5 and English before we switched to Solar System RPG. Right now it’s on a short hiatus as we give the Great Pendragon Campaign a try for two or three sessions. I’ve heard one player mumble that maybe we should try it for a bit longer, though.
- Durgan’s Flying Circus – I play in a monthly Harp game with GM Stefan on another Sunday each month.
- Desert Raiders was the Pathfinder RPG adventure path Legacy of Fire we played before Durgan’s Flying Circus.
- The Golden Lanterns was the D&D 3.5 adventure path Shackled City we played before Desert Raiders.
- Fünf Winde – I run a Labyrinth Lord game set in the Wilderlands of High Fantasy on one Tuesday each month. I used to run two separate groups in the same campaign area, but ended up merging the two groups because one of the two kept shrinking. Both in German.
- Wilderlande – I run a Labyrinth Lord campaign set in a Points of Light campaign setting for my best friend and his three kids for two hours on a Friday evening every month. Also in German.
- Lied vom Eis is a Song of Ice and Fire RPG campaign I used to play in; mostly in German.
- Die Reise nach Rhûn was a Rolemaster campaign in Middle Earth that switched to Legends of Middle Earth we played before Lied vom Eis. German.
There’s more… There must be at least two short Burning Wheel campaigns on that site (Burning Six, Campaign:Krythos). And a Mongoose Traveller game that switched to Diaspora (Campaign:Kaylash). And a wiki I used for my DM notes when running the Kurobano campaign (Campaign:Attaxa). And a Forgotten Realms campaign using D&D 3.5 (Sohn des schwarzen Marlin). And another D&D 3.5 sandbox (Campaign:Grenzmarken).
I totally recommend keeping notes online!
(I run the Campaign Wiki site which explains why I’m so enthusiastic about it.)
```
aschroeder@thinkmo:~$ bot-analyze < logs/access.log.1 | head
----------------------------Bandwidth-------Hits-------Actions
Everybody                        2177M     113317
All Bots                          274M      16154  100%    9%
--------------------------------------------------------------
www.google.com                  92390K       5542   34%    2%
yandex.com                      63545K       3954   24%    9%
bot                             33803K       2309   14%    1%
superfeedr.com                  31740K       1091    6%   63%
www.bing.com                    14161K        525    3%    4%
ahrefs.com                       9251K        465    2%    2%
```
I’m surprised – bots are responsible for 14% of all my hits. That’s better than what I saw in 2009.
The way I have set up my web pages, bots should not crawl “actions” (URLs containing the action parameter) – and yet 9% of their hits are actions, and superfeedr does it most of all. I should investigate, I guess.
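A sketch of the kind of check I have in mind, assuming the Apache combined log format (the bot name patterns are illustrative, and this is not the actual bot-analyze script):

```perl
#!/usr/bin/env perl
# Sketch: what share of each bot's hits are "actions"? Assumes the
# Apache combined log format; the bot patterns are illustrative.
use strict;
use warnings;

my (%hits, %actions);
while (<>) {
  # the quoted fields are: request, referrer, user agent
  my ($request, $referer, $agent) = /"([^"]*)"/g;
  next unless $agent;
  my ($bot) = $agent =~ /(googlebot|bingbot|yandex|superfeedr|\bbot\b)/i;
  next unless defined $bot;
  $bot = lc $bot;
  $hits{$bot}++;
  $actions{$bot}++ if $request =~ /\baction=/;
}
for my $bot (sort { $hits{$b} <=> $hits{$a} } keys %hits) {
  printf "%-12s %6d hits, %3.0f%% actions\n", $bot, $hits{$bot},
    100 * ($actions{$bot} || 0) / $hits{$bot};
}
```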
Well, it happened again. The blog post A database of old-school monsters caught my attention. There, Gavin says he wants an “online old-school monster database, with tags.”
It just so happened that I had some code lying around that would provide Campaign Wiki with tags. I gave it a try, and it worked.
Then I looked around for monsters to add to the wiki. I didn’t want to start with The Hypertext d20 SRD or the Pathfinder SRD – I wanted old school. I remembered that Dan Proctor kindly doesn’t just host PDF files, he also provides text files for his stuff—the Open Game Content Library. So I went there, got the Labyrinth Lord monsters and the Advanced Edition Companion monsters, saved the Word documents as text files, and used a lot of regular-expression based search and replace and keyboard macros to semi-automatically go through it all, split it into files, and upload the stuff using some shell scripting. Yay for Perl, Emacs and the shell.
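The splitting step might look something like this sketch, assuming blank-line separated entries that start with the monster’s name (the real conversion needed far more hand-holding):

```perl
#!/usr/bin/env perl
# Sketch of the splitting step, assuming each monster entry is a
# blank-line separated block starting with the monster's name.
use strict;
use warnings;

local $/ = '';  # paragraph mode: read blank-line separated blocks
while (my $entry = <>) {
  my ($name) = $entry =~ /^(.+)$/m;  # first line is the name
  next unless $name;
  (my $file = $name) =~ s/\W+/_/g;   # derive a safe filename
  open my $out, '>', "$file.txt" or die "Cannot write $file.txt: $!\n";
  print $out $entry;
  close $out;
}
```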
The result: Campaign:Monsters! A wiki for all of us. Add monsters that allow free distribution – Open Gaming Content, Creative Commons, your own stuff (if you’re willing to make it free), add tags, and make it grow. Use it to browse, find some inspiration, prepare your adventures and populate your wandering monster tables.
I’ve been working on a submission form for the Old School RPG Planet. Today I added another little feature. This is how I like to develop code. No time pressure. One little step at a time. Keep polishing it.
The planet uses Planet Venus to collect the RSS and Atom feeds of many of the Old School RPG blogs out there. Planet Venus allows you to get the list of feeds via a URL. I’m hosting the list of feeds on Campaign Wiki itself (raw format). As you can see, the format doesn’t look nice.
The thing I did, therefore, was to write a script that makes it easy for people who are not into the technical details to submit new blogs. It also makes it easier for me to submit new blogs!
The things it handles:
- If you submit an invalid URL, it will prepend http:// and try again.
- If it looks like we already have a similar looking feed on our page, it requires a confirmation by the user.
- If you submit a web page, it will look for alternative links with MIME type text/xml (just making sure) and allow the user to pick one of them.
- If you submit a feed directly instead of a web page, it uses that.
- If the feed you picked is served with an invalid content type, it is rejected.
- It extracts the title of the feed and adds it to the wiki page, sorting all the entries alphabetically.
I think it’s pretty cool.
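The autodiscovery step is the heart of it. A sketch of how it might work using LWP::UserAgent (the actual script handles more cases, as the list above shows):

```perl
# Sketch of the feed autodiscovery step using LWP::UserAgent.
# Not the actual submission script; the regex-based link scan is
# a simplification.
use strict;
use warnings;
use LWP::UserAgent;

sub find_feeds {
  my $url = shift;
  $url = "http://$url" unless $url =~ m!^https?://!;  # fix bare URLs
  my $response = LWP::UserAgent->new->get($url);
  return unless $response->is_success;
  my @feeds;
  # scan for <link rel="alternate" ...> elements pointing at feeds
  for my $link ($response->decoded_content
                =~ /(<link[^>]+rel=["']alternate["'][^>]*>)/gi) {
    my ($type) = $link =~ /type=["']([^"']+)["']/i;
    my ($href) = $link =~ /href=["']([^"']+)["']/i;
    push @feeds, $href
      if $href and $type and $type =~ m!(?:rss|atom)\+xml|text/xml!;
  }
  return @feeds;
}
```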
If you look at the interface, you’ll note that it has a link to its own source code. I love this little Perl trick:
Put __DATA__ at the end of the source file. Usually you would add actual data there, and the script could read it using the DATA file handle. To serve the source code, use:

```perl
seek DATA, 0, 0;
print "Content-type: text/plain; charset=UTF-8\r\n\r\n", <DATA>;
```

This resets the current position of the DATA file handle to the beginning of the source file. Tadaa!
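And here’s a tiny self-contained CGI sketch of the whole trick (the source parameter name is just for illustration):

```perl
#!/usr/bin/env perl
# Self-contained CGI sketch of the __DATA__ trick; the "source"
# parameter name is just for illustration.
use strict;
use warnings;
use CGI qw(param);

if (param('source')) {
  # Rewind DATA to the very beginning of this file and serve it.
  seek DATA, 0, 0;
  print "Content-type: text/plain; charset=UTF-8\r\n\r\n", <DATA>;
} else {
  print "Content-type: text/html; charset=UTF-8\r\n\r\n";
  print qq{<p><a href="?source=1">View the source code</a></p>\n};
}

__DATA__
Anything down here is readable via the DATA file handle.
```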