Journal

2013-12-01 Anonymizing the Oddmuse log files

I’ve just implemented a new non-optional Oddmuse feature: I’m removing all hostnames and IP numbers from older log entries. Log entries older than 90 days are stored in a separate log file in order to speed up the generation of RecentChanges. During maintenance, these entries are copied from one file to the other, and I’m now taking advantage of this copying step to remove the hostname or IP number.

Basically, as a person I dislike invasions of privacy, and I feel that software engineers invite them in some small way because it is often easier to do so. We often model things to never forget, e.g. version control.

One of the important pages on Meatball was ForgiveAndForget. Forgetting is human.

At the same time, with Snowden and the NSA in the news, I feel that as a host I’m more comfortable if I cannot provide the logs an agency might be looking for.

Furthermore, I’ve had a very small number of emails from users asking me to remove their hostnames from the log files because they had accidentally edited the wiki from work. Pages containing their hostname would eventually be deleted, but the log entries were not. Now the entries are anonymized, and people can feel safer knowing that these traces will eventually disappear.

The idea is that you only need hostnames or IP numbers to fight spam and vandalism: add regular expressions matching the hostname or IP number of spammers or vandals to your list of banned hosts and prevent the attack from continuing. After a few days, however, this information is no longer required. In this day and age of privacy invasion, I think software should take a proactive stance. The log entries must be anonymized.

The existing log file for the older entries is not changed automatically. If you want to do the right thing, there’s a script called anonymize.pl in the contrib directory that does just that.

Just call it in your data directory. Example:

alex@psithyrus:~/oddmuse$ perl ~/src/oddmuse/contrib/anonymize.pl 
Wrote anonymized 'oldrc.log'.
Saved a backup as 'oldrc.log~'
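
In case you’re wondering what such a script boils down to, here’s a minimal sketch. This is not the actual anonymize.pl; in particular, the field separator and the position of the host field are assumptions, so use the real script from the contrib directory on live data.

#!/usr/bin/perl
# Minimal sketch, not the actual contrib/anonymize.pl: blank the host field
# of every entry in the old log file. The field separator and the index of
# the host field below are assumptions, not Oddmuse's real log format.
use strict;
use warnings;

my $file = 'oldrc.log';
my $fs = "\x1e";      # assumed field separator
my $host_field = 4;   # assumed position of the hostname/IP field

open my $in, '<', $file or die "Cannot read $file: $!";
my @entries = <$in>;
close $in;

rename $file, "$file~" or die "Cannot save backup: $!";

open my $out, '>', $file or die "Cannot write $file: $!";
for my $entry (@entries) {
  chomp $entry;
  my @fields = split /\Q$fs\E/, $entry, -1;
  $fields[$host_field] = '' if $#fields >= $host_field;   # drop host or IP
  print $out join($fs, @fields), "\n";
}
close $out;

print "Wrote anonymized '$file'.\n";
print "Saved a backup as '$file~'\n";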

See Oddmuse:Upgrading Issues for a more technical explanation of what’s going on.


2013-08-14 Comments on this Wiki Blog

I’ve made a few changes. Let’s see whether it works out.

  1. when viewing ordinary pages, previous comments and the comment form are shown ← this is what I wanted to add
  2. people who follow the RSS feeds will therefore easily find the obvious comment form when following the link ← this is what I hope to achieve
  3. when leaving a comment, you end up on the comment page, which might be confusing
  4. when looking at RecentChanges, which has some extra magic associated with its name, the comments are not shown
  5. when looking at older revisions, the page history, and many other variants, the comments are not shown
  6. when looking at a journal page such as Diary, the comments are not shown
  7. when looking at a journal page such as Diary, you can still see links to inline the comment page
  8. inlined comment pages don’t come with an automatic comment form—you need to click on the “Add comment” link at the end
  9. comment pages are still excluded from the usual feeds (I wonder whether I should change this)

I think the Wiki + Blog combo still works. I’m just trying to make it less weird. :)

Source code for your config file, if you’re an Oddmuse user. The source code below also includes my Google +1 setup (Oddmuse:Google Plus One Module) because my code needs to avoid the situation where a page shows two +1 buttons. As for comments within journals, I use Oddmuse:Dynamic Comments Extension.

# Google +1 list

push(@MyAdminCode, sub {
       my ($id, $menuref, $restref) = @_;
       push(@$menuref, ScriptLink('action=plusone',
				  T('Google +1 Buttons'),
				  'plusone'));
     });

$Action{plusone} = \&DoPlusOne;

sub DoPlusOne {
  print GetHeader('', T('All Pages +1'), ''),
    $q->start_div({-class=>'content plusone'});
  print $q->p(T("This page lists the twenty last diary entries and their +1 buttons."));
  my @pages;
  foreach my $id (AllPagesList()) {
    push(@pages, $id) if $id =~ /^\d\d\d\d-\d\d-\d\d/;
  }
  splice(@pages, 0, $#pages - 19); # last 20 items
  print "<ul>";
  foreach my $id (@pages) {
    my $url = ScriptUrl(UrlEncode($id));
    print $q->li(GetPageLink($id),
		qq{ <g:plusone href="$url"></g:plusone>});
  }
  print "</ul>";
  print $q->end_div();
  PrintFooter();
}

# two step Google +1 button to protect your privacy
# http://my.opera.com/QuHno/blog/adding-the-google-1-button-to-a-webpage-without-violating-the-users-privacy

*MyOldGetCommentForm=*GetCommentForm;
*GetCommentForm=*MyNewGetCommentForm;

sub MyNewGetCommentForm {
  return MyOldGetCommentForm(@_) . q{
<script type="text/javascript">
function loadScript(jssource,thelink) {
   var jsnode = document.createElement('script');
   jsnode.setAttribute('type','text/javascript');
   jsnode.setAttribute('src',jssource);
   document.getElementsByTagName('head')[0].appendChild(jsnode);
   document.getElementById(thelink).innerHTML = "";
 }
 var plus1source = "https://apis.google.com/js/plusone.js";
</script>
<p id="plus1">
  <a href="javascript:loadScript(plus1source,'plus1')">
    <img src="/pics/plusone-h24.png" alt="Show Google +1" />
  </a>
</p>
<!-- <g:plusone></g:plusone> -->
<div class="g-plusone" id="my_plusone"></div>
<script type="text/javascript">
  document.getElementById("my_plusone").setAttribute("data-size", "medium");
  document.getElementById("my_plusone").setAttribute("data-href", document.location.href);
</script>
};
}

# make sure journal pages set a global variable which we then use to
# hide the comment form

*MyOldPrintJournal = *PrintJournal;
*PrintJournal = *MyNewPrintJournal;

my $MyPagePrintedJournal;

push(@MyInitVariables, sub {
       $MyPagePrintedJournal = 0;
     });

sub MyNewPrintJournal {
  $MyPagePrintedJournal = 1;
  return MyOldPrintJournal(@_);
}

# list comments and comment form at the bottom of every normal page

*MyOldPrintFooter = *PrintFooter;
*PrintFooter = *MyNewPrintFooter;

sub MyNewPrintFooter {
  my ($id, $rev, $comment) = @_;
  if (!$MyPagePrintedJournal
      and GetParam('action', 'browse') eq 'browse'
      and $id and $CommentsPrefix
      and $id ne $RCName
      and $id !~ /^$CommentsPrefix(.*)/o) {
    my $target = $CommentsPrefix . $id;
    my $page = '';
    $page = PageHtml($target) if $IndexHash{$target};
    print $q->div({-class=>'comment'},
		  $q->h2(T('Comments')),
		  $page);
    # don't include Google +1 button twice
    print MyOldGetCommentForm("$CommentsPrefix$id", $rev, $comment);
  }
  MyOldPrintFooter(@_);
}


2013-01-23 Security of Code Downloaded from Online Sources

In the anonymous rant The Wikemacs Experiment: 300 Days Later, the author claims “The biggest problem is that it is insecure. […] Anyone can edit any of the pages that contain Elisp code.” The same sentiment was expressed by Alex Bennée in a comment on Google+: “What is really needed is a way to be sure that the source for the emacs extension your updating hasn’t been subverted by someone else with ill intent.”

I said:

Experiences and ideas of “what is really necessary” vary. As for myself, I’ve installed code from all over the Internet without reviewing the source. Installing it from a gist or git repo is hardly a different experience. If you want to figure out whether a source is trustworthy, you do the usual things: do people link to the code, how long has it been around, what about recent checkins, that sort of thing. Or you get into the crypto business of signing releases.

You could of course say that every day that passes without a problem increases our false sense of security… I have no answer to that. All I can say is that if security is your problem, using gists and github is not the solution (as you say yourself). The source of the insecurity is our habits, our culture of downloading and installing anything and everything. I’m not sure how you’ll ever make sure “that the source for the emacs extension your updating hasn’t been subverted by someone else with ill intent.” That seems pretty impossible to me unless you limit yourself to the core Emacs distribution (and even that’s not a guarantee).

People on the #emacs channel keep asking “is there a way to do X” and thus my impression is that finding stuff is a more pressing problem. I feel that encouraging people to create a page on the wiki saying “here is code to help you do something” is the solution to that problem.

But then again, I guess we all differ in what we consider to be the most pressing problem.

Alex Bennée then correctly points out that using “a user locked solution like a gist or git repo you can at least be assured what you’re installing has come through one person who you’ve trusted to a degree before.” I guess that’s true. We’ll see whether people start switching over to using gists instead of editing wiki pages. I said in an earlier comment:

I added gist support […] because it was easy to do, not because it will encourage existing authors to move their elisp code on wiki pages to github. If at all, it might encourage future elisp authors to transclude a gist… But then again, there’s nothing preventing them from linking to a gist right now. Perhaps it’s also a generational thing. People that have been living without github and gists don’t feel a particular need to start using it.

Interesting times. :)

Comments on 2013-01-23 Security of Code Downloaded from Online Sources

Hi Alex,

first of all - thank you very much for Oddmuse! I’m using it for both my personal site and my department’s site. It has some rough edges, but overall I find it a very nice tool, and I did recommend it to a few people.

Now to the point: I was just wondering whether it might be a good idea to use stackoverflow with [emacs] tag (which you mentioned in your earlier post), or maybe even start something like emacs.stackexchange.com? I’m not sure whether it could solve any problems you mentioned, but (at least for the more paranoia-oriented people) it might feel a bit more secure, with all the comments, up- and downvotes etc. I don’t know. (Personally, I didn’t use any actual code from Emacswiki, but I guess it would not be a huge problem for me.)

mbork 2013-01-23 20:55 UTC



AaronHawley
Nothing has really changed. Previously, Lisp code was shared between a few Emacs hackers and the intention was to work on improving it and get it integrated into Emacs. The GNU Project was the trusted authority. They distributed the useful contributions. Obviously, that hasn’t scaled well. I think it’s perfectly reasonable for Emacs newbies to distrust code they can’t read that was written by hackers they don’t know.

AaronHawley 2013-01-23 21:56 UTC



AlexSchroeder
Thank you for the kind words, Marcin. I think a lot of people are already using Stackoverflow for Emacs questions. I find the site incredibly useful when I’m at work (except my work is hardly ever related to Emacs, unfortunately).

I also agree with Aaron. Good point regarding the GNU Project being the trusted authority.

AlexSchroeder 2013-01-23 22:39 UTC



Thomas Koch
I’ve collected examples of manipulated code or binaries: http://www.koch.ro/blog/index.php?/archives/153-On-distributing-binaries.html

I don’t think that it’s too hard to get a gpg key, go to a signing party at your next software conference and sign all your releases. It’s really quite easy. And you can use signed git tags on github or any other git hosting platform to give your users very strong confidence that they can trace you back in case you provided bad code.

Thomas Koch 2013-01-24 12:57 UTC



AlexSchroeder
True, it is not “too hard” for many people. But when I write a little throw-away piece of code like EmacsWiki:1000 Words it’s a bit much to ask. I’ve never been to a key signing party. I never go to software conferences. I post it on the wiki. And when I write another little piece of code, I do it again. That’s why my code ends up on the wiki and not on github. I keep hoping people will volunteer to maintain code I wrote and either add it to Emacs itself or maintain it in decent repositories. I just don’t see myself doing it. I like the division of labor between programming and packaging.

AlexSchroeder 2013-01-25 10:24 UTC



Thomas Koch
It might be a bit too much to sign a little script of 10 lines that I can quickly review. I was rather referring to big software projects. However once you’ve got a gpg key you can sign a small code snippet just as easily as you can sign an email.

Thomas Koch 2013-01-26 10:11 UTC



AlexSchroeder
I think now the discussion turns to the question of where to draw the line. There’s exactly one large project that is exclusively hosted on Emacs Wiki, I think: EmacsWiki:Icicles. Others, such as EmacsWiki:Anything, moved to github. Others, like EmacsWiki:Gnus or EmacsWiki:BBDB, were never hosted on Emacs Wiki to begin with. Then there is the large collection of unofficial extensions like the ones listed on EmacsWiki:rcirc. Do they count as a single project or is each file a separate one? From my point of view, each one is a separate project. I just use two of them myself. As such, they are not really “a little script of 10 lines” but they don’t feel like big software projects, either.

I think I’m with Aaron. Emacs Wiki mostly hosts code that one could view as “incubator” stuff: things that haven’t made it into their own repositories or into Emacs itself. Thus, asking for version control and signed releases is—in the context of code hosted on Emacs Wiki—asking for the right thing at the wrong time. It’s premature for those small single-file projects that are hanging in limbo somewhere between ten lines and inclusion into Emacs or independence as separate projects.

AlexSchroeder 2013-01-26 11:46 UTC



dim
Using El-Get you can easily add a checksum to your setup so that you only automatically get code from EmacsWiki with that checksum. So if you get to a new machine or re-install your Emacs setup from scratch, and the newly downloaded EmacsWiki code does not match your checksum, El-Get will refuse to load it for you. You can get the checksum interactively using the M-x el-get-checksum command.

dim 2013-01-27 21:13 UTC



AlexSchroeder
Excellent feature!

AlexSchroeder 2013-01-28 07:23 UTC

2013-01-22 Gists on Emacs Wiki

I just read a rant about Emacs Wiki and its alternative: The Wikemacs Experiment: 300 Days Later. Check out How Emacs Wiki Works for some context from my point of view. Anyway, the anonymous author says: “Maybe someone could work with Alex to add gist-style code snippets to Oddmuse, and make it so that code can be cited inline on Wiki pages, so that anyone visiting the page is automatically looking at the most up to date version of the code.”

Let’s take this random gist as an example. Click on the “view raw” button. Use <include text "..."> to transclude it:

(setq abg-elisp-external-dir
      (expand-file-name "external" abg-elisp-dir))

; ...

; Add external projects to load path
(dolist (project (directory-files abg-elisp-external-dir t "\\w+"))
  (when (file-directory-p project)
    (add-to-list 'load-path project)))

Actually, I added an Emacs Wiki feature using two lines of code that add support for fancy inclusion:

<include gist "https://gist.github.com/1236665">

It only works over there, however. See EmacsWiki:Gists.

Anyway, the same also works for Lisppaste:

<include text "http://paste.lisp.org/display/134703/raw">

Results in:

;; Set XTERM resources as so
;; 
;; metaSendsEscape: false
;; altSendsEscape: false
;; eightBitInput: true

;; Verify with cat > /dev/null command that pressing alt-a
;; alt-b and so on produces single >128bit char (will look
;; like a with a hat

;; once above is working in emacs do

;; Prevent pressing esc O from triggering binding
(define-key (get-input-decode-map) "\eO" nil)

;; tell emacs Meta is 8th bit
(cond ((fboundp 'set-input-meta-mode)
      (set-input-meta-mode t))
    (t (set-input-mode t nil t)))

I don’t think there’s a nice way to include the colored version, unfortunately.

Update: I added support and minimal Lisp highlighting for the following:

<include lisppaste "http://paste.lisp.org/display/134703">

It only works over there, of course.


2012-05-12 Stupid Leeches

By chance, I run my leech-detector script and find the following:

aschroeder@thinkmo:~$ leech-detector < logs/access.log | head
           IP Number       hits bandw. hits% interv. status code distrib.
      184.82.236.206      14368   118K  17%    4.8s  301 (49%), 200 (49%), 404 (0%), 302 (0%), 403 (0%)
      125.199.78.207       3419    11K   4%   20.2s  200 (52%), 404 (43%), 302 (2%), 400 (1%), 301 (0%), 304 (0%)
                 ...

What the hell is this guy doing causing 17% of all my hits?

aschroeder@thinkmo:~$ tail -f logs/access.log | grep 184.82.236.206
184.82.236.206 - - [13/May/2012:01:44:12 +0200] "GET /emacs?action=browse;id=icicles.el;revision=835 HTTP/1.0" 301 447 "http://www.emacswiki.org/emacs/?action=rc&all=1&showedit=1&from=1&rcuseronly=DrewAdams" "Wget/1.12 (linux-gnu)"
184.82.236.206 - - [13/May/2012:01:44:16 +0200] "GET /emacs/?action=browse;id=icicles.el;revision=835 HTTP/1.0" 200 127350 "http://www.emacswiki.org/emacs/?action=rc&all=1&showedit=1&from=1&rcuseronly=DrewAdams" "Wget/1.12 (linux-gnu)"
184.82.236.206 - - [13/May/2012:01:44:21 +0200] "GET /emacs?action=browse;diff=2;id=icicles-cmd2.el;diffrevision=55 HTTP/1.0" 301 462 "http://www.emacswiki.org/emacs/?action=rc&all=1&showedit=1&from=1&rcuseronly=DrewAdams" "Wget/1.12 (linux-gnu)"
184.82.236.206 - - [13/May/2012:01:44:25 +0200] "GET /emacs/?action=browse;diff=2;id=icicles-cmd2.el;diffrevision=55 HTTP/1.0" 200 482243 "http://www.emacswiki.org/emacs/?action=rc&all=1&showedit=1&from=1&rcuseronly=DrewAdams" "Wget/1.12 (linux-gnu)"

Ahhh! A stupid leech using wget to pull the entire site, following all the links, ignoring the rel=“nofollow” rules… Maybe a dude who didn’t read the WikiDownload page. It also looks to me as if those links are disallowed in the site’s robots.txt file.

Oh well. The solution, unfortunately, seems to involve editing cgi-bin/.htaccess and adding the following:

# using wget to get everything including actions, old stuff, etc.                                                                            
Deny from 184.82.236.206

:(


2012-03-24 How Emacs Wiki Works

(TL;DR: People who don’t like the wiki as it is ought to look at the official Emacs documentation instead. I wrote this so that I’d have something to link to in the future. This post was inspired by EmacsWiki:2012-03-20.)

Every year or so, I read about suggested changes to the Emacs Wiki. The complaints are the same, year after year.

  1. The pages are confusing.
  2. The code snippets are wrong.
  3. The site is badly organized.
  4. The information is out of date.

The solutions invariably have nothing to do with the problem.

  1. Switch to Mediawiki, the software used to run Wikipedia.
  2. Use a database or a distributed version control system as the backend.
  3. Change the text formatting rules to Markdown, Mediawiki markup, or something else that is better known.
  4. Separate discussion from the main page.
  5. Delete stuff that is outdated.
  6. Fix errors.
  7. Organize.
  8. Moderate.

Why are these suggestions not helpful?

The first problem is the mistaken belief that technology can substitute for social change. Yes, the wiki is badly organized and many of the pages are outdated. Changing the wiki engine, the backend or the formatting rules will not change this, however.

The backend used by the wiki engine can influence performance and resource use, and it can make the software harder or easier to maintain and back up – but it will not induce somebody to edit a messy page and fix it.

The second problem is the mistaken belief that moderation can be commanded. You can complain about bad editing and a lack of moderation all day. But since nobody is paying people to do a boring job, we must rely on obsessive-compulsive people to fix typos and tag pages.

Maybe we could attract more people by gamifying the experience—offer rewards, badges, scores. But Stack Overflow already does this. It’s the best social question answering machine currently known. The wiki doesn’t need to imitate something better. The wiki needs to do what it does best. We’ll come to that.

The third problem is the mistaken belief that quality control and volunteers go well together. Just compare Wikipedia and Citizendium and consider the animosity generated by Deletionism on Wikipedia. How will you encourage authors to contribute if you are telling them that their contributions are lacking the quality you are looking for instead of simply accepting their text and working on it?

You fight spam, you rework text occasionally, you encourage others, you welcome newbies, you lead by example. That’s how you lead.

An abrasive personality, radical change involving a lot of work—those are not the tools you are looking for.

Let me return to the issue of commanding change. Things people have said:

“the content editing should be one with the goal of creating a comprehensive, coherent, article that gives readers info or tutorial about the subject.” – Xah Lee (2008)
“I favor a major reorganization of the wiki material.” – Neil Smithline (2011)
“The articles are littered with crappy advice confusing beginners, have little structure and are filled with ridiculous questions” – Bozhidar Batsov (2012)

The critics can be unhappy about it all they want, and they can complain about it all they want—but in the end, one needs to understand the forces at work, here. There is no chain of command.

It works just like a free software project. If it doesn’t scratch someone’s itch, nobody is going to add it. I think it’s a fundamental issue with our business model: there is no pay for boring stuff. Plus, documentation is of no direct use for anything—unlike code. Thus, people are mostly motivated to keep their own code and its documentation up to date. I don’t think there is anything we can do about that. That’s why the Emacs Wiki Mission Statement does not mention organization and quality. It cannot be commanded.

Once we accept that this is the sand upon which we are building our house, we necessarily need to scale down our expectations. Personally, I think the wiki exists somewhere between the official documentation, Stack Overflow, the FAQ, the newsgroups, the mailing lists, and IRC. It’s certainly nowhere near the quality of organization and writing that the Emacs documentation has—and I don’t think this is the right medium to aim for this level of quality. I think the people willing to invest that amount of energy to write quality stuff ought to be writing the real Emacs documentation—and they probably are.

What remains are the people using Emacs Wiki for their own pet projects, questions asked, answers given, sometimes organized, sometimes rewritten, sometimes linked to the rest of the site.

Wikipedia works because of its universal appeal. When I added an image to an obscure Indian temple we visited when I was staying in Mysore, the photo was terrible. But it was a start, and enough people cared about the page and it grew, and it found people to tend it, and now it’s big and beautiful.

There just aren’t enough Emacs users and authors out there and the best of us will be contributing to the official Emacs documentation. The wiki exists somewhere between the official documentation and the mailing lists. Lower your expectations.

Given all that, why does the wiki exist at all?

When I started it, I had several reasons:

  1. The wikis I knew, C2 and Meatball Wiki, had attracted a particular community and they had created a particular subculture I liked. We talked about the Wiki Now and many other things that made wikis work. The medium itself was interesting.
  2. I had been posting on the newsgroups for a long time, and slowly I realized that the same questions kept being asked again and again. The newsgroups and mailing lists were failing as a medium because they were ephemeral. Sure, we kept telling people to search the archives. But the medium afforded asking questions instead of searching.
  3. When I looked for Frequently Asked Questions, I found a document online, maintained by a single person. This person was a bottleneck. The FAQ updated slowly.
  4. At the time I was getting into Internet Relay Chat. On IRC, conversation is even more ephemeral than on the mailing list. This time, however, “searching the archives” was out of the question. We needed our own archive. And thus I started answering questions on IRC and posting the answers on the wiki.

I think this last point bears consideration: I was creating pages or adding information to pages because it was pertinent on IRC. An index, linking to the page, categorization, returning to the page later and reworking it, all these quality related tasks were not pertinent on IRC. All I needed was a pastebin that I could go back to and rewrite if I felt like it. Often I did not—and I still don’t.

The wiki being on the web, updated every now and then, with pertinent answers to specialized questions, unorganized and raw, ended up being a good resource for the search engines out there. These search engines bring new people to the site: people who don’t understand how wikis work in general, or how this wiki grew to be what it is in particular. They are shocked. So many pages outdated! Such a mess in style and quality!

I think those people are better served reading the official documentation. They don’t want this mess, they don’t benefit from its loose rules, they don’t understand how cool it is to have a site with no login required. They are better served elsewhere.

I’m sure that one day the Emacs Wiki will have become irrelevant. But just like the old newsgroups never disappeared entirely, so will the wiki transform into something else and remain part of our information landscape.

Perhaps one of the Emacs Wiki critics will one day set up an alternate site, pull all the pages (more than 8500 pages last time I checked), extract the quality content—or rewrite it from scratch—and produce something better.  Perhaps they will build an organization that can keep the quality up, encourage new authors to join, provide more value to their readers. But I don’t think complaining about the existing Emacs Wiki is a step in the right direction. Build it, and they will come—elsewhere.

Comments on 2012-03-24 How Emacs Wiki Works


SeanO

> the mistaken belief that technology can substitute for social change

Well, simply having a “talk” page for each wiki page (with a big link at the top) would let people have conversations about those pages without cluttering them up. And having an actual, complete revision history would help someone figure out what happened to a completely messed-up page. But EmacsWiki doesn’t seem to have those things. Its technology is too primitive to reasonably support – much less encourage – a workable society.

> the mistaken belief that moderation can be commanded

As Wikipedia has amply demonstrated, there are plenty of obsessive-compulsive people online. If anything, moderation needs to be limited.

SeanO 2012-03-24 03:44 UTC



Bozhidar
One of the critics already started an alternative wiki - http://wikemacs.org :-)

Other than that - I’m with SeanO on this one. And with 8500 pages or so, it’s easier to save the worthwhile articles than to revisit all the material.

Bozhidar 2012-03-24 05:48 UTC



AlexSchroeder
Sean, if the lack of a complete revision history is what held you back from reworking and reorganizing, then I guess you’re right. I am keeping all the logs, but I discard the older revisions after two weeks, and I have never felt the lack.

The addition of Talk pages was discussed a while ago—it was one of the items on the suggestions page—but at the time we had two votes in favor and two votes against them. Nobody else seemed to care. I’ll be surprised if their mere existence improves the pages. But I guess we’ll see, now. Good luck to you all!

After a quick look at the site I’ll suggest that you should add licensing terms as quickly as possible. As it stands, you cannot copy anything from Emacs Wiki. That would require a copyleft license.

AlexSchroeder 2012-03-24 07:16 UTC



SeanO
Alex – I’d say my laziness and your sneering condescension were greater obstacles. But if I don’t check in for more than 2 weeks (quite likely, given how little time I have for Emacs these days), I’d still like to be able to see what happened if something I cared about (e.g. the Perl page or my homepage) went bad. (How much, to an order of magnitude in Euros and hours of work, would it cost you to keep a complete revision log, by the way?)

I guess the Talk page issue came and went in a revision window when I didn’t have time to hang out here.

I’m also tired of your pseudo-legal threats toward the content of the wiki (which is under GPL2, not GFDL).

(This comment is protected by the GFDL, I guess. Whatever that means.)

SeanO 2012-03-24 08:14 UTC



AlexSchroeder
I wonder where you felt my sneering condescension. Perhaps in my replies to people I felt were trying to tell me what to do in my free time and with my money? I also don’t think I threatened you in any way. Perhaps you missed the problems I had with changes to the Emacs Wiki license in its early days. I was trying to help you and Bozhidar avoid making the same mistake.

The logs are there for all to see if you follow the links. Here’s the SiteMap history, for example. Keeping all the old revisions would cost me nothing – it’s simply a setting. I prefer it this way. As I said, I think keeping the old revisions provides no benefit and adds a number of small drawbacks, such as needing administrators to permanently hide particular revisions that contain material deemed problematic from a legal perspective. As it stands, I can undo these edits and with time, they are gone. Another issue is that I like the idea of a right to be forgotten. The original C2 wiki kept no revisions at all. I think that the only reason old revisions need to be kept at all is peer review and anti-spam and anti-vandalism measures. For those tasks, a small time window is sufficient. After all, wiki pages are not code. We don’t need to look through the history of a page to find when bugs were introduced and by whom.
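
For what it’s worth, the setting in question looks roughly like this in the wiki config file (assuming the $KeepDays option; the name may differ depending on your version):

# Hypothetical config snippet: the $KeepDays setting (assumed name, check
# your Oddmuse version) controls how long old page revisions are kept.
$KeepDays = 14;   # expire old revisions after two weeks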

AlexSchroeder 2012-03-24 10:30 UTC

P.S.: EmacsWiki:WikiDownload links to a CVS repository of the source files hosted on the wiki, a subversion repo with daily snapshots of all the wiki pages, and a new, up to date git repo of all the pages with full history. I guess in a way my preference regarding the right to be forgotten is already moot since deleted stuff can be pulled out of the archives. This just hides deleted info from casual visitors.



PhilHudson
Excellent riposte, Alex. We who criticize the wiki should pay attention. I would like to thank you for this consistently useful resource and your dedication. I certainly don’t detect any sneering on your part. Having said that, I do think there are a lot of real and fixable problems with the wiki, the worst being the hosting of code with neither “proper” SCM nor automatic notification of changes. Once you’ve used LaunchPad and (especially) github, this is just not tolerable. So I’m going to see what if anything I can do to help any new project.

PhilHudson 2012-03-24 11:29 UTC



AlexSchroeder
I see the problem! I think people like me don’t feel bad about keeping code that consists of a single file on the wiki because we usually don’t think of these files as requiring maintenance. After all, that’s how gnu.emacs.sources used to work. The wiki has the benefit of providing a stable URL, but the process remains essentially the same: post & forget, possibly have discussions with other people via email, followed by another post & forget.

To me, creating a separate project on Savannah or Source Forge is an unacceptable overhead for files like EmacsWiki:rcirc-color.el or EmacsWiki:rcirc-controls.el. But if somebody else felt like taking those files, putting them up on some other site – excellent! At first, color-theme.el was hosted on Emacs Wiki. Eventually somebody took it, moved it elsewhere, and started a real project. Great!

I still think that the Emacs Wiki can act as a low barrier-to-entry incubator for all those small little files that need a place on the web. I don’t read gnu.emacs.sources anymore, and I don’t think many other people do. At the same time, I think there still are a lot of people without their own web pages out there. They can’t post code on Facebook or Google+ and I imagine uploading code to Wordpress and Blogspot sites is also unwieldy. For all those people, the Emacs Wiki offers an alternative. It’s a bit better than gnu.emacs.sources and Lisppaste but a far cry from a software forge.

If people took popular code from the wiki to a forge and repackaged it as a real project, that would be great.

AlexSchroeder 2012-03-24 12:11 UTC


Phil, I just remembered EmacsWiki:Git repository. Maybe that helps? I know Jonas is very enthusiastic about it and has been pestering me for weeks when I dragged my feet. ;) I’m sure he’d appreciate help or some nice words.

AlexSchroeder 2012-03-24 19:07 UTC


“the mistaken belief that technology can substitute for social change”

Github was an example of technology that brought about a change in social behaviour.

– Phil Jackson 2012-03-26 12:18 UTC



Edward O'Connor
Excellent post, Alex. Long live the EmacsWiki! :)

Edward O'Connor 2012-03-26 23:35 UTC



AlexSchroeder
Thanks, hober!

Phil, regarding Github: I’m not much of a github user. I see that git and github bring a lot of relevant new features to the table. Compared with other version control systems they facilitate forking on a grand scale. Do you feel that using Mediawiki introduces a similar set of new features that will revolutionize how wiki pages are edited and organized? I don’t see it, which is why I cannot imagine that Xah’s and Bozhidar’s idea of switching to Mediawiki will in fact help solve the quality issues they have with Emacs Wiki.

AlexSchroeder 2012-03-28 15:32 UTC



AlexSchroeder
Just saw this: The Wikemacs Experiment: 300 Days Later.

AlexSchroeder 2013-01-22 11:57 UTC


We are grateful for the wiki. It’s the best we have. Thank you for your work.

– Visitor 2014-05-08 17:02 UTC

2012-03-13 Campaign Wikis

Recently Calithena of Fight On asked on Dragonsfoot: Do any of you have your campaigns on line?

I said:

  • Ymir’s Call – I play every other Monday evening in a Barbarians of Lemuria campaign with DM Florian. Last session we switched to Crypts & Things. German campaign wiki.
  • Hagfish Tavern was the D&D 3.5 adventure path Rise of the Runelords we played before Ymir’s Call.
  • Kurobano And The Dragons was the M20 game that switched to D&D 3.5 before we started Hagfish Tavern.
  • The Alder King – I run a Solar System game set in the Wilderlands of High Fantasy on two Sunday afternoons a month, in German. This campaign used D&D 3.5 and English before we switched to Solar System RPG. Right now it’s on a short hiatus as we give the Great Pendragon Campaign a try for two or three sessions. I’ve heard one player mumble that maybe we should try it for a bit longer, though.
  • Durgan’s Flying Circus – I play in a monthly Harp game with GM Stefan on another Sunday each month.
  • Desert Raiders was the Pathfinder RPG adventure path Legacy of Fire we played before Durgan’s Flying Circus.
  • The Golden Lanterns was the D&D 3.5 adventure path Shackled City we played before Desert Raiders.
  • Fünf Winde – I run a Labyrinth Lord game set in the Wilderlands of High Fantasy on one Tuesday each month. I used to run two separate groups in the same campaign area, but ended up merging the two groups because one of the two kept shrinking. Both in German.
  • Wilderlande – I run a Labyrinth Lord campaign set in a Points of Light campaign setting for my best friend and his three kids for two hours on a Friday evening every month. Also in German.
  • Lied vom Eis is a Song of Ice and Fire RPG campaign I used to play in; mostly in German.
  • Die Reise nach Rhûn was a Rolemaster campaign in Middle Earth that switched to Legends of Middle Earth we played before Lied vom Eis. German.

There’s more… There must be at least two short Burning Wheel campaigns on that site (Burning Six, Campaign:Krythos). And a Mongoose Traveller game that switched to Diaspora (Campaign:Kaylash). And a wiki I used for my DM notes when running the Kurobano campaign (Campaign:Attaxa). And a Forgotten Realms campaign using D&D 3.5 (Sohn des schwarzen Marlin). And another D&D 3.5 sandbox (Campaign:Grenzmarken).

I totally recommend keeping notes online! :D

(I run the Campaign Wiki site which explains why I’m so enthusiastic about it.)

2012-02-20 Network Traffic

Back in 2009 I wondered about network traffic. Decided to take another look. Yesterday’s log file via bot-analyze:

aschroeder@thinkmo:~$ bot-analyze < logs/access.log.1 | head
    ----------------------------Bandwidth-------Hits-------Actions
                     Everybody      2177M     113317
                      All Bots       274M      16154   100%     9%
    --------------------------------------------------------------
                www.google.com     92390K       5542    34%     2%
                    yandex.com     63545K       3954    24%     9%
                           bot     33803K       2309    14%     1%
                superfeedr.com     31740K       1091     6%    63%
                  www.bing.com     14161K        525     3%     4%
                    ahrefs.com      9251K        465     2%     2%

I’m surprised – bots are responsible for 14% of all my hits. That’s better than what I saw in 2009.

The way I have set up my web pages, bots should not crawl “actions” (URLs containing the action parameter) – and yet 9% of them do it, and superfeedr does it most of all. I should investigate, I guess.
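
A rough way to chase this down would be a little script along these lines (my own sketch, not the bot-analyze script; it assumes the common combined log format, with the request as the first quoted field and the user agent as the last):

#!/usr/bin/perl
# Sketch: count hits on action URLs per user agent, assuming a combined log
# format where the request is the first quoted field and the agent the last.
use strict;
use warnings;

my %actions;
while (<>) {
  my @quoted = /"([^"]*)"/g;                # all quoted fields on the line
  next unless @quoted >= 2;
  my ($request, $agent) = ($quoted[0], $quoted[-1]);
  $actions{$agent}++ if $request =~ /[?;&]action=/;
}
for my $agent (sort { $actions{$b} <=> $actions{$a} } keys %actions) {
  printf "%6d %s\n", $actions{$agent}, $agent;
}

Something like perl count-actions.pl < logs/access.log.1 | head would then show the worst offenders.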


2011-10-24 Old School D&D Monsters Online

http://farm4.static.flickr.com/3022/2887255725_74df13bd3f.jpg

Remember Links to Wisdom? At the time, a blog post had caught my attention and I ended up creating a wiki with a tiny bit of software around it in order to try and help solve the problem.

Well, it happened again. The blog post A database of old-school monsters caught my attention. There, Gavin says he wants an “online old-school monster database, with tags.”

It just so happened that I had some code lying around that would provide Campaign Wiki with tags. I gave it a try, and it worked.

Then I looked around for monsters to add to the wiki. I didn’t want to start with the The Hypertext d20 SRD or the Pathfinder SRD – I wanted old school. I remembered that Dan Proctor kindly doesn’t just host PDF files, he also provides text files for his stuff—the Open Game Content Library. So I went there, got the Labyrinth Lord monsters and the Advanced Edition Companion monsters, saved the Word document as text files and started using a lot of regular-expression based search and replace and keyboard macros to semi-automatically go through it all, split it into files and upload the stuff using some shell scripting. Yay for Perl, Emacs and the shell. :)

The result: Campaign:Monsters! A wiki for all of us. Add monsters that allow free distribution – Open Gaming Content, Creative Commons, your own stuff (if you’re willing to make it free), add tags, and make it grow. Use it to browse, find some inspiration, prepare your adventures and populate your wandering monster tables.

Comments on 2011-10-24 Old School D&D Monsters Online


-C
Awesome! A request - for those of us running 1e/2e, would it be possible to get an entry for morale on a 1-20 scale? LL uses 2d6 morale.

-C 2011-10-25 05:15 UTC



-C
Also, it would be super helpful if something like this existed for spells.

-C 2011-10-25 06:23 UTC


Morale on a 1-20 scale is easy to add, but it requires people who know the actual numbers to edit all the pages. Sounds like a long term project to me. :)

AlexSchroeder 2011-10-25 07:16 UTC



JDJarvis
Not to be a grumpy old man here but shouldn’t the entries include their source? Both for legal reasons and to give credit where credit is due.

Otherwise I’m loving it.

JDJarvis 2011-10-25 11:21 UTC



AlexSchroeder
Actually, they don’t need to include their source per se; they need to have an appropriate OGL. I don’t think that requires me to paste the link on every single page of the wiki – but every monster entry should have a link to the standard OGL, the Labyrinth Lord OGL, the Advanced Edition Companion Labyrinth Lord OGL, the Tomb of Horrors Complete OGL, or equivalent.

If you note any entries missing a link to their respective license, let me know – this is definitely something that I want to keep straight.

AlexSchroeder 2011-10-25 12:11 UTC



JDJarvis
Let’s say someone screwed up and added a monster without titling the page correctly, like I just did – how does that get fixed?

re: http://campaignwiki.org/wiki/Monsters/2011-10-25

JDJarvis 2011-10-25 13:03 UTC



AlexSchroeder
You just edit the page and delete its content. This marks the page for deletion and it will be removed by a maintenance job in 14 days. At the same time, paste your stuff elsewhere. That’s what I did and now your monster resides here: CW:Monsters/Devouring_Sentinel

AlexSchroeder 2011-10-25 13:34 UTC

2011-10-07 Oddmuse, Venus and Perl

http://www.emacswiki.org/pics/oddmuse-logo.png

I’ve been working on a submission form for the Old School RPG Planet. Today I added another little feature. This is how I like to develop code. No time pressure. One little step at a time. Keep polishing it.

The planet uses Planet Venus to collect the RSS and Atom feeds of many of the Old School RPG blogs out there. Planet Venus allows you to get the list of feeds via a URL. I’m hosting the list of feeds on Campaign Wiki itself (raw format). As you can see, the format doesn’t look nice.

The thing I did, therefore, was to write a script that makes it easy for people who are not into the technical details to submit new blogs. It also makes it easier for me to submit new blogs!

The things it handles:

  • If you submit an invalid URL, it will prepend http:// and try again.
  • If it looks like we already have a similar looking feed on our page, it requires a confirmation by the user.
  • If you submit a web page, it will look for alternate feed links with the MIME types application/rss+xml, application/atom+xml, application/xml (yeah) and text/xml (just making sure) and allow the user to pick one of them (roughly as sketched below).
  • If you submit a feed directly instead of a web page, it uses that feed as-is.
  • If the feed you picked is served with an invalid content type, it is rejected.
  • It extracts the title of the feed and adds it to the wiki page, sorting all the entries alphabetically.
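
The auto-discovery step could look something like this (a sketch under assumptions, not the actual submission script; a real implementation would use a proper HTML parser rather than regular expressions):

#!/usr/bin/perl
# Sketch of feed auto-discovery: scan a web page for <link rel="alternate">
# elements whose MIME type looks like a feed. Assumed approach, not the
# code behind the actual submission form.
use strict;
use warnings;
use LWP::UserAgent;

my %feed_types = map { $_ => 1 }
  qw(application/rss+xml application/atom+xml application/xml text/xml);

sub discover_feeds {
  my $url = shift;
  my $response = LWP::UserAgent->new->get($url);
  return unless $response->is_success;
  my $html = $response->decoded_content;
  my @feeds;
  while ($html =~ /<link\b([^>]*)>/gi) {
    my $attributes = $1;
    next unless $attributes =~ /rel\s*=\s*["']alternate["']/i;
    my ($type) = $attributes =~ /type\s*=\s*["']([^"']+)["']/i;
    my ($href) = $attributes =~ /href\s*=\s*["']([^"']+)["']/i;
    push @feeds, $href if $type and $href and $feed_types{lc $type};
  }
  return @feeds;
}

print "$_\n" for discover_feeds(shift || die "Usage: $0 URL\n");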

I think it’s pretty cool.

If you look at the interface, you’ll note that it has a link to its own source code. I love this little Perl trick:

  1. Add __DATA__ at the end of the source file. Usually you would add actual data at the end. The script could read it using the DATA file handle.
  2. Serve source code using seek DATA, 0, 0; print "Content-type: text/plain; charset=UTF-8\r\n\r\n", <DATA>; This resets the current position of the DATA file handle to the beginning of the source file (see the sketch below). Tadaa! :)
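
Here is a tiny self-contained demonstration of the trick (a hypothetical example script, not the actual submission form): request it with ?source=1 and it prints its own source.

#!/usr/bin/perl
# Hypothetical demo of the __DATA__ trick, not the real submission form.
use strict;
use warnings;
use CGI qw(param);

if (param('source')) {
  # DATA normally starts reading just after the __DATA__ line; rewinding it
  # to offset 0 makes it read the whole source file instead.
  seek DATA, 0, 0;
  print "Content-type: text/plain; charset=UTF-8\r\n\r\n", <DATA>;
} else {
  print "Content-type: text/html; charset=UTF-8\r\n\r\n",
    qq{<p>Hello! <a href="?source=1">View the source code</a> of this script.</p>\n};
}

__DATA__
This placeholder only exists so that the DATA file handle is available.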
