Journal

2015-01-08 Tag Cloud

All these tags, all these topics I blogged about. I started this site on 2002-06-23. All this stuff… Who will ever read it again? Sure, some people might be reading it now – but what about those old posts? Who will be reading them in a year? Nobody. Not my family, not my kids, no historian… In fact, most probably nobody will be paying for the server and one day it will all simply disappear. Copies might be left on the Internet Archive.


Comments on 2015-01-08 Tag Cloud


Enzo
Well, I am enjoying your blog now :)

– Enzo 2015-01-08 10:14 UTC


Thank you for leaving a comment! :)

– Alex Schroeder 2015-01-08 10:17 UTC


“The Way is so vast that when you use it, something is always left. How deep it is!
“It seems to be the ancestor of the myriad things.
“It blunts sharpness; untangles knots; softens the glare; unifies with the mundane.”

Daodejing ch. 4

– Anonymous 2015-01-21 16:00 UTC


A fantastic book. :)

AlexSchroeder 2015-01-21 22:26 UTC


Don’t worry… It will be part of the Akashic Records for outlanders to speculate about what a strange life humankind lived… I like the thought of turning immortal that way :-) And as you may note, I read you too from time to time so as to stay in touch with you in some way :-)) So please continue!


– mom 2015-02-17 19:55 UTC



Alex Schroeder
Yay! :)

– Alex Schroeder 2015-02-17 23:10 UTC


2013-08-14 Comments on this Wiki Blog

I’ve made a few changes. Let’s see whether it works out.

  1. when viewing ordinary pages, previous comments and the comment form are shown ← this is what I wanted to add
  2. people who follow the RSS feeds will therefore easily find the obvious comment form when following the link ← this is what I hope to achieve
  3. when leaving a comment, you end up on the comment page, which might be confusing
  4. when looking at RecentChanges, which has some extra magic associated with its name, the comments are not shown
  5. when looking at older revisions, the page history, and many other variants, the comments are not shown
  6. when looking at a journal page such as Diary, the comments are not shown
  7. when looking at a journal page such as Diary, you can still see links to inline the comment page
  8. inlined comment pages don’t come with an automatic comment form—you need to click on the “Add comment” link at the end
  9. comment pages are still excluded from the usual feeds (I wonder whether I should change this)

I think the Wiki + Blog combo still works. I’m just trying to make it less weird. :)

Source code for your config file, if you’re an Oddmuse user. The source code below also includes my Google +1 setup (Oddmuse:Google Plus One Module) because my code needs to avoid the situation where a page shows two +1 buttons. As for comments within journals, I use Oddmuse:Dynamic Comments Extension.

# Google +1 list

push(@MyAdminCode, sub {
       my ($id, $menuref, $restref) = @_;
       push(@$menuref, ScriptLink('action=plusone',
				  T('Google +1 Buttons'),
				  'plusone'));
     });

$Action{plusone} = \&DoPlusOne;

sub DoPlusOne {
  print GetHeader('', T('All Pages +1'), ''),
    $q->start_div({-class=>'content plusone'});
  print $q->p(T("This page lists the twenty last diary entries and their +1 buttons."));
  my @pages;
  foreach my $id (AllPagesList()) {
    push(@pages, $id) if $id =~ /^\d\d\d\d-\d\d-\d\d/;
  }
  splice(@pages, 0, $#pages - 19); # last 20 items
  print "<ul>";
  foreach my $id (@pages) {
    my $url = ScriptUrl(UrlEncode($id));
    print $q->li(GetPageLink($id),
		qq{ <g:plusone href="$url"></g:plusone>});
  }
  print "</ul>";
  print $q->end_div();
  PrintFooter();
}

# two step Google +1 button to protect your privacy
# http://my.opera.com/QuHno/blog/adding-the-google-1-button-to-a-webpage-without-violating-the-users-privacy

*MyOldGetCommentForm = *GetCommentForm;
*GetCommentForm = *MyNewGetCommentForm;

sub MyNewGetCommentForm {
  return MyOldGetCommentForm(@_) . q{
<script type="text/javascript">
function loadScript(jssource,thelink) {
   var jsnode = document.createElement('script');
   jsnode.setAttribute('type','text/javascript');
   jsnode.setAttribute('src',jssource);
   document.getElementsByTagName('head')[0].appendChild(jsnode);
   document.getElementById(thelink).innerHTML = "";
 }
 var plus1source = "https://apis.google.com/js/plusone.js";
</script>
<p id="plus1">
  <a href="javascript:loadScript(plus1source,'plus1')">
    <img src="/pics/plusone-h24.png" alt="Show Google +1" />
  </a>
</p>
<!-- <g:plusone></g:plusone> -->
<div class="g-plusone" id="my_plusone"></div>
<script type="text/javascript">
  document.getElementById("my_plusone").setAttribute("data-size", "medium");
  document.getElementById("my_plusone").setAttribute("data-href", document.location.href);
</script>
};
}

# make sure journal pages set a global variable which we then use to
# hide the comment form

*MyOldPrintJournal = *PrintJournal;
*PrintJournal = *MyNewPrintJournal;

my $MyPagePrintedJournal;

push(@MyInitVariables, sub {
       $MyPagePrintedJournal = 0;
     });

sub MyNewPrintJournal {
  $MyPagePrintedJournal = 1;
  return MyOldPrintJournal(@_);
}

# list comments and comment form at the bottom of every normal page

*MyOldPrintFooter = *PrintFooter;
*PrintFooter = *MyNewPrintFooter;

sub MyNewPrintFooter {
  my ($id, $rev, $comment) = @_;
  if (!$MyPagePrintedJournal
      and GetParam('action', 'browse') eq 'browse'
      and $id and $CommentsPrefix
      and $id ne $RCName
      and $id !~ /^$CommentsPrefix(.*)/o) {
    my $target = $CommentsPrefix . $id;
    my $page = '';
    $page = PageHtml($target) if $IndexHash{$target};
    print $q->div({-class=>'comment'},
		  $q->h2(T('Comments')),
		  $page);
    # don't include Google +1 button twice
    print MyOldGetCommentForm("$CommentsPrefix$id", $rev, $comment);
  }
  MyOldPrintFooter(@_);
}



2013-07-30 Politics

Looking back at how things have gone in the last year I’d say that all my RPG conversation has moved to Google+ and this wiki-blog has turned into a repository of things I don’t want to lose when Google+ is shut down. All my political thoughts are on Twitter: @kensanata, and I’m almost exclusively expressing myself via retweets. I’m also more than happy to talk about it on Twitter.

I post my pictures on Flickr at kensanata for myself and cross-post them to Facebook and a protected Twitter channel for the various family members out there.



2013-05-26 Popularity

For a while now, I’ve been using Google’s +1 button at the bottom of the page. They’re special because they require two clicks: the first click loads the button code from Google, which you then have to click. If you don’t click anything, no code is being loaded from Google and hopefully you’re not being tracked. I also no longer use Google Analytics to track my stats so in theory Google shouldn’t be seeing any of your reading on this site.

Usually my +1 counts are really low—mostly zero, in fact. Recently I noticed a few blog posts with more than ten +1s. And now that I’ve announced the winners of a little contest I’m running every year, I see how far up the number could go… if only I blogged stuff that was more relevant to other people. Look at the last entry…

https://farm4.staticflickr.com/3795/8841475140_31d134c183_o.png
→ 62 people clicked +1

Oh well. As I’m noticing how much time I’m spending just reading the blogs and Google+ itself, I hesitate before posting on my own blog. Am I really adding much? In a way I felt better when I didn’t think of the audience and felt free to write a two sentence blog post about a movie I saw. These days I often wonder what the readers that come here for RPG posts will think.

Since I don’t know how to make my writing more relevant to my readers, perhaps I should try to make my writing more natural again. We’ll see.


Comments on 2013-05-26 Popularity


Josh
Ironically, I started following due to some technical post, though I don’t remember exactly what it was now (probably Emacs-related). The RPG posts are bonus material for me. They have reignited my desire to play, which I haven’t done (for a number of reasons) in somewhere around 14 years. Your posts reminded me how much I love it. I haven’t +1’d any of your posts because I mostly read them through my RSS feed reader, but I will make an effort to open the posts on the site and +1 them when I think they’re great. Sorry to bring you down with my lack of feedback.

– Josh 2013-05-28 14:21 UTC



AlexSchroeder
No worries. The lack of popularity is not bringing me down. :) What I find intriguing, however, is how the imagined audience changes what I feel comfortable writing about. Perhaps the post was a replacement post for a darker collection of thoughts on self-censorship, the balance between tolerance (benevolent reading of others) and intolerance (the blocking, reverting and uncircling of others), vulnerability (linking to posts and comments I want to criticize and naming names, the fear of making enemies, attracting the attention of unwanted readers) and negativity (ranting, negative reviews, complaints)… it’s still very much muddled up.

As for RSS readers: I’m always frustrated by the lack of +1 buttons in Reeder. Google Reader has it and it’s closing down. :( Don’t feel compelled to click through and +1 posts – that’s too much work and I don’t do it when I read other blogs. Do feel free to leave a comment every now and then, however. :)

AlexSchroeder 2013-05-28 14:37 UTC


2012-08-24 On RPG Blogging

Recently Michael Gibbons asked on Google+ regarding gaming blogs:

What do you prefer: content or opinion?

I prefer opinion and insight with examples from actual play or things that I will immediately adopt for my own games.

My favorite example for this is the Ode to Black Dougal blog.

A blog in the same category I recently stumbled upon is Untimately.

I will download a lot of content, but it doesn’t get read carefully unless it gets used at the table and that happens rarely. I have huge folders on my hard disk full of PDFs: bears, hats, treasure maps, Vancian spell names and short spell descriptions, alternate classes, one page dungeons, character generation shortcuts… At the gaming table, I can barely remember to use one or two of these.

What kind of stuff cheeses you off enough to stop reading someone’s blog?

I think my reasons for unsubscribing from blogs usually involve one of the following:

  1. misanthropic ranting – there is enough negativity out there already; I’m also easily peeved by impolite hosts
  2. excuses for not posting – not posting is ok, unwritten posts don’t show up in my feed reader, but excuses will show up in my feed reader; I’m interested in the authors’ lives, and thus posting about health or family issues every now and then is not a problem
  3. long posts – “TL;DR” a.k.a. “too long; didn’t read” is a problem: I might skim long articles but often I don’t read them; as the unread articles accumulate, I start wondering whether I’d be happier unsubscribing since I would no longer feel bad for not reading the posts
  4. not my topic – if the author keeps writing about the design of a game that I won’t be playing, I feel that I’m better off reading somebody else’s posts: there are so many out there!


Comments on 2012-08-24 On RPG Blogging


-C
I have written about this before on my blog. http://hackslashmaster.blogspot.com/2011/05/on-why-your-blog-isnt-any-good.html

-C 2012-08-24 19:50 UTC



shortymonster
I’m kind of new to all this myself (I’ve been going for less than three months), so my points won’t be massively groundbreaking. I’m actually more interested in the results of this mini survey.

A lot of blogs I don’t read, or at least don’t read often, could be amazing, and well written, but they’re about games I don’t play, or styles of play that don’t suit me. As an example, I’m not really into the OSR thing, so if that’s all a blogger talks about, I get turned off. But if you’re an OSR blogger who talks about the hobby at large, and offers advice and insight, maybe even some content that doesn’t have to apply to OSR, then I’ll keep checking back.

I guess what that means is that I want something different every once in a while. Right now I’m on a horror RPG kick, so I wrote something about how to GM a horror game. Didn’t want to tread on old ground, so the next week it was a possible location for a game, that just happened to also work well in horror games. Next Monday comes a little discussion (that’s right, I’m talking about what I’m posting ;p) about role playing a child, something that is done more often in horror systems/settings than others, but can be applied to other games.

So, a general theme, but something different each time, that I hope appeals to more than a core of people, and to gamers as a whole.

shortymonster 2012-08-24 20:16 UTC



AlexSchroeder
-C, that’s right, I remember reading it. I don’t agree with all your points, though. Do you think the following points are still important?

“Provide some sort of resource on your blog on a regular basis (recipes, a picture, quiz, etc.) So that people will want to mark your blog as one to come back and check”

That sounds like a setup to trick authors into posting lesser quality posts because they need to follow a particular schedule.

“Schedule your posts to come out either around the morning or the evening, when people are going through their blog roll”

Maybe you are right. If you’re trying not to bias by timezone, however, this is difficult. Also it assumes that most people don’t use a feed reader. That may be true, but I have a hard time understanding such choices.

“Be consistent in your posting schedule”

This is similar to the point above regarding regular features.

“Write interesting titles that draw interest, like they teach in journalism”

I am not sure the web works the same way (see 2011-03-10 Headlines). I don’t like sensationalist and provocative headlines. Or at least I think I don’t.

AlexSchroeder 2012-08-24 23:41 UTC



AlexSchroeder
Shortymonster, variety is a good goal if you want to appeal to a wide audience. If you’re trying to write a Warhammer Fantasy style game, however, I don’t think “more variety” is the answer. At least, I’m not sure. Should you try to pull in more people in the hopes of attracting and converting visitors, or should you write for your target audience only? Currently I’m in the latter camp: I think it’s cool to specialize your blog—but at the same time, me unsubscribing must be cool, too. Once I’ve determined that I am not in the target audience, keeping me tied to the site seems like a bad idea for all of us.

Probably both approaches can work, depending on your goals.

AlexSchroeder 2012-08-24 23:50 UTC



Brendan
I disagree that blog schedule matters much. If people visit your site directly, they will always see the N most recent posts. If people read your site in a feed reader, it’s their schedule that matters, not yours, just like with TV and a DVR. The only exceptions I would give are very rarely posting (which may lead to people removing your site from blogrolls or whatever), and very frequently posting or posting in clumps (which may lead to people unsubscribing because of being flooded). I personally am more likely to subscribe to a lower-volume site that has articles of consistent quality (Swords of Minaria is a good example of this kind of site).

Brendan 2012-08-25 17:31 UTC



AlexSchroeder
To be honest, when I open my feed reader, I also like to read the blogs that have one or two unread posts before tackling the blogs that have twenty or more unread posts (World War II Today, I’m looking at you…).

AlexSchroeder 2012-08-26 22:22 UTC


2012-05-16 Blog Napping

Ok, so I wanted a local copy of Metal Earth in order to better prepare for my game. Based on previous work I had done, this proved to be fairly easy and I improved my scripts along the way. Yay!

The Metal Earth is a blogspot blog with full page content in the Atom feed.

To identify the blog, look at the source of any page. The HTML header will contain a line like the following: <link rel="service.post" type="application/atom+xml" title="..." href="http://www.blogger.com/feeds/XXX/posts/default" /> – this is where you get the number from. In this case, the number is 2248254789731612355.
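
If you don’t want to pick the number out of the source by hand, something like the following should find it, assuming your grep supports the -o option (example.blogspot.com is a placeholder for the blog you’re copying):

# print the feed number from the service.post link
curl -s http://example.blogspot.com/ | grep -o 'www.blogger.com/feeds/[0-9]*' | head -n 1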

download.sh – this file downloads the atom feed files:

#! /bin/sh
for i in `seq 40`; do
  start=$((($i-1)*25+1))
  curl -o foo-$i.atom "http://www.blogger.com/feeds/2248254789731612355/posts/default?start-index=$start&max-results=25"
done

You’ll find that you only need to keep the first four of them.
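
A quick way to check is to count the lines containing entry elements; files reporting zero are the empty tail end (grep -c counts matching lines, not entries, but that’s enough to tell full files from empty ones):

grep -c '<entry' foo-*.atom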

extract.sh – this file calls the Perl script for every Atom file. You can use the -f option to force it to overwrite existing files.

#! /bin/sh
for f in *.atom; do
    perl extract.pl "$@" < "$f"
done

extract.pl – this file has several CPAN dependencies. It will parse the Atom file, look at each entry, and write it into a separate file. If the entry doesn’t have a title, it will parse the HTML content and try to guess a title (looking at the first H1 or the first SPAN element). It will warn you about duplicate names. It will also try to set the last modification time of the file to the update timestamp in the Atom file.

#!/usr/bin/perl
use strict;
use XML::LibXML;
use HTML::HTML5::Parser;
use Getopt::Std;
use DateTime::Format::W3CDTF;
use DateTime;
our $opt_f;
getopts('f');
undef $/;
my $data = <STDIN>;
my $parser = XML::LibXML->new();
my $doc = $parser->parse_string($data);
die $@ if $@;
my $encoding = $doc->actualEncoding();
my $context = XML::LibXML::XPathContext->new($doc);
$context->registerNs('atom', 'http://www.w3.org/2005/Atom');
my $html_parser;
foreach my $entry ($context->findnodes('//atom:entry')) {
  my $content = $entry->getChildrenByTagName('content')->[0]->to_literal;
  my $title = $entry->getChildrenByTagName('title')->[0]->to_literal;
  $title =~ s!/!_!gi;
  $title =~ s!&amp;!&!gi;
  $title =~ s!&#(\d+);!chr($1)!ge;
  if (not $title) {
    if (not $html_parser) {
      $html_parser = HTML::HTML5::Parser->new;
    }
    my $html_doc = $html_parser->parse_string($content);
    # we don't know the HTML namespace for certain
    my $html_ns = $html_doc->documentElement->namespaceURI();
    my $html_context = XML::LibXML::XPathContext->new($html_doc);
    $html_context->registerNs('html', $html_ns);
    $title = $html_context->findnodes('//html:h1')->[0];
    $title = $html_context->findnodes('//html:span')->[0] unless $title;
    $title = $title->to_literal if $title;
    warn "Guessed missing title: $title\n";
  }
  my $f = DateTime::Format::W3CDTF->new;
  my $dt = $f->parse_datetime($entry->getChildrenByTagName('updated')->[0]->to_literal)->epoch;
  my $file = $title . ".html";
  if (-f $file and ! $opt_f) {
    warn "$file exists\n";
  } else {
    open(F, ">:encoding($encoding)", $file) or die $! . ' ' . $file;
    print F <<EOT;
<html>
<head>
<meta content='text/html; charset=$encoding' http-equiv='Content-Type'/>
</head>
<body>
$content
</body>
</html>
EOT
    close F;
    utime $dt, $dt, $file;
  }
}



2010-11-05 The Wiki Nature of This Blog

I recently saw an essay by SamRose, linked to from Community:InformationOverload: Why you never see people complaining about “knowledge overload”…

  1. “Your ability to track, read, digest and understand blog posts cannot match the exponential volume of blogs emerging on the internet every day (even just in the subject areas that you are interested in).”
  2. “[I]f your intent is to be a source of re-usable knowledge, then focusing on frequency of posting, and statistics of people looking at your web or blogsite could become difficult to sustain.”
  3. “A more sustainable approach for digesting, understanding, and sharing for the 80% of people who will not be one of the widely-followed blogs, is to do it in a form that others can digest, understand, and share.”

It makes me want to return to writing wiki pages on this site instead of blog pages (like this one). Start with a good blog page on a topic, copy it to a page without a date prefix, and integrate other good pages instead of just appending to them (like the tag or category pages do).

On the topic of my RPG-related blogging, I have a few things that I keep returning to:

  1. My current set of house-rules, somewhat collected on House Rules
  2. The underpinnings of my play style, somewhat collected on How I Roll
  3. There’s also a much longer German page describing what I like as a player, 2010-03-01 Spielervorlieben
  4. The underpinnings of my referee style, somewhat collected on Know Your DM
  5. A wide variety of pages on the abstract idea of writing short texts, currently collected on Keep It Short


Comments on 2010-11-05 The Wiki Nature of This Blog


Harald Wagener
I personally like the wiki nature of this blog very much, and if dropping the date prefix makes you happier, more power to you! Just keep the RSS feed, please (-;

– Harald Wagener 2010-11-05 18:18 UTC


Hehe, thanks. I think what I meant was that when I started this site, it was just a wiki. Then I wrote some code to add blogging functionality (mostly comment pages, RSS feeds limited to the DatePages, that kind of stuff). And then I slowly started to write more and more blog pages instead of wiki pages.

What’s the difference, you might ask. For me personally, wiki pages – having no temporal context – basically stand in a permanent WikiNow. When a reader comes across the wiki page, the page says “this is what I say and I’m saying it right now” instead of “this is what Alex thought years and years ago and who knows whether he cares anymore”. I guess the implicit assumption is a wiki audience that will keep rereading the existing pages, questioning them, putting them back into context… Thus, maybe what I’m saying about the WikiNow is just a dream that won’t work on a one-person wiki.

There’s also the question of DocumentsVsMessages raised by LionKimbro, and WikiIsDocumentBased in particular.

Hm, food for thought.

AlexSchroeder 2010-11-05 20:54 UTC



AaronHawley
Yes, this is why blogs can be damaging in communities if there is no Wiki alternative. Even popular software communities suffer if they rely only on the hype of their bloggers and don’t promote a community-driven Wiki. It can be hard to find the blog entry with the example you want via Web search, and even quality entries can become poorly styled or useless with time or after a new release of the software. The comment sections of blog entries try to make up for it with people submitting corrections, but there’s only so much you can “fit in the margins”.

AaronHawley 2010-11-09 22:23 UTC



awwaiid
Interesting – I’ve been thinking the exact same thing for my blog/wiki. My intent for some time now has been to write stub blog entries, and then expand them into full pages. But I almost never go back and do that… so I just end up with sort of half-thoughts that are never expanded upon.

awwaiid 2010-12-12 20:47 UTC


2010-08-05 Why Blog

Over on the blog Once More Unto The Breach! the author asks: why did you start your blogs? I answered:

I got myself a website when I started running a play by email game back in 1994 or 1995. That was before Firefox, Internet Explorer, or Netscape: Mosaic was the browser and Geocities was the web service provider people used. I started the site because I wanted to build trust between myself and my players. Much later I discovered wikis and started writing on Meatball Wiki. Again, the question of trusting people, real names versus weirdo nicknames, etc. came up. We derided blogs as the inferior solution to wikis. Some of us started maintaining a “diary page” on which we posted stuff that didn’t belong onto a real wiki page. I moved my own homepage from Geocities to a real wiki. And as I kept updating my diary page I started realizing that what I was in fact doing was keeping a blog. I finally decided to go all the way and do “real” blogging on my wiki. The main point has remained the same: build trust between myself and the people I want to interact with online, whether it is Free Software I am contributing to, people I game with, family I want to keep in touch with, or ideas I wanted to write down for my future self.

There’s a bit more information on the AlexSchroeder page. :)

I guess if the point is building trust, then a mix of all sorts of topics is appropriate. It shows that I am a real person. The drawback is that people only interested in a single topic can’t just subscribe to the entire feed. They need to focus on a tag or category.

I hope it works as intended. :)



2010-07-13 To My RPG Followers

I’ve noticed that some people have added my blog to their blogroll. Thank you!

Some of you have subscribed to a feed listing all changes – including non-RPG pages, German pages, Comments, etc. I’m assuming that those topics are annoying you. If not, you can stop reading now. :)

I figured that maybe the reason is that Blogspot and similar services have difficulties subscribing to a feed URL containing a query string (question marks and semicolons). I don’t really know what the problem is, but I have seen that one before. Anyway, I fiddled with the site setup again.

Subscribe to this URL in your feed reader – Google Reader, Bloglines, etc:

If you don’t need the full page content because you are adding it to your sidebar, you can use the following URL:

I hope that helps?

The feed icon for the RPG tag at the bottom of this entry should now link to the correct URL as well.



2010-05-31 Blognapping

For Blogger, using bash, perl, and curl. You need to replace XXX with the magic number you get when you look at the blog’s source. The HTML header will contain a line like the following: <link rel="service.post" type="application/atom+xml" title="..." href="http://www.blogger.com/feeds/XXX/posts/default" /> – this is where you get the number from.

Once you have it:

for i in `seq 40`; do
  start=$(((10#$i-1)*25+1))
  curl -o foo-$i.atom "http://www.blogger.com/feeds/XXX/posts/default?start-index=$start&max-results=25"
done

This should get you 40 files called foo-1.atom to foo-40.atom with 25 articles each in your current directory. Delete the ones that don’t contain any results, or increase the number if you’re looking at a blog with more than 1000 posts that you’re interested in.
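
Deleting the empty ones can be scripted: any file without an <entry> element never got any posts. A small sketch:

for f in foo-*.atom; do
  grep -q '<entry' "$f" || rm "$f"
done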

For a Wordpress blog, we try to do the same thing. First, get the atom pages:

for i in `seq 100`; do
  curl -o foo-$i.atom "http://foo.wordpress.com/feed/atom/?paged=$i"
done

This should get you 100 files called foo-1.atom to foo-100.atom in your current directory. Delete the ones that don’t contain any results or increase the number if you’re looking at a blog with more posts.

Now, unless the author has disabled it somehow, the atom feeds already include the complete articles. It’s certainly possible to fetch them all again, but it’s not necessary. Save the following in a Perl script called extract.pl.

#!/usr/bin/perl
use strict;
use XML::LibXML;
undef $/;
my $data = <STDIN>;
my $parser = XML::LibXML->new();
my $doc = $parser->parse_string($data);
die $@ if $@;
my $encoding = $doc->actualEncoding();
my $context = XML::LibXML::XPathContext->new($doc);
$context->registerNs('atom', 'http://www.w3.org/2005/Atom');
foreach my $entry ($context->findnodes('//atom:entry')) {
  my $title = $entry->getChildrenByTagName('title')->[0]->to_literal;
  $title =~ s!/!_!gi;
  $title =~ s!&amp;!&!gi;
  $title =~ s!&#(\d+);!chr($1)!ge;
  my $content = $entry->getChildrenByTagName('content')->[0]->to_literal;
  open(F, ">:raw" . $title . ".html") or die $! . ' ' . $title;
  $content = utf8::decode($content);
  print F <<EOT;
<html>
<head>
<meta content='text/html; charset=$encoding' http-equiv='Content-Type'/>
</head>
<body>
$content
</body>
</html>
EOT
  close F;
}

Run it on the Atom files:

for f in *.atom; do
    perl extract.pl < $f
done

You should end up with a ton of HTML files in your current directory.

If that doesn’t work, perhaps the author only has links to the actual articles in their atom files. Here is how to extract the HTML links from these Atom feeds: save the following in a Perl script called url.pl.

#!/usr/bin/perl
use strict;
use XML::LibXML;
undef $/;
my $data = <STDIN>;
my $parser = XML::LibXML->new();
my $doc = $parser->parse_string($data);
die $@ if $@;
my $context = XML::LibXML::XPathContext->new($doc);
$context->registerNs('atom', 'http://www.w3.org/2005/Atom');
foreach ($context->findnodes('//atom:entry'
			     . '/atom:link[@rel="alternate"][@type="text/html"]'
			     . '/attribute::href')) {
  print $_->to_literal() . "\n";
}

Now you can extract all the URLs and fetch them:

for f in *.atom; do
    for url in `perl url.pl < $f`; do
        curl -O "$url";
    done;
done

The use of the -O option assumes that the file names given by the URL will be unique – this is not necessarily true as http://localhost/2010/05/test.html and http://localhost/2010/06/test.html will result in one overwriting the other.
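
If those collisions matter for the blog you’re copying, one workaround is to flatten the URL path into the file name instead of relying on -O. A variant of the loop above, as a sketch:

for f in *.atom; do
    for url in `perl url.pl < $f`; do
        # http://localhost/2010/05/test.html becomes localhost-2010-05-test.html
        name=`echo "$url" | sed -e 's!^[a-z]*://!!' -e 's!/!-!g'`
        curl -o "$name" "$url"
    done
done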

You should end up with a ton of HTML files in your current directory.

This doesn’t get any required extra files like CSS or images, but it might be good enough for a blog backup.

If you want to get the images, here’s a way to extract the image URLs and download them. Save the following as img.pl.

#!/usr/bin/perl
use strict;
use XML::LibXML;
undef $/;
my $data = <STDIN>;
# munging
$data =~ s!<colgroup>.*?</colgroup>!!gs;
$data =~ s!<class western="">.*?</class>!!gs;
# parsing
my $parser = XML::LibXML->new();
my $doc = $parser->parse_html_string($data);
die $@ if $@;
# extracting
my $context = XML::LibXML::XPathContext->new($doc);
foreach ($context->findnodes('//img/attribute::src[starts-with(.,"http")]')) {
  print $_->to_literal() . "\n";
}

Use it:

for f in *.html; do
    echo "$f";
    for img in $(perl img.pl < "$f"); do
        echo "$img"
        curl -s -O "$img"
    done
done

Watch out for parser errors!

What you’re still lacking is a fix for the image links in the HTML sources. How about this? Save the following as img-replace.pl.

#!/usr/bin/perl
use strict;
use XML::LibXML;
undef $/;
my $data = <STDIN>;
# munging
$data =~ s!<colgroup>.*?</colgroup>!!gs;
$data =~ s!<class western="">.*?</class>!!gs;
# parsing
my $parser = XML::LibXML->new();
my $doc = $parser->parse_html_string($data);
die $@ if $@;
# extracting
my $context = XML::LibXML::XPathContext->new($doc);
for my $attr ($context->findnodes('//img/attribute::src[starts-with(.,"http")]')) {
  my $url = $attr->getValue();
  $url =~ s!.*/!!;
  $attr->setValue($url);
}
print $doc->toString();

Use:

for f in *.html; do
    echo "$f";
    perl img-replace.pl < "$f" > "${f}_"
    mv "${f}_" "$f"
done

This seems to work well enough. If you have some HTML files that cannot be parsed, however, this will result in them getting overwritten with an empty file.
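
To guard against that, you could overwrite the original only when the rewrite produced a non-empty file; a defensive variant of the loop above:

for f in *.html; do
    echo "$f";
    perl img-replace.pl < "$f" > "${f}_"
    if [ -s "${f}_" ]; then
        mv "${f}_" "$f"
    else
        # the parse failed and left an empty file: keep the original
        echo "could not parse $f" >&2
        rm "${f}_"
    fi
done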


Comments on 2010-05-31 Blognapping

I’m surprised (and somewhat impressed) that it works for Wordpress blogs too.

Good work!

greywulf 2010-06-01 05:51 UTC


Now that the coding has been done, I need to do actual text assembly. Yikes! :)

AlexSchroeder 2010-06-01 17:52 UTC



AlexSchroeder
If you’re wondering how to do this… Assume you want to pull a copy of A Hamsterish Hoard of Dungeons and Dragons. Examine the source code and you’ll find a link to the atom feed within Blogger. This is important, because it’ll provide us with the blog ID! In this case:

<link rel="alternate" type="application/atom+xml" title="A Hamsterish Hoard of Dungeons and Dragons - Atom" href="http://hamsterhoard.blogspot.com/feeds/posts/default" /> <link rel="alternate" type="application/rss+xml" title="A Hamsterish Hoard of Dungeons and Dragons - RSS" href="http://hamsterhoard.blogspot.com/feeds/posts/default?alt=rss" /> <link rel="service.post" type="application/atom+xml" title="A Hamsterish Hoard of Dungeons and Dragons - Atom" href="http://www.blogger.com/feeds/5373792969086619654/posts/default" /> ← that’s the one we’re looking for!

Start with a small set: the last 100 entries:

for i in `seq 4`; do
  start=$((($i-1)*25+1))
  curl -o taichara-$i.atom "http://www.blogger.com/feeds/5373792969086619654/posts/default?start-index=$start&max-results=25"
done

Save it in a script such as download-atom.sh and run it using bash download-atom.sh. You’ll end up with the files taichara-1.atom taichara-2.atom taichara-3.atom taichara-4.atom.

Now take the Perl script from the main page and save it as url.pl. It will extract the page URLs from the Atom files.

for f in *.atom; do
  for p in `perl url.pl < $f`; do
    wget $p
  done
done

Once you’ve verified it, you can fetch more Atom pages.

AlexSchroeder 2011-11-16 19:53 UTC

