My adventures using Perl. I use it for Oddmuse and whenever I need to get things done quickly.

2015-10-09 Oddmuse and Mojolicious

I’m trying to run Oddmuse within a Perl web framework: Mojolicious using Mojolicious::Plugin::CGI. It won’t be as perfect as a true Mojolicious app, but it will still be much faster than a simple CGI script. When running as a CGI script, every request loads Perl and compiles all the modules – including the CGI module itself – and Oddmuse and the config files. When running under Mojolicious, we no longer load Perl and we compile Oddmuse just once. Oddmuse itself will keep loading the config file and all that, but it’s still much better than before. The Mojolicious app itself is then started by Toadfarm. And in order to force myself to test it, I’ve switched this wiki over to the new setup!

First, tell Apache to act as a reverse proxy and pass all /wiki requests to the new server. The two ProxyPass instructions at the bottom are the important bits:

<VirtualHost *:80>
    Redirect permanent /
</VirtualHost>

<VirtualHost *:443>
    <Directory />
        Options None
        AllowOverride None
        Order Deny,Allow
        Deny from all
    </Directory>

    DocumentRoot /home/alex/
    <Directory /home/alex/>
        Options ExecCGI Includes Indexes MultiViews SymLinksIfOwnerMatch
        AddHandler cgi-script .pl
        AllowOverride All
        Order Allow,Deny
        Allow from all
    </Directory>

    SSLEngine on
    SSLCertificateFile      /home/alex/ssl/alexschroeder.crt
    SSLCertificateKeyFile   /home/alex/ssl/alexschroeder.key
    SSLCertificateChainFile /home/alex/ssl/GandiStandardSSLCA2.pem
    SSLVerifyClient None

    ProxyPass /wiki
    ProxyPass /mojo
</VirtualHost>


Toadfarm setup in ~/farm/farm:

#!/usr/bin/env perl
use Toadfarm -init;

logging {
  combined => 1,
  file     => "farm.log",
  level    => "info",
};

# other apps go here...

mount "$farm/" => {
  "Host" => qr{^(www.)?alexschroeder\.ch$},
  mount_point => '/wiki',
};

plugin "Toadfarm::Plugin::AccessLog";

start; # needs to be at the last line

The above means that all /wiki requests for the domain, with or without the www prefix, will be handled by Oddmuse.

Mojolicious wrapper in ~/farm/

#! /usr/bin/env perl

use Mojolicious::Lite;

plugin CGI => {
  support_semicolon_in_query_string => 1,
};

plugin CGI => {
  route => '/',
  script => '', # ~/farm/
  env => {WikiDataDir => '/home/alex/alexschroeder'},
  errlog => 'alexschroeder.log', # path to where STDERR from cgi script goes
};

app->start; # end of the Mojolicious::Lite app

In this case, the wrapped script is simply the original Oddmuse script, unchanged. :)


Comments on 2015-10-09 Oddmuse and Mojolicious

Cool! That’s interesting.

What about side effects? Like, if you forget to initialize some variable somewhere, what is going to happen? (Thinking about mod_perl and all stuff related to it)

Also, I miss one important thing from your report… What about page load time? How much faster is it right now, compared to the regular CGI script?

– AlexDaniel 2015-10-09 11:21 UTC

Alex Schroeder
I did not measure page load time. I also think that it keeps forking to run the script, so globals start empty every time? Not sure about the details. I haven’t seen any side effects, yet. Let me know if you see anything!

– Alex Schroeder 2015-10-09 11:39 UTC

Could you be so kind as to measure the load time?

– AlexDaniel 2015-10-09 11:50 UTC

Alex Schroeder
I think I was under the illusion that it was working. Now it is working. Hah! I had to rename my old wrapper script to realize that my ProxyPass instructions had been wrong.

Things to remember:

Check the config file for weird stuff.

# if ($ENV{SERVER_PORT} == 8080) {
#   $ScriptName  = '';
#   $FullUrl     = '';
# }

Make sure the user running toadfarm can write all the files in your data directory. In my case, the data directory belongs to www-data.alex but there were temp files that alex could not write.

– Alex Schroeder 2015-10-09 13:27 UTC

Alex Schroeder
Hm. With the current setup, I get more or less the same times in my browser’s network analysis whether I’m loading /wiki/About or /wiki2/About, which still goes through my old wrapper script. I get numbers between 200ms and 700ms. I do note that /wiki/About keeps getting me a 200 OK response instead of a 304 NOT MODIFIED. I’m not quite sure what the problem is.

– Alex Schroeder 2015-10-09 13:40 UTC

Alex Schroeder
Oh, and the code shows that our script is being exec’d. I’m not sure about the headers. I think if our script prints HTTP headers, we should be fine.

– Alex Schroeder 2015-10-09 13:47 UTC

Well, at least it seems to work right now. I think I got confused about the various restarts required.

The missing 304 NOT MODIFIED remains a mystery, however. I’ve tried $q = new CGI; $q->nph(1) in the config file, $CGI::NPH = 1 in the config file, CGI::nph(1) in the config file, use CGI qw/-utf8 -nph/ in the core script, and print $q->header(-nph=>1, -status=>'304 NOT MODIFIED') and return if PageFresh(); in the core script, reloading my toadfarm between edits, and found no change.

When I tried to export MOJO_PLUGIN_CGI_DEBUG=1 before starting the toadfarm, I got a 503 error when visiting the site.

This is very annoying.

– AlexSchroeder 2015-10-09 14:32 UTC

OK, so the question is: How often is 304 NOT MODIFIED actually used?

alex@kallobombus:~/farm$ sudo wc -l /var/log/apache2/access.log.1
259852 /var/log/apache2/access.log.1
alex@kallobombus:~/farm$ sudo grep ^oddmuse /var/log/apache2/access.log.1 | wc -l
alex@kallobombus:~/farm$ sudo grep "^oddmuse.*GET /wiki.* 304 " /var/log/apache2/access.log.1 | grep -v feed | wc -l

Filtering out feed requests, we’re down to 8‰.

For my own site:

alex@kallobombus:~/farm$ sudo grep ^alexschroeder /var/log/apache2/access.log.1 | wc -l
alex@kallobombus:~/farm$ sudo grep "^alexschroeder.*GET /wiki.* 304 " /var/log/apache2/access.log.1 | grep -v feed | wc -l

This reduces the percentage to 9‰.

So, if we redirect feed requests to files and use a cron job to produce some of them, the 304 NOT MODIFIED response seems to be something we can do without, even if I think it ought to work.
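
For illustration, here is a rough Perl sketch of the same per-mille calculation, assuming the vhost-prefixed access log format used in the grep commands above (the script name and the regexes are my own, not part of the setup):

#!/usr/bin/perl
use strict;
use warnings;

# Count non-feed GET /wiki requests for this vhost that got a 304.
my ($total, $not_modified) = (0, 0);
while (my $line = <>) {
  next unless $line =~ /^alexschroeder/;   # this vhost only
  $total++;
  $not_modified++ if $line =~ m{GET /wiki.* 304 } and $line !~ /feed/;
}
printf "%d of %d requests answered with 304 (%.0f per mille)\n",
  $not_modified, $total, $total ? 1000 * $not_modified / $total : 0;

Run it as, say, perl ratio.pl /var/log/apache2/access.log.1.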

– AlexSchroeder 2015-10-09 14:54 UTC

In other words, no noticeable performance boost? Do I read it right? But then why bother?

– AlexDaniel 2015-10-09 21:26 UTC

Hehe, indeed. The point is, however, that I want some sort of plan that will allow me to incrementally develop a solution. Mojolicious::Plugin::CGI does a lot of what I want. It integrates into Mojolicious and thus it can do logging, forking, and so on. It handles input and output. The only thing that’s missing is that it exec’s the script – loading Perl and compiling all the modules on every request.

I think I’ll try and modify the Plugin for my purposes. Then it’ll require Oddmuse once and just run DoWikiRequest on every request, mod_perl style.

– AlexSchroeder 2015-10-10 07:40 UTC

These are the changes I had to make to Mojolicious::Plugin::CGI: “Add run option for a code reference”, “Make sure the status code is actually used”, and “Add HTTP_IF_NONE_MATCH to environment”.

This is the new server script I used:

#! /usr/bin/env perl

use lib '/Users/alex/src/mojolicious-plugin-cgi/lib';

use Mojolicious::Lite;

plugin CGI => {
  support_semicolon_in_query_string => 1,
};

plugin CGI => {
  route => '/wiki',
  script => '',
  run => \&OddMuse::DoWikiRequest,
  before => sub {
    $OddMuse::RunCGI = 0;
    $OddMuse::DataDir = '/tmp/oddmuse';
    require '' unless defined &OddMuse::DoWikiRequest;
  },
  env => {},
  errlog => 'wiki.log', # path to where STDERR from cgi script goes
};

get '/' => sub {
  my $self = shift;
  ...                   # rest of the handler not shown here
};

app->start;
– AlexSchroeder 2015-10-11 06:51 UTC

With the above setup now installed on this site, I’m getting response times of 150–230ms when looking at /wiki/About (and a 304 Not Modified response). The old CGI setup under /wiki2/About is giving response times of 250–720ms.

So, much better?

I’ve also started noticing the first signs of trouble: Edit page, save, look at it (looks good), edit page again and notice that it’s showing the old text, not the new text.

– AlexSchroeder 2015-10-11 10:28 UTC

When I tried it with this page, I saw my browser requesting the page using the following:

GET /wiki?action=edit;id=Comments_on_2015-10-09_Oddmuse_and_Mojolicious HTTP/1.1

Oddmuse replied with a 304 Not Modified and the content was wrong. As I checked, however, the timestamp of the index file does in fact match:

alex@kallobombus:~/alexschroeder$ date -r pageidx +%s

Looking at the log of recent changes, however:

alex@kallobombus:~/alexschroeder$ tail -n 1 rc.log
1444559479Comments_on_2015-10-09_Oddmuse_and_Mojolicious1With the above setup now installed on, I'm getting response times of 150–230ms when looking at wikiAbout (and a 304 Not Modified response). So, much better?

I guess something about TouchIndexFile isn’t working.

sub TouchIndexFile {
  my $ts = time;
  utime $ts, $ts, $IndexFile;
  $LastUpdate = $Now = $ts;
}

I’ve made sure that all the files belong to alex.alex instead of www-data.alex. And that seems to help. After saving:

alex@kallobombus:~/alexschroeder$ date -r pageidx +%s
alex@kallobombus:~/alexschroeder$ tail -n 1 rc.log
1444560771Comments_on_2015-10-09_Oddmuse_and_MojoliciousWhen I tried it with this page, I saw my browser requesting the page using the following: {{{ GET wiki?action=edit;id=Commentson2015-10-09OddmuseandMojolicious HTTP1.1 … If-None-Match: }}} Oddmuse replied with a 304 Not Modified and the content…

And the next request:

GET /wiki?action=edit;id=Comments_on_2015-10-09_Oddmuse_and_Mojolicious HTTP/1.1

HTTP/1.1 200 OK


– AlexSchroeder 2015-10-11 10:52 UTC

OK, added one final patch to make it work without having to use the -nph option for the CGI library and submitted a pull request. And I installed it for this site as well. “Eat your own dog food,” and all that.

– AlexSchroeder 2015-10-11 12:41 UTC

Wow, many more commits after talking to the maintainer.

– AlexSchroeder 2015-10-11 16:14 UTC


2015-09-16 Mojolicious

Yesterday I needed to write a little prototype of a site that would allow users to spell check their documents. The idea being that we want to add new dictionaries to the site, demo it to the right owners and thus secure an open license for their word lists. This is for an association interested in providing tech support for minority languages called Korero.

I used the Text::Hunspell library and the Hunspell dictionaries available from Debian. Yay!
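
For readers who haven’t used it, here is a minimal sketch of what spell checking with Text::Hunspell looks like. The dictionary paths are assumptions (Debian ships them under /usr/share/hunspell); this is not the actual prototype code:

#!/usr/bin/perl
use strict;
use warnings;
use Text::Hunspell;

# Load the affix and dictionary files (paths are assumptions).
my $speller = Text::Hunspell->new(
  '/usr/share/hunspell/en_US.aff',
  '/usr/share/hunspell/en_US.dic');
die "Could not load the dictionary\n" unless $speller;

for my $word (qw(hello helo)) {
  if ($speller->check($word)) {
    print "$word: ok\n";
  } else {
    print "$word: did you mean ", join(', ', $speller->suggest($word)), "?\n";
  }
}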

Oddmuse is my Perl 5 project and has been since 2003. At the time, all we needed was a CGI script. These days, however, people often seem to use their own web servers that are part of their application. And so I decided to take a look at Mojolicious. And it was simple to use and easy to get started. I like it!

More info on Github.


  1. add mobile support, the CSS currently depends on the :hover attribute
  2. better menu placement, the CSS currently just uses position: absolute which places the menu at the left edge (this might require Javascript)
  3. the menu should replace the text in question
  4. a JSON interface to allow other apps to do their spell checking here

At the same time, we’re working on a Hunspell dictionary for one of the many Rumantsch idioms in Switzerland.



2015-01-13 Handwritten Optimization

Oddmuse datafiles are stored in a simple key-value pair format similar to email headers:

key1: some value
key2: a very long value
        continued on the next line
        (with a TAB).

(Learn more…)

I used to have this straightforward, regular-expression-based code to parse this data:

sub ParseData {
  my $data = shift;
  my %result;
  while ($data =~ /(\S+?): (.*?)(?=\n[^ \t]|\Z)/sg) {
    my ($key, $value) = ($1, $2);
    $value =~ s/\n\t/\n/g;
    $result{$key} = $value;
  }
  return %result;
}

In 2006, I found the code was really slow and so it was rewritten.

sub ParseData {      # called a lot during search, so it was optimized
  my $data = shift;   # by eliminating non-trivial regular expressions
  my %result;
  my $end = index($data, ': ');
  my $key = substr($data, 0, $end);
  my $start = $end += 2;           # skip ': '
  while ($end = index($data, "\n", $end) + 1) { # include \n
    next if substr($data, $end, 1) eq "\t";     # continue after \n\t
    $result{$key} = substr($data, $start, $end - $start - 1); # strip last \n
    $start = index($data, ': ', $end); # starting at $end begins the new key
    last if $start == -1;
    $key = substr($data, $end, $start - $end);
    $end = $start += 2;   # skip ': '
  }
  $result{$key} .= substr($data, $end, -1); # strip last \n
  $result{$_} =~ s/\n\t/\n/g foreach (keys %result);
  return %result;
}

Around 2014/2015 I moved Emacs Wiki to a new host and discovered that this implementation was very slow for large files. I moved back to the old implementation from before 2006 and it was fast again.

Can you explain this? Did regular expression parsing improve dramatically? The new host runs Perl v5.18.2. The old host runs Perl v5.14.2.

OK, I tested the following implementations. For whatever reasons, the version that seemed so fast in 2006 is a hundred times slower than other approaches!

sub ParseData1 {      # called a lot during search, so it was optimized
  my $data = shift;   # by eliminating non-trivial regular expressions
  my %result;
  my $end = index($data, ': ');
  my $key = substr($data, 0, $end);
  my $start = $end += 2;           # skip ': '
  while ($end = index($data, "\n", $end) + 1) { # include \n
    next if substr($data, $end, 1) eq "\t";     # continue after \n\t
    $result{$key} = substr($data, $start, $end - $start - 1); # strip last \n
    $start = index($data, ': ', $end); # starting at $end begins the new key
    last if $start == -1;
    $key = substr($data, $end, $start - $end);
    $end = $start += 2;   # skip ': '
  }
  $result{$key} .= substr($data, $end, -1); # strip last \n
  $result{$_} =~ s/\n\t/\n/g foreach (keys %result);
  return %result;
}

sub ParseData2 {
  my $data = shift;
  my %result;
  while ($data =~ /(\S+?): (.*?)(?=\n[^ \t]|\Z)/sg) {
    my ($key, $value) = ($1, $2);
    $value =~ s/\n\t/\n/g;
    $result{$key} = $value;
  }
  return %result;
}

sub ParseData3 {
  my @lines = split(/\n/, shift);
  my ($key, $value, %result);
  for my $line (@lines) {
    if ($line =~ /(\S+?): (.*)/) {
      $result{$key} = $value if $key;
      ($key, $value) = ($1, $2 . "\n");
    } elsif (substr($line, 0, 1) eq "\t") {
      $value .= substr($line, 1) . "\n";
    } else {
      die "Format error after key $key\n";
    }
  }
  $result{$key} = $value if $key;
  return %result;
}

use Time::HiRes qw/time/;

my $file = '/home/alex/emacswiki/page/';
my $n = 1;
my $now = time;
my $data;

if (open(IN, '<:utf8', $file)) {
  local $/ = undef; # Read complete files
  $data = <IN>;
  close IN;
} else {
  die "Cannot open $file: $!\n";
}

printf "Reading file: %.4f\n", time - $now;
printf "Lines read: %d\n", scalar(() = $data =~ /\n/g);

$now = time;
for my $i (1 .. $n) {
  ParseData1($data);
}
printf "ParseData1 $n times: %.4f\n", time - $now;

$n = 100;

$now = time;
for my $i (1 .. $n) {
  ParseData2($data);
}
printf "ParseData2 $n times: %.4f\n", time - $now;

$now = time;
for my $i (1 .. $n) {
  ParseData3($data);
}
printf "ParseData3 $n times: %.4f\n", time - $now;

The result:

alex@kallobombus:~$ perl
Reading file: 0.0012
Lines read: 23334
ParseData1 1 times: 20.8207
ParseData2 100 times: 15.9124
ParseData3 100 times: 15.6762
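
For what it’s worth, the core Benchmark module can run the same comparison with less boilerplate. A sketch, assuming the three subroutines above are loaded and $data holds the raw page file (the -1 means “run each candidate for at least one CPU second”):

use Benchmark qw(cmpthese);

cmpthese(-1, {
  handwritten => sub { ParseData1($data) },
  regex       => sub { ParseData2($data) },
  linewise    => sub { ParseData3($data) },
});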

Tags: RSS

Comments on 2015-01-13 Handwritten Optimization

A few optimizations were made in Perl 5.16. I don’t know if they affected this though.

– Anonymous 2015-01-21 15:50 UTC


2014-04-26 Down the rabbit hole of software installing

It all started with this:

alex@Megabombus:~$ inkscape
dyld: Library not loaded: /usr/local/lib/libpng15.15.dylib
  Referenced from: /usr/local/bin/inkscape
  Reason: image not found
Trace/BPT trap: 5

It seemed I had 1.5 and 1.6 in the Homebrew Cellar, but only 1.6 was installed. I tried to have both libraries linked but missed this hint:

ln -s /usr/local/Cellar/libpng/1.5.18/lib/libpng15.15.dylib /usr/local/lib/libpng15.15.dylib

(It would have been 1.5.17 in my case.)

Instead I deleted both versions of libpng and hoped that reinstalling all the stuff would work.

brew update && brew upgrade

But look at this:

alex@Megabombus:~$ brew reinstall inkscape
==> Reinstalling inkscape
intltool: Unsatisfied dependency: XML::Parser
Homebrew does not provide Perl dependencies; install with:
  cpan -i XML::Parser
Error: An unsatisfied requirement failed this build.

Now we’re switching from Homebrew issues to Perl issues!

alex@Megabombus:~$ cpanm XML::Parser
--> Working on XML::Parser
Fetching ... OK
Configuring XML-Parser-2.41 ... OK
Building and testing XML-Parser-2.41 ... FAIL
! Installing XML::Parser failed. See /Users/alex/.cpanm/work/1398513458.18494/build.log for details. Retry with --force to force install it.
alex@Megabombus:~$ less /Users/alex/.cpanm/work/1398513458.18494/build.log

Looking at the log file:

PERL_DL_NONLAZY=1 /Users/alex/perl5/perlbrew/perls/perl-5.18.1/bin/perl "-MExtUtils::Command::MM" "-e" "test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
Can't load '/Users/alex/.cpanm/work/1398513458.18494/XML-Parser-2.41/blib/arch/auto/XML/Parser/Expat/Expat.bundle' for module XML::Parser::Expat: dlopen(/Users/alex/.cpanm/work/1398513458.18494/XML-Parser-2.41/blib/arch/auto/XML/Parser/Expat/Expat.bundle, 2): no suitable image found.  Did find:
        /Users/alex/.cpanm/work/1398513458.18494/XML-Parser-2.41/blib/arch/auto/XML/Parser/Expat/Expat.bundle: mach-o, but wrong architecture at /Users/alex/perl5/perlbrew/perls/perl-5.18.1/lib/5.18.1/darwin-2level/ line 194.
 at /Users/alex/.cpanm/work/1398513458.18494/XML-Parser-2.41/blib/lib/XML/ line 18.
Compilation failed in require at /Users/alex/.cpanm/work/1398513458.18494/XML-Parser-2.41/blib/lib/XML/ line 18.

WTF “wrong architecture”? Now I’m thinking of our recent upgrade of the laptop to OSX Mavericks. Oh, and before that I migrated my account from the old Mac Mini using Mac OS 10.6.8 to this laptop… Considering that we’re using Perlbrew and that the resulting Perl is stored in my home directory, this might indeed be an issue.

Thus, deeper into the rabbit hole we go!

alex@Megabombus:~$ perlbrew install perl-stable
Fetching perl 5.18.2 as /Users/alex/perl5/perlbrew/dists/perl-5.18.2.tar.bz2
Download to /Users/alex/perl5/perlbrew/dists/perl-5.18.2.tar.bz2
Installing /Users/alex/perl5/perlbrew/build/perl-5.18.2 into ~/perl5/perlbrew/perls/perl-5.18.2

I think I will go and take some pretty pictures in the meantime.

In the end, everything seems to be ready.

alex@Megabombus:~$ cpanm XML::Parser
! Can't write to /Library/Perl/5.16 and /usr/local/bin: Installing modules to /Users/alex/perl5
! To turn off this warning, you have to do one of the following:
!   - run me as a root or with --sudo option (to install to /Library/Perl/5.16 and /usr/local/bin)
!   - Configure local::lib in your existing shell to set PERL_MM_OPT etc.
!   - Install local::lib by running the following commands
!         cpanm --local-lib=~/perl5 local::lib && eval $(perl -I ~/perl5/lib/perl5/ -Mlocal::lib)
XML::Parser is up to date. (2.41)
alex@Megabombus:~$ cpanm --local-lib=~/perl5 local::lib && eval $(perl -I ~/perl5/lib/perl5/ -Mlocal::lib)
--> Working on local::lib
Fetching ... OK
Configuring local-lib-2.000011 ... OK
==> Found dependencies: ExtUtils::MakeMaker
--> Working on ExtUtils::MakeMaker
Fetching ... OK
Configuring ExtUtils-MakeMaker-6.96 ... OK
Building and testing ExtUtils-MakeMaker-6.96 ... OK
Successfully installed ExtUtils-MakeMaker-6.96 (upgraded from 6.63_02)
Building and testing local-lib-2.000011 ... OK
Successfully installed local-lib-2.000011
2 distributions installed

I did notice something strange, though. When I opened a new terminal:

ERROR: The installation "perl-5.18.1" is unknown.

I had to manually change perl-5.18.1 to perl-5.18.2 in my ~/.perlbrew/init file:

export PERLBREW_MANPATH="/Users/alex/perl5/perlbrew/perls/perl-5.18.2/man"
export PERLBREW_PATH="/Users/alex/perl5/perlbrew/bin:/Users/alex/perl5/perlbrew/perls/perl-5.18.2/bin"
export PERLBREW_PERL="perl-5.18.2"
export PERLBREW_ROOT="/Users/alex/perl5/perlbrew"
export PERLBREW_VERSION="0.66"

Yeah, ignore the comment on the first line. I wonder why perlbrew didn’t do this automatically when I uninstalled 5.18.1 and installed 5.18.2. Oh well.

And now, Inkscape!

alex@Megabombus:~$ brew reinstall inkscape
==> Reinstalling inkscape
==> Installing inkscape dependency: boost-build
==> Downloading
######################################################################## 100.0%
==> ./
🍺  /usr/local/Cellar/boost-build/1.55.0: 269 files, 3.0M, built in 65 seconds
==> Installing inkscape
==> Downloading
Already downloaded: /Library/Caches/Homebrew/inkscape-0.48.4.tar.gz
==> ./configure --prefix=/usr/local/Cellar/inkscape/0.48.4 --enable-lcms --disable-poppler-cairo
==> make install
collect2: ld returned 1 exit status
make[1]: *** [inkview] Error 1
make[1]: *** Waiting for unfinished jobs....
make[1]: *** [inkscape] Error 1
make: *** [install-recursive] Error 1


These open issues may also help:
inkscape fails on 10.9.2 (


OK, following the instructions of mistydemeo:

alex@Megabombus:~$ rm -rf /Library/Caches/Homebrew/inkscape--bzr
alex@Megabombus:~$ brew install inkscape --HEAD
Warning: inkscape-HEAD already installed
alex@Megabombus:~$ brew uninstall inkscape
Uninstalling /usr/local/Cellar/inkscape/HEAD...
alex@Megabombus:~$ brew install inkscape --HEAD
==> Cloning lp:inkscape/0.48.x
Not checking SSL certificate for
You have not informed bzr of your Launchpad ID, and you must do this to
write to Launchpad or access private data.  See "bzr help launchpad-login".
==> ./ / Build phase:Apply phase:adding file 280/282                                                    
==> ./configure --prefix=/usr/local/Cellar/inkscape/HEAD --enable-lcms --disable-poppler-cairo
==> make install
🍺  /usr/local/Cellar/inkscape/HEAD: 853 files, 81M, built in 22.6 minutes

Testing… YES!! 😺 👍



2012-07-20 Perl and UTF-8

I maintain a wiki engine called Oddmuse. It’s the software used to run my blog, for example. It is written in an older scripting language called Perl. Perl predates Unicode. That’s why the use of UTF-8 or UTF-16 is not mandated. That, in turn, means that input is usually treated as bytes, and an UTF-8 encoded character is only visible as two bytes.

Consider this regular expression to match WikiWords: [A-Z][a-z]+[A-Z][a-z]+

How would you extend it to parse ÖlPlattform?

Assume the following Perl code was written in a source file that was UTF-8 encoded:

$str = "OelPlattform";
print "OelPlattform YES\n" if $str =~ /[[:upper:]][[:lower:]]+[[:upper:]]\w+/;
$str = "ÖlPlattform";
print "ÖlPlattform YES\n" if $str =~ /[[:upper:]][[:lower:]]+[[:upper:]]\w+/;

This will just print OelPlattform YES because what looks like “ÖlPlattform” actually starts with the bytes C3 96, and C3 is not an upper case letter. It’s actually unclear what it is. In a Latin-1 environment the C3 would print as Ã, the dreaded sign of encoding errors!

I wanted to keep Oddmuse encoding agnostic. Users could specify a different encoding which would be served together with the page HTML such that they could have wikis using GB 2312. This is why Oddmuse contained the following line and similar code:

# we treat input and output as bytes
eval { local $SIG{__DIE__}; binmode(STDOUT, ":raw"); };

This resulted in problems when some packages I was using did in fact produce UTF-8 and so I had to use code as follows:

eval { local $SIG{__DIE__}; binmode(STDOUT, ":utf8"); } if $HttpCharset eq 'UTF-8';
print RSS($3 ? $3 : 15, split(/\s+/, UnquoteHtml($4)));
eval { local $SIG{__DIE__}; binmode(STDOUT, ":raw"); };

I’m not sure why I surrounded it all with an eval—I assume it was to support an older version of Perl but I’m not sure.

Ok, so I wanted to get rid of all that.

The solution seems deceptively simple: add use utf8; to the source files and open all files using the UTF-8 encoding layer.
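
A sketch of the earlier example with that change applied, assuming the source file itself is saved as UTF-8: with use utf8; the string literal is decoded into characters, so the POSIX classes match Unicode letters and ÖlPlattform is recognized.

use utf8;                  # string literals in this file are UTF-8
binmode(STDOUT, ':utf8');  # so we may print wide characters

my $str = "ÖlPlattform";
print "ÖlPlattform YES\n"
  if $str =~ /[[:upper:]][[:lower:]]+[[:upper:]]\w+/; # now it matches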

When printing UTF-8 to STDOUT, you need to tell Perl that STDOUT can in fact handle multi-byte characters. Since the HTML produced is UTF-8 encoded, I know that this is true. If you don’t, you’ll get “wide character in print” warnings.

binmode(STDOUT, ':utf8');

You need to be careful with all input and output, however.

open(F, '<:encoding(UTF-8)', $RcFile);

The same is true for output:

open(OUT, '>:encoding(UTF-8)', $file)
  or ReportError(Ts('Cannot write %s', $file) . ": $!", '500 INTERNAL SERVER ERROR');

Oddmuse also offers the ability to include other pages (Transclusion) and to produce feeds. This can be a problem. The default page processing is to parse the raw text and start printing HTML as soon as possible, because I have always felt that it was more expedient to start printing the top of the page while the rest was still being parsed. What happens when I don’t want to do this, e.g. when I’m in the middle of building the RSS feed?

The solution I had been using was to redirect STDOUT to a variable. Perl calls this a “memory file.” The problem is the encoding of this memory file:

Here’s what I had to write:

open(STDOUT, '>', \$page) or die "Can't open memory file: $!";
binmode(STDOUT, ":utf8");
# ... print the page into $page, then:
utf8::decode($page); # turn the UTF-8 bytes back into characters

I think this works because binmode tells all the print instructions that it’s ok to print multi-byte characters and utf8::decode makes sure that all those bytes are in fact decoded back to Perl’s internal representation.

Then I discovered that I needed to look at the bytes if I wanted to URL-encode strings:

utf8::encode($str); # turn to byte string
my @letters = split(//, $str);
my %safe = map {$_ => 1} ('a' .. 'z', 'A' .. 'Z', '0' .. '9', '-', '_', '.', '!', '~', '*', "'", '(', ')', '#');
foreach my $letter (@letters) {
  $letter = sprintf("%%%02x", ord($letter)) unless $safe{$letter};
}

Now that I’m looking at the above I wonder what sort of bugs I’m introducing with the inverse operation that I haven’t changed:

$str =~ s/%([0-9a-f][0-9a-f])/chr(hex($1))/ge;

I feel that this requires a call to utf8::decode when done! Strangely enough, none of my tests have picked this up.

(Actually I think I know why I haven’t stumbled across this problem: I only use the function to decode the Cookie, and all the functions accessing the cookie go through an extra encoding/decoding step that would not be necessary if I had fixed the URL-decoding function.)
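
To make the point concrete, here is a sketch of what a fixed URL-decoding helper might look like. This is a hypothetical function, not the actual Oddmuse code:

sub UrlDecode {
  my $str = shift;
  $str =~ s/%([0-9a-f][0-9a-f])/chr(hex($1))/ge; # %xx back to bytes
  utf8::decode($str);                            # bytes back to characters
  return $str;
}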

Another problem I stumbled upon: directories. Directories often ended up Latin-1 encoded.

return if -d $newdir;
mkdir($newdir, 0775)
  or ReportError(Ts('Cannot create %s', $newdir) . ": $!", '500 INTERNAL SERVER ERROR');

The reason I didn’t discover I had the same problem with filenames was that I’m using a compatibility layer on my Mac when I do my development. The Mac uses UTF-8 NFD instead of UTF-8 NFC, which is the standard on the web. Thus, if you take the bytes encoding a filename from the web and create the file, or if you go the other way, you have a problem. I store the index of all pages in a file. When a new page is created, I get the page name (NFC encoded) from the web and store it in that file. When I read the file, the content contains the NFC bytes, and with these I cannot find the NFD encoded file (because the filesystem changed the encoding as it wrote the file). I hated it so much. Thus, the Mac compatibility layer does an extra encoding and decoding step to get everything from NFD to NFC, and that protected me from this error.
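
A small illustration of the NFC/NFD mismatch, using the core Unicode::Normalize module (this sketch is mine, not part of Oddmuse):

use utf8;
use Unicode::Normalize qw(NFC NFD);

my $nfc = "Öl";        # NFC: one precomposed Ö
my $nfd = NFD($nfc);   # NFD: O followed by a combining diaeresis

printf "NFC length: %d\n", length($nfc);              # 2
printf "NFD length: %d\n", length($nfd);              # 3
print "equal? ", ($nfc eq $nfd ? "yes" : "no"), "\n"; # no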

As soon as I installed it on my sites, however—they all use Debian and ext3 filesystems, I think—I had a problem.

The necessary fix:

if (open(IN, '<:encoding(UTF-8)', $file)) {
  local $/ = undef;   # Read complete files
  my $data = <IN>;
  close IN;
  return (1, $data);
}


open(OUT, '>:encoding(UTF-8)', $file)
  or ReportError(Ts('Cannot write %s', $file) . ": $!", '500 INTERNAL SERVER ERROR');
print OUT  $string;

Another stumbling block was that the non-breaking space was no longer just a byte sequence like any other, namely C2 A0. Perl suddenly recognized it as whitespace! This is a problem if a path contains non-breaking spaces! The old code translated spaces to underscore characters, so that wasn’t really a possibility. But whenever I had been “smart” and used a non-breaking space, I now had a problem. The glob function splits its arguments on whitespace. Where there was one pattern, I now had two broken patterns!

Here’s an example:

glob(GetKeepDir(shift) . '/*.kp'); # files such as,, etc.

Here’s another example:

foreach (glob("$PageDir/*/*.pg $PageDir/*/.*.pg"))

The solution is to use File::Glob ':glob' and replace every occurrence of glob with bsd_glob. Wow, my application was very much unsuited to filenames containing whitespace and I hadn’t even realized it!

foreach (bsd_glob("$PageDir/*/*.pg"), bsd_glob("$PageDir/*/.*.pg"))

Remember the regular expression to detect wiki words I used at the top? This was the actual regular expression I had been using:

$WikiWord = '[A-Z]+[a-z\x80-\xff]+[A-Z][A-Za-z\x80-\xff]*';

Essentially, wiki words only worked if the first letter was an ASCII upper case letter.

At first, I switched this to the following regular expression (trying to minimize changes):

$WikiWord = '[A-Z]+[a-z\x{0080}-\x{ffff}]+[A-Z][A-Za-z\x{0080}-\x{ffff}]*';

It turns out that Perl 5.8 chokes on this regular expression, however. FFFE and FFFF are noncharacters. I had to change the regular expression.

$WikiWord = '[A-Z]+[a-z\x{0080}-\x{fffd}]+[A-Z][A-Za-z\x{0080}-\x{fffd}]*'; # exclude noncharacters FFFE and FFFF
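
As an aside, on newer Perls one could arguably sidestep the explicit code point ranges with Unicode character properties. A sketch, not what Oddmuse actually uses (and not exactly equivalent, since it excludes nothing and also allows a non-ASCII first letter):

use utf8;

my $WikiWord = qr/\p{Uppercase_Letter}+\p{Lowercase_Letter}+\p{Uppercase_Letter}\p{Alphabetic}*/;
print "matches\n" if "ÖlPlattform" =~ /^$WikiWord$/;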

I’m sure this list isn’t complete but I’m sure it’s long enough to illustrate my main point: this is painful. It’s HTML quoting all over again.


Comments on 2012-07-20 Perl and UTF-8

Things to do for Oddmuse:

  1. fix that regular expression
  2. fix that cookie encoding issue

– Alex

Can it recognize new WikiWords such as “ÖlPlattform”, thanks to the changed regular expression? As far as I understand it, does it need changes to those regexes and to the way strings (from URLs, for instance) and files are read and written?

JuanmaMP 2012-07-22 01:16 UTC

Actually, I think that a simple change of the regular expressions is all that is needed. :)

AlexSchroeder 2012-07-22 05:11 UTC


2011-10-07 Oddmuse, Venus and Perl

I’ve been working on a submission form for the Old School RPG Planet. Today I added another little feature. This is how I like to develop code. No time pressure. One little step at a time. Keep polishing it.

The planet uses Planet Venus to collect the RSS and Atom feeds of many of the Old School RPG blogs out there. Planet Venus allows you to get the list of feeds via a URL. I’m hosting the list of feeds on Campaign Wiki itself (raw format). As you can see, the format doesn’t look nice.

The thing I did, therefore, was to write a script that makes it easy for people who are not into the technical details to submit new blogs. It also makes it easier for me to submit new blogs!

The things it handles:

  • If you submit an invalid URL, it will prepend http:// and try again.
  • If it looks like we already have a similar looking feed on our page, it requires a confirmation by the user.
  • If you submit a web page, it will look for alternative links with MIME types application/rss+xml, application/atom+xml, application/xml (yeah) and text/xml (just making sure) and allow the user to pick one of them.
  • If you submitted a feed directly instead of a web page, it uses that instead.
  • If the feed you picked is served with an invalid content type, it is rejected.
  • It extracts the title of the feed and adds it to the wiki page, sorting all the entries alphabetically.

I think it’s pretty cool.

If you look at the interface, you’ll note that it has a link to its own source code. I love this little Perl trick:

  1. Add __DATA__ at the end of the source file. Usually you would add actual data at the end. The script could read it using the DATA file handle.
  2. Serve source code using seek DATA, 0, 0; print "Content-type: text/plain; charset=UTF-8\r\n\r\n", <DATA>; This resets the current position of the DATA file handle to the beginning of the source file. Tadaa! :)
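
Here is a minimal, self-contained sketch of the trick (a hypothetical script, not the submission form itself):

#!/usr/bin/perl
use strict;
use warnings;
use CGI qw(:standard);

if (path_info() eq '/source') {
  seek DATA, 0, 0;   # rewind DATA to the very top of this file
  print "Content-type: text/plain; charset=UTF-8\r\n\r\n", <DATA>;
} else {
  print header(), "Nothing to see here.\n";
}

__DATA__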



2011-04-30 Map Drawing Using Polygons

I’m currently working on randomly generating islands using the ideas presented in Polygonal Map Generation by Amit. Check out his Flash demo! I am nowhere near as far yet. I’m writing my code in Perl and producing SVG output.

See below for source code used. I’d install it on a public server, but unfortunately there are quite some dependencies…


#! /usr/bin/perl -w
# Copyright (C) 2011  Alex Schroeder <>
# This program is free software: you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation, either version 3 of the License, or (at your option) any later
# version.
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
# You should have received a copy of the GNU General Public License along with
# this program. If not, see <>.

use strict;
use CGI qw(:standard);
use SVG;
use Math::Geometry::Voronoi;
use Class::Struct;
use Math::Fractal::Noisemaker;
use List::Util qw(min max);
use Data::Dumper;

my $points = 3000;
my $width  = 1000;
my $height =  550;
my $center_x = $width / 2;
my $center_y = $height / 2;
my $radius = 500;

my %color = (beach => '#a09077',
	     ocean => '#44447a',);

struct World => { points => '@',
                  centroids => '@',
                  voronoi => '$',
                  height => '@',
                };

sub add_random_points {
  my ($world) = @_;
  for (my $i = 0; $i < $points; $i++) {
    push(@{$world->points}, [rand($width), rand($height)]);
  }
  # print(join("\n", map {join(",", $_->[0], $_->[1])} @{$world->points}));
  return $world;
}

sub add_voronoi {
  my ($world) = @_;
  $world->voronoi(Math::Geometry::Voronoi->new(points => $world->points));
}

sub add_centroids {
  my ($world) = @_;
  $world->centroids([]); # clear
  foreach my $polygon ($world->voronoi->polygons) {
    push(@{$world->centroids}, centroid($polygon));
  }
}

sub centroid {
  my ($cx, $cy) = (0, 0);
  my $A = 0;
  my $polygon = shift;
  my ($point_index, @points) = @$polygon; # see Math::Geometry::Voronoi
  my $point = $points[$#points];
  my ($x0, $y0) = ($point->[0], $point->[1]);
  for $point (@points) {
    my ($x1, $y1) = ($point->[0], $point->[1]);
    $cx += ($x0 + $x1) * ($x0 * $y1 - $x1 * $y0);
    $cy += ($y0 + $y1) * ($x0 * $y1 - $x1 * $y0);
    $A += ($x0 * $y1 - $x1 * $y0);
    ($x0, $y0) = ($x1, $y1);
  }
  $A /= 2;
  $cx /= 6 * $A;
  $cy /= 6 * $A;
  return [$cx, $cy, $point_index];
}

sub add_height {
  my $world = shift;
  $Math::Fractal::Noisemaker::QUIET = 1;
  my $grid = Math::Fractal::Noisemaker::square();
  $world->height([]); # clear
  my $scale = max($height, $width); # grid is a square
  foreach my $point (@{$world->points}) {
    my $x = int($point->[0]*255/$scale);
    my $y = int($point->[1]*255/$scale);
    my $h = 0; # we must not skip any points!
    $h = $grid->[$x]->get($y) / 255
      unless $x < 0 or $y < 0 or $x > 255 or $y > 255;
    push(@{$world->height}, $h);
  }
}

sub raise_point {
  my ($world, $x, $y, $radius) = @_;
  my $i = 0;
  foreach my $point (@{$world->points}) {
    my $dx = $point->[0] - $x;
    my $dy = $point->[1] - $y;
    my $d = sqrt($dx * $dx + $dy * $dy);
    my $v = max(0, $world->height->[$i] - $d / $radius);
    $world->height($i, $v);
    $i++;
  }
}

sub svg {
  my $world = shift;
  my $svg = new SVG(-width => $width,
                    -height => $height, );
  foreach my $polygon ($world->voronoi->polygons) {
    my ($point_index, @points) = @$polygon; # see Math::Geometry::Voronoi
    my $x = $world->points->[$point_index]->[0];
    my $y = $world->points->[$point_index]->[1];
    next if $x < 0 or $y < 0 or $x > $width or $y > $height;
    my $z = int($world->height->[$point_index] * 255);
    my $color = $z == 0 ? $color{ocean} : "rgb($z,$z,$z)";
    my $path = join(",", map { map { int } @$_ } @points);
    $svg->polygon(points => $path,
                  fill => $color,
                  style => { 'stroke-width' => 1,
                             'stroke' => 'black'});
  }
  return $svg->xmlify();
}

sub response {
  print header(-type=>'image/svg+xml');
  print shift;
}

sub main {
  if (path_info eq '/source') {
    seek DATA, 0, 0;
    print "Content-type: text/plain; charset=UTF-8\r\n\r\n", <DATA>;
  } else {
    srand(param('seed') || time);
    my $world = new World;
    add_random_points($world, $points);
    for (my $i = 2; $i--; ) {
      # Lloyd Relaxation
    }
    # skip corner improvement
    # skip Delaunay triangulation
    raise_point($world, $center_x, $center_y, $radius);
    # draw
  }
}

main ();



2010-10-15 Web Standards Dream Bubble

I maintain the Old School RPG Planet. The list of feeds it manages is saved on a wiki page. I wanted to write a little script that will allow me to quickly add feeds to that list. And I did! There’s now a way to submit new feeds to the list instead of editing the wiki page.

The problem? The thing tries to parse web pages, trying to discover feed addresses. And that works well for sites that validate. But the two first Blogspot sites I tried each had over two hundred errors! Once the markup is borked, parsing doesn’t work, and thus feed discovery doesn’t work.

Now, if I need to work around broken markup, I start wondering why we tried to standardize HTML at all… What a glorious waste of time! In the end, we just treat it as tag soup anyway. >{

If you’re still interested in the source code, no problem. Lately all my CGI-scripts are able to spew forth their source code.

Unfortunately it is not complete, yet. It doesn’t update the wiki page. I didn’t bother once I realized that the entire parsing idea was not going to work. :'(

Update: Wohoo, replaced HTML and XML parsing with regular expression matching, wrote what I needed, and finished the script! [1] :)
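
For the curious, a rough sketch of what regex-based feed discovery can look like, along the lines of the MIME types listed above. This is hypothetical code, not the actual script:

#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;

my $url = shift @ARGV or die "Usage: $0 URL\n";
my $ua = LWP::UserAgent->new;
my $response = $ua->get($url);
die $response->status_line, "\n" unless $response->is_success;

# Look for <link rel="alternate" type="application/rss+xml" href="..."> tags
# without parsing the whole (possibly broken) document.
my @feeds;
for my $link ($response->decoded_content =~ /<link[^>]+>/gi) {
  next unless $link =~ /rel\s*=\s*["']alternate["']/i;
  next unless $link =~ m{type\s*=\s*["']application/(?:rss|atom)\+xml["']}i;
  push @feeds, $1 if $link =~ /href\s*=\s*["']([^"']+)["']/i;
}
print "$_\n" for @feeds;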


Comments on 2010-10-15 Web Standards Dream Bubble

Standardizing HTML is an excellent idea, and it works pretty well now. What doesn’t work is the attempt at only describing the behavior of a subset of possible markup, called for some reason “valid HTML”, while completely ignoring all the other possibilities that humans can produce and expect to work somehow. Fortunately, the HTML5 standard finally standardized the error handling, so you can finally be sure that no matter how inventive the author of the document, the information that you get from parsing it is the same for all the standard-conforming parsers you use.

Throwing errors at human-generated content is kind of a silly approach, especially when the human who created it is long gone and unable to correct the errors. It seems much easier to just assume that every input must mean something, even if you are risking that it’s not quite the same thing that the author had in mind. To be honest I am really surprised that Perl, which follows this philosophy itself somewhat, doesn’t have a forgiving HTML parser that you could use.

RadomirDopieralski 2010-10-16 19:00 UTC

Well, I need to check. Right now I am using a Perl module that uses libxml2 in the background. I did that because of XPath support. Perhaps switching to a SAX parser will help…

AlexSchroeder 2010-10-16 19:28 UTC

Wow, falling back to regular expressions and it actually seems to work! :)

AlexSchroeder 2010-10-16 22:59 UTC

The awesome answer on Stack Exchange notwithstanding:

You can’t parse [X]HTML with regex. Because HTML can’t be parsed by regex. Regex is not a tool that can be used to correctly parse HTML. As I have answered in HTML-and-regex questions here so many times before, the use of regex will not allow you to consume HTML. Regular expressions are a tool that is insufficiently sophisticated to understand the constructs employed by HTML. HTML is not a regular language and hence cannot be parsed by regular expressions. Regex queries are not equipped to break down HTML into its meaningful parts. so many times but it is not getting to me. Even enhanced irregular regular expressions as used by Perl are not up to the task of parsing HTML. You will never make me crack. HTML is a language of sufficient complexity that it cannot be parsed by regular expressions. Even Jon Skeet cannot parse HTML using regular expressions. Every time you attempt to parse HTML with regular expressions, the unholy child weeps the blood of virgins, and Russian hackers pwn your webapp. Parsing HTML with regex summons tainted souls into the realm of the living. HTML and regex go together like love, marriage, and ritual infanticide. The <center> cannot hold it is too late. The force of regex and HTML together in the same conceptual space will destroy your mind like so much watery putty. If you parse HTML with regex you are giving in to Them and their blasphemous ways which doom us all to inhuman toil for the One whose Name cannot be expressed in the Basic Multilingual Plane, he comes. HTML-plus-regexp will liquify the n​erves of the sentient whilst you observe, your psyche withering in the onslaught of horror. Rege̿̔̉x-based HTML parsers are the cancer that is killing StackOverflow it is too late it is too late we cannot be saved the trangession of a chi͡ld ensures regex will consume all living tissue (except for HTML which it cannot, as previously prophesied) dear lord help us how can anyone survive this scourge using regex to parse HTML has doomed humanity to an eternity of dread torture and security holes using regex as a tool to process HTML establishes a breach between this world and the dread realm of c͒ͪo͛ͫrrupt entities (like SGML entities, but more corrupt) a mere glimpse of the world of reg​ex parsers for HTML will ins​tantly transport a programmer’s consciousness into a world of ceaseless screaming, he comes, the pestilent slithy regex-infection wil​l devour your HT​ML parser, application and existence for all time like Visual Basic only worse he comes he comes do not fi​ght he com̡e̶s, ̕h̵i​s un̨ho͞ly radiańcé destro҉ying all enli̍̈́̂̈́ghtenment, HTML tags lea͠ki̧n͘g fr̶ǫm ̡yo​͟ur eye͢s̸ ̛l̕ik͏e liq​uid pain, the song of re̸gular exp​ression parsing will exti​nguish the voices of mor​tal man from the sp​here I can see it can you see ̲͚̖͔̙î̩́t̲͎̩̱͔́̋̀ it is beautiful t​he final snuffing of the lie​s of Man ALL IS LOŚ͖̩͇̗̪̏̈́T ALL I​S LOST the pon̷y he comes he c̶̮omes he comes the ich​or permeates all MY FACE MY FACE ᵒh god no NO NOO̼O​O NΘ stop the an​*̶͑̾̾​̅ͫ͏̙̤g͇̫͛͆̾ͫ̑͆l͖͉̗̩̳̟̍ͫͥͨe̠̅s ͎a̧͈͖r̽̾̈́͒͑e n​ot rè̑ͧ̌aͨl̘̝̙̃ͤ͂̾̆ ZA̡͊͠͝LGΌ ISͮ̂҉̯͈͕̹̘̱ TO͇̹̺ͅƝ̴ȳ̳ TH̘Ë͖́̉ ͠P̯͍̭O̚​N̐Y̡ H̸̡̪̯ͨ͊̽̅̾̎Ȩ̬̩̾͛ͪ̈́̀́͘ ̶̧̨̱̹̭̯ͧ̾ͬC̷̙̲̝͖ͭ̏ͥͮ͟Oͮ͏̮̪̝͍M̲̖͊̒ͪͩͬ̚̚͜Ȇ̴̟̟͙̞ͩ͌͝S̨̥̫͎̭ͯ̿̔̀ͅ </nowiki> – 3891 votes for not being able to parse HTML or XHTML with regex

AlexSchroeder 2010-10-19 11:01 UTC


2009-10-02 I hate the Perl SMTP libraries

At home I have Net::SMTP::TLS and Net::SMTP::SSL installed and I’ve managed to use both to send mail via my Google account.

On one of my hosting services, I have only Net::SMTP::SSL, and it just won’t work.

Debug output at home:

Net::SMTP::SSL>>> Net::SMTP::SSL(1.01)
Net::SMTP::SSL>>>   IO::Socket::SSL(1.24)
Net::SMTP::SSL>>>     IO::Socket::INET(1.31)
Net::SMTP::SSL>>>       IO::Socket(1.31)
Net::SMTP::SSL>>>         IO::Handle(1.28)
Net::SMTP::SSL>>>           Exporter(5.58)
Net::SMTP::SSL>>>   Net::Cmd(2.29)
Net::SMTP::SSL=GLOB(0x186fc04)<<< 220 ESMTP 24sm915314eyx.9
Net::SMTP::SSL=GLOB(0x186fc04)>>> EHLO localhost.localdomain
Net::SMTP::SSL=GLOB(0x186fc04)<<< at your service, []
Net::SMTP::SSL=GLOB(0x186fc04)<<< 250-SIZE 35651584
Net::SMTP::SSL=GLOB(0x186fc04)<<< 250-8BITMIME
Net::SMTP::SSL=GLOB(0x186fc04)<<< 250-AUTH LOGIN PLAIN
Net::SMTP::SSL=GLOB(0x186fc04)<<< 250 PIPELINING
Net::SMTP::SSL=GLOB(0x186fc04)>>> AUTH LOGIN
Net::SMTP::SSL=GLOB(0x186fc04)<<< 334 VXNlcm5hbWU6
Net::SMTP::SSL=GLOB(0x186fc04)>>> a2Vuc2FuYXRh
Net::SMTP::SSL=GLOB(0x186fc04)<<< 334 UGFzc3dvcmQ6
Net::SMTP::SSL=GLOB(0x186fc04)>>> VGgsYmFpZA==
Net::SMTP::SSL=GLOB(0x186fc04)<<< 235 2.7.0 Accepted
Net::SMTP::SSL=GLOB(0x186fc04)>>> MAIL FROM:<>

Notice the AUTH LOGIN command.

Debug output on my host:

Net::SMTP::SSL>>> Net::SMTP::SSL(1.01)
Net::SMTP::SSL>>>   IO::Socket::SSL(1.16)
Net::SMTP::SSL>>>     IO::Socket::INET(1.31)
Net::SMTP::SSL>>>       IO::Socket(1.30_01)
Net::SMTP::SSL>>>         IO::Handle(1.27)
Net::SMTP::SSL>>>           Exporter(5.62)
Net::SMTP::SSL>>>   Net::Cmd(2.29)
Net::SMTP::SSL=GLOB(0xa025520)<<< 220 ESMTP 10sm135225eyz.42
Net::SMTP::SSL=GLOB(0xa025520)>>> EHLO localhost.localdomain
Net::SMTP::SSL=GLOB(0xa025520)<<< at your service, []
Net::SMTP::SSL=GLOB(0xa025520)<<< 250-SIZE 35651584
Net::SMTP::SSL=GLOB(0xa025520)<<< 250-8BITMIME
Net::SMTP::SSL=GLOB(0xa025520)<<< 250-AUTH LOGIN PLAIN
Net::SMTP::SSL=GLOB(0xa025520)<<< 250 PIPELINING
Net::SMTP::SSL=GLOB(0xa025520)>>> MAIL FROM:<>
Net::SMTP::SSL=GLOB(0xa025520)<<< 530-5.5.1 Authentication Required. Learn more at                              
Net::SMTP::SSL=GLOB(0xa025520)<<< 530 5.5.1 10sm135225eyz.42

Notice the error: Authentication Required.

Why is the same script (I checked twice – I sure hope I’m not confusing anything) not using the AUTH LOGIN command?

I don’t understand.

  my $mail = new MIME::Entity->build(To => $from, # test!
				     From => $from,
				     Subject => 'Test Net::SMTP::SSL',
				     Path => $fh,
				     Type => "text/html");
  my $smtp = Net::SMTP::SSL->new($host, Port => 465,
				 Debug => 1);
  $smtp->auth($user, $password);
  $smtp->to($from); # test!

Source is available. [1]

Output of perl -MNet::SMTP::SSL -wle 'for (keys %INC) { next if m[^/]; $m = $_; $m =~ s[/][::]g; $m =~ s/\.pm$//; print "$m ", $m->VERSION || "<unknown>" }' as suggested on #perl:

At home:

Net::SSLeay 1.35
IO::Handle 1.28
List::Util 1.14
SelectSaver 1.00
IO::Socket 1.31
warnings 1.03
Symbol 1.05
Scalar::Util 1.14
IO::Socket::INET 1.31
Exporter 5.58
Errno 1.09
IO::Socket::SSL 1.24
warnings::register 1.00
XSLoader 0.02
Net::Config 1.11
Net::Cmd 2.29
utf8 1.04
Config <unknown>
IO 1.25
IO::Socket::UNIX 1.23
Carp 1.03
bytes 1.01
Exporter::Heavy 5.58
Net::SMTP 2.31
vars 1.01
strict 1.03
Net::SMTP::SSL 1.01
constant 1.04
Socket 1.77
AutoLoader 5.60
DynaLoader 1.05

Remote system:

Net::SSLeay 1.35
XSLoader 0.08
IO::Handle 1.27
warnings::register 1.01
Net::Config 1.11
List::Util 1.19
SelectSaver 1.01
Net::Cmd 2.29
IO::Socket 1.30_01
warnings 1.06
utf8 1.07
IO::Socket::UNIX 1.23
IO 1.23_01
Symbol 1.06
bytes 1.03
Carp 1.08
Net::SMTP 2.31
Scalar::Util 1.19
Exporter::Heavy 5.62
IO::Socket::INET 1.31
Net::SMTP::SSL 1.01
strict 1.04
vars 1.01
Exporter 5.62
constant 1.13
Socket 1.80
Errno 1.1
IO::Socket::SSL 1.16
AutoLoader 5.63


Update: I found the problem and submitted a bug: The remote system is a Debian system, and the admin installed libnet-smtp-ssl-perl. If you look at the Net::SMTP code, however, you’ll see the following:

sub auth {
  my ($self, $username, $password) = @_;

  eval {
    require MIME::Base64;
    require Authen::SASL;
  } or $self->set_status(500, ["Need MIME::Base64 and Authen::SASL todo auth"]), return 0;

There is therefore a dependency on Authen::SASL. If you don’t have that module, sending your email will fail in a non-obvious way, as seen above. Installing libauthen-sasl-perl fixes the problem.
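
If you want your own script to fail loudly instead, a tiny sanity check before calling auth() does the trick (a sketch, mirroring the requirement quoted above):

# Fail with a clear message if the optional SMTP AUTH dependencies are missing.
eval { require MIME::Base64; require Authen::SASL; 1 }
  or die "SMTP AUTH needs MIME::Base64 and Authen::SASL: $@";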


Comments on 2009-10-02 I hate the Perl SMTP libraries

Thanks for the heads up on the Authen::SASL dependency…been working on it for hours and getting nowhere.

– Fred 2009-12-09 18:34 UTC

I’m not sure what to make of the response given to the bug report. Does Gregor agree with me or not? It’s weird. :)

AlexSchroeder 2009-12-09 23:24 UTC


2009-09-23 Astronomy Picture of the Day for Mac OSX Desktop Background

Based on Harold Bakker's APOD script, I offer the following solution. I don’t keep my computer running, so I just install the following Apple Script as a login item (→ System Preferences → Accounts → Login Items):

(* Place your with timeout statement within a try... on error statement to prevent the script from
stopping when a timeout occurs. *)
try
	-- Give the script a three minute timeout to prevent problems when this is run as a login item
	with timeout of 180 seconds
		do shell script "~/bin/ >> /var/tmp/console.log" -- append the result to the console log
	end timeout
on error errMsg -- display a dialog only if an error occurs
	display dialog errMsg giving up after 10
end try

You should probably create a new script using the Script Editor on your system, paste the above, and save it as an application. Remember to untick the startup screen checkbox.

You’ll notice that it runs a Perl script from ~/bin – you should create that directory, put the following Perl script in it, and make it executable. I keep both scripts in the same ~/bin directory.


# This script will download the astronomy picture of the day and set
# it as the current desktop background.

# originally by Harold Bakker,

# changes by Alex Schroeder <>

use strict;
use LWP::UserAgent;
use File::Temp qw/tempfile/;

my $ua = LWP::UserAgent->new;
my $response = $ua->get("");
if ($response->is_success
   and $response->content =~ /href\="image\/([^\/]+)\/(.*?)"/) {
  my $url = "$1/$2";
  my $filename = $2;
  $response = $ua->get($url);
  if ($response->is_success) {
    my ($fh, $tempfile) = tempfile(UNLINK=>0);
    print $fh $response->content;
    close $fh;
    open(F, "|/usr/bin/osascript") or die "Cannot run Apple Script: $!";
    print F <<END;
tell application "Finder"
	set pFile to POSIX file "$tempfile" as string
	set desktop picture to file pFile
end tell
END
    close F;
  } else {
    die $response->status_line;
  }
} else {
  die $response->status_line;
}
This should work fine as long as you restart your computer about once a day. I haven’t made sure that it will try to reuse images, saving the last one in a safe place, etc.




