Perl

My adventures using Perl. I use it for Oddmuse and whenever I need to get things done quickly.

2014-04-26 Down the rabbit hole of software installing

It all started with this:

alex@Megabombus:~$ inkscape
dyld: Library not loaded: /usr/local/lib/libpng15.15.dylib
  Referenced from: /usr/local/bin/inkscape
  Reason: image not found
Trace/BPT trap: 5

It seemed I had libpng 1.5 and 1.6 in the Homebrew Cellar, but only 1.6 was linked. I tried to have both libraries linked but missed this hint:

ln -s /usr/local/Cellar/libpng/1.5.18/lib/libpng15.15.dylib /usr/local/lib/libpng15.15.dylib

(It would have been 1.5.17 in my case.)

Instead I deleted both versions of libpng and hoped that reinstalling all the stuff would work.

brew update && brew upgrade

But look at this:

alex@Megabombus:~$ brew reinstall inkscape
==> Reinstalling inkscape
intltool: Unsatisfied dependency: XML::Parser
Homebrew does not provide Perl dependencies; install with:
  cpan -i XML::Parser
Error: An unsatisfied requirement failed this build.

Now we’re switching from Homebrew issues to Perl issues!

alex@Megabombus:~$ cpanm XML::Parser
--> Working on XML::Parser
Fetching http://www.cpan.org/authors/id/T/TO/TODDR/XML-Parser-2.41.tar.gz ... OK
Configuring XML-Parser-2.41 ... OK
Building and testing XML-Parser-2.41 ... FAIL
! Installing XML::Parser failed. See /Users/alex/.cpanm/work/1398513458.18494/build.log for details. Retry with --force to force install it.
alex@Megabombus:~$ less /Users/alex/.cpanm/work/1398513458.18494/build.log

Looking at the log file:

PERL_DL_NONLAZY=1 /Users/alex/perl5/perlbrew/perls/perl-5.18.1/bin/perl "-MExtUtils::Command::MM" "-e" "test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
Can't load '/Users/alex/.cpanm/work/1398513458.18494/XML-Parser-2.41/blib/arch/auto/XML/Parser/Expat/Expat.bundle' for module XML::Parser::Expat: dlopen(/Users/alex/.cpanm/work/1398513458.18494/XML-Parser-2.41/blib/arch/auto/XML/Parser/Expat/Expat.bundle, 2): no suitable image found.  Did find:
        /Users/alex/.cpanm/work/1398513458.18494/XML-Parser-2.41/blib/arch/auto/XML/Parser/Expat/Expat.bundle: mach-o, but wrong architecture at /Users/alex/perl5/perlbrew/perls/perl-5.18.1/lib/5.18.1/darwin-2level/DynaLoader.pm line 194.
 at /Users/alex/.cpanm/work/1398513458.18494/XML-Parser-2.41/blib/lib/XML/Parser.pm line 18.
Compilation failed in require at /Users/alex/.cpanm/work/1398513458.18494/XML-Parser-2.41/blib/lib/XML/Parser.pm line 18.

WTF, “wrong architecture”? Now I’m thinking of our recent upgrade of the laptop to OS X Mavericks. Oh, and before that I migrated my account from the old Mac Mini running Mac OS 10.6.8 to this laptop… Considering that we’re using Perlbrew and that the resulting Perl is stored in my home directory, this might indeed be an issue.
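
A quick sanity check, had I thought of it at the time, would have been to compare what the running perl was built for with what file(1) reports for the Expat.bundle that refused to load. A minimal sketch:

use Config;
# e.g. "darwin-2level, 64-bit"; if this doesn't match the bundle, DynaLoader balks.
printf "%s, %d-bit\n", $Config{archname}, 8 * $Config{ptrsize};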

Thus, deeper into the rabbit hole we go!

alex@Megabombus:~$ perlbrew install perl-stable
Fetching perl 5.18.2 as /Users/alex/perl5/perlbrew/dists/perl-5.18.2.tar.bz2
Download http://www.cpan.org/src/5.0/perl-5.18.2.tar.bz2 to /Users/alex/perl5/perlbrew/dists/perl-5.18.2.tar.bz2
Installing /Users/alex/perl5/perlbrew/build/perl-5.18.2 into ~/perl5/perlbrew/perls/perl-5.18.2
...

I think I will go and take some pretty pictures in the meantime.

In the end, everything seems to be ready.

alex@Megabombus:~$ cpanm XML::Parser
!
! Can't write to /Library/Perl/5.16 and /usr/local/bin: Installing modules to /Users/alex/perl5
! To turn off this warning, you have to do one of the following:
!   - run me as a root or with --sudo option (to install to /Library/Perl/5.16 and /usr/local/bin)
!   - Configure local::lib your existing local::lib in this shell to set PERL_MM_OPT etc.
!   - Install local::lib by running the following commands
!
!         cpanm --local-lib=~/perl5 local::lib && eval $(perl -I ~/perl5/lib/perl5/ -Mlocal::lib)
!
XML::Parser is up to date. (2.41)
alex@Megabombus:~$ cpanm --local-lib=~/perl5 local::lib && eval $(perl -I ~/perl5/lib/perl5/ -Mlocal::lib)
--> Working on local::lib
Fetching http://www.cpan.org/authors/id/E/ET/ETHER/local-lib-2.000011.tar.gz ... OK
Configuring local-lib-2.000011 ... OK
==> Found dependencies: ExtUtils::MakeMaker
--> Working on ExtUtils::MakeMaker
Fetching http://www.cpan.org/authors/id/B/BI/BINGOS/ExtUtils-MakeMaker-6.96.tar.gz ... OK
Configuring ExtUtils-MakeMaker-6.96 ... OK
Building and testing ExtUtils-MakeMaker-6.96 ... OK
Successfully installed ExtUtils-MakeMaker-6.96 (upgraded from 6.63_02)
Building and testing local-lib-2.000011 ... OK
Successfully installed local-lib-2.000011
2 distributions installed

I did notice something strange, though. When I opened a new terminal:

ERROR: The installation "perl-5.18.1" is unknown.

I had to manually change perl-5.18.1 to perl-5.18.2 in my ~/.perlbrew/init file:

# DO NOT EDIT THIS FILE
export PERLBREW_MANPATH="/Users/alex/perl5/perlbrew/perls/perl-5.18.2/man"
export PERLBREW_PATH="/Users/alex/perl5/perlbrew/bin:/Users/alex/perl5/perlbrew/perls/perl-5.18.2/bin"
export PERLBREW_PERL="perl-5.18.2"
export PERLBREW_ROOT="/Users/alex/perl5/perlbrew"
export PERLBREW_VERSION="0.66"

Yeah, ignore the comment on the first line. I wonder why perlbrew didn’t do this automatically when I uninstalled 5.18.1 and installed 5.18.2. Oh well.

And now, Inkscape!

alex@Megabombus:~$ brew reinstall inkscape
==> Reinstalling inkscape
==> Installing inkscape dependency: boost-build
==> Downloading https://github.com/boostorg/build/archive/boost-1.55.0.tar.gz
######################################################################## 100.0%
==> ./bootstrap.sh
🍺  /usr/local/Cellar/boost-build/1.55.0: 269 files, 3.0M, built in 65 seconds
==> Installing inkscape
==> Downloading https://downloads.sourceforge.net/project/inkscape/inkscape/0.48.4/inkscape-0.48.4.tar.gz
Already downloaded: /Library/Caches/Homebrew/inkscape-0.48.4.tar.gz
==> ./configure --prefix=/usr/local/Cellar/inkscape/0.48.4 --enable-lcms --disable-poppler-cairo
==> make install
collect2: ld returned 1 exit status
make[1]: *** [inkview] Error 1
make[1]: *** Waiting for unfinished jobs....
make[1]: *** [inkscape] Error 1
make: *** [install-recursive] Error 1

READ THIS: https://github.com/Homebrew/homebrew/wiki/troubleshooting

These open issues may also help:
inkscape fails on 10.9.2 (https://github.com/Homebrew/homebrew/issues/28556)

Whaaaaaat!?

OK, following the instructions of mistydemeo:

alex@Megabombus:~$ rm -rf /Library/Caches/Homebrew/inkscape--bzr
alex@Megabombus:~$ brew install inkscape --HEAD
Warning: inkscape-HEAD already installed
alex@Megabombus:~$ brew uninstall inkscape
Uninstalling /usr/local/Cellar/inkscape/HEAD...
alex@Megabombus:~$ brew install inkscape --HEAD
==> Cloning lp:inkscape/0.48.x
Not checking SSL certificate for xmlrpc.launchpad.net.
You have not informed bzr of your Launchpad ID, and you must do this to
write to Launchpad or access private data.  See "bzr help launchpad-login".
==> ./autogen.sh
==> ./configure --prefix=/usr/local/Cellar/inkscape/HEAD --enable-lcms --disable-poppler-cairo
==> make install
🍺  /usr/local/Cellar/inkscape/HEAD: 853 files, 81M, built in 22.6 minutes

Testing… YES!! 😺 👍


2012-07-20 Perl and UTF-8

I maintain a wiki engine called Oddmuse. It’s the software used to run my blog, for example. It is written in an older scripting language called Perl. Perl predates Unicode, which is why the use of UTF-8 or UTF-16 is not mandated. That, in turn, means that strings are usually handled as plain bytes, and a UTF-8 encoded character such as Ö is only visible as two bytes.

Consider this regular expression to match WikiWords: [A-Z][a-z]+[A-Z][a-z]+

How would you extend it to parse ÖlPlattform?

Assume the following Perl code was written in a source file that was UTF-8 encoded:

$str = "OelPlattform";
print "OelPlattform YES\n" if $str =~ /[[:upper:]][[:lower:]]+[[:upper:]]\w+/;
$str = "ÖlPlattform";
print "ÖlPlattform YES\n" if $str =~ /[[:upper:]][[:lower:]]+[[:upper:]]\w+/;

This will just print OelPlattform YES because what looks like “ÖlPlattform” actually starts with the bytes C3 96, and C3 is not an upper case letter. It’s actually unclear what it is. In a Latin-1 environment the C3 would print as Ã, the dreaded sign of encoding errors!
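
Jumping ahead a little: with use utf8 in effect, so that the literal is a string of characters rather than bytes, the same pattern does match. A minimal sketch:

use utf8;                             # the source file itself is UTF-8 encoded
binmode(STDOUT, ':encoding(UTF-8)');  # so the Ö goes out as UTF-8 rather than Latin-1
my $str = "ÖlPlattform";
print "ÖlPlattform YES\n" if $str =~ /[[:upper:]][[:lower:]]+[[:upper:]]\w+/;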

I wanted to keep Oddmuse encoding agnostic. Users could specify a different encoding which would be served together with the page HTML such that they could have wikis using GB 2312. This is why Oddmuse contained the following line and similar code:

# we treat input and output as bytes
eval { local $SIG{__DIE__}; binmode(STDOUT, ":raw"); };

This resulted in problems when some packages I was using did in fact produce UTF-8 and so I had to use code as follows:

eval { local $SIG{__DIE__}; binmode(STDOUT, ":utf8"); } if $HttpCharset eq 'UTF-8';
print RSS($3 ? $3 : 15, split(/\s+/, UnquoteHtml($4)));
eval { local $SIG{__DIE__}; binmode(STDOUT, ":raw"); };

I’m not sure why I surrounded it all with an eval; I assume it was to support an older version of Perl.

Ok, so I wanted to get rid of all that.

The solution seems deceptively simple: add use utf8; to the source files and open all files using the UTF-8 encoding layer.

When printing UTF-8 to STDOUT, you need to tell Perl that STDOUT can in fact handle multi-byte characters. Since the HTML produced is UTF-8 encoded, I know that this is true. If you don’t, you’ll get “wide character in print” warnings.

binmode(STDOUT, ':utf8');
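
To see the warning in action, here is a tiny sketch; note that it only fires for characters above U+00FF, such as the Euro sign:

use utf8;
my $price = "5€";
print "$price\n";              # warns: Wide character in print
binmode(STDOUT, ':utf8');
print "$price\n";              # fine: the € leaves as the bytes E2 82 AC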

You need to be careful with all input and output, however.

open(F, '<:encoding(UTF-8)', $RcFile);

The same is true for output:

open(OUT, '>:encoding(UTF-8)', $file)
  or ReportError(Ts('Cannot write %s', $file) . ": $!", '500 INTERNAL SERVER ERROR');

Oddmuse also offers the ability to include other pages (Transclusion) and to produce feeds. This can be a problem. The default page processing is to parse the raw text and start printing HTML as soon as possible, because I have always felt that it was more expedient to start printing the top of the page while the rest was still being parsed. What happens when I don’t want to do this, e.g. when I’m in the middle of building the RSS feed?

The solution I had been using was to redirect STDOUT to a variable. Perl calls this a “memory file.” The problem is the encoding of this memory file:

Here’s what I had to write:

open(STDOUT, '>', \$page) or die "Can't open memory file: $!";
binmode(STDOUT, ":utf8");
PrintPageHtml();
utf8::decode($page);

I think this works because binmode tells all the print instructions that it’s ok to print multi-byte characters and utf8::decode makes sure that all those bytes are in fact decoded back to Perl’s internal representation.

Then I discovered that I needed to look at the bytes if I wanted to URL-encode strings:

utf8::encode($str); # turn to byte string
my @letters = split(//, $str);
my %safe = map {$_ => 1} ('a' .. 'z', 'A' .. 'Z', '0' .. '9', '-', '_', '.', '!', '~', '*', "'", '(', ')', '#');
foreach my $letter (@letters) {
  $letter = sprintf("%%%02x", ord($letter)) unless $safe{$letter};
}

Now that I’m looking at the above I wonder what sort of bugs I’m introducing with the inverse operation that I haven’t changed:

$str =~ s/%([0-9a-f][0-9a-f])/chr(hex($1))/ge;

I feel that this requires a call to utf8::decode when done! Strangely enough, none of my tests have picked this up.

(Actually, I think I know why I haven’t stumbled across this problem: I only use the function to decode the cookie, and all the functions accessing the cookie go through an extra encoding/decoding step that would not be necessary if I had fixed the URL-decoding function.)
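
For the record, the fixed inverse might look something like this. A minimal sketch with a hypothetical helper name, not what Oddmuse actually ships:

# Undo the percent-encoding into bytes, then decode those UTF-8 bytes
# back into Perl's internal character representation.
sub UrlDecode {
  my $str = shift;
  $str =~ s/%([0-9a-fA-F]{2})/chr(hex($1))/ge;  # "%c3%96" becomes the bytes C3 96
  utf8::decode($str);                           # and those bytes become the character Ö
  return $str;
}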

Another problem I stumbled upon: directories. Directories often ended up Latin-1 encoded.

utf8::encode($newdir);
return if -d $newdir;
mkdir($newdir, 0775)
  or ReportError(Ts('Cannot create %s', $newdir) . ": $!", '500 INTERNAL SERVER ERROR');

The reason I didn’t discover that I had the same problem with filenames was that I’m using a compatibility layer on my Mac when I do my development. The Mac uses UTF-8 NFD instead of UTF-8 NFC, which is the standard on the web. Thus, if you take the bytes encoding a filename from the web and create the file, or if you go the other way, you have a problem. I store the index of all pages in a file. When a new page is created, I get the page name (NFC encoded) from the web and store it in a file. When I read that file, the content contains the NFC bytes, and with these I cannot find the NFD encoded file (because the filesystem changed the encoding as it wrote the file). I hated it so much. Thus, the Mac compatibility layer does an extra encoding and decoding to get everything from NFD to NFC—and thereby protected me from this error.
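
To make the NFC/NFD difference concrete, here is a minimal sketch using the core Unicode::Normalize module; the example string is mine, this is not Oddmuse code:

use utf8;
use Unicode::Normalize qw(NFC NFD);

my $name = "ÖlPlattform";  # NFC, as it usually arrives from the web
my $nfd  = NFD($name);     # NFD, roughly what the Mac filesystem stores
# 11 characters versus 12: in NFD, Ö becomes O followed by a combining diaeresis.
printf "NFC: %d characters, NFD: %d characters\n", length($name), length($nfd);
print "Equal again after normalizing\n" if NFC($nfd) eq $name;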

As soon as I installed it on my sites, however—they all use Debian and ext3 filesystems, I think—I had a problem.

The necessary fix:

utf8::encode($file);
if (open(IN, '<:encoding(UTF-8)', $file)) {
  local $/ = undef;   # Read complete files
  my $data=<IN>;
  close IN;
  return (1, $data);
}

And:

utf8::encode($file);
open(OUT, '>:encoding(UTF-8)', $file)
  or ReportError(Ts('Cannot write %s', $file) . ": $!", '500 INTERNAL SERVER ERROR');
print OUT  $string;
close(OUT);

Another stumbling block was that the non-breaking space was no longer just a byte sequence like any other (C2 A0). Perl suddenly recognized it as whitespace! This is a problem if a path contains non-breaking spaces. The old code translated ordinary spaces to underscore characters, so those couldn’t appear in paths. But whenever I had been “smart” and used a non-breaking space, I now had a problem: the glob function splits its argument on whitespace, so where there had been one pattern, I now had two broken patterns!

Here’s an example:

glob(GetKeepDir(shift) . '/*.kp'); # files such as 1.kp, 2.kp, etc.

Here’s another example:

foreach (glob("$PageDir/*/*.pg $PageDir/*/.*.pg"))

The solution is to use File::Glob ':glob' and replace every occurrence of glob with bsd_glob. Wow, my application was very much unsuited to filenames containing whitespace and I hadn’t even realized it!

foreach (bsd_glob("$PageDir/*/*.pg"), bsd_glob("$PageDir/*/.*.pg"))

Remember the regular expression to detect wiki words I used at the top? This was the actual regular expression I had been using:

$WikiWord = '[A-Z]+[a-z\x80-\xff]+[A-Z][A-Za-z\x80-\xff]*';

Essentially, wiki words only worked if the first letter was an ASCII upper case letter.

At first, I switched this to the following regular expression (trying to minimize changes):

$WikiWord = '[A-Z]+[a-z\x{0080}-\x{ffff}]+[A-Z][A-Za-z\x{0080}-\x{ffff}]*';

It turns out that Perl 5.8 chokes on this regular expression, however. FFFE and FFFF are noncharacters. I had to change the regular expression.

$WikiWord = '[A-Z]+[a-z\x{0080}-\x{fffd}]+[A-Z][A-Za-z\x{0080}-\x{fffd}]*'; # exclude noncharacters FFFE and FFFF

I’m sure this list isn’t complete, but it’s long enough to illustrate my main point: this is painful. It’s HTML quoting all over again.


Comments on 2012-07-20 Perl and UTF-8

Things to do for Oddmuse:

  1. fix that regular expression
  2. fix that cookie encoding issue

– Alex


Can it recognize new WikiWords such as “ÖlPlattform”, thanks to changing the regular expression to match them? As far as I understand it, does it only need those regex changes, or also changes to the way strings (from URLs, for instance) and files are read (and written)?

JuanmaMP 2012-07-22 01:16 UTC


Actually, I think that a simple change of the regular expressions is all that is needed. :)

AlexSchroeder 2012-07-22 05:11 UTC


2011-10-07 Oddmuse, Venus and Perl

http://www.emacswiki.org/pics/oddmuse-logo.png

I’ve been working on a submission form for the Old School RPG Planet. Today I added another little feature. This is how I like to develop code. No time pressure. One little step at a time. Keep polishing it.

The planet uses Planet Venus to collect the RSS and Atom feeds of many of the Old School RPG blogs out there. Planet Venus allows you to get the list of feeds via a URL. I’m hosting the list of feeds on Campaign Wiki itself (raw format). As you can see, the format doesn’t look nice.

The thing I did, therefore, was to write a script that makes it easy for people who are not into the technical details to submit new blogs. It also makes it easier for me to submit new blogs!

The things it handles:

  • If you submit an invalid URL, it will prepend http:// and try again.
  • If it looks like we already have a similar looking feed on our page, it requires a confirmation by the user.
  • If you submit a web page, it will look for alternative links with MIME types application/rss+xml, application/atom+xml, application/xml (yeah) and text/xml (just making sure) and allow the user to pick one of them.
  • If you submit a feed directly instead of a web page, it uses that instead.
  • If the feed you picked is served with an invalid content type, it is rejected.
  • It extracts the title of the feed and adds it to the wiki page, sorting all the entries alphabetically.

I think it’s pretty cool.

If you look at the interface, you’ll note that it has a link to its own source code. I love this little Perl trick:

  1. Add __DATA__ at the end of the source file. Usually you would add actual data at the end. The script could read it using the DATA file handle.
  2. Serve source code using seek DATA, 0, 0; print "Content-type: text/plain; charset=UTF-8\r\n\r\n", <DATA>; This resets the current position of the DATA file handle to the beginning of the source file. Tadaa! :) (See the sketch below.)
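
Here is a minimal, self-contained sketch of the trick; a real script would of course do more than echo itself:

#!/usr/bin/perl
use strict;
use warnings;

# DATA is the script file itself, positioned just after __DATA__;
# rewinding it to offset 0 lets the script print its own source.
seek DATA, 0, 0;
print "Content-type: text/plain; charset=UTF-8\r\n\r\n", <DATA>;

__DATA__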


2011-04-30 Map Drawing Using Polygons

I’m currently working on randomly generating islands using the ideas presented in Polygonal Map Generation by Amit. Check out his Flash demo! I am nowhere near as far yet. I’m writing my code in Perl and producing SVG output.

http://farm6.static.flickr.com/5303/5671163434_e3b86d4dde.jpg

See below for the source code. I’d install it on a public server, but unfortunately there are quite a few dependencies…


#! /usr/bin/perl -w
# Copyright (C) 2011  Alex Schroeder <alex@gnu.org>
#
# This program is free software: you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation, either version 3 of the License, or (at your option) any later
# version.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along with
# this program. If not, see <http://www.gnu.org/licenses/>.

use strict;
use CGI qw(:standard);
use SVG;
use Math::Geometry::Voronoi;
use Class::Struct;
use Math::Fractal::Noisemaker;
use List::Util qw(min max);
use Data::Dumper;

my $points = 3000;
my $width  = 1000;
my $height =  550;
my $center_x = $width / 2;
my $center_y = $height / 2;
my $radius = 500;

my %color = (beach => '#a09077',
	     ocean => '#44447a',);

struct World => { points => '@',
		  centroids => '@',
		  voronoi => '$',
		  height => '@',
		};

sub add_random_points {
  my ($world) = @_;
  for (my $i = 0; $i < $points; $i++) {
    push(@{$world->points}, [rand($width), rand($height)]);
  };
  # print(join("\n", map {join(",", $_->[0], $_->[1])} @{$world->points}));
  return $world;
}

sub add_voronoi {
  my ($world) = @_;
  $world->voronoi(Math::Geometry::Voronoi->new(points => $world->points));
  $world->voronoi->compute;
}

sub add_centroids {
  my ($world) = @_;
  $world->centroids([]); # clear
  foreach my $polygon ($world->voronoi->polygons) {
    push(@{$world->centroids}, centroid($polygon));
  }
}

sub centroid {
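  # Centroid of a simple polygon via the shoelace formula:
  #   A  = 1/2 * sum(x0*y1 - x1*y0)
  #   Cx = sum((x0 + x1) * (x0*y1 - x1*y0)) / (6 * A)
  #   Cy = sum((y0 + y1) * (x0*y1 - x1*y0)) / (6 * A)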
  my ($cx, $cy) = (0, 0);
  my $A = 0;
  my $polygon = shift;
  my ($point_index, @points) = @$polygon; # see Math::Geometry::Voronoi
  my $point = $points[$#points];
  my ($x0, $y0) = ($point->[0], $point->[1]);
  for $point (@points) {
    my ($x1, $y1) = ($point->[0], $point->[1]);
    $cx += ($x0 + $x1) * ($x0 * $y1 - $x1 * $y0);
    $cy += ($y0 + $y1) * ($x0 * $y1 - $x1 * $y0);
    $A += ($x0 * $y1 - $x1 * $y0);
    ($x0, $y0) = ($x1, $y1);
  }
  $A /= 2;
  $cx /= 6 * $A;
  $cy /= 6 * $A;
  return [$cx, $cy, $point_index];
}

sub add_height {
  my $world = shift;
  $Math::Fractal::Noisemaker::QUIET = 1;
  my $grid = Math::Fractal::Noisemaker::square();
  $world->height([]); # clear
  my $scale = max($height, $width); # grid is a square
  foreach my $point (@{$world->points}) {
    my $x = int($point->[0]*255/$scale);
    my $y = int($point->[1]*255/$scale);
    my $h = 0; # we must not skip any points!
    $h = $grid->[$x]->get($y) / 255
      unless $x < 0 or $y < 0 or $x > 255 or $y > 255;
    push(@{$world->height}, $h);
  }
}

sub raise_point {
  my ($world, $x, $y, $radius) = @_;
  my $i = 0;
  foreach my $point (@{$world->points}) {
    my $dx = $point->[0] - $x;
    my $dy = $point->[1] - $y;
    my $d = sqrt($dx * $dx + $dy * $dy);
    my $v = max(0, $world->height->[$i] - $d / $radius);
    $world->height($i, $v);
    $i++;
  }
}

sub svg {
  my $world = shift;
  my $svg = new SVG(-width => $width,
		    -height => $height, );
  foreach my $polygon ($world->voronoi->polygons) {
    my ($point_index, @points) = @$polygon; # see Math::Geometry::Voronoi
    my $x = $world->points->[$point_index]->[0];
    my $y = $world->points->[$point_index]->[1];
    next if $x < 0 or $y < 0 or $x > $width or $y > $height;
    my $z = int($world->height->[$point_index] * 255);
    my $color = $z == 0 ? $color{ocean} : "rgb($z,$z,$z)";
    my $path = join(",", map { map { int } @$_ } @points);
    $svg->polygon(points => $path,
		  fill => $color,
		  style => { 'stroke-width' => 1,
			     'stroke' => 'black'});
  }
  return $svg->xmlify();
}

sub response {
  print header(-type=>'image/svg+xml');
  print shift;
}

sub main {
  if (path_info eq '/source') {
    seek DATA, 0, 0;
    print "Content-type: text/plain; charset=UTF-8\r\n\r\n", <DATA>;
  } else {
    srand(param('seed') || time);
    my $world = new World;
    add_random_points($world, $points);
    add_voronoi($world);
    for (my $i = 2; $i--; ) {
      # Lloyd Relaxation
      add_centroids($world);
      $world->points($world->centroids);
      add_voronoi($world);
    }
    # skip corner improvement
    # skip Delaunay triangulation
    add_height($world);
    raise_point($world, $center_x, $center_y, $radius);
    # draw
    response(svg($world));
  }
}

main ();

__DATA__


2010-10-15 Web Standards Dream Bubble

I maintain the Old School RPG Planet. The list of feeds it manages is saved on a wiki page. I wanted to write a little script that allows me to quickly add feeds to that list. And I did! There’s now a way to submit new feeds to the list instead of editing the wiki page.

The problem? The script tries to parse web pages in order to discover feed addresses. That works well for sites that validate. But the first two Blogspot sites I tried each had over two hundred errors! Once the markup is borked, parsing doesn’t work, and thus feed discovery doesn’t work.

Now, if I need to work around broken markup, I start wondering why we tried to standardize HTML at all… What a glorious waste of time! In the end, we just treat it as tag soup anyway. >{

If you’re still interested in the source code, no problem. Lately all my CGI scripts are able to spew forth their source code.

Unfortunately it is not complete, yet. It doesn’t update the wiki page. I didn’t bother once I realized that the entire parsing idea was not going to work. :'(

Update: Woohoo, I replaced the HTML and XML parsing with regular expression matching, wrote what I needed, and finished the script! [1] :)
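
The discovery step boils down to something like the following rough sketch; this is illustrative only, not the actual script:

# Scan <link> tags for feed URLs; crude, but it copes with markup
# that a strict XML parser would reject.
sub DiscoverFeeds {
  my $html = shift;
  my @feeds;
  while ($html =~ /<link\b([^>]*)>/gi) {
    my $attrs = $1;
    next unless $attrs =~ m!type\s*=\s*["']?application/(?:rss|atom)\+xml!i;
    push @feeds, $1 if $attrs =~ /href\s*=\s*["']([^"'\s>]+)/i;
  }
  return @feeds;
}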


Comments on 2010-10-15 Web Standards Dream Bubble

Standardizing HTML is an excellent idea, and it works pretty well now. What doesn’t work is the attempt at only describing the behavior of a subset of possible markup, called for some reason “valid HTML”, while completely ignoring all the other possibilities that humans can produce and expect to work somehow. Fortunately, the HTML5 standard finally standardized error handling, so you can finally be sure that no matter how inventive the author of the document, the information you get from parsing it is the same for all the standard-conforming parsers you use.

Throwing errors at human-generated content is kind of a silly approach, especially when the human who created it is long gone and unable to correct the errors. It seems much easier to just assume that every input must mean something, even if you are risking that it’s not quite the same thing that the author had in mind. To be honest I am really surprised that Perl, which follows this philosophy itself somewhat, doesn’t have a forgiving HTML parser that you could use.

RadomirDopieralski 2010-10-16 19:00 UTC


Well, I need to check. Right now I am using a Perl module that uses libxml2 in the background. I did that because of XPath support. Perhaps switching to a SAX parser will help…

AlexSchroeder 2010-10-16 19:28 UTC


Wow, falling back to regular expressions and it actually seems to work! :)

AlexSchroeder 2010-10-16 22:59 UTC


The awesome answer on Stack Exchange notwithstanding:

You can't parse [X]HTML with regex. Because HTML can't be parsed by regex. Regex is not a tool that can be used to correctly parse HTML. As I have answered in HTML-and-regex questions here so many times before, the use of regex will not allow you to consume HTML. Regular expressions are a tool that is insufficiently sophisticated to understand the constructs employed by HTML. HTML is not a regular language and hence cannot be parsed by regular expressions. Regex queries are not equipped to break down HTML into its meaningful parts. so many times but it is not getting to me. Even enhanced irregular regular expressions as used by Perl are not up to the task of parsing HTML. You will never make me crack. HTML is a language of sufficient complexity that it cannot be parsed by regular expressions. Even Jon Skeet cannot parse HTML using regular expressions. Every time you attempt to parse HTML with regular expressions, the unholy child weeps the blood of virgins, and Russian hackers pwn your webapp. Parsing HTML with regex summons tainted souls into the realm of the living. HTML and regex go together like love, marriage, and ritual infanticide. The <center> cannot hold it is too late. The force of regex and HTML together in the same conceptual space will destroy your mind like so much watery putty. If you parse HTML with regex you are giving in to Them and their blasphemous ways which doom us all to inhuman toil for the One whose Name cannot be expressed in the Basic Multilingual Plane, he comes. HTML-plus-regexp will liquify the n​erves of the sentient whilst you observe, your psyche withering in the onslaught of horror. Rege̿̔̉x-based HTML parsers are the cancer that is killing StackOverflow it is too late it is too late we cannot be saved the trangession of a chi͡ld ensures regex will consume all living tissue (except for HTML which it cannot, as previously prophesied) dear lord help us how can anyone survive this scourge using regex to parse HTML has doomed humanity to an eternity of dread torture and security holes using regex as a tool to process HTML establishes a breach between this world and the dread realm of c͒ͪo͛ͫrrupt entities (like SGML entities, but more corrupt) a mere glimpse of the world of reg​ex parsers for HTML will ins​tantly transport a programmer's consciousness into a world of ceaseless screaming, he comes, the pestilent slithy regex-infection wil​l devour your HT​ML parser, application and existence for all time like Visual Basic only worse he comes he comes do not fi​ght he com̡e̶s, ̕h̵i​s un̨ho͞ly radiańcé destro҉ying all enli̍̈́̂̈́ghtenment, HTML tags lea͠ki̧n͘g fr̶ǫm ̡yo​͟ur eye͢s̸ ̛l̕ik͏e liq​uid pain, the song of re̸gular exp​ression parsing will exti​nguish the voices of mor​tal man from the sp​here I can see it can you see ̲͚̖͔̙î̩́t̲͎̩̱͔́̋̀ it is beautiful t​he final snuffing of the lie​s of Man ALL IS LOŚ͖̩͇̗̪̏̈́T ALL I​S LOST the pon̷y he comes he c̶̮omes he comes the ich​or permeates all MY FACE MY FACE ᵒh god no NO NOO̼O​O NΘ stop the an​*̶͑̾̾​̅ͫ͏̙̤g͇̫͛͆̾ͫ̑͆l͖͉̗̩̳̟̍ͫͥͨe̠̅s ͎a̧͈͖r̽̾̈́͒͑e n​ot rè̑ͧ̌aͨl̘̝̙̃ͤ͂̾̆ ZA̡͊͠͝LGΌ ISͮ̂҉̯͈͕̹̘̱ TO͇̹̺ͅƝ̴ȳ̳ TH̘Ë͖́̉ ͠P̯͍̭O̚​N̐Y̡ H̸̡̪̯ͨ͊̽̅̾̎Ȩ̬̩̾͛ͪ̈́̀́͘ ̶̧̨̱̹̭̯ͧ̾ͬC̷̙̲̝͖ͭ̏ͥͮ͟Oͮ͏̮̪̝͍M̲̖͊̒ͪͩͬ̚̚͜Ȇ̴̟̟͙̞ͩ͌͝S̨̥̫͎̭ͯ̿̔̀ͅ – 3891 votes for not being able to parse HTML or XHTML with regex

AlexSchroeder 2010-10-19 11:01 UTC


2009-10-02 I hate the Perl SMTP libraries

At home I have Net::SMTP::TLS and Net::SMTP::SSL installed and I’ve managed to use both to send mail via my Google account.

On one of my hosting services, I have only Net::SMTP::SSL, and it just won’t work.

Debug output at home:

Net::SMTP::SSL>>> Net::SMTP::SSL(1.01)
Net::SMTP::SSL>>>   IO::Socket::SSL(1.24)
Net::SMTP::SSL>>>     IO::Socket::INET(1.31)
Net::SMTP::SSL>>>       IO::Socket(1.31)
Net::SMTP::SSL>>>         IO::Handle(1.28)
Net::SMTP::SSL>>>           Exporter(5.58)
Net::SMTP::SSL>>>   Net::Cmd(2.29)
Net::SMTP::SSL=GLOB(0x186fc04)<<< 220 mx.google.com ESMTP 24sm915314eyx.9
Net::SMTP::SSL=GLOB(0x186fc04)>>> EHLO localhost.localdomain
Net::SMTP::SSL=GLOB(0x186fc04)<<< 250-mx.google.com at your service, [80.219.173.68]
Net::SMTP::SSL=GLOB(0x186fc04)<<< 250-SIZE 35651584
Net::SMTP::SSL=GLOB(0x186fc04)<<< 250-8BITMIME
Net::SMTP::SSL=GLOB(0x186fc04)<<< 250-AUTH LOGIN PLAIN
Net::SMTP::SSL=GLOB(0x186fc04)<<< 250-ENHANCEDSTATUSCODES
Net::SMTP::SSL=GLOB(0x186fc04)<<< 250 PIPELINING
Net::SMTP::SSL=GLOB(0x186fc04)>>> AUTH LOGIN
Net::SMTP::SSL=GLOB(0x186fc04)<<< 334 VXNlcm5hbWU6
Net::SMTP::SSL=GLOB(0x186fc04)>>> a2Vuc2FuYXRh
Net::SMTP::SSL=GLOB(0x186fc04)<<< 334 UGFzc3dvcmQ6
Net::SMTP::SSL=GLOB(0x186fc04)>>> VGgsYmFpZA==
Net::SMTP::SSL=GLOB(0x186fc04)<<< 235 2.7.0 Accepted
Net::SMTP::SSL=GLOB(0x186fc04)>>> MAIL FROM:<kensanata@gmail.com>

Notice the AUTH LOGIN command.

Debug output on my host:

Net::SMTP::SSL>>> Net::SMTP::SSL(1.01)
Net::SMTP::SSL>>>   IO::Socket::SSL(1.16)
Net::SMTP::SSL>>>     IO::Socket::INET(1.31)
Net::SMTP::SSL>>>       IO::Socket(1.30_01)
Net::SMTP::SSL>>>         IO::Handle(1.27)
Net::SMTP::SSL>>>           Exporter(5.62)
Net::SMTP::SSL>>>   Net::Cmd(2.29)
Net::SMTP::SSL=GLOB(0xa025520)<<< 220 mx.google.com ESMTP 10sm135225eyz.42
Net::SMTP::SSL=GLOB(0xa025520)>>> EHLO localhost.localdomain
Net::SMTP::SSL=GLOB(0xa025520)<<< 250-mx.google.com at your service, [83.137.100.36]
Net::SMTP::SSL=GLOB(0xa025520)<<< 250-SIZE 35651584
Net::SMTP::SSL=GLOB(0xa025520)<<< 250-8BITMIME
Net::SMTP::SSL=GLOB(0xa025520)<<< 250-AUTH LOGIN PLAIN
Net::SMTP::SSL=GLOB(0xa025520)<<< 250-ENHANCEDSTATUSCODES
Net::SMTP::SSL=GLOB(0xa025520)<<< 250 PIPELINING
Net::SMTP::SSL=GLOB(0xa025520)>>> MAIL FROM:<kensanata@gmail.com>
Net::SMTP::SSL=GLOB(0xa025520)<<< 530-5.5.1 Authentication Required. Learn more at                              
Net::SMTP::SSL=GLOB(0xa025520)<<< 530 5.5.1 http://mail.google.com/support/bin/answer.py?answer=14257 10sm135225eyz.42

Notice the error: Authentication Required.

Why is the same script (I checked twice – I sure hope I’m not confusing anything) not using the AUTH LOGIN command?

I don’t understand.

  my $mail = new MIME::Entity->build(To => $from, # test!
				     From => $from,
				     Subject => 'Test Net::SMTP::SSL',
				     Path => $fh,
				     Type => "text/html");
  my $smtp = Net::SMTP::SSL->new($host, Port => 465,
				 Debug => 1);
  $smtp->auth($user, $password);
  $smtp->mail($from);
  $smtp->to($from); # test!
  $smtp->data;
  $smtp->datasend($mail->stringify);
  $smtp->dataend;
  $smtp->quit;

Source is available. [1]

Output of perl -MNet::SMTP::SSL -wle 'for (keys %INC) { next if m[^/]; $m = $_; $m =~ s[/][::]g; $m =~ s/\.pm$//; print "$m ", $m->VERSION || "<unknown>" }' as suggested on #perl:

At home:
Net::SSLeay 1.35
IO::Handle 1.28
List::Util 1.14
SelectSaver 1.00
IO::Socket 1.31
warnings 1.03
Symbol 1.05
Scalar::Util 1.14
IO::Socket::INET 1.31
Exporter 5.58
Errno 1.09
IO::Socket::SSL 1.24
warnings::register 1.00
XSLoader 0.02
Net::Config 1.11
Net::Cmd 2.29
utf8 1.04
Config <unknown>
IO 1.25
IO::Socket::UNIX 1.23
Carp 1.03
bytes 1.01
Exporter::Heavy 5.58
Net::SMTP 2.31
vars 1.01
strict 1.03
Net::SMTP::SSL 1.01
constant 1.04
Socket 1.77
AutoLoader 5.60
DynaLoader 1.05

Remote system:
Net::SSLeay 1.35
XSLoader 0.08
IO::Handle 1.27
warnings::register 1.01
Net::Config 1.11
List::Util 1.19
SelectSaver 1.01
Net::Cmd 2.29
IO::Socket 1.30_01
warnings 1.06
utf8 1.07
IO::Socket::UNIX 1.23
IO 1.23_01
Symbol 1.06
bytes 1.03
Carp 1.08
Net::SMTP 2.31
Scalar::Util 1.19
Exporter::Heavy 5.62
IO::Socket::INET 1.31
Net::SMTP::SSL 1.01
strict 1.04
vars 1.01
Exporter 5.62
constant 1.13
Socket 1.80
Errno 1.1
IO::Socket::SSL 1.16
AutoLoader 5.63

Hm…

Update: I found the problem and submitted a bug: The remote system is a Debian system, and the admin installed libnet-smtp-ssl-perl. If you look at the Net::SMTP code, however, you’ll see the following:

sub auth {
  my ($self, $username, $password) = @_;

  eval {
    require MIME::Base64;
    require Authen::SASL;
  } or $self->set_status(500, ["Need MIME::Base64 and Authen::SASL todo auth"]), return 0;

There is therefore a dependency on Authen::SASL. If you don’t have that module, sending your email will fail in a non-obvious way, as seen above. Installing libauthen-sasl-perl fixes the problem.
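
A small up-front check would have made the failure obvious. A minimal sketch; the script is hypothetical, the module names are the ones from the snippet above:

#!/usr/bin/perl
use strict;
use warnings;

# Net::SMTP silently skips AUTH when these are missing, so check explicitly.
for my $module (qw(MIME::Base64 Authen::SASL)) {
  eval "require $module; 1"
    or die "$module is missing; SMTP AUTH would be skipped silently.\n";
  print "$module ", $module->VERSION, "\n";
}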


Comments on 2009-10-02 I hate the Perl SMTP libraries

Thanks for the heads up on the Authen::SASL dependency…been working on it for hours and getting nowhere.

– Fred 2009-12-09 18:34 UTC


I’m not sure what to make of the response given to the bug report. Does Gregor agree with me or not? It’s weird. :)

AlexSchroeder 2009-12-09 23:24 UTC


2009-09-23 Astronomy Picture of the Day for Mac OSX Desktop Background

Based on Harold Bakker's APOD script, I offer the following solution. I don’t keep my computer running, so I just install the following Apple Script as a login item (→ System Preferences → Accounts → Login Items):

(* Place your with timeout statement within a try... on error statement to prevent the script from 
stopping when a timeout occurs. *)
try
	-- Give the script a three minute timeout to prevent problems when this is run as a login item
	with timeout of 180 seconds
		do shell script "~/bin/apod.pl >> /var/tmp/console.log" -- append the result to the console log
	end timeout
on error errMsg -- display a dialog only if an error occurs
	display dialog errMsg giving up after 10
end try

You should probably create a new script using the Script Editor on your system, paste the above, and save it as an application. Remember to untick the startup screen checkbox.

You’ll notice that it runs a Perl script called ~/bin/apod.pl – you should create that directory, put the following Perl script in it, and make it executable. I keep both apod.app and apod.pl in the same ~/bin directory.

#!/usr/bin/perl

# This script will download the astronomy picture of the day and set
# it as the current desktop background.

# originally by Harold Bakker, harold@haroldbakker.com
# http://www.haroldbakker.com/

# changes by Alex Schroeder <alex@gnu.org>
# http://emacswiki.org/alex/

use strict;
use LWP::UserAgent;
use File::Temp qw/tempfile/;

my $ua = LWP::UserAgent->new;
my $response = $ua->get("http://antwrp.gsfc.nasa.gov/apod/astropix.html");
if ($response->is_success
   and $response->content =~ /href\="image\/([^\/]+)\/(.*?)"/) {
  my $url = "http://antwrp.gsfc.nasa.gov/apod/image/$1/$2";
  my $filename = $2;
  $response = $ua->get($url);
  if ($response->is_success) {
    my ($fh, $tempfile) = tempfile(UNLINK=>0);
    print $fh $response->content;
    close $fh;
    open(F, "|/usr/bin/osascript") or die "Cannot run Apple Script: $!";
    print F <<END;
tell application "Finder"
	set pFile to POSIX file "$tempfile" as string
	set desktop picture to file pFile
	end tell
END
  } else {
    die $response->status_line;
  }
} else {
  die $response->status_line;
}

This should work fine as long as you restart your computer about once a day. I haven’t made sure that it reuses images, saves the last one in a safe place, etc.


2009-09-11 Random Subsector Generator

Once I had the name generator, I was ready to write up the rest of the script. The subsector UWP list generator will also compute the temperature for internal purposes, but doesn’t print it because it’s not part of the UWP.

I decided that systems with code Amber and pirates are considered code Red. The rules just say that “Red codes are given out at the discretion of the Referee.”

The cool thing is that you can copy & paste the resulting list into the map generator and generate the map to go along with it.


2009-08-03 Strange Perl Issue

Today I had to make the following change to Oddmuse to fix an image upload issue on my dad’s blog. What’s going on? He’s using the following:

  • Apache/2.2.3 (Debian) DAV/2 SVN/1.4.2 mod_jk/1.2.18 mod_ssl/2.2.3 OpenSSL/0.9.8c mod_wsgi/2.3 Python/2.5
  • Perl v5.8.8
  • CGI: 3.15

Any ideas? The net result was that <$file> resulted in no content if run within the eval block.

*** wiki.pl.~1.925.~	Fri Jul  3 11:23:01 2009
--- wiki.pl	Tue Aug  4 00:20:26 2009
***************
*** 3548,3554 ****
      $type = $q->uploadInfo($filename)->{'Content-Type'};
      ReportError(T('Browser reports no file type.'), '415 UNSUPPORTED MEDIA TYPE') unless $type;
      local $/ = undef;   # Read complete files
!     eval { require MIME::Base64; $_ = MIME::Base64::encode(<$file>) };
      $string = '#FILE ' . $type . "\n" . $_;
    } else {
      $string = AddComment($old, $comment) if $comment;
--- 3548,3555 ----
      $type = $q->uploadInfo($filename)->{'Content-Type'};
      ReportError(T('Browser reports no file type.'), '415 UNSUPPORTED MEDIA TYPE') unless $type;
      local $/ = undef;   # Read complete files
!     my $content = <$file>; # Apparently we cannot count on <$file> to always work within the eval!?
!     eval { require MIME::Base64; $_ = MIME::Base64::encode($content) };
      $string = '#FILE ' . $type . "\n" . $_;
    } else {
      $string = AddComment($old, $comment) if $comment;


Comments on 2009-08-03 Strange Perl Issue

This sounds like the issue we have on EmacsWiki as well.

AaronHawley 2009-08-04 07:39 UTC


Thanks for the reminder! I had forgotten about it. :(

AlexSchroeder 2009-08-04 09:03 UTC


2009-06-29 Perl Installation on Mac OS 10.3

Does anybody read these at all? I need to write things down so I won’t forget. I’m trying to install XML::Parser and running into a tiny problem. I need to run it as root using sudo because I can’t install it using my ordinary account. I hate this and try to remedy the situation once a year. 😊

So it’s that time of the year again. I look at the o conf output and can’t find the place where I get to say that I want to use a sudo make install command. I’m also greeted by the following message when I start the CPAN shell, so I’m guessing this could be part of the problem:

  There's a new CPAN.pm version (v1.9402) available!
  [Current version is v1.7602]

I’m trying to run sudo cpan Bundle::CPAN to see where that takes me… I actually had to run it several times (three? four?) but it worked in the end. Amazing.
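
For what it’s worth, CPAN.pm can be told to run the install step through sudo, something along these lines in the cpan shell (the option name comes from CPAN.pm’s configuration; worth double-checking on a version as old as 1.7602):

o conf make_install_make_command 'sudo /usr/bin/make'
o conf commit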

