Oddmuse

Oddmuse is the wiki engine running all the wikis at emacswiki.org – including EmacsWiki itself. My main interests are:

See OddmuseRoadmap for my thoughts about the design of Oddmuse.

2014-07-01 Oddmuse Setup Using Makefiles

Here’s how I do it:

My home directory contains two directories for every site. One directory is the document root for the site, and the other is the data directory for the wiki. Thus, you’ll see alexschroeder.ch (the document root, which includes wiki.pl) and alexschroeder (the wiki data directory, which isn’t published via the web server).
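Concretely, the layout for one site looks more or less like this (using alexschroeder.ch as the example; the other sites follow the same pattern):

/home/alex/alexschroeder.ch/       # document root, served by the web server, contains wiki.pl
/home/alex/alexschroeder/          # wiki data directory, not served
/home/alex/alexschroeder/modules/  # the modules used by this wiki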

The home directory has a clone of the Oddmuse sources in ~/src/oddmuse and a Makefile as follows:

all:
	rm -rf ~/src/oddmuse/build; \
	cd ~/src/oddmuse; \
	git pull; \
	make prepare;

install:
	cd alexschroeder.ch; make
	cd alexschroeder/modules; make
	cd communitywiki.org; make
	cd communitywiki/modules; make
	...

Remember that those commands are all indented using a TAB.

The command make prepare prepares a copy of all the Oddmuse sources with version information in the ~/src/oddmuse/build directory.

In every document root you’ll find a wiki.pl wrapper script as follows:

#!/usr/bin/perl
package OddMuse;
$DataDir = '/home/alex/alexschroeder';
do 'current.pl';

The document root also contains a Makefile which updates current.pl. This Makefile usually does a bunch of other, site-specific things as well: updating CSS files, updating other scripts, and so on.

current.pl: ~/src/oddmuse/build/wiki.pl
	cp $< $@

Every modules directory also contains a Makefile. These Makefiles are all just links to the same file, because they are always identical:

concat.pl: source/*.pl
	cat $^ > $@

source/%.pl: ~/src/oddmuse/build/%.pl
	cp $< $@

This is a bit weird. Years ago, I believed that concatenating all those module files would speed up the wiki. Who knows whether that’s true. I never measured it.

Anyway, you copy all the modules you want into the source directory and run make in the modules directory. This will update any outdated copies in the source directory and concatenate them all into one single file called concat.pl. If you want to delete modules or add new modules, delete concat.pl and run make again.
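For example, adding a module goes something like this (creole.pl just stands in for whatever module you actually want):

cd alexschroeder/modules
cp ~/src/oddmuse/build/creole.pl source/   # put the module into the source directory
rm -f concat.pl                            # force a rebuild of the concatenated file
make                                       # refresh source/*.pl and regenerate concat.pl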

So, at the top level, I can run make && make install. This pulls the new revisions from git and runs make prepare, which populates the build directory with source files that include version information. The new wiki.pl then replaces the existing copies of current.pl, the new modules get copied into the source directories, and all of the modules are concatenated into a single concat.pl file in each modules directory.
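In other words, the whole update boils down to something like this, run from the home directory:

make            # git pull and make prepare in ~/src/oddmuse
make install    # every per-site Makefile refreshes current.pl and concat.pl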

Easy…

Well, it makes life easier for people maintaining multiple Oddmuse wikis.


2014-06-27 New Hosting Sometime Soon

https://farm4.staticflickr.com/3918/14354716818_79324b7034_o.png

Last year, I wondered about webhosting in Switzerland. Today @sagonetworks had another outage and I think I’m going to move my wikis to @edisat. This will take a few days, but I’ve already started the process: moving data, setting up the system. When the time comes, I’ll lock all the wikis on the old host and change the DNS entries; as soon as the update reaches you, the wiki will be editable again because you’ll be connecting to the new system.

Note the outage in the graphic to the right, and the tapering out of requests as the new DNS entries start sending traffic to the new host. I love my Monitoring. 👍

The new host is called Kallobombus.


2014-06-23 Oddmuse Migration

Recent developments of Oddmuse forced me to write a little upgrade extension that helps users upgrade their installations. I’m unhappy with the entire thing.

Yes, I think upgrades are sometimes necessary. Ideally, users would never need to care. Additionally, users should never have to worry about installing all the upgrades they missed, in order. The usual way of doing this involves an installer. That’s tricky, too! What about permissions? Idiosyncratic installations? It’s messy.

What I did was write an extension that disables itself once it has run. Since it runs as part of the wiki, the upgrade is performed with the same user, group, and permissions as the wiki itself.

And yet, I just discovered that something isn’t right with my permissions and ownership settings. I’ve had to run my fix-data-dir-permissions script again. The setup involves my account (alex) and the account that runs CGI scripts (www-data). Thus, the files and directories must belong to www-data and to the group alex. I am also part of group alex, which is why I have write permission on new files even though www-data creates them.

#!/bin/bash

if test "$1" == "-n"; then
    echo Not executing
    ECHO=echo
    shift
fi

if test -z "$1"; then
    F=`basename $0`
    echo Usage: $F [-n] DATADIR
    exit 1
fi

if test ! -d "$1"; then
    echo You need to provide an existing directory
    pwd
    exit 2
fi

echo fixing directory permissions for `pwd`/$1
find $1 -type d -not -perm 2770 -exec $ECHO sudo chmod 2770 {} \;
echo fixing file permissions
find $1 -type f -not -perm 0660 -exec $ECHO sudo chmod 0660 {} \;
echo fixing ownership
find $1 \( -not -group alex -or -not -user www-data \) -exec $ECHO sudo chown www-data.alex {} \;
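Usage is something like the following, assuming the script is saved as fix-data-dir-permissions in the current directory; the -n option makes the script print the sudo commands instead of running them, and the directory name is just an example:

./fix-data-dir-permissions -n alexschroeder   # dry run: show what would be changed
./fix-data-dir-permissions alexschroeder      # fix permissions and ownership for real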


Comments on 2014-06-23 Oddmuse Migration

😭 It seems like bash is much harder than most people think. Weirdly, Perl has a reputation as a hard-to-learn language, but bash is much, much worse, even though it looks like you could learn it in an hour.

So, let me go through it line by line:

if test "$1" == "-n"; then

The use of test hardly makes any sense. You have a bash shebang, and by using == you’re relying on bash completely (you don’t plan to be compatible with anything else), so why would you use test? Why not the bash [[ ]] syntax? Like this:

if [[ $1 == '-n' ]]; then

You have many quoting errors later… When in doubt, quote every variable everywhere. Extra quotes never hurt. You don’t have to quote the left side of the expression in [[ ]], and there are some other cases where quotes are not needed, but it won’t be too bad if you have excessive ones. Okay, next:

if test -z "$1"; then
# to
if [[ -z $1 ]]; then

Actually, it’s exactly the same thing as in the previous one.

F=`basename $0`

But this one is much worse. First of all, don’t use backticks, that’s the old way. Use $() ! It is much cleaner, you will not confuse it with regular quotes, and most importantly you can nest it! That was just a style issue, but look! No quotes around $0. And yes, you need them, otherwise it will break if your path has whitespace.

You can see the difference here:

alex@Margo:~$ a='hello world/test' && basename $a
hello
alex@Margo:~$ a='hello world/test' && basename "$a"
test

As an exercise, guess what would happen if a='* world/test'

Basically you’ve got the same quoting error on the next line. Sooo, I changed both lines like this:

    echo "Usage: $(basename -- "$0") [-n] DATADIR" >&2

Some people would instantly scream “HEY! These quotes are wrong! Your $0 variable is not quoted, because the first " matches the second " and the third matches the fourth, leaving $0 in the middle unquoted”. Well, that’s not the case. At least not in bash. Bash is smart enough to treat this situation correctly: quotes inside $() will not match quotes outside of it. This actually makes sense, because how else would you quote parameters if it worked the way some people expect?
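For instance, reusing the example from above, the inner quotes do their job even though they sit inside the outer double-quoted string:

alex@Margo:~$ a='hello world/test' && echo "dir: $(dirname "$a")"
dir: hello world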

Now that the lesson is learned, we can forget basename and use parameter expansion instead.

echo "Usage: ${0##*/} [-n] DATADIR" >&2 # Wheee!

Well, that’s it, basically. I have also added >&2 to print to stderr instead of stdout, quoted other unquoted variables, added single quotes around plain strings (this is just an indication that they are plain strings, and it gives nice syntax highlighting too!), added -- in case of weird file names with leading dashes (this is covered in Pitfall #3), and changed the pwd call to $PWD. Many of these are minor style mistakes, but I believe that some people might commit suicide after seeing bash scripts like these again and again :)

If you have an hour or so, read through this list http://mywiki.wooledge.org/BashPitfalls , it helps so much.

I hope you don’t mind these corrections, I just feel that if I don’t explain such issues, people will keep making the same mistakes.

Here is the whole script:

#!/bin/bash
ECHO=''
if [[ $1 == '-n' ]]; then
    echo 'Not executing'
    ECHO=echo
    shift
fi

if [[ -z $1 ]]; then
    echo "Usage: ${0##*/} [-n] DATADIR" >&2
    exit 1
fi

if [[ ! -d $1 ]]; then
    echo 'You need to provide an existing directory' >&2
    exit 2
fi

echo "fixing directory permissions for $PWD/$1"
find -- "$1" -type d -not -perm 2770 -exec $ECHO sudo chmod 2770 {} \;
echo 'fixing file permissions'
find -- "$1" -type f -not -perm 0660 -exec $ECHO sudo chmod 0660 {} \;
echo 'fixing ownership'
find -- "$1" \( -not -group alex -or -not -user www-data \) -exec $ECHO sudo chown www-data.alex {} \;

Oh, and $ECHO is unquoted because unquoted variables indicate ✨MAGIC✨ and that’s what you’re doing with $ECHO :)

AlexDaniel 2014-06-28 05:13 UTC



AlexSchroeder
Wow, this is awesome! Thank you very much for all the explanations.

AlexSchroeder 2014-06-28 08:28 UTC


Owww, I missed one. If you don’t initialize ECHO, it will pick up whatever value happens to be in the environment. For example, if you run your script like this:

ECHO=rm ./thatscript mydir
#or
export ECHO=rm
./thatscript mydir

You might face interesting results (with that value of ECHO being passed to -exec, ouch!). That’s why it is a good idea to initialize your variables in bash.

AlexDaniel 2014-06-28 13:48 UTC



AlexSchroeder
Haha, this is great. Thank you! :)

AlexSchroeder 2014-06-28 16:23 UTC



Harald
Another nice resource: https://google-styleguide.googlecode.com/svn/trunk/shell.xml

Also make sure to treat variables that may have spaces in them (looking at directory paths and file names) correctly or you may end up with broken stuff.

Oh, and look at getopts.

Your first sentence is true though: Instead of bash, it makes sense to look at and use a more modern scripting language, dropping 20-30 years of baggage along the way.

Harald 2014-06-29 09:32 UTC


@Harald, actually Bash is not that bad. There were some wrong decisions, like -r not being the default for read, but these are minor. Do you think that all variables should be quoted automatically, and that programmers should “unquote” variables instead? I don’t think so.

Also, what about variables with spaces? I have fixed all these issues. Can you spot something that I missed?

Yeah, getopts is the way to go. I just felt like it is not worth the hassle to introduce getopts just for one tiny parameter.

Now, about that google-styleguide… I don’t suggest reading it, it is plain *cough*bullshit*cough*.


Well, I can quickly go through most of the mistakes:

for dir in ${dirs_to_cleanup}; do

Clearly $dirs_to_cleanup is not quoted, which indicates that there is something wrong. Better to use an array and "${dirs_to_cleanup[@]}".

Also, what is the point of using curlies like ${var} everywhere??? They’d do better to teach putting quotes everywhere.


rm "${dir}/${ORACLE_SID}/"*

That’s rm without -- . Well, not a huge issue, but isn’t this supposed to be a style guide?


if [[ "$?" -ne 0 ]]; then

Funny. The left side of the expression is quoted when it does not have to be. But even if it had to be quoted, what would be the benefit of quoting $? ? It is always an integer. And since it is an integer, it would be better to use (( )) for arithmetic expressions, like: if (( $? != 0 )); then


case "${expression}" in

Does not have to be quoted, actually, but at least it does not hurt.


while read f; do
  echo "file=${f}"
done < <(ls -l /tmp)

WHOA! Holy sh…!!! That’s amazing… Bash pitfall #1, parsing ls, read without -r. It just couldn’t be worse.


# -z (string length is zero) and -n (string length is not zero) are
# preferred over testing for an empty string
if [[ -z "${my_var}" ]]; then
  do_something
fi

But hey, it is as simple as if [[ ! $my_var ]]; then , why would they complicate things so much?

# Instead of this as errors can occur if ${my_var} expands to a test
# flag
if [[ "${my_var}" ]]; then
  do_something
fi

What??? Which error?? Am I missing something?


zip_version="$(dpkg --status zip | grep Version: | cut -d ' ' -f 2)"

A rule of thumb is that you should never pipe any of these commands together: sed, awk, grep, perl, cut and some others. The reason is that the functionality you need is usually present in each of them; with a little trickery these tools will do each other’s job, even though they use completely different approaches. In this particular case you can use grep with the -P flag (perl regexes! Woohoo!). Like this:

zip_version=$(dpkg --status zip | grep -Po 'Version: \K.*')

But there are other solutions as well.


if ! mv "${file_list}" "${dest_dir}/" ; then

Just look at that variable. It says “file list”. Aha, sure, this would definitely work.


addition=$((${X} + ${Y}))

Stupidity all over the place. This should be:

((addition = X + Y))


At least that "$file_list" example will give me a good laugh for the next few days. 😂

AlexDaniel 2014-07-02 08:09 UTC


2013-12-17 PDF Button

I’m experimenting with a PDF button for this website. In the past, I suggested Print Friendly & PDF. Yesterday I learned about wkhtmltopdf, which does the same thing without depending on a remote service and their ad revenue. On a typical Debian host, you apt-get install wkhtmltopdf, which installs a binary and all the required libraries. The problem is that this version needs an X11 server in order to work, which you don’t have on a web server. So in addition to the regular installation, you need to install a statically compiled binary which has been built against a patched version of Qt and no longer requires an X11 server.
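A quick way to convince yourself that the static binary works without X11 (the file names are placeholders; the flags are the same ones the action below uses):

# unset DISPLAY to make sure no X11 server is involved
env -u DISPLAY ~/bin/wkhtmltopdf --print-media-type --quiet document.html document.pdf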

In your Oddmuse config file:

$Action{pdf} = \&DoPdf;
push(@KnownLocks, 'pdf');

sub DoPdf {
  my $id = shift;
  RequestLockDir('pdf');
  local $StyleSheet = 'http://alexschroeder.ch/alex-2012.css';
  my $html = PageHtml($id);
  my $source = "$TempDir/document.html";
  my $status = '500 INTERNAL SERVER ERROR';
  open(HTML, '>:utf8', $source)
    or ReportError("Cannot write $source: $!", $status);
  # see GetHeader
  print HTML GetHtmlHeader(NormalToFree($id), $id);
  print HTML $q->start_div({-class=>'header'});
  print HTML $q->h1({-style=>'font-size: x-large'}, GetPageLink($id));
  print HTML $q->end_div(); # header
  print HTML $q->start_div({-class=>'wrapper'});
  # get rid of letter-spacing
  my $sperrung = '<em style="font-style: normal; letter-spacing: 0.125em; padding-left: 0.125em;">';
  $html =~ s/$sperrung/<em>/g;
  my $newthought = '<em style="font-style: normal; font-variant:small-caps; letter-spacing: 0.125em;">';
  $html =~ s/$newthought/<em style="font-style: normal; font-variant:small-caps">/g;
  print HTML $html;
  # see PrintFooter
  print HTML $q->end_div(); # wrapper
  print HTML $q->start_div({-style=>'font-size: smaller; '});
  print HTML $q->hr();
  print HTML $FooterNote;
  # see DoContrib
  SetParam('rcidonly', $id);
  SetParam('all', 1);
  my %contrib = ();
  for my $line (GetRcLines(1)) {
    my ($ts, $pagename, $minor, $summary, $host, $username) = @$line;
    $contrib{$username}++ if $username;
  }
  print HTML $q->p(Ts('Authors: %s',
                      join(', ', map { GetPageLink($_) }
                           sort(keys %contrib))));
  print HTML $q->end_div(); # footer
  print HTML $q->end_html;
  print HTML "\n";
  close(HTML);
  my $target = "$TempDir/document.pdf";
  my $error = `/home/alex/bin/wkhtmltopdf --print-media-type --quiet '$source' '$target'`;
  ReportError("The conversion of HTML to PDF failed", $status) if $error;
  open(PDF, '<:raw', $target) or ReportError("Cannot read $target: $!", $status);
  local $/ = undef;
  my $pdf = <PDF>;
  close(PDF);
  ReportError("$target is empty", $status) unless $pdf;
  binmode(STDOUT, ':raw');
  print GetHttpHeader('application/pdf');
  print $pdf;
  ReleaseLockDir('pdf');
}

sub PrintMyContent {
  my $id = UrlEncode(shift);
  if ($id and $IndexHash{$id}) {
    print qq{
<form action="$FullUrl"><p>
<input type="hidden" name="action" value="pdf" />
<input type="hidden" name="id" value="$id" />
<input type="submit" value="PDF" />
</p></form>
};
  }
}

Let me know if it works for you while I try to figure out whether I need this at all. The position of the PDF button at the very bottom of the page is probably less than ideal.

As you can tell, the markup using increased Wikipedia:letter-spacing is messing it all up, which is why I had to fix it.


Comments on 2013-12-17 PDF Button

Hi Alex, a long time ago I wondered about including a patch like this for my wiki, but most browsers have a utility like this built in, along with others like breadcrumbs, etc. Is there any advantage? Thanks.

JuanmaMP 2013-12-19 08:52 UTC



AlexSchroeder
On my Mac, I don’t need it. Printing to PDF is simple. On Windows, however, I need to install a software PDF printer if I want to do this. Just recently my sister asked me for help converting a Word document she had written. She didn’t manage the installation of the PDF printer software. That reminded me of the fact that for some users, a PDF button might still be necessary. I personally don’t like the “save as HTML” feature of most browsers because it results in HTML + a directory of CSS files, images, ads, scripts, and so on. PDF feels “safe”.

AlexSchroeder 2013-12-19 09:03 UTC


2013-12-01 Anonymizing the Oddmuse log files

I’ve just implemented a new, non-optional Oddmuse feature: I’m removing all hostnames and IP numbers from older log entries. Log entries older than 90 days are stored in a separate log file in order to speed up the generation of RecentChanges. During maintenance, these log entries are copied from one file to the other, and I’m now taking advantage of this copying to remove the hostname or IP number.

Basically, as a person I dislike invasions of privacy, and I feel that in some small way software engineers invite them, because it’s often the easier thing to do. We often model things to never forget, e.g. version control.

One of the important pages on Meatball was ForgiveAndForget. Forgetting is human.

At the same time, with Snowden and the NSA, I feel that as the person hosting these sites I’m more comfortable if I cannot provide the logs an agency is looking for.

Furthermore, I’ve had a very small number of emails from users asking me to remove their hostnames from the log files because they had accidentally edited the wiki from work. Pages containing their hostname would eventually be deleted, but log entries were not. Now they’re anonymized, and people can feel safer knowing that the traces will eventually disappear.

The idea is that you would only need hostnames or IP numbers to fight spam and vandalism: Add regular expressions matching either hostname or IP number of spammers or vandals to your list of banned hosts and prevent the attack from continuing. After a few days, however, this information is no longer required. In this day and age of privacy invasion, I think software should take a pro-active stance. The log entries must be anonymized.

The existing log file for the older entries is not changed. If you want to do the right thing, there’s a script called anonymize.pl in the contrib directory to do just that.

Just call it in your data directory. Example:

alex@psithyrus:~/oddmuse$ perl ~/src/oddmuse/contrib/anonymize.pl 
Wrote anonymized 'oldrc.log'.
Saved a backup as 'oldrc.log~'

See Oddmuse:Upgrading Issues for a more technical explanation of what’s going on.


2013-08-14 Comments on this Wiki Blog

I’ve done a few changes. Let’s see whether it works out.

  1. when viewing ordinary pages, previous comments and the comment form are shown ← this is what I wanted to add
  2. people who follow the RSS feeds will therefore easily find the obvious comment form when following the link ← this is what I hope to achieve
  3. when leaving a comment, you end up on the comment page, which might be confusing
  4. when looking at RecentChanges, which has some extra magic associated with its name, the comments are not shown
  5. when looking at older revisions, the page history, and many other variants, the comments are not shown
  6. when looking at a journal page such as Diary, the comments are not shown
  7. when looking at a journal page such as Diary, you can still see links to inline the comment page
  8. inlined comment pages don’t come with an automatic comment form—you need to click on the “Add comment” link at the end
  9. comment pages are still excluded from the usual feeds (I wonder whether I should change this)

I think the Wiki + Blog combo still works. I’m just trying to make it less weird. :)

Source code for your config file, if you’re an Oddmuse user. The source code below also includes my Google +1 setup (Oddmuse:Google Plus One Module) because my code needs to avoid the situation where a page shows two +1 buttons. As for comments within journals, I use Oddmuse:Dynamic Comments Extension.

# Google +1 list

push(@MyAdminCode, sub {
       my ($id, $menuref, $restref) = @_;
       push(@$menuref, ScriptLink('action=plusone',
				  T('Google +1 Buttons'),
				  'plusone'));
     });

$Action{plusone} = \&DoPlusOne;

sub DoPlusOne {
  print GetHeader('', T('All Pages +1'), ''),
    $q->start_div({-class=>'content plusone'});
  print $q->p(T("This page lists the twenty last diary entries and their +1 buttons."));
  my @pages;
  foreach my $id (AllPagesList()) {
    push(@pages, $id) if $id =~ /^\d\d\d\d-\d\d-\d\d/;
  }
  splice(@pages, 0, $#pages - 19); # last 20 items
  print "<ul>";
  foreach my $id (@pages) {
    my $url = ScriptUrl(UrlEncode($id));
    print $q->li(GetPageLink($id),
		qq{ <g:plusone href="$url"></g:plusone>});
  }
  print "</ul>";
  print $q->end_div();
  PrintFooter();
}

# two step Google +1 button to protect your privacy
# http://my.opera.com/QuHno/blog/adding-the-google-1-button-to-a-webpage-without-violating-the-users-privacy

*MyOldGetCommentForm=*GetCommentForm;
*GetCommentForm=*MyNewGetCommentForm;

sub MyNewGetCommentForm {
  return MyOldGetCommentForm(@_) . q{
<script type="text/javascript">
function loadScript(jssource,thelink) {
   var jsnode = document.createElement('script');
   jsnode.setAttribute('type','text/javascript');
   jsnode.setAttribute('src',jssource);
   document.getElementsByTagName('head')[0].appendChild(jsnode);
   document.getElementById(thelink).innerHTML = "";
 }
 var plus1source = "https://apis.google.com/js/plusone.js";
</script>
<p id="plus1">
  <a href="javascript:loadScript(plus1source,'plus1')">
    <img src="/pics/plusone-h24.png" alt="Show Google +1" />
  </a>
</p>
<!-- <g:plusone></g:plusone> -->
<div class="g-plusone" id="my_plusone"></div>
<script type="text/javascript">
  document.getElementById("my_plusone").setAttribute("data-size", "medium");
  document.getElementById("my_plusone").setAttribute("data-href", document.location.href);
</script>
};
}

# make sure journal pages set a global variable which we then use to
# hide the comment form

*MyOldPrintJournal = *PrintJournal;
*PrintJournal = *MyNewPrintJournal;

my $MyPagePrintedJournal;

push(@MyInitVariables, sub {
       $MyPagePrintedJournal = 0;
     });

sub MyNewPrintJournal {
  $MyPagePrintedJournal = 1;
  return MyOldPrintJournal(@_);
}

# list comments and comment form at the bottom of every normal page

*MyOldPrintFooter = *PrintFooter;
*PrintFooter = *MyNewPrintFooter;

sub MyNewPrintFooter {
  my ($id, $rev, $comment) = @_;
  if (!$MyPagePrintedJournal
      and GetParam('action', 'browse') eq 'browse'
      and $id and $CommentsPrefix
      and $id ne $RCName
      and $id !~ /^$CommentsPrefix(.*)/o) {
    my $target = $CommentsPrefix . $id;
    my $page = '';
    $page = PageHtml($target) if $IndexHash{$target};
    print $q->div({-class=>'comment'},
		  $q->h2(T('Comments')),
		  $page);
    # don't include Google +1 button twice
    print MyOldGetCommentForm("$CommentsPrefix$id", $rev, $comment);
  }
  MyOldPrintFooter(@_);
}


2013-01-14 Javascript

I just rewrote my Tag Cloud to use the Google Treemap.

What it does is the following:

  1. The cloud action runs the old code (text representation).
  2. The tagcloud action runs the new code.
  3. The new code basically takes the example Google Treemap and replaces the data with my tags. These tags are not nested.
  4. On click, we call the tag action which shows the last blog pages with that tag.

Try it! → Tag Cloud. (I realize that the tag cloud isn’t very interesting in and of itself. I just enjoyed using the Google tools and learning a little bit of Javascript on the way.)

$Action{cloud} = $Action{tagcloud};
$Action{tagcloud} = \&MyTagCloud;

sub MyTagCloud {
  print GetHeader('', T('Tag Cloud'), '');
  # open the DB file
  require DB_File;
  tie %h, "DB_File", $TagFile;
  my $max = 0;
  my $min = 0;
  my %count = ();
  foreach my $tag (grep !/^_/, keys %h) {
    $count{$tag} = split(/$FS/, $h{$tag});
    $max = $count{$tag} if $count{$tag} > $max;
    $min = $count{$tag} if not $min or $count{$tag} < $min;
  }
  untie %h;
  # ignore 90% of all tags
  my @values = sort values %count;
  $min = GetParam('min', $values[int($#values * 0.9)]);
  # https://developers.google.com/chart/interactive/docs/gallery/treemap
  print <<EOT;
    <script type="text/javascript" src="https://www.google.com/jsapi"></script>
    <script type="text/javascript">
      google.load("visualization", "1", {packages:["treemap"]});
      google.setOnLoadCallback(drawChart);
      function drawChart() {
        // Create and populate the data table.
        var data = google.visualization.arrayToDataTable([
          ['Location', 'Parent', 'Mentions', ''],
          ['Tags', null, 0, 0],
EOT
  foreach my $tag (sort keys %count) {
    my $n = $count{$tag};
    next unless $n > $min;
    print "          ['$tag', 'Tags', $n, 0],\n";
  }
  print <<EOT;
        ]);
        // Create and draw the visualization.
        var tree = new google.visualization.TreeMap(document.getElementById('treemap'));
        tree.draw(data);
        google.visualization.events.addListener(tree, 'select', selectHandler);
        function selectHandler() {
          var selection = tree.getSelection();
          var item = selection[0];
          window.location = "/alex?action=tag;id=" + data.getValue(item.row,0);
        }
      }
    </script>
EOT
  print $q->start_div({-class=>'content cloud'});
  print $q->p(ScriptLink('action=tagcloud;min=0', T('Include all tags')),
              Ts('(currently showing tags with more than %s occurrences)', $min));
  print $q->p(Ts('Or switch to a %s.', ScriptLink('action=cloud', T('text format'))));
  print $q->start_div({-id=>'treemap', -style=>'height: 1000px;'});
  print $q->end_div();
  print $q->end_div();
  PrintFooter();
}


2012-08-14 Two-step verification

My websites have not been sending any email notifications to people who had subscribed to page edits. The reason was that I had switched my Google account to use 2-step verification. Only today did I realize that the error messages were trying to tell me that I needed an application-specific password for my script.

Everything should be back in working order. :)

I feel relieved. For a while I thought I’d have to dig around in the Perl SMTP libraries. Ugh!

And I feel much better with 2-step verification. The Mat Honan story was scary.


2012-03-07 Tinkering with the CSS

I’ve started tinkering with the CSS again. No more secret theme switching by clicking on my face up there. 😊

I’m going for black and white and the default blue for all links. The previous classification for site links, links to sister sites, links to Wikipedia and all other links didn’t really work with all the redirections I had implemented. Thus, be gone!

I’m sticking to Garamond as my favorite font even though I must admit it looks absolutely horrible when viewed on Windows without ClearType. Oh well! Most people will be reading the site via a feed reader anyway, I guess.

I’ll also try to stick to a larger than average font-size. I haven’t decided whether I should be using Google Web Fonts.

Let me know if there is something you’d like to see changed. 😊


Comments on 2012-03-07 Tinkering with the CSS


Andreas Gohr
How does the referrer thingy at the bottom of the post work? The two pages linked seem not to link to this post?

Regarding styling: I’d prefer a non-serif font.

Andreas Gohr 2012-03-08 10:14 UTC



AlexSchroeder
The referrer thing works as follows: if a visitor to the wiki arrives with referrer information, the wiki fetches the referring page and searches it for a link back to the page being visited. If it finds one, it remembers the link. This means that referrals eventually expire if nobody ever follows them. Normally this works very well. Some sites such as Blogspot offer users a blog-roll. As far as I understand it, users will then see the n most recently updated entries or a random selection. Either way, by the time other visitors try to check the referral, it’s no longer being displayed.

If you check Self:action=refer you will notice that at the moment the authors of Dreams of Mythic Fantasy, The City of Iron and Troll and Flame seem to have me listed on their blog-roll. All of them are Blogspot blogs. :)

As for sans-serif, try this link. (Your preference is set in a cookie.) Feel free to switch back.

AlexSchroeder 2012-03-08 11:46 UTC



AlexSchroeder
For Pierre and others who preferred the old look, try this: switch to the Red Beige theme, or switch back to the default theme.

AlexSchroeder 2012-03-23 15:28 UTC


2012-02-02 Oddmuse Channel

This is why I like the Oddmuse channel on Freenode so much… 😊

16:51 CapnDan: Honestly I don’t understand why people get so religious about different versions of Unix.
16:52 kensanata: Everybody likes to imagine that their decision was rational.
16:52 kensanata: Therefore they infuse the first decision and the facts that influenced it with overproportional importance.
16:52 kensanata: NETBSD IS THE ONLY SECURE OPERATING SYSTEM!!!!
16:53 CapnDan: yeah, there’s a name for that, I was reading about it a few months ago.
16:53 CapnDan: A popular psychology word from last year.
16:53 kensanata: “Moron Syndrome”?
16:53 CapnDan: lol

(Dan later discovers that the term he was looking for is cognitive inertia.)

