2013-07-26 Extracting Starred URLs from Google Reader Takeout Data

Google Reader was shut down. Luckily, Google Takeout allowed you to download all of your data before they shut it down. I did that. I wanted to extract the URLs of all the articles I starred in order to post them on this blog… maybe.

Here’s how I did it. First, take a look at the file starred.json.

    (require 'json)
    ;; Parse starred.json in the buffer visiting it inside the Takeout
    ;; archive, then list its top-level keys.
    (setq starred-items (with-current-buffer "starred.json (Google Reader-takeout.zip)"
                          (goto-char (point-min))
                          (json-read)))
    (mapcar (lambda (item) (car item)) starred-items)
    ⇒ (items direction updated author title id)
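
If you would rather read the file from disk than from a buffer, json.el can do that directly. A minimal sketch, assuming you extracted the archive somewhere; the path is made up:

    (require 'json)
    (setq starred-items (json-read-file "~/Takeout/Reader/starred.json"))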

I’m interested in items, which happens to be an array. Let’s see what each item contains.

    (mapcar (lambda (item) (car item))
            (aref (cdr (assoc-string "items" starred-items)) 0))
    ⇒ (origin annotations comments author content replies alternate updated published title categories id timestampUsec crawlTimeMsec)
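
Probing the first item's alternate shows where the URL lives (same starred-items as above):

    (cdr (assoc-string "alternate"
                       (aref (cdr (assoc-string "items" starred-items)) 0)))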

As it happens, the URL I’m interested in is part of alternate. Let’s make sure there’s always exactly one entry:

    (mapc (lambda (item)
            (when (not (= 1 (length (cdr (assoc-string "alternate" item)))))
              (error "%S" item)))
          (cdr (assoc-string "items" starred-items)))

Phew! Let’s produce a first list of URLs and the respective titles:

    (mapc (lambda (item)
            (let ((href (cdr (assoc-string "href"
                                           (aref (cdr (assoc-string "alternate" item)) 0))))
                  (title (cdr (assoc-string "title" item))))
              (insert (format "* [%s %s]\n" href title))))
          (cdr (assoc-string "items" starred-items)))
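
The same loop can write straight to a file instead of the current buffer; with-temp-file makes that a one-line change. A sketch, with a made-up output path:

    (with-temp-file "~/starred-urls.txt"
      (mapc (lambda (item)
              (let ((href (cdr (assoc-string "href"
                                             (aref (cdr (assoc-string "alternate" item)) 0))))
                    (title (cdr (assoc-string "title" item))))
                (insert (format "* [%s %s]\n" href title))))
            (cdr (assoc-string "items" starred-items))))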

I hate feedproxy URLs, so I absolutely wanted to get rid of all the URLs starting with http://feedproxy.google.com/. This required a bit more code, since neither url-retrieve-synchronously nor url-retrieve does exactly what I want: the synchronous variant only returns the response buffer, not the status plist that records the redirect, and the asynchronous variant means I have to wait for the callback myself.

    (require 'url)

    (defun redirection-target (url)
      "Do a HEAD request for URL and return its redirect target, cleaned up."
      (save-match-data
        (let ((url-request-method "HEAD")
              (retrieval-done nil)
              (spinner "-\\|/")
              (n 0))
          ;; The callback fires while we are still waiting in the loop
          ;; below, so setting the let-bound variables works.
          (url-retrieve url
                        (lambda (status &rest ignore)
                          (setq retrieval-done t
                                ;; keep the original URL if there was no redirect
                                url (or (plist-get status :redirect) url)
                                url (replace-regexp-in-string "blogspot\\.ch" "blogspot.com" url)
                                url (replace-regexp-in-string "\\?utm.*" "" url))))
          (while (not retrieval-done)
            (sit-for 1)
            (message "Waiting... %c" (aref spinner (setq n (mod (1+ n) (length spinner))))))
          url)))
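
Calling it blocks until the response arrives; the feedproxy URL below is a made-up example:

    (redirection-target "http://feedproxy.google.com/~r/SomeBlog/~3/abcdef/")
    ;; returns the final article URL, cleaned up as above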

Now I can run the following search and replace operation in the buffer where I generated my list:

    (while (re-search-forward "http://feedproxy\\.google\\.com/\\S-+" nil t)
      (replace-match (redirection-target (match-string 0))))
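
The loop assumes point is at the beginning of the buffer. Wrapping it makes that explicit, and passing FIXEDCASE and LITERAL to replace-match inserts the URL exactly as returned (a sketch):

    (save-excursion
      (goto-char (point-min))
      (while (re-search-forward "http://feedproxy\\.google\\.com/\\S-+" nil t)
        (replace-match (redirection-target (match-string 0)) t t)))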

Phew, thank you, Emacs!

Tags: RSS
