
c u t u p @ c u t u p . o r g


[googlereader lynx api] last modified: 03/25/2007 02:03 pm

Here's a way to get a count of unread items in Google Reader from the shell. They haven't really released an API yet, and AFAIK one of the reasons is that they only have cookie-based auth set up, so it's not as easy as it should be, but this works. It uses lynx and awk.

1) Find lynx.cfg. On my Mac with Fink it's /sw/etc/lynx.cfg; otherwise it's probably in /etc somewhere. Find the PERSISTENT_COOKIES line and make sure it's uncommented and set to TRUE.

2) Navigate to google.com/reader, log in and accept the cookies, then quit. Now you have the cookie you need wherever your lynx.cfg saves them.

3) Now this shell command:

lynx -dump 'http://www.google.com/reader/api/0/unread-count?all=true' | awk -F'>' '/feed/{x=1}/count/{if(x==1)y+=0+$2;x=0}END{print y}'

will print the count of unread items to STDOUT for use in your nefarious projects. I throw it into Geektool.
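To see what the awk part is doing, here it is run against a hand-made sample of what (I'm assuming) the endpoint returns - the feed urls and counts below are invented, and the one-element-per-line layout is what the -F'>' trick relies on:

```shell
# hand-made sample in the shape the awk one-liner expects:
# an id line naming the feed, then a count line (feed urls are invented)
sample='<string name="id">feed/http://example.com/rss</string>
<number name="count">3</number>
<string name="id">user/-/state/com.google/reading-list</string>
<number name="count">99</number>
<string name="id">feed/http://example.org/atom</string>
<number name="count">4</number>'

# /feed/ arms a flag on feed-id lines; /count/ adds field 2 (the text
# after the first ">") only when the flag is armed, then clears it
echo "$sample" | awk -F'>' '/feed/{x=1}/count/{if(x==1)y+=0+$2;x=0}END{print y}'
# prints: 7  (3 + 4; the aggregate reading-list count of 99 is skipped)
```

The flag is what keeps aggregate entries like the reading-list total from being counted twice.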

Anyone know a better way to get this? It seems like there's a 20-digit userID that everyone has - but I haven't found a way to use that to get this information.

[firefox api] last modified: 03/25/2007 02:03 pm

Lately, I've been customizing the heck out of my Firefox bookmarks, and I thought I'd write about what I've been doing. It's a couple of sort-of-trivial things, but taken together they've been pretty darn useful, and none of this stuff was as immediately obvious to me as maybe it should have been.

Since I use del.icio.us, almost none of my bookmarks are just plain urls; they are mostly hacks of Firefox's Quick Search feature. I use the Bookmarks Synchronizer extension to publish a copy of my bookmarks online, so as well as having a backup I have a way to standardize changes across machines, which makes it feel more worthwhile to spend time on this.

Here's the deal. If you make a new bookmark in Firefox you get this:


What's important are the location and keyword fields.

To make a custom search bookmarklet, you perform the search you want and copy the resulting url. For example, if I go to AltaVista and search for poop, I end up at this url:


Now, suppose I make a bookmark to that url with the Title altavista, replace poop in the url with %s, make that the Location, and set the Keyword to alta. Entering alta 31d1 in the url bar will then search AltaVista for 31d1 - the %s gets replaced with everything after the keyword. Note that entering just alta will search AltaVista for the literal string %s.
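The substitution itself is simple enough to sketch in shell - the function name and example URL here are invented, just to show the mechanics:

```shell
# toy sketch of Firefox's keyword substitution; the function name and
# example URL are invented
expand_keyword() {
  template=$1
  shift
  before=${template%%'%s'*}   # text before the %s placeholder
  after=${template#*'%s'}     # text after it
  printf '%s%s%s\n' "$before" "$*" "$after"
}

expand_keyword 'http://example.com/search?q=%s' 31d1
# prints: http://example.com/search?q=31d1
```

Everything after the first argument lands where %s was, which is exactly the keyword-bookmark behavior described above.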

Where it starts to get interesting is how this can interact with any script.

What I do is have (some of) my (php) scripts accept the full $_ENV['QUERY_STRING'], parse the string, and perform whatever actions it calls for. For example, a todo list script might take:


as "delete item 17",

http://url_to_script/?blah blah blah

as "add new item 'blah blah blah'", and of course


should display the list. To make our Firefox bookmark extra nice we make it so


redirects to http://url_to_script.

Then I can make my bookmark Location http://url_to_script/?%s and set a keyword, and my browser is a little closer to a command line.
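The query-string conventions above boil down to a tiny dispatcher. Here's a toy CGI-style version in shell - the "-N deletes item N" syntax is invented, since the post doesn't pin down its exact scheme:

```shell
# toy dispatcher mirroring the conventions above; the "-N deletes
# item N" syntax is an invented stand-in
handle_query() {
  q=$1
  case "$q" in
    '')  echo "list items" ;;           # bare url: display the list
    -*)  echo "delete item ${q#-}" ;;   # leading dash: delete by number
    *)   echo "add item: $q" ;;         # anything else: add a new item
  esac
}

handle_query '-17'            # prints: delete item 17
handle_query 'blah blah blah' # prints: add item: blah blah blah
```

Keeping the dispatch this dumb is the point: the url bar, a bookmark keyword, and a curl one-liner all hit the same three branches.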

With a little javascript we can get a little deeper into browser-as-command-line stuff. The excellent del.icio.us super-fast bookmarklet is a case in point. It lets you post a page to your del.icio.us account by typing keyword tag1 tag2 ... tagx into the url bar while you are on the page, and it also puts any highlighted text into the extended field. Unfortunately, typing the keyword with no tags posts the page to del.icio.us under the tag %s, and I don't know enough javascript to fix that. My feeling is that the bare keyword should just take you to your del.icio.us page, or do nothing.

What I like about making scripts to fit this style is that the logic is all server side, and there are many ways to access and alter the data. You have nice command line style bookmarks, you still have the regular url you can access from any browser, and it is almost trivially easy to write a bash script using lynx -dump or curl -s to access and alter the data through a terminal. This is what I do with my todo list. I access it through a bash script all the time, and the script contains barely any logic.

The drawback of the bash script is, of course, that it only works when I'm online. I'm still experimenting with different ways of keeping and syncing a local copy, but that's not really the point of this post.

The upshot is I have a ton of bookmarks that I access through keywords instead of urls, and many of them can take arguments. For example, I now search del.icio.us with a bookmark with Location: http://del.icio.us/tag/%s and Keyword: ds. Since del.icio.us expects tags to be separated by + signs, to search for audio and mp3 I'd enter ds audio+mp3.
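When the same urls get hit from a bash script instead of the url bar, the + joining is one line of shell - the helper name here is invented:

```shell
# join script arguments with '+' the way del.icio.us tag urls expect;
# the helper name is invented. The body runs in a subshell so the
# IFS change doesn't leak out.
tags_to_query() (
  IFS='+'                # "$*" joins the arguments with the first IFS char
  printf '%s\n' "$*"
)

tags_to_query audio mp3  # prints: audio+mp3
```

The output can go straight onto the end of a curl or lynx -dump url.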

Sites where the url structure is the API lend themselves to this so well that the whole thing gives me lots of ideas. I feel like making web-based scripts with an eye toward hooking into this Firefox functionality is an easy way to get a lot of extras for free.
