My Rockbox Experience

Through a circuitous route, I ended up installing an updated Sansa firmware on my Clip+ last week and then, a few hours later, the Rockbox firmware. I had a vague hope that either would correct the one thing I don’t much like about the player – the fact that fast-forward and rewind are silent. Unfortunately, neither one changed that. I did, however, like a lot about Rockbox. I loved that I could change the display to the “mixtape” theme, which makes it look like an old school Walkman, complete with spools that change in proportion to the percentage of the file played.

The one thing about Rockbox that was unfriendly to my primary use of podcast listening was the way it restarted every file from the beginning every time. It’s a bummer to be 2 hours into a 3 hour show and lose your place by accidentally bumping the skip button (a pretty common occurrence with me). There is an automatic bookmarking system, but it is both complex and hard to get going. Even worse, it bookmarks on stop (not pause), so it doesn’t even address the accidental skip problem. Coincidentally, I ran across this dude who was bugged by the same thing and posted some patches less than 24 hours before I started using Rockbox.

My previous post relates my story of trying to get this set up on my Macbook, which seems to be a platform the Rockbox developers don’t much care about. On my work laptop I was able to get the source, apply the original patches and build it. As I fiddled with the UISimulator, I didn’t quite like the heuristic in those patches, which did the automatic position saving only when the file was over 20 minutes long. I have plenty of podcast files under 20 minutes that I still want the position preserved on, so I changed the logic from the file being longer than 20 minutes to the path beginning with “/PODCASTS/”, which is a built-in directory on the Sansa Clip anyway. The original firmware treats the /MUSIC hierarchy differently than /PODCASTS and I was cool with that, so I just preserved it. I made myself a build this afternoon and I’ve been using it all day. I sure like it in general. However, I think I’m also going to add the ability to test against the ID3 Genre tag, so that if it is “Podcast” the same behavior is preserved regardless of path. I’m also going to see if I can figure out how to make the left skip button skip by full files, rather than resetting to the beginning of the file, when the track counts as a podcast by the above criteria. At that point, I’ll have the absolute best features of Rockbox matched up with the original Sansa Clip firmware. Rock on!
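
To make the decision logic concrete, here is a rough sketch of the heuristic in Python. The actual change lives in Rockbox’s C playback code, and the function and parameter names here are my own stand-ins, not anything from the Rockbox source:

def should_save_position(path, genre=None):
    # My change: always save the position for anything under the Sansa's
    # built-in podcast directory, regardless of file length.
    if path.upper().startswith("/PODCASTS/"):
        return True
    # Planned addition: honor an ID3 Genre of "Podcast" so files stored
    # elsewhere get the same treatment.
    if genre is not None and genre.strip().lower() == "podcast":
        return True
    # The original patch instead checked for a duration over 20 minutes.
    return False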

I don’t know that there is any point in trying to submit the changes back to the Rockbox project, because they are pretty specific to my use case, specific to the Sansa way of doing things, and because the original dude seems to be catching static from the Rockbox developers for this feature that I find so wonderful. I guess I’ll offer it up to them if I can generalize it, but they don’t appear that interested in it. I can say that the currently released automatic bookmarking feature is unusable and the dude’s patch is wonderful to me. This is what open source is all about, is it not? I like it better this way, so I make it that way. If they don’t want it released, I’ll be happy with my homebrew version and they can release whatever they want.

Open Source Fun

I’ve started playing around with Rockbox for my Sansa Clip+, about which I’ll post separately. However, I wanted to point out something I found interesting, funny and kind of pathetic. I downloaded the source and could build the firmware itself, but the testable UI Simulator does not build on the Snow Leopard version of OS X. You’d think that the software failing to build on the most widely distributed Unix variant, on its newest version which has been out for a year, would be of interest to them, but you’d be wrong. Instead the advice was to set up a VM instance, install Linux on it and build from there. Oh boy. That’s a fun way to encourage participation – add yet another yak to shave. Pass.

As it happens, my day job workstation is Ubuntu, and I spent today doing work with some slow compiles, so I was able to build and test from there in the dead time waiting for builds. However, there was no chance of me going out of my way to set up a Linux VM on my Macbook in order to work with Rockbox software. At that point, you might as well tell me to “write a distributed map reduce function in Erlang”, à la this webcomic.

Software Engineering Radio Interview with Kent Beck

I often randomly stumble across podcasts and try a few episodes out. I just did that with Software Engineering Radio, mainly to hear this interview with Kent Beck. He is the person behind JUnit. In my job I try to do test-driven development wherever possible (given that I work on a system with many, many moving parts). We have a unit test framework for our C code, and if any problem can be reduced to a failing unit test case, that is always where I want to start. I’ve gotten to the point that in those cases where it just isn’t feasible, working without that test coverage makes me itchy and nervous.

There is a second order effect to all this that sometimes gets lost in the shuffle. When it is part of the culture of a team to always unit test their code, beyond the value of the testing itself it forces the code to be written in a testable manner. You tend not to have gigantic functions with many effects, because those just aren’t testable. The same aspects that make code unit testable also make it more understandable when new people are brought on board to maintain it, so that is always a long-term positive.

It’s been over 4 years now since I wrote Java code on a daily basis, so my understanding of the state of that art is slowly atrophying. One part of the interview I found really interesting was when Beck talked about JUnit Max. I had never previously heard of it, but it is an Eclipse plugin for doing continuous unit testing in the background as you work, in the same way Eclipse is always compiling after every change. There are some interesting heuristics in it, such as running the most recently added tests first, along with the tests that have failed most recently. The idea is to put any failures in front of the developer as quickly after saving a code file as possible, so by guesstimating which tests have a higher probability of failure, JUnit Max tries to get to that point faster. Also interesting is that it is not free, but $100/year for a license. This seems like a very good way to monetize open source. JUnit itself is free, and Beck et al. have given it to the world for free. This value-added enhancement, which is primarily of value to those whose time is money (high rate consultants or high wage employees), is not free, but it is priced such that practically any organization is going to see a payback quickly. The guy has saved the development world billions of dollars by allowing them to find bugs earlier; it makes sense to kick him $100/license for saving you a little more time if you need it. Everyone is a winner.
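
The prioritization idea is simple enough to sketch. This is purely my approximation of the heuristic as described in the interview, in Python rather than Java, and nothing like JUnit Max’s actual code:

from dataclasses import dataclass

@dataclass
class TestRecord:
    name: str
    added_at: float        # when the test entered the suite
    last_failed_at: float  # 0.0 if the test has never failed

def run_order(tests):
    # Most recently failed tests run first, then most recently added ones,
    # so probable failures reach the developer as early as possible.
    return sorted(tests, key=lambda t: (t.last_failed_at, t.added_at), reverse=True)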

Google Chrome on OS X Now Has Greasemonkey Built-in!

One of the big bummers for me of using Google Chrome on OS X was the lack of Greasemonkey script support. There were various hackish ways to make it work for Windows versions, but I couldn’t find a reasonable way on OS X. Tonight I realized part of why that is. The support is there, there just is no interface around it. However, if you click on a Greasemonkey script it installs and is visible right there on the chrome://extensions page.

The main script I want to run is the “Enhance ComicBookDB” script, which adds some links and changes some defaults on that site. I clicked it and Chrome asked me if I wanted to install it. I did, reloaded Comic Book DB and voila, it was working. Wow! This is working for me in the 4.0.249.22 build. This makes me freakishly happy.

Popularity Contest: WordPress Plugin Hall of Shame

Yesterday I had my first ever bad experience with a WordPress plugin auto-upgrade, and it was really, really bad. I did the auto-upgrade of the Popularity Contest plugin, the first upgrade in a very long time. These upgrades have become so routine that I don’t think twice about executing them any more. I clicked the link and BANG, my blog stopped working. Completely. Totally. All pages, including the wp-admin pages, said:

Error: Popularity contest cannot be installed.

I had to move the plugin out of the directory in order to get anything to work again, then I had to clear my SuperCache because some of the non-functioning pages had been cached, like the all-important front page.

This morning before work I took a few seconds to look at it, and I found some egregious code. The plugin has a get_settings() method which contains this code:

// If the DB tables are not in place, lets check to see if we can install
if (!count($settings)) {
    // This checks to see if we need to install, then checks if we can install
    // For the can install to work in MU the AKPC_MU_AUTOINSTALL variable must be set to 1
    if (!$this->check_install() && $this->can_autoinstall()) {
        $this->install();
    }
    $settings = $this->query_settings();

    if (!count($settings)) {
        // trigger_error('Popularity Contest Cannot Install', E_USER_WARNING);
        wp_die('<b>Error:</b> Popularity contest cannot be installed.');
    }
}

Now, this appears to have a couple of really bad problems. One, it is doing wp_die for a configuration problem! Seriously, WTF? I think that violates the contract between a plugin and the main WordPress process. If you have some sort of problem, fail gracefully rather than shutting down the whole blog. Second, it seems like my case will always trigger that die. The $settings array comes from:

select * from wp_ak_popularity_options;

I had the plugin previously installed, so it didn’t need to install, but I was running all default options, so my wp_ak_popularity_options table had zero rows. At this point, the plugin decided that was such a bad problem that it needed to shut down all of WordPress, including the wp-admin pages you would use to create those option records. Oh boy.

What I did to get back running was to comment out the wp_die line. At that point, the plugin actually installed and I was able to get to the settings page for it. I did a “Save” even though I had all default values just to get rows into that table. Now my table has 4 rows in it and presumably even if I do another automatic upgrade this one should keep working. This plugin is now on my watchlist, though. It’s burned me hard once.

I think for people having this problem, just commenting out the wp_die line will get most or all of them back up and running. I’m not sure what the thinking was that went into this logic, but it was a really terrible bit of thinking with really severe consequences. Note to the WordPress plugin author community: calling wp_die is really fricking serious. Don’t do it unless continuing to run would delete the blog. Otherwise handle your problems yourself.

Update: For bonus points, in the version I have installed the setup page gives you code to cut and paste into your template that doesn’t actually work. It tells you to use:

show_top_ranked_in_last_days($limit, $before, $after, $days = 45)

when in fact the actual function is:

akpc_most_popular_in_last_days()

The former is an internal one not available outside the plugin. The latter is the external API call that is available to your template.

Update 2: I’ve opted to remove the plugin, as well as the WordPress Mobile Edition by the same guy. The author takes shutting down my blog too lightly, and defends this whole thing with “hey, it’s a beta so what do you expect?” I was installing via the Plugin GUI on my WP Admin page. If it was risky, I had no way of knowing that, unless seeing a “b” in the version number was supposed to communicate it. Over on his plugin page, Alex King defends the use of the wp_die call as reasonable. I don’t see how rendering the blog unusable is a reasonable way to deal with it, even if the configuration tables were missing. His code wasn’t even checking that, as it treated an empty table the same as a table that was never created. Bad juju.

Alex, thanks for your time in creating this plugin. I appreciate the use I got out of it for the time I did. However, my blog is important to me and I can’t have plugin upgrades shutting it down.

Firebug Lite for IE

Last night I had to do some work on the CREATE South website, fixing problems that made it render funny in Internet Explorer. The first thing I did when I sat down was to google for Firebug-like tools for IE. To my shock and pleasant surprise, you can get a simulacrum of Firebug on any browser via Javascript. It’s called Firebug Lite and it worked great.

I set up the bookmarklet that will inject the Firebug javascript into the current page, and from that point on you have something very similar to the Mozilla plugin right there. I wasn’t doing the kind of hardcore Javascript debugging I have found Firebug so useful for, but it helped me sort out some mismatched <div> tags and showed me where the problems were. I had no inkling before last night that such a thing existed, but wow is it awesome and useful!

Tables vs. CSS, Round 732

Over three years ago, I wrote a rant about the difficulties of using pure CSS for layout. That post got some Google traction and still slowly collects comments. Today via Signal vs Noise I saw another person saying much the same thing years on: that yes, doing layout in pure CSS is more pure and technically correct, but using tables gets it done and allows you to move on with your life. Hilariously, a post entitled “CSS Trolls Begone” began to collect CSS trolls almost immediately. CSS fundamentalists are exactly as big a pain as any other form of fundamentalist, and for the same reason – disagreeing with them pokes a big hole in the purity of their belief system and they feel the need to “correct you”.

Listen, I’d prefer to do all my layout work in pure CSS too. But when I read up on best practices, implement them, and then test on 3 different browsers with 3 different results – I’m done. It’s too expensive and out of the range of feasibility. If you are so concerned about the purity of CSS, then make it your personal crusade to get CSS layout support standardized across all HTML rendering engines. Until then, you can rage at me until your face turns blue, call me stupid for failing to implement layout in CSS, do whatever you want, but I’ll do the pragmatic thing that works.

I don’t get to live in the world of theory and optimality. I’m here on the ground, with deadlines and timelines and finite resources and bills to pay on top of an endless array of projects I’d like to do. I need to get shit done and off my plate so new shit can come in. If I can do that by hacking a table in 20 minutes, when 4 hours of CSS layout work still leaves me with a non-scalable layout that renders differently in IE, Firefox and Safari, I’m going low tech.

CSS troll comments routed to /dev/null because my resources are finite and life is too short.

Microblogs Need GUIDs

I’m listening to the 8/2/2008 episode of the Gillmor Gang, where Steve is talking with Dustin Sailings of Twitterspy. As they talk about Twitter and Identi.ca and such, a realization hit me. Because I know nothing about how any of these microblogs are implemented, this might be naive and redundant, but let me throw it out there.

Microblogs absolutely need GUIDs. Particularly if we are talking about federating together identi.ca-powered services that exchange messages, it is highly important that we be able to uniquely identify each message. Since every microblog post originated somewhere, I believe this GUID should almost always be the URL of the individual message on the originating service.

For example, I make a tweet on Twitter. FriendFeed picks that up and aggregates it into my feed. That FriendFeed message should have a GUID that is the original Twitter URL. If I have ping.fm or TwitterFeed or any other reposting type service running, they should all pass along the GUID as they do the push from Twitter to other services. If I post originally to Identi.ca and it pushes to Twitter, just reverse that notion. In cases where your blog automatically posts messages to Twitter, the GUID should be the permalink of your blog post. This would enable easy deduplication. For example, FriendFeed could then see that the Twitter notification of the blog post is something it has already seen from the blog itself. It could show a single occurrence, not the avalanche of duplicate messages we now see.

The same basic principle would hold with Flickr entries that get posted to Twitter or similar services. Use the Flickr page as the GUID so that it is easy to tell that the notifications from Twitter, Plurk and FriendFeed are all the same thing, and whatever interface you are using can show it only once. I think the benefits of this fall out very quickly. This seems like it would be simple to add if it doesn’t already exist, simple to add to every bit of message flow and simple to use at all the user interface ends. If the idea is that in the future these services will be distributed and federated, this sort of thing should happen sooner rather than later.
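
To show how cheaply this pays off, here is a sketch of the deduplication in Python. The message format is entirely hypothetical, since no current service actually carries an origin GUID like this:

def dedupe(messages):
    # Show each item once, keyed by its origin GUID. Messages without
    # a GUID fall back to (service, local id), which is exactly the
    # case that produces today's avalanche of duplicates.
    seen = set()
    for msg in messages:
        key = msg.get("guid") or (msg["service"], msg["id"])
        if key not in seen:
            seen.add(key)
            yield msg

stream = [
    {"service": "twitter", "id": "123", "guid": "http://example.com/blog/my-post"},
    {"service": "friendfeed", "id": "abc", "guid": "http://example.com/blog/my-post"},
]
unique = list(dedupe(stream))  # only the first copy survives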

Statistically Indistinguishable from Perfect

Here’s a summary of why the Amazon S3 service went down last week. The title of this post comes from a statement at the end about their goals for their own service level. I really like that turn of phrase. The takeaway lessons from this are 1) engineering services at this scale is always an adventure, 2) failures of this magnitude tend to come from the places you would never think to look, and 3) cloud computing as a model has a set of risks associated with it that tend to be glossed over when people talk about the ease of setup and cost of the service.

Firebug Works with Firefox 3!

For those of you like me who have upgraded to the Firefox 3 betas but were taking a big hit because of the loss of Firebug functionality, there is hope! It appears that Firebug 1.0.5 doesn’t automatically upgrade to 1.1, but 1.1 does in fact work with Firefox 3. I found this out by accident and it really made me happy, since part of what I’m doing today is fiddling with <div> tags on the Create South website. It sure makes life better to have Firebug helping me with that.

Tag, You’re It

Over on AmigoFish this week I implemented tagging. Last night I got autocompletion working. This is not just autocompletion, but the ability to type in a list of values, separated by commas or spaces, and to get the autocompletion on just the last tag in the list. It took a little trickery, but I’m really pleased with the outcome. For those of you who are members, log in and take a look (and if you aren’t a member, you should sign up!). At either the series or episode view you can add tags and see this spiffiness in action.

There is one tweak I made last night that isn’t live, and that’s how it handles the case when there is nothing at the end (i.e., there is something in the field but the last characters are spaces or commas). Currently, you get a list of numeric tags, the very first ones alphabetically. When I push up my tweak, you won’t get any list in this situation, only when there is a sensible value to search on. Let me know what you think, friends and neighbors.
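
For the curious, the trickery amounts to tokenizing the field and querying on only the trailing fragment. The real implementation is Javascript talking to a Rails action, but the gist, sketched in Python with names of my own choosing, is:

import re

def autocomplete_term(field_value):
    # Split on commas and whitespace; only the last, partially typed
    # tag matters for the lookup.
    parts = re.split(r"[,\s]+", field_value)
    last = parts[-1]
    # A trailing separator leaves an empty fragment, in which case we
    # show no list at all; that's the tweak described above.
    return last if last else None

assert autocomplete_term("comedy, tech, sci") == "sci"
assert autocomplete_term("comedy, tech, ") is None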

Pretty Print RSS Feeds in Mozilla Again

A while back, I railed about how Mozilla intercepts RSS feeds. Now some guy has created a web service that will inject some stuff into the RSS to make Mozilla treat it like XML again. I’ve tested this out and it works great. One thing to watch out for is that when you do this, your original URL gets passed as a parameter to this CGI, so if you are like me and typically cut and paste it into something else, you end up with a URL-encoded string that you have to fix.
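
If you get tired of fixing it by hand, undoing the encoding is trivial. A sketch in Python, assuming the original feed URL rides along as a query parameter (the parameter name and wrapper URL here are guesses, not the service’s actual interface):

from urllib.parse import urlparse, parse_qs

wrapped = "http://example.com/rssview.cgi?url=http%3A%2F%2Fsomeblog.com%2Ffeed%2F"
# parse_qs percent-decodes the values for us; "url" is a hypothetical name
original = parse_qs(urlparse(wrapped).query)["url"][0]
print(original)  # http://someblog.com/feed/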

Even with that, I love this. It will save me lots of time from now on by not requiring me to view source to get information out of RSS feeds. Thanks, Mihal!

The Big Rewrite

A few years ago, Derek Sivers announced a plan to rewrite CD Baby’s web interface in Ruby on Rails, and he had a blog tracking his progress. That blog went dark a while ago, and recently he announced his plan to abandon the project. His insights are interesting, and in general I agree with him. As much as I enjoy Ruby on Rails, and as much as I could not have built and maintained AmigoFish without it, I wouldn’t recommend people rewrite existing systems in RoR without a very good reason, and maybe not ever. It’s fantastic for greenfield development when you can pick everything, but less so when you have to mold it into a shape that fits your legacy systems. When you have a functioning business generating real cash in that legacy system already, you should be very careful about farting around with the fundamentals.

Bonus Link: Chad Fowler on why the big rewrite seldom works via Loud Thinking.

Need Fortran Help

I have a lazyweb request. Can anyone point me to an example Makefile for the Lahey LF95 compiler? I’m helping someone out with a Fortran project, but I know nothing about it. I can’t figure out what bits to flip to have GNU make build the objects and link them together, and for the life of me I have not been able to Google up an example of a working project. Help!

Firebug

A tool I’ve found useful at both work and home is the Mozilla plugin Firebug. At work, since it gives you a JavaScript debugging environment far superior to the built-in one, we use it to figure out how things are working. Being able to examine variables and call functions from the console command line inside Firebug has made troubleshooting problem JavaScript wildly easier than any other method. For this alone, it would be a keeper.

It also does CSS debugging, after a fashion. It will show you the structure of your webpage, and as you hover over the HTML, it will both show you the stylesheet that controls that particular chunk of the page (including the main and inherited bits) and highlight it on the page itself. Really, it has to be seen to be believed. The tweaks I’ve successfully made to my new template were all done with it. Before I had this tool, any effort to debug CSS was a wild crapshoot. I couldn’t always tell which DIV was in control or what part of the page it affected. Now that’s trivial. If you do any webpage development at all – including customizing your blog’s template – this is a download you need.

RIP, “Digital Bill” Douthett

I didn’t see Digital Bill of the Wizards of Technology this year at PME, and sadly I never will again, because he died on Wednesday. I found out from this cartoon, which is an odd way to learn of it but Bill to the core. I didn’t know him well, but I did know him enough that I have fond memories of what little time I spent in his presence. He was just the right amount of nutty – he was funny and a trip to be around, but he didn’t frighten you.

Condolences and best wishes to his family and friends, of whom there are very many. As I read tributes to him, many of them say “I never met him in person but he is a dear friend.” That makes me feel both happy and a little guilty that I met him several times. I wish I could yield back some of that to people that knew him better than I did.

Goodbye Bill, you will be missed.

PJ in Rails Day

EGC friend PJ Cabrera is competing in Rails Day 2006. Good luck PJ! The prizes are nice, including a MacBook Pro, but I wasn’t going to do it. The funny thing is that pretty much everyone in the contest, if they did 24 hours of paying work for a customer, could buy themselves a MacBook with the proceeds. Me personally, I pretty much have my own Rails Day every night and weekend.

RailsConf

I’m thinking seriously about going to RailsConf in Chicago (announcement here). I’m getting more into Rails all the time, and really enjoying it. It’s predominantly what I use for my fun-time programming nowadays. I’d like to be there with a lot of the game-stepper-uppers and hear about the intermediate to advanced topics, and meet some of the other mixed nuts in this dish.

The only downside is that it is happening at the Wyndham out by O’Hare. Seriously, when you have a conference at an airport hotel 30 minutes or more outside the city, it is almost irrelevant where you actually are. It could be Boston, Seatac, or Ontario, California – you don’t actually see anything other than the hotel, the conference center and possibly a Ruby Tuesday or Benihana nearby. Might as well hold it at the airport Hilton in Joplin, Missouri, which would put it at the geographic center of the potential attendees. Oh well, at least that’s near the house of my buddy Coop.

Update: That problem took care of itself when the conference sold out earlier today. Maybe next year.

Ruby on Rails Scaffolding Hint

For those of you who ever have to do Rails work with an older database that was not created according to RoR naming specs, and to which other things have been coded such that you can’t just rename everything, here’s something I found out the hard way. I created my model and used set_table_name to point to the correct table in the DB. I could do a :scaffold in the controller and everything worked fine. However, if I tried to generate the scaffolding so I could modify it, I’d get the dreaded “Before updating scaffolding from new DB schema, try creating a table for your model” error. I was befuddled as to why a scaffolding that was working would give such an error on generate.

After much googling, I found out that the scaffold generator ignores your existing model and only knows how to generate from the table name it expects based on your model and controller names. To get around this, I renamed the table to what Rails expected, generated the scaffold, renamed the table back to the original and kept the set_table_name pointing at the original table name. It’s kind of a pain, but the workaround is just to make the table name what RoR wants long enough for the generation, then set it back. It beats copying the scaffolding from a different class and changing everything by hand, which is what I did the last time this came up.
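
If you find yourself doing this dance repeatedly, it can be scripted. Here is a sketch in Python of the rename, generate, rename-back sequence, assuming a MySQL database; the table, model and connection details are all hypothetical, not from my actual project:

import subprocess
import MySQLdb  # the classic MySQL-python driver

LEGACY_TABLE = "tblLegacyOrders"  # hypothetical legacy name
RAILS_TABLE = "orders"            # what Rails expects for an Order model

db = MySQLdb.connect(host="localhost", user="rails", passwd="secret", db="legacy_app")
cur = db.cursor()

# Step 1: rename the table to the Rails-conventional name
cur.execute("RENAME TABLE %s TO %s" % (LEGACY_TABLE, RAILS_TABLE))
try:
    # Step 2: generate the scaffold while the expected name exists
    subprocess.check_call(["ruby", "script/generate", "scaffold", "Order"])
finally:
    # Step 3: rename it back; set_table_name in the model still points
    # at the original name, so the app keeps working
    cur.execute("RENAME TABLE %s TO %s" % (RAILS_TABLE, LEGACY_TABLE))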

What Happened to Crawler Etiquette?

Looking at the server logs for this weblog a few minutes ago, I noticed the Everest crawler from Vulcan (which namechecks owner Paul Allen on that page) downloading pages. I also noticed that the hits were distressingly frequent, fetching pages of 30K and larger every few seconds. I filled out their feedback form, letting them know that I have denied them at the server level as long as they are running at this high a resource burden. I always thought one minute between accesses was the most frequent ethical limit for a crawler, and I consider hitting more often than once every ten seconds to be clearly abusive.

Am I alone in being concerned about this? More and more I see crawlers that hit at this level. Does every one of these crawler authors think they are the only one out there? When you have several dozen crawlers all hitting your site every few seconds, it becomes a big issue for an average citizen. I get a little pissed off when I have to increase the size of my iron just to service your fricking abusive swarms of robots. Uncool, dudes, uncool.

Back when I last did crawler programming with Perl’s RobotUA module, its default was to enforce that you couldn’t hit the same domain more than once per minute. Has this completely dropped off the radar? I think anyone building a robot or a crawler or even a crawler module should institute this minimum as a default. As crawlers you are guests on these servers, so be good ones. Nothing sucks worse than a project whose value depends on the resources of others but is then a shitty steward of them. I just did one of these projects that involves consuming RSS feeds, and where I can I use the modification times to avoid fetching feeds I don’t need. I try not to fetch too often, even though my project’s responsiveness would be improved by checking everyone’s feed more often. We’ve all got to coexist, so sometimes you have to bust out the golden rule.
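
The golden rule here is about a dozen lines of code. This is a minimal sketch in Python of per-host politeness in the spirit of RobotUA’s one-minute default; the delay value and the User-Agent string are illustrative, not any real crawler’s configuration:

import time
import urllib.request
from urllib.parse import urlparse

MIN_DELAY = 60.0  # seconds between hits to the same host
last_hit = {}

def polite_fetch(url):
    host = urlparse(url).netloc
    # Wait until at least MIN_DELAY has passed since the last hit to this host
    wait = last_hit.get(host, 0) + MIN_DELAY - time.time()
    if wait > 0:
        time.sleep(wait)
    last_hit[host] = time.time()
    # Identify yourself so site owners can find you (see the update below)
    req = urllib.request.Request(url, headers={"User-Agent": "ExampleBot/1.0 (bot@example.com)"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()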

Update: A member of the Everest team was stand-up enough to leave an apologetic comment on this post, for which I thank them. It should also be noted that they were ethical enough to have an identifiable UserAgent that allowed me to find them. Since I wrote this, I have had two other crawlers, with just default Java strings as their UserAgents, hit at a much higher level than Everest. Uncool, uncool.