Wednesday, February 25, 2009

Using the "finally" block in Python to write robust applications

This is the first post in my series of three on using XMLRPC to run tests remotely in Python (such as JavaScript and Selenium tests in web browsers) and get the results. If that doesn't concern you, this post is probably still relevant; I'd just like to cover the groundwork of making code that is stable and repeatable even in the face of [un]expected problems. Luckily for us, Python has a wonderful "finally" block which can be used to properly clean up or "finish" regardless of Bad Things. Let's look at an example of a common problem this can solve:

getLock()
doStuff()
releaseLock()


We need exclusive access to a resource, so we get a lock. We do some stuff, and then release the lock. The problem is that if doStuff raises an exception, the lock never gets released, and your application can be in a broken state. You want to release the lock no matter what. So what you should do is:

getLock()
try:
  doStuff()
finally:
  releaseLock()


Now, short of a SIGKILL, the lock is going to be released. This is pretty basic, but it is impressive how robust the finally block is. You can "return" in the try block or even call "sys.exit()" and the code in the finally block will still be executed.
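Here's a quick demonstration you can run yourself (a minimal sketch, separate from the lock example above):

import sys

def stubborn():
  try:
    sys.exit(0)  # even an explicit exit...
  finally:
    print "...still runs the finally block first"

stubborn()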

I recently used this with XMLRPC to safely tell the remote machine to clean up if the local script ran into problems or even got a SIGINT from a keyboard interrupt. Here's a more elaborate example:

import sys
import xmlrpclib

def runRemoteTests(remoteIP):
  proxy = xmlrpclib.ServerProxy(remoteIP)
  try:
    result = proxy.RunTests()
    if result is None:
      sys.exit(1)
    else:
      return result
  except Exception:
    sys.exit(2)
  finally:
    # Runs whether we returned, raised, or called sys.exit().
    proxy.CloseFirefox()


The remote machine ("proxy") is running some tests in Firefox. While it does this it sets a lock so no one else can run the same tests. If something goes wrong, this lock needs to be reset and Firefox needs to be closed so the tests can run again later. If we get a result, we return it. If we don't, or something goes wrong, we still clean up, but now we can exit with an error code. One of the neatest things about this for me was Ctrl+C'ing the script on my computer and watching the remote machine cleanly quit Firefox and release the lock for another process to use.

This is great whenever you need to put something in a temporary state, or change the state after an operation no matter what happens. Think of locks, temporary files, memory usage, or open connections where it is important to close them. Conversely however, make sure you DON'T use an approach like this when it isn't appropriate:

# Iterate over a copy so we can safely remove from the original list.
for client in clientsToPing[:]:
  try:
    ping(client)
  finally:
    clientsToPing.remove(client)


This is potentially incorrect behavior, because if you failed to ping your client you may want to keep it on the list to try again next time. However, you may also want to attempt this only once, in which case the above approach would be correct!

In my next post I am going to turn more specifically to remote browser testing and explain exactly how to set up both ends of the connection. After that I'll finish with a post on using Twisted + SSL to retrieve posted results over HTTPS.

Monday, February 23, 2009

Ensuring that you test what your users use

Recently I've come across two pitfalls when testing one of my Python applications. In two different cases the tests would run fine in my checkout, but fail miserably for anyone else (because the application is broken). What was happening?

1) I had a new file which was required to run the application, but I forgot to 'vcs add' it. Because the file existed in my sandbox, all was well. But no one else was getting this file, so they couldn't even run the application. This one is somewhat easy to detect, because a 'vcs st' should show that file with unknown status. In that way, ensuring a clean status before running the tests can help avoid this. However, this won't work well in an automated fashion because there are often unversioned files, and you typically want to run the tests before committing anyway.

2) A time or two I thought I had completely removed/renamed a dependency but forgot to clean up an import somewhere along the line. Even though the original .py file was gone, a .pyc file by the old name still existed, which allowed the lingering import to work. Again however, for anyone else getting a fresh checkout or release, this file would not be available and the app would be unusable.

How can you avoid having problems like this? Well, from a myopic viewpoint you could have your test suite delete any .pyc files before running. Then to address the first issue, you could also test that a 'vcs st' has no unknown files, and explicitly ignore any unversioned files you expect. But still, other things could creep up. And while having another machine as your "buildbot" would avoid the first issue, you are still prone to an attack from the second. To really make sure you are testing with the same stuff that you release, you need to be testing releases. In other words, you need to be putting your version through your shipping process, whatever that is, and then testing the final product.

So now that I've realized this is what I should be doing, I'm not quite sure what the simplest and easiest way to do it in an automated fashion is. For python, perhaps this could be achieved by getting a fresh export/checkout of the code in a temporary directory, adding that directory to sys.path, and importing and running the tests. I am sure this is a common problem; is there a common solution?
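Here's a rough sketch of how that could look; the 'bzr export' command and the 'tests' module name are assumptions for illustration, not a blessed solution (and fittingly, the cleanup lives in a finally block):

import os, shutil, subprocess, sys, tempfile, unittest

tmpDir = tempfile.mkdtemp()
exportDir = os.path.join(tmpDir, "export")
try:
  # Export a pristine copy: no unversioned files, no stale .pyc files.
  subprocess.check_call(["bzr", "export", exportDir])
  sys.path.insert(0, exportDir)
  import tests  # hypothetical: your project's top-level test module
  suite = unittest.TestLoader().loadTestsFromModule(tests)
  unittest.TextTestRunner().run(suite)
finally:
  shutil.rmtree(tmpDir)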

Saturday, February 21, 2009

Delicious, cheap, and easy whole wheat wraps!

Some of my bloodpact blogging friends have written recipes, and I thought I would add one of my favorite and simplest recipes to the mix. Wraps are a great food-delivery device, be it for eggs or veggies, meat or rice and beans. It also happens to be easy to make your own healthy wraps without any special tools, for roughly 10 cents apiece! Best of all, you can use as few as two ingredients if you so desire.

The finished product will make you happy.

Here's what you'll need.
  • 1C whole wheat flour (or white)
  • 1/4C cold water
  • 1/4 teaspoon salt (recommended)
  • 1 tablespoon olive oil (optional)
  • 1/4 teaspoon baking powder (optional)
  • seasonings such as oregano, onion powder, or herbs (optional)
Mix all the ingredients together in a bowl, adding extra water 1 tablespoon at a time until it forms a cohesive ball. Knead it for a minute or two and then divide into 2-4 smaller balls, depending on whether you want large, medium, or small wraps. If you are patient, put them back in the bowl and cover with a towel for about 10 minutes.

Now roll them out. For this I recommend rolling between two non-stick surfaces, such as flexible cutting mats (a dollar store or grocery store should have 3-packs for $3-5), silicone baking sheets (also can be had for a few bucks), or something similar. So put a dough ball on one non-stick surface, then optionally put another on top. Now roll it out with a rolling pin if you have one, or a bottle of wine/oil/beer if you don't. It is easiest if you flatten it out by hand as much as you can first! It may take a couple tries to get them as flat (and thus as wide) as you like, but you will definitely improve. Or you could just get a tortilla press online, though I have yet to give in.

Now heat a pan on the stove to medium-high heat. Once it seems up to heat, throw a wrap on the pan (no greasing necessary). After 15-30 seconds you should be able to jiggle the pan and have the wrap freely slide around, which is useful for ensuring it doesn't burn. Give it about 1-2 minutes on that side, until you start to see a bubble or two, then flip it over for another 1-2 minutes. Now set it on a towel to cool, and repeat for the rest of your future wraps! As they are delicious warm, I like to eat one right after I make them.

I love making these wraps because it is much cheaper than buying them, can be as healthy as I want, and makes eating them much more enjoyable knowing that I hand-crafted each one. Common uses are eggs with veggies in the morning or your typical taco fare. You can also make mini flatbread pizzas, or sandwich some cheese and veggies in between two for an extra tasty treat! Let me know what you think!

Friday, February 20, 2009

Finding new albums by your favorite bands

The other week I felt a little disconnected from recent music. I was sure that some of my favorite artists had released new albums that I wasn't aware of, but I wasn't sure how to be notified of them. I use Last.fm when listening to music most of the time, and have been for about 5 years, so I already have a long and dynamic list of my favorite artists, many of whom I haven't been keeping up to date with. There are also many places that offer feeds of recent albums, including a private torrent site, Waffles. As a curious programmer I decided to bridge the gap and write a little proof-of-concept script that would grab my top Last.fm artists and query Waffles to see what was new.

About an hour and 100 lines of Python later, I had a working proof of concept. Right now it only supports Last.fm as a favorites source and Waffles as a release source, but expanding it by following the same interface should be fairly straightforward. It is up for anyone to branch at https://launchpad.net/nutunes. For any programmer it should be pretty easy to make a favorites source which reads from a text file, or to use other services like allmusic.com, Amazon, or perhaps even iTunes to get a list of recent releases for a given artist. Feel free to branch and create a merge proposal for integrating with other services. As the code is less than 100 lines, I hope it more or less documents itself. To use it, just run nutunes.py and it will prompt for your Last.fm username and Waffles credentials.
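For example, a favorites source that reads from a text file might look something like this (the class and method names are my guesses at the interface, not necessarily what nutunes defines):

class TextFileFavoritesSource(object):
  """Reads favorite artists from a text file, one artist per line."""
  def __init__(self, path):
    self.path = path

  def getFavoriteArtists(self):
    # Return the artist names, skipping any blank lines.
    return [line.strip() for line in open(self.path) if line.strip()]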

How do other people currently stay on top of new music from all the bands they're interested in? Heck, maybe Last.fm already has this option, but I sure didn't notice it, and learning and experimenting is fun!

Eye tracking and UI framework / window manager integration

Eye tracking is the technique of watching the user's eyes with a camera and figuring out where on the screen he or she is looking. While some computer users with disabilities use this technology as their primary input device, it hasn't seen widespread adoption. However, I think that with webcams being integrated into the majority of new laptops, and multi-core processors with some cycles to spare for image processing becoming ubiquitous, eye tracking deserves to become more popular.

I don't believe the technology is accurate enough (yet) to replace your mouse, but it could still improve usability in a few ways. Imagine having the equivalent of onMouseIn and onMouseOut events on widgets when writing a user interface, but for where the user is looking instead. Applications could leverage onLookIn and onLookOut events at the widget level, opening a whole new realm of functionality and usability. Videos and games could pause themselves when you look away, or bring up on-screen displays when you glance at certain corners of the screen. If an application sees you studying an element for a period of time, it might ask if you need help.
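As a purely hypothetical sketch (these event names and classes don't exist in any real toolkit; they're invented for illustration), such gaze events might mirror their mouse counterparts:

class Widget(object):
  """An imaginary base class that dispatches gaze events."""
  def onLookIn(self, event):
    pass  # the user's gaze entered this widget

  def onLookOut(self, event):
    pass  # the user's gaze left this widget

class VideoWidget(Widget):
  def __init__(self, player):
    self.player = player

  def onLookOut(self, event):
    self.player.pause()  # pause playback when the user looks away

  def onLookIn(self, event):
    self.player.play()  # resume when they look back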

It would also be interesting to see eye tracking leveraged on the window manager level. Most people use focus follows click to focus windows, and some enjoy focus follows mouse, but imagine focus follows (eye) focus! Using multiple monitors would become much easier if your keyboard input was automatically directed to the application, or even specific field, which you were looking at. Eye gestures, like mouse gestures, could be potentially useful as well, such as glancing off-screen to move to the virtual desktop in that direction.

Apple and Linux both seem to be in a good position to implement something like this. Apple has control of both the hardware and the software including the OS, and has been integrating cameras in laptops for a while. As a result they are in a great position to pioneer this field and really have something unique to bring to the table in terms of a completely new user experience. However in the open-source world, Linux is also in a decent spot to do this as the UI frameworks and window managers are all patchable and most webcams are supported out of the box.

Eye tracking has the potential to enable us to use computers in ways that were previously impossible. What are your thoughts on eye tracking? Does it have a future in the computing world and where can it take us? And how long will it be before we will take this technology for granted? :)

Thursday, February 19, 2009

Webhooks and feeds as complementary technologies -OR- How webhooks can enable a collective intelligence

Yesterday I wrote about my observation that feeds seemed sort of like the precursor to webhooks, but that each had distinct advantages. Adam left a comment confirming my thoughts on their pros and cons, but then pointed out how they can be used together to get the best of both worlds. I really liked the implications and wanted to expand upon how webhooks could enable the next generation of feed readers, and further, really lead us towards a more collective intelligence.

The way things work now is that you have an aggregator, such as Google Reader, which polls all of your feeds every once in a while (although in this specific case it is surely doing some caching behind the scenes for users with the same feeds). This is suboptimal for two reasons. First, you don't get instant updates: statistically, you will on average receive an update pollFrequency/2 minutes after it is posted. If you want to be able to respond to something in a quicker fashion, this may not cut it. Second, the polling causes unnecessary load on the server.

Now let's try it with just webhooks. You inform all the event producers you are interested in about your aggregator callback, and you get instant updates for all of them, with no wasted polling. However when your aggregator is off, you aren't receiving updates! This means you can miss updates, and you have no way to catch up.

Combining these two however, we can solve the problems of each technology with the other and pick up none of their downfalls. Use webhooks to tell your event producers about your aggregator as was done previously in the webhooks model. But now, the producer is also supplying a feed. This means that when your aggregator is up it will receive instant updates and doesn't need to poll. However when you start it up after having been down, it can use the feed to catch back up; no missed events!
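A minimal sketch of that combined model, assuming the third-party feedparser module and inventing the class and method names:

import feedparser  # third-party RSS/Atom parser

class Aggregator(object):
  def __init__(self, feedUrl):
    self.feedUrl = feedUrl
    self.seenIds = set()

  def catchUp(self):
    # One poll at startup recovers anything missed while we were down.
    for entry in feedparser.parse(self.feedUrl).entries:
      self.handleEntry(entry.id, entry.title)

  def onWebhookPost(self, entryId, title):
    # Called by our HTTP layer whenever a producer pushes an event;
    # no polling needed while we're running.
    self.handleEntry(entryId, title)

  def handleEntry(self, entryId, title):
    if entryId not in self.seenIds:
      self.seenIds.add(entryId)
      print title  # deliver the update to the user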

I think this new model has the potential to improve aggregators, as well as make them more usable for applications where speed is important. It could also have a much greater impact, though. Twitter is a good example of this, I think: you could tweet about things you need fast feedback on, such as a meal choice at a restaurant, the best way to do something you are working on, or perhaps even more urgent things such as needing a ride. All of these things could be (and surely are) done in the traditional model, but with the push revolution they become more useful, as quicker responses are more likely. People will become more likely to produce things requiring (potentially much) faster feedback, and this feeds into itself as people become more likely to respond, knowing that their responses are more relevant because less time has passed. I think it is an evolution that, while initially sounding subtle and unimportant, can help lead us into a more collective intelligence that we couldn't imagine living without once we have it.

Tuesday, February 17, 2009

Are feeds the precursor to webhooks?

Recently I've been reading Timothy and Jeff talk about webhooks. Webhooks are essentially an amazingly simple way to be notified about arbitrary events on the web. In this model, any event producer allows you to supply a URL, which it will post to on each future action with the relevant details, whatever they may be. Then the other day when I was using Google Reader, something struck me: it felt a lot like webhooks, but turned on its head.
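To make the webhook side concrete: the URL you supply is just an HTTP endpoint that accepts POSTs. Here's a minimal sketch, with the port and payload handling invented for illustration:

from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

class WebhookHandler(BaseHTTPRequestHandler):
  def do_POST(self):
    # Read the event details the producer just pushed to us.
    length = int(self.headers.getheader("content-length", 0))
    details = self.rfile.read(length)
    print "event received:", details  # react to the event here
    self.send_response(200)
    self.end_headers()

# You'd hand producers something like http://yourhost:8485/ as the callback.
HTTPServer(("", 8485), WebhookHandler).serve_forever()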

Anything that offers a feed such as RSS or Atom can be plugged into Google Reader; things like blogs and their comments, twitter searches, commits, downloads, bugs, and build results. As I started plugging more and more diverse things into Reader, I realized that it was basically like the "pull" equivalent of webhook's "push" nature. Instead of telling all these event producers where to contact me, I'm telling Reader where to learn about all the recent events.

I may be thinking too shallowly, but in the webhooks world Reader would be the service offering the interface. Then, instead of all these different things offering feeds, you could just plug Reader's hook into them and be notified instantly. Currently, for example, when I ask a question on a blog post, I'll throw the comments feed for that post into Reader so I don't have to keep checking back on the site; Reader will bring the potential answers to me. With webhooks though, I would reverse this and provide the service with the URL of my event consumer.

It seems like, as technology and the internet often do, feeds are evolving into what users need them to be. Services are seeing that people want to follow and be kept up to date without having to check back on hundreds of different sites. That's way too much time and information, especially when it all looks different. However, by plugging the feeds of all those things into an aggregator, we gain a central notification place for all these events, and it becomes much more manageable.

So will webhooks replace the current paradigm that I'm using here, or complement it? They seem to each have their pros and cons. Feeds allow a history, and you won't miss an update because your aggregator was down; it will catch it on the next poll. However webhooks are instant and can be more efficient as you don't have the need for polling at all, but if the producer loses your hook, you're out of the loop.

So an overflow of interesting events occurring on the web necessitated a standard way to view them, and we got feeds. Are webhooks the next step of this evolution, or something else entirely?

UPDATE: Mark Lee responds.

Monday, February 16, 2009

Cracking On-Screen Keyboards with Visual Keyloggers

A few financial sites including HSBC and the US Treasury have recently added an extra measure of security to their site. Instead of simply requiring a username and password, an on-screen keyboard was added, requiring you to "type" in a second password with your mouse:



The logic behind this is that if a user's computer becomes compromised with a keylogger, the attacker could only obtain the username and primary password. The secondary password would remain uncompromised as it doesn't involve keypresses. This didn't seem too useful to me however, so for my "Image Understanding" class I decided to see if it was possible to create a "visual keylogger" which could capture this secondary password. It wasn't too difficult, and it essentially demonstrated that the extra password was more inconvenience than security. Let me outline the basic process.

In order to do this, you need to be able to capture the contents of the screen at certain intervals. It seems like a fair assumption that if you (as the attacker of a compromised system) can capture keyboard input, you can also grab screenshots. The goal is to turn a sequence of these screenshots of someone typing with an on-screen keyboard into a single string output equivalent to the password typed.

First we want to record the position of the mouse at each shot. This would normally be trivial by asking the OS; however, in my case I was writing this for an Image Understanding class and had to use the sequence of images as my sole input. As such, I used a basic templating approach to locate the mouse by a few of its key features. This was surprisingly robust; however, asking the OS for the mouse position is an easier, even more robust, and more likely attack vector in real life.

Now we need to figure out when the user clicked a key. Any keyboard used for a password purpose is going to give some form of feedback when a key is clicked, such as an asterisk in a password field, so the user knows if they have successfully clicked a key. The easiest way then to notice this is to subtract the color values of each screenshot from the previous one, giving you a new image with non-zero pixel values for each changed pixel. Among other things like cursor movement and web animations, the aforementioned asterisk feedback is going to be present in this image.
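With PIL (the Python Imaging Library), that differencing step might look something like this sketch; the threshold value is a guess you'd tune:

import ImageChops  # part of PIL

def changedPixels(prevShot, currShot, threshold=30):
  # Subtract the two screenshots, then keep only pixels that changed
  # more than the threshold, dropping noise like subtle animations.
  diff = ImageChops.difference(prevShot.convert("L"), currShot.convert("L"))
  return diff.point(lambda px: 255 if px > threshold else 0)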



For each new image then, subtract and look for this feedback. If it's there, that's a key press! Combine this with the position of the mouse and you know where the user clicked. Now it gets slightly tricky. You know where they clicked, but if you grab that section of the screen, you'll get something like this:



because the mouse had to be over the key to click it. This is rather easily worked around, however, by going backwards in your mouse position cache until it is a certain threshold away from the clicked position, and grabbing the key image at that point.
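In code, that backtracking might look roughly like this (the key size is an illustrative guess that a real implementation would tune):

def keyImageForClick(screenshots, mousePositions, clickIndex, keySize=32):
  # Walk backwards from the click until the cursor is far enough away
  # that it no longer covers the key, then crop that older screenshot.
  clickX, clickY = mousePositions[clickIndex]
  i = clickIndex
  while i > 0 and (abs(mousePositions[i][0] - clickX) < keySize and
                   abs(mousePositions[i][1] - clickY) < keySize):
    i -= 1
  box = (clickX - keySize // 2, clickY - keySize // 2,
         clickX + keySize // 2, clickY + keySize // 2)
  return screenshots[i].crop(box)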

After the user enters the complete password, you are going to be left with an array of keyboard images. For any human, this is quite sufficient. For my class however, it was not, and it would not be for any large-scale operation where automation is desired. What we need to do is clean it up by throwing away any pixels under a certain darkness threshold, then cropping the result:



Ta-da! Now we have something that any OCR (optical character recognition) algorithm should be able to chomp through in its sleep cycles. If you are writing for a specific keyboard, you can also just have an array of what each key looks like in binary form and compare to get the answer.

And there you have it! With the combination of a few basic computer vision techniques, we can expand a keylogger to understand input from visual keyboards and render this security annoyance useless. A fun note is that the order/position of the keys is irrelevant. The US Treasury website uses an on-screen keyboard as well, but shuffles the keys on each attempt. As is hopefully obvious from this algorithm, there is no assumption of a keyboard layout; the keys could shuffle every single click and it wouldn't matter.

Sunday, February 15, 2009

Lightweight personal finance just got easier with wxBanker in Jaunty!

wxBanker, your (hopefully) favorite lightweight personal finance application, has recently been accepted into Jaunty! Not familiar with it? Check out the screenshots! It took a few months of getting over the Debian packaging learning curve, and about as much work getting and responding to reviews from MOTUs, but I did it. I plan on doing a quick point release this week and releasing wxBanker 0.4.0.3, which will sport updated translations in 15 languages (thanks, translators!) as well as a minor bug fix or three. Once I get that out and into Jaunty, I'll turn my complete focus (I hope) to the 0.5 series. If you'd like to translate wxBanker to your own language or improve a few of the lacking existing translations (Bosnian, Dutch, and Portuguese particularly), I'd love it! Head over to https://launchpad.net/~wxbanker-translators and join the fun!

The 0.5 series is a refactor, and a bit of a painful one, as I didn't have a great handle on how to efficiently and smoothly refactor. I could probably do a much better job managing it now, but that's how experience works. All in all it is a much cleaner structure and led to more and better tests, which should allow more agile development and protect against future regressions. Some of the main upcoming features I'd like to get in are transaction tagging, recurring transactions, reporting, online syncing (via mint.com), and CSV imports. I'd love for at least a few of those to make it into 0.5, and there is good progress on some of them, including CSV imports thanks to an impressive branch from Karel. On the other hand, 0.5 has already been in progress for about 3 months, so I might cut a release into a PPA and get feedback while I add new features, sorting out any 0.5 issues and leading to a robust 0.6 release.

Overall I have learned a TON from this project, in no small part thanks to Launchpad. wxBanker started out as a terminal application which stored everything in a pickled linked tuple that I used for myself and added features as I needed them. Eventually I added a GUI (in wxPython) and registered the project on Launchpad, to get free hosted version control as well as more formal bug tracking (instead of a text file :). The combination of Launchpad and [wx]Python being cross-platform made it accessible to everyone, and took it from a project used only by myself to a project available in 15 languages with code contributions from multiple people.

So in conclusion: thanks everyone, enjoy wxBanker 0.4.0.3 in Jaunty, and look forward to future versions in my PPA. If you're not on Jaunty yet, you can install 0.4 from my PPA, which I'll be updating to 0.4.0.3 as soon as I release it. I'd love for any of your contributions, suggestions, questions, or criticisms to end up on Launchpad. I want to make wxBanker as usable and intuitive as possible, so anything that is unclear or confusing would be awesome to hear about.

Saturday, February 14, 2009

Playing Spelunky in Ubuntu with Wine and Compiz

Spelunky is an awesomely addictive indie game, which I highly recommend checking out. It is sort of like Mario meets Nethack. As a side-scrolling, cave-adventuring character, you must venture deeper into the cave, defeating enemies, saving damsels, and collecting money and other fun items. A unique aspect that it takes from roguelike games is that there is really no concept of saving; each "play" typically lasts only a few minutes (much less at first), but you can get shortcuts to different zones to skip ahead once you get better. Also like Nethack, it is a surprisingly rich, detailed, and polished game for all its simplicity.



The unfortunate thing about Spelunky is that it only runs on Windows. My initial attempts at running it in Wine proved unsuccessful, but after a while (and after playing it on a friend's machine, which gave me enough interest to keep trying), I figured out just the right combination of tricks to make it quite playable in Ubuntu. Here's what you need to do:

  • Download Spelunky, make sure you are using Compiz, and install wine and compizconfig-settings-manager.
  • Set Wine to run in an emulated desktop window. To do this, run "winecfg", go to the Graphics tab, and check "Emulate a virtual desktop". The size isn't really important; I use 800x600.
  • Extract Spelunky and create a file (if it doesn't exist) called "settings.cfg" and put these contents in it:

1
1
0
0
1
15
15


These are the default settings, tweaked to the only combination that allows the game to run in Wine. You can't configure it from within the game since it won't launch until these settings are right, so I had to have my friend change settings on his Windows box and watch which numbers changed. Thankfully for you, I have done the hard work.

  • Run Spelunky! Double-clicking it should probably work, but if you want it to play nicely with other applications using audio at the same time, force it to use pulseaudio by running "padsp wine spelunky_0_99_5.exe". It should show a configuration window, but you can't really interact with it. Just click in it and hit enter.
  • Now you can play Spelunky, but it is at a 1x zoom level, the only one that plays nicely with Wine. This isn't very usable! Run "ccsm" (System -> Preferences -> CompizConfig Settings Manager) and enable "Enhanced Zoom Desktop" from the second group, "Accessibility". Now you can zoom in by holding your Super key (Windows/Apple/Ubuntu key) and scrolling your mouse wheel in or out. Using this technique you can make Spelunky effectively full screen, and if you sit it right in the corner, you will forget you are doing anything strange within a few minutes.

Okay, that's it! Some initial setup is involved but once you've gotten it to work the first time, all you need to do in the future is run Spelunky and zoom in. As for help playing Spelunky, navigate over to the tutorial area once in the game, and you'll learn everything you need to know! Let me know if it works for you and how much you love Spelunky (it's worth it, I promise!) and don't forget to check out more fun indie games on tigdb.

Friday, February 13, 2009

Setting up a fingerprint reader with ThinkFinger in Ubuntu 8.10

If your laptop has a fingerprint reader installed, there's a decent chance you can set it up very easily in Ubuntu to log in and [gk]sudo. Since the manpage isn't particularly helpful, I'll guide you through setting it up with the ThinkFinger library, which is compatible with most of the popular readers found in Lenovo/ThinkPads, Dells, and Toshibas.
  1. Install the necessary libraries: sudo apt-get install thinkfinger-tools libpam-thinkfinger
  2. Integrate thinkfinger with PAM (Pluggable Authentication Modules): sudo /usr/lib/pam-thinkfinger/pam-thinkfinger-enable
  3. Now acquire your fingerprint: run tf-tool --acquire. If you get an error here (not a failed swipe; that just means you need to swipe better), running it with sudo might be necessary. If you still get an error that ThinkFinger can't interact with your reader, it probably isn't supported, sorry! Otherwise, keep swiping your finger until you get two successful swipes.
  4. Finally, make sure it worked: run tf-tool --verify and swipe your finger. Try this a few times, and if it doesn't have a good success rate, do another acquire (the previous step), perhaps slower and more intentionally.

Now you can log in by swiping your finger at the password prompt, and more usefully in my opinion, swipe your finger instead of entering the root password at terminal and graphical password prompts. This is one of those little things that, once you get used to it, is hard to ever live without. Check it out:


By the way, while there may be valid security concerns with fingerprint readers, don't listen to the critics who say you can just breathe on it to get a swipe. 2D fingerprint scanners may work this way, but laptop fingerprint readers take a reading in both space and time. Try using tf-tool --verify and find out for yourself; you can blow and breathe on your fingerprint reader all day without it even registering a failed scan, let alone a successful one.

When It's Good to be Bad

Jonah wrote a post about how awful the band Brokencyde is, and I mostly agree with the points mentioned. However I would also like to propose that if you aren't particularly talented, it is more profitable to be epically bad than it is to be just moderately bad or even mediocre. I also think it is more entertaining and useful to society as a whole, thus being the utilitarian venture to embark upon if you aren't traditionally talented. So let's begin my defense.

I suspect the magnitude of profitability can perhaps be simplified as the product of two factors: the number of people exposed to your product and the probability that a random person would purchase your product if exposed to it. In this model, a very ubiquitous band that people love is the biggest winner; lots of people are exposed to the product and a good percentage buy it. Mediocre bands do alright, but not nearly as well. The percentage of people willing to purchase is on the same order, perhaps half as much or so, but the exposure is WAY less. The market is saturated with mediocre-to-decent bands and no one has time to find or listen to them all. As a result, these bands aren't nearly as ubiquitous, leading to sales and thus profits which are orders of magnitude less. As you get worse and worse, your market is more and more saturated, you are less and less interesting, and profits continue to drop, in a roughly linear to inverse-square fashion. But it doesn't approach zero; once you cross a certain threshold of awfulness, something magical happens:



You become interesting again! The market actually becomes LESS saturated as it becomes challenging to be worse than that threshold. You have become so awful that you are fascinating and captivating, entertaining and hilarious! Sure, the chance that a random person buys your product has now dropped by an order of magnitude or two, but your exposure increased by many more. You aren't quite playing with the big dogs, but you can sell leaps and bounds more than the average guy. All for being notably worse than most other bands.

So clearly it can make sense selfishly and financially to be a Brokencyde or a William Hung, but are you harming society in the process? I don't think so. Surely a few people will legitimately be offended and wish such bands didn't exist, but I think the majority of us are at least entertained by their existence, which makes us laugh or smile, have an interesting discussion with friends, or at least have a great gag gift (another unique sales niche these bands get in on). As a result, the individual Brokencydes of the world increase the overall happiness of society more than an individual "average" band. Sure they're bad, but would we (or they) want it any other way?

Thursday, February 12, 2009

Typing on the Toilet

Yes, I'm blogging from the toilet, but there's no need for your mind to be in the gutter. I just moved into a new apartment and haven't set up internet yet, and as it turns out the bathroom is the only place that gets an unencrypted wireless signal.

Overall I accomplished quite a bit today. I worked a 10-hour day at work, came home and made myself a nice dinner, and did a complete move from start to finish, including all packing and cleaning at the first place, and unpacking into the other. This whole moving process took only 2.5 hours as well, so that's got to be _some_ kind of world record. Granted, the two places are a few hundred feet apart, but I still feel accomplished.

In the Ubuntu world, I just noticed in Jaunty that when you switch backgrounds it does a rather sexy fade transition from the old background to the new one. If you haven't seen it yet I would definitely recommend checking it out in a VM with Jaunty Alpha 4. Now all we need is more than two wallpapers that ship with Ubuntu! Install gnome-backgrounds by default, anyone? If you don't know about those, please install that package and check them out too, and subscribe to the linked bug report to give it more attention!

On a sadder note, recent Intrepid updates broke my fingerprint reader <--> PAM integration; did this happen to anyone else? There was one particular update asking me what I wanted to do about PAM settings and I clicked the recommended suggestion, to use the new ones, but now PAM doesn't understand that I have a fingerprint reader to use to login/[gk]sudo, et cetera. Granted, it is only a one-liner to re-configure this, but it seems quite suboptimal!

Okay, good night/morning/afternoon everyone :)

Wednesday, February 11, 2009

Failing tests: When are they okay?

As a developer on a team, when (if ever) is it reasonable to check in something which breaks one or more tests and causes the build to fail?

The most important aspect of a build seems to be that it accurately represents the health of your product. In an ideal world, if all the tests pass, I should be comfortable deploying that code. In a perhaps more realistic world, I should either be comfortable deploying it to a part of the system (a small random %, a special testers group, etc.), or at least be comfortable sending it off to QA with the expectation that it will pass (and be surprised if it doesn't). Conversely, if the build fails, I shouldn't be suspecting that one of the tests randomly failed or that someone forgot to update a test to reflect application changes; I should be wondering what is wrong with the product.

But if you have set release dates, say every two weeks, is a broken build in week 1 a problem? Is it okay for developers to be using TDD and checking in the tests first, or checking in incomplete or broken code so that others can collaborate on it? In some ways it definitely seems reasonable to allow for this. After all, the release date is in the future and it is quite expected that features aren't completed yet or that bugs aren't fixed yet. You should be encouraged to commit early and often and you don't want to have to jump through hoops to collaborate.

However there seem to be disadvantages to this type of workflow. First of all, if a build is broken, it can't break. What am I talking about? If I check in my expected-to-fail test or my half-finished code and the build is failing, the next time someone unexpectedly breaks something, it isn't nearly as obvious. A passing test suite changing to a failing suite, in a good environment, should be blindingly visible. But what about a failing test suite continuing to fail, albeit in more ways? That's more subtle. From the second that happens you're accumulating debt and the faster you find it, the easier it will be to fix. But if you can't check in broken code, how do you easily collaborate with someone else on it over time?

Another problem that can arise is the accumulation of potential debt by replacing functional code with incomplete code. If you aren't able to make the timeline or the feature gets dropped for release, you now have a reverse merge on your hands. This could be particularly time consuming if others have been working in the same files that your work touches. On the other hand if you had been implementing it side-by-side and waiting to actually replace it until it worked, it would be no problem to ship the code in that state at any point if need be. Your deployment is more flexible and less error prone.


So how can you balance these tensions in an agile environment? I'd love any feedback that anyone has to provide. What are the reasons for prioritizing a passing build that I missed, and what other drawbacks exist?

Tuesday, February 10, 2009

The Joy of Trivial Wrappers in Python

Python is a great language, but if you have ever tried to do anything web-related beyond a basic page fetch, it gets complicated quickly. What is a single action in your mind becomes multiple operations in Python.

Take for example POSTing some variables to a page. You are going to have to import both urllib and urllib2, and know what is in each of these. Use urllib.urlencode to encode your post variables, then pass them into urllib2.urlopen to get a connection object, then read that. Yikes! Oh, does the site require cookies? That's another import and three lines of code; I hope you like reading up on CookiePolicy objects!

Attempting to accomplish this task with built-in modules will likely result in something similar to:

import urllib, urllib2
from cookielib import CookieJar, DefaultCookiePolicy

# Set up cookie handling for all future urlopen calls.
cj = CookieJar(DefaultCookiePolicy(rfc2965=True))
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
urllib2.install_opener(opener)
# Encode the POST variables (x and y are your credentials) and send them.
postVars = urllib.urlencode({"username": x, "password": y})
conn = urllib2.urlopen("http://example.com/login.php", postVars)
htmlResult = conn.read()



Compared to Java or C#, this is probably a terse solution. We are using Python however (for a reason), and that block of code sucks; that's not how anyone thinks. It is hard to remember, leads to copy-and-paste code, and isn't particularly readable. It also requires you to work with things you probably don't care about such as cookie policies, openers, and url encoding. You just want to send a page a message!

After forgetting between projects and having to re-discover how to implement this functionality a few times over already, I finally decided to write something to remember it for me. Suddenly we can write:

import web

web.enablecookies()
htmlResult = web.post("http://example.com/login.php", {"username": x, "password": y})



The web module is quite short and not even remotely impressive (you could write what I've exposed here in 5 or 6 lines), but it takes something I found tedious and verbose and turns it into something simple. It adapts the broken-down functionality of these libraries to the more abstract level that I think on. Everyone thinks (and works) differently, and surely for some people it WOULD make sense (and be necessary) to open connections and read from them (if at all) byte by byte.
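In that spirit, those 5 or 6 lines might look something like this; a sketch of the idea rather than the actual web module:

import urllib, urllib2
from cookielib import CookieJar

def enablecookies():
  # Install a cookie-aware opener for all future urlopen calls.
  opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(CookieJar()))
  urllib2.install_opener(opener)

def post(url, variables):
  # Encode the variables, POST them, and return the resulting page.
  return urllib2.urlopen(url, urllib.urlencode(variables)).read()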

My interest in posting this has less to do with this specific example, and more to do with finding out what other "thought adapters" people have written to make something easier, more readable, or more pleasant. I have a few of these and pull them in as needed for various projects. What about you?

Sunday, February 8, 2009

AWN dock (and Extras) 0.3.2 released! \o/

Avant Window Navigator released version 0.3.2 today. This includes the release of the core dock, "awn", and all the applets and plugins, "awn-extras". About 130 combined bug fixes and feature requests were closed in this release, including a few entirely new applets! One of my favorite new applets, moderately pointless I admit, is the Animal Farm applet, which displays a cute animal who gives you a fortune on a click, thereupon changing to a different random animal. Below is a shot of 10 of them running :)


Other fun applets include a new customizable notification tray applet which supports transparency (with GTK 2.15+ in Jaunty), a flexible web comics applet, a new themeable clock, a simple to-do list, as well as plugins for Remember The Milk and Tomboy. Don't forget that great applets like Pandora, weather, calendar, and shinyswitcher (a desktop switcher) already exist and have been improved as well.


Awn-manager has also gotten a lot of love since the last release; managing themes and launchers should provide a much better user experience. Tons of bugs have been squashed in awn-manager and most changes will be reflected immediately...no need to restart AWN!

For more detailed information, please check out the blog post by Mark Lee, one of the main developers. To get it, check out the PPA, and don't forget to Digg it! I'll leave you with some more screenshots, all of which, including the ones above, are licensed under the WTFPL.


Saturday, February 7, 2009

Installing new languages and running applications in them

A while ago I promised to explain how to run applications in a different locale. This is quite useful as a developer, so that you can more robustly test your localization code. It can also be useful to translators who are translating to a locale which is not their default. Maybe you are learning a new language and want some extra practice in specific applications. Or, you may just want to see what your favorite application looks like in Russian or Hindi :)

There are basically only two steps to this rather simple process.

1. Install the desired languages. Go to System -> Administration -> Language Support. Scroll down the list, checking the "Support" box on the right for any language you want on your system. Once you are done, click Apply and the necessary files will be downloaded for you. A logout and login is recommended by the application after this, and while not strictly necessary, I recommend it as well (I've had a rare instance or so of segfaults when trying to use the locales before restarting the session). If you want to follow along, ensure Russian is one of your choices.



2. Run the application under a different locale. You need to figure out the locale code that you want to run. In a terminal, run "locale -a" without the quotes, and you will see a list of all locales available on your system. If I want Russian, the one I am looking for in this list is "ru_RU.utf8". It is usually fairly obvious which one you want. Now, again in a terminal, just add "LC_ALL=ru_RU.utf8" before the application you want to run. If we want a Russian calculator for example, we would execute "LC_ALL=ru_RU.utf8 gcalctool". Ta-da!




This is a great way as a developer to make sure your applications are correctly detecting locales. I'd love to hear what you think, and whether there are any other reasons to do this that I missed!
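P.S. For Python developers, here's a quick way to check what locale your application will detect, using the standard locale module (a minimal sketch):

import locale

# Adopt whatever locale the environment specifies (e.g. LC_ALL).
locale.setlocale(locale.LC_ALL, "")
print locale.getlocale()  # try: LC_ALL=ru_RU.utf8 python check.py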

Friday, February 6, 2009

How Windows Vista, Digg, and Ubuntu Landed me a Sweet Job

A lot of people criticized Windows Vista when it first came out, and I was one of them. Another large group of people also don't believe there's money in open-source. However for me, the existence (and negatives) of Vista and the awesomeness of Ubuntu landed me a sweet job. How?

Around the time that Vista came out, I was using a laptop with a 1.7GHz Pentium M processor and 1GB of RAM. I got Vista (Business edition) for free through school so I threw it on. I quickly grew to love the new start menu and many of the improved usability features it had. Unfortunately, it ran like CRAP. It was so slow and unusable on my machine's specs that it was unbearable. I couldn't justify purchasing a new laptop at that time since Windows XP ran perfectly fine, and really, an OS SHOULD be able to run fine on those specs. But I also didn't want to go back to XP and lose the features that I liked from Vista.

Around that same time I was also browsing Digg and noticed a release announcement for this thing called Ubuntu, Feisty Fawn to be precise. Everyone seemed to be raving about it and I thought, since it was free, I might as well give it a try and see if IT could run decently on my machine and let me do everything I wanted. As it turns out, it ran quite well and either supported everything I wanted out of the box, or was flexible enough to allow me to do it myself! Even better, it shipped with and supported the applications I used on Windows but previously had to install and keep up-to-date myself, like Firefox, Thunderbird, and Pidgin.

Over time I started getting more into contributing to Ubuntu, first by finding bug reports matching problems I had and adding more information, then more general bug triaging help via BugSquad. Eventually I joined the BugControl team and also started contributing to projects like Avant Window Navigator (AWN). At one point the bug bot that announces new bugs in #ubuntu-bugs-announce went down so I wrote a new one (EeeBotu) which lives on to this day happily (I presume) announcing bugs. When I heard that community members could be sponsored to the next UDS (Ubuntu Developers Summit) in California, I excitedly applied and even more excitedly was accepted to attend courtesy of Canonical.

Around THIS period of time I had been applying to various jobs, one of which was at a fun startup in California, Genius.com. They had recruited at my college, the Rochester Institute of Technology in Rochester, NY, and had picked some candidates, including myself, to be flown out for second interviews. Then the economic downturn came, however, and they decided not to fly anyone out and to re-evaluate new hires at a later time.

This was understandable, but I thought that maybe since I was going to be out in California for UDS anyway, they might want to take me up on a free interview. As it turns out they did. I was offered a job, accepted it, and after about a month I can say that it is a pretty sweet job!

So because Vista sucked (at least initially), I gave Ubuntu, which I heard about on Digg, a try; I got involved, was sponsored to attend UDS in California, and there was able to interview with the company I currently work for.

So just remember, there's a positive side to every negative (thanks Microsoft!), and there IS money in open-source, at least indirectly (a huge thanks to Canonical!). Has Digg found a new business model?

In other news, I've joined a pact with 9 other friends to write a blog post a day for a month starting today, so if all goes well you will be hearing many (hopefully) interesting or fun things from me!