Sunday, September 20, 2009

wxBanker 0.6 preview available, now with recurring transactions!

I've just released wxBanker 0.5.9 in the wxbanker-testing PPA, which is a preview release for 0.6. If you aren't familiar with wxBanker already, it is a cross-platform, lightweight personal finance application, and you can find more info at https://launchpad.net/wxbanker.

While there are many improvements and fixes in 0.6, the main feature is recurring transactions, allowing you to automate repetitive transactions. They are functional in 0.5.9 with one caveat, which is the reason for this preview release: there's no way (besides sqlitebrowser :) to modify existing recurring transactions. As such I'd love to get feedback on your impressions of recurring transactions, ideas on a nice configuration UI for them, and of course anything else. With that input I can implement the configuration UI and release 0.6.

Here's an example of a simple quarterly transaction:



and here is a more complex one, with specific days chosen, as well as the transaction being a transfer from another account:



When you start up wxBanker and there is a recurring transaction due, you will see something like:



I didn't reinvent recurrence rules but instead (luckily) found the dateutil module for python which includes rrule, an "implementation of the recurrence rules documented in the iCalendar RFC", so it should behave quite well.
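Just to illustrate how rrule expresses a recurrence (this is only a sketch, not wxBanker's actual code), a quarterly rule looks something like this:

import datetime
from dateutil.rrule import rrule, MONTHLY

# A quarterly recurrence starting January 1st; rrule generates the due dates.
quarterly = rrule(MONTHLY, interval=3, dtstart=datetime.datetime(2009, 1, 1), count=4)
for due in quarterly:
    print(due.date())  # 2009-01-01, 2009-04-01, 2009-07-01, 2009-10-01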

As far as the needed configuration UI goes, there are a couple ideas that I've had so far. Below is the current top-left portion of the UI, for reference:



One idea was to have a third "Recurring Transactions" tab after the "Transactions" and "Summary" tabs, which appears if and only if recurring transactions exist. This would provide a list of recurring transactions and allow editing them via the same UI used for creation, as well as changing the account or removing them altogether. A second idea is to add another button next to the Add/Rename/Remove account buttons in the upper-left, for account configuration (this will be necessary for future features anyway), and allow modifying recurring transactions for that account there.

Another perhaps complementary way would be to provide a right-click action on existing transactions which were recurred, for editing. I'd also like to implement functionality similar to Google Calendar where editing a value on an existing transaction caused by a recurring transaction will ask you if you want to apply that change to just this transaction, all existing, or all future.

If you have ideas, please feel free to leave comments here or drop by #wxbanker on irc.freenode.net any time this week after 10AM PDT, to have a more interactive chat about them.

Please do let me know what you think from either just the screenshots here or actually playing around with the application!

Thursday, July 30, 2009

Launchpad is now an automatic, magical translation factory!

I've been using Launchpad to host my personal finance application wxBanker for a few years now. The thing I was hearing most often was that it wasn't localized; people wanted currencies to look the way they should in their country, and the application to be in their language. Let me explain how Launchpad helped me provide translations for my application, and how much of an utterly slothful breeze it has recently become.

Image courtesy of shirt.woot.com
Normally to handle translations, an application has to wrap strings with gettext, create a template, find translators and give the template to them, collect translation files back, and integrate them into the project. This is painful and is why many applications aren't localized, shutting out most of the world as a result. One of the amazing features of Launchpad, however, is Rosetta, which brings translators TO your translations via a simple web interface, and then makes those translations available to the developer. With Rosetta, translators don't need to understand gettext or translation file formats; they just need to know two languages!
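For the curious, the gettext side of this is simple in Python; roughly (and only as a sketch), you wrap each user-visible string and later extract them into a .pot template with a tool such as xgettext or pygettext:

import gettext

# Install _() into the builtins so wrapped strings are looked up in any
# available translation catalogs for the "wxbanker" domain.
gettext.install("wxbanker")

print(_("Add a new account"))
print(_("Remove this transaction"))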



So that's what a translator sees. Notice how Launchpad even shows how other applications translated the same string. So once you generate a template and upload it, you can assign a translation group to your project such as "Ubuntu Translators" so that your strings will be translated by volunteers on the Ubuntu project; if your project isn't relevant to the Ubuntu project, you can use the more generic Launchpad Translators group. Now all you have to do is wait for some translations, then download them and check them in to your code. Not too bad, right?

It isn't, but Launchpad has recently made it so much better. They started by adding an option to automatically import translation templates from your project. This means as you are developing, all you have to do is regenerate the template and commit, and new strings will show up for translators in Rosetta and be translated automatically (from the developer's perspective). Then today, they announced the other side of this, which is automatically committing those translations back into your code on a daily basis. This means that all I have to do is commit templates as I change strings, and Launchpad handles everything else. This is a profound tool for developers.

What's the next step? Well, from a developer's perspective the translation template is a tool to give to the translators or in this case Launchpad. In the future Launchpad could eliminate this by generating the template itself from the code (this is what developers are doing themselves, after all), so that truly all you have to do after you set up the initial i18n/l10n framework is commit code as normal, and Launchpad magically commits back translations.

All this work Launchpad is doing gives developers more time to develop while still having localized applications at a very minimal cost. This is continuous translation integration, and boy is it cool!

Monday, July 6, 2009

Simple timing of Python code

Often when I am writing in Python, I find that I want to see how long a particular function call or set of statements is taking to execute. Let's say I have the following code that gets executed frequently:

for i in range(10000000):
    x = 934.12 ** 32.61 * i / 453.12 ** 0.23

and I want to know how long it takes to execute to see if it is slowing down my app and should be optimized. Previously I would surround it as such:

import time; start = time.time()
for i in range(10000000):
    x = 934.12 ** 32.61 * i / 453.12 ** 0.23
print time.time() - start

This will print out the duration in seconds of that code segment, but is more work and typing than I want, and more cleaning up later. I realized that the new "with" statement in Python could probably help me out. Let's create a little timer class that cooperates with it:

import time

class Timer():
    def __enter__(self): self.start = time.time()
    def __exit__(self, *args): print time.time() - self.start

Now all we have to do is:

with Timer():
    for i in range(1000000):
        x = 934.12 ** 32.61 * i / 453.12 ** 0.23

You can also try:

with Timer():
    time.sleep(1.5)


For these, 0.28738 and 1.50169 are what I get, respectively. While something like this couldn't really replace application-wide profiling via a module like cProfile, it can be an extremely useful and quick way to see if your prototype is scalable or not. I usually end up having a debug.py or helpers.py file in my larger projects with little tools like this, and I'll probably end up adding this one as well.
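For completeness, here's a hedged example of the cProfile route, profiling a hypothetical work() function rather than a single block:

import cProfile

def work():
    return sum(934.12 ** 32.61 * i / 453.12 ** 0.23 for i in range(1000000))

# Prints per-function call counts and timings for everything work() touches.
cProfile.run("work()")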

Let me know if you are doing something similar, or if I've reinvented something that already exists. I'd also love to hear from people profiling their python code and what techniques they are using, as I am just starting to learn about it.

Sunday, May 31, 2009

Karmic Desktop UDS run-down!

I just got back from a wonderful UDS in beautiful Barcelona and thought I would provide a summary of what we can expect in the Karmic Koala 9.10 desktop. Keep in mind that I don't speak for Canonical and what follows is just my understanding of what is on the table for Karmic.

Overall it is gearing up to be a pretty radical and exciting release; there are some changes to the default application set as well as some major version upgrades of existing core components. We are trying to be fairly aggressive in terms of new stuff so that if Canonical wants Karmic+1 to be an LTS (Long Term Support) release, we can have fairly stabilized new technologies by then (thanks to 6 months of stabilization in the Karmic cycle) instead of having to wait until after the LTS (Karmic+2) to introduce them. Since many of these changes would be too radical to first appear in an LTS, if we don't upgrade now we may not be able to for a year, and have to maintain old versions for 3-5 years in the LTS.

On the messaging front, Ekiga will be dropped from the CD to save 10 megs, and Empathy will likely replace Pidgin due to a responsive upstream, voice/video support, and better GNOME integration. It also now has the ability to import accounts from Pidgin, so this should help out with migration. I checked it out a bit at UDS and was impressed with how useful it is with absolutely zero configuration. It will pull your name from the system and enable avahi (auto-discovery of people nearby, like bonjour) with no set up, which made it quite easy to get in contact with people at the conference. You can also supply your email and Jabber ID to the avahi interface to allow other people to discover that info as well.

It also sounds likely that Banshee will replace Rhythmbox as the default media player, and it is the official default of UNR (Ubuntu Netbook Remix) Karmic. This will bring a snazzier interface, better device support including iPods and Android devices, and quite importantly an active and responsive upstream. I will admit to not being a huge fan of this transition for Karmic as it seemed too early for me (the lack of a folder watch is quite a regression for me, and it has been promised for the last 3 releases or so, so I'm not holding my breath), but after checking out 1.5 for a bit I will admit that it is growing on me. The user interface does seem nicer, and the lightweight video library which keeps track of what you haven't watched is nice. However, it does seem to use 3-10x more memory than Rhythmbox, which is very troubling (60-300MB compared to Rhythmbox's fairly consistent 25MB), especially on the netbook scene. I've also had issues with it skipping occasionally, which is very unfortunate. Hopefully the UNR switch will put pressure on better memory management for Banshee.

Banshee syncing with an Android G1

Empathy and Banshee will probably replace their predecessors around alpha 2 of Karmic, and will be either left as default or reverted based on reported regressions and bugs. Keep in mind that if you end up not preferring these applications, the other ones still exist and you can continue to use them.

There are also going to be a bunch of underlying speed improvements, with the boot speed goal being 10-12 seconds. When Ubuntu talks about boot times, we are referring to the time from when grub starts (when Linux first gets control of the machine) to when the user is at a fully loaded desktop with no I/O. The main test machines being used by Canonical here are Dell Mini 9s, with auto-login enabled to get a consistent log-in time. This is pretty impressive as the boot goal was 25 seconds in Jaunty, which was met, and was aggressive itself as Intrepid booted in about 65 seconds on the Dell Mini 9.

grub2 is likely to be default for new installations (upgrades will have grub1 chainloaded to grub2), with ext4 as the default filesystem. The boot process will also be streamlined, with the grub timeout set to 0 and the boot menu hidden. There will instead be two new ways to boot into a different system now. First, there will be a key that can be pressed while booting to bring up an OS chooser, which will halt the current boot and restart into the chosen one. Another goal is to have the restart menu item in GNOME aware of installed OSes and allow the choice there, so you could select for example "Restart into OSX". All in all this means no racing to select the OS for dual-booters, and a faster boot process as well. /tmp is also hopefully going to be made a tmpfs, which means it will reside in RAM and overflow to a swapfile (which in recent Linux kernels has performance on par with swap partitions). This means power savings, less disk I/O (especially great for SSDs), and of course blazingly fast performance, which should help out a lot especially when, say, loading files from inside an archive. The GNOME Display Manager (GDM, which handles the login screen) will also likely be upgraded to GDM2.

Finally let me fire off a few more changes. Power management is being improved all around, with one change already landed being that audio cards will be automatically powered down after 10 seconds of no sound. Encrypted Home directories will hopefully be easier to set up now with an option right in the graphical installer, and I'm working on a UI for managing this and encrypted Private directories in Karmic, more on that later. Firefox 3.5 should be the default version of Firefox. For notifications which want to display actions if the user is interested, there is work on morphing windows:


Ubuntu is also working on being social from the start (see desktop-karmic-social-from-the-start on gobby.ubuntu.com), perhaps installing Gwibber by default and asking the user if they want to integrate social sites (Twitter, Facebook) into the desktop when they visit them in Firefox, via an extension. There has also been work on finding a better scanning application to replace xsane (perhaps GnomeScan), some investigation into using GNOME Control Center, and a common printing dialog.

Okay phew, that's what I've got to report! Let me know what you think of these decisions and changes, and if there is anything you were hoping for that didn't make it, or really anything else you've got to say!

Thursday, May 14, 2009

A teaser: Desktop integration with encrypted directories for Karmic

Recently I've been working on desktop integration with ecryptfs. Dustin Kirkland has done some awesome work enabling encrypted Private directories, as well as entirely encrypted Home directories, and I want to bring a UI to that goodness for the Karmic desktop.

UbuntuOne displays a banner at the top of its shares, and this inspired me to borrow the code for use with encrypted Private directories. After a bunch of hacking and debugging, I finally got something to show up:


Pretty exciting! There is much work to be done behind the scenes but this is an encouraging start. After I get this working I plan on making a UI for installing ecryptfs-utils (the necessary package), setting up an encrypted Private directory, and managing/configuring one (or an encrypted Home). This UI would perhaps be available from System -> Administration -> Encrypted Directories, and would allow a user to have a directory of encrypted files available in a few clicks, which is mounted/unmounted transparently at login/logout.

What do you think? Are you currently using an encrypted Home or Private directory? Would you be more likely to if there was a UI to set it up? Please share your thoughts and comments :) I'll be at UDS and can schedule a session on this if there is interest, as well.

Wednesday, May 6, 2009

Counting the number of Ubuntu users

There have been a few articles recently trying to estimate the number of Linux users, which is apparently a challenging problem. However I have to wonder why it can't be figured out at least at the distro level by simply storing hashes of IP addresses that hit Canonical's update site, and looking at the number of unique ones each week/month.

There are going to be people using mirrors, but this is a small percent to lose to at least get something in the right magnitude, and the most popular mirrors could probably do a similar thing and contribute their numbers anyway. The only other main drawback would be multiple Ubuntu machines under the same IP, which again seems like it would only result in a slight inaccuracy. You'd also lose a small percent to users infrequently using their computers such that they aren't updated on a monthly basis, but yearly results would pull back in any of these people using their computers frequently enough to warrant counting.
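To be concrete, the counting itself is nearly trivial; here's a purely illustrative sketch that assumes a web server log with the client IP as the first field of each line:

import hashlib

def unique_clients(log_lines):
    # Store only a hash of each IP so no addresses are kept around;
    # the count of unique hashes then approximates unique machines.
    seen = set()
    for line in log_lines:
        ip = line.split()[0]  # assumes the IP is the first field of the log line
        seen.add(hashlib.sha1(ip.encode("utf-8")).hexdigest())
    return len(seen)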

Alternatively, as others have suggested as well, if Google would just release their numbers for browsers hitting google.com, we'd probably have a solid idea as well.

Are there already accurate numbers for Ubuntu and if not, am I missing something with my proposal?

UPDATE: Jef pointed out that Fedora is already doing this at http://fedoraproject.org/wiki/Statistics#Yum_Data, which is pretty awesome! That shows about 14 million unique repository connections, so making a VERY rough, not remotely scientific estimate, we could use distrowatch to estimate that Ubuntu has 1.68 times the number of users as Fedora, and get something around the order of 24 million users that have connected.

Monday, April 20, 2009

Extending Java’s Semaphore to Allow Dynamic Resizing

Marshall wrote an excellent post explaining how to dynamically change the number of permits when using semaphores in Java, which I thought I'd share for anyone interested. This can be particularly useful if you have a long-running daemon which you don't wish to restart for changes such as this. If you are using semaphores in Java, or if you don't even use Java but just want to learn more about semaphores, I'd recommend giving it a read.

Monday, April 13, 2009

Email Deliverability & RFC 2142: Everything you wanted to know and never dared to ask

Today Franck Martin wrote an interesting post regarding RFC 2142 ("Mailbox names for common services, roles and functions") and how it relates to email deliverability. If you are running your own email server or own a domain, you may be interested in reading it, as it sums up which email addresses are expected to be manned at any domain and for what purposes.

For example, did you know that "if an Internet service provider’s domain name is COMPANY.COM, then the ABUSE@COMPANY.COM address must be valid and supported"? And are you manning your postmaster and hostmaster addresses?

Wednesday, April 8, 2009

Genius.com is Eagerly Anticipating PHP 5.3

Over at the Genius Engineering blog we just put up a post regarding our anticipation of PHP 5.3 and what we are looking forward to in it. We'd love to get some feedback and thoughts on it, and hear what other people are interested in regarding PHP 5.3 that we may have missed. If you've been following PHP 5.3, or haven't and want to catch up, check it out!

Monday, April 6, 2009

wxBanker 0.5 RC available for testing and translating!

Recently I've been hard at work on the next version of wxBanker, a lightweight personal finance application, and would like to get out the 0.5 release candidate for testing and translations. To check it out, add my wxbanker-testing PPA. It also runs on Windows and OSX (albeit less tested), so feel free to grab the source tarball (it's python, so no compilation necessary) and then check out the included README. The only hard dependency is wxpython, although installing numpy and simplejson adds graphing and CSV importing, respectively.


wxBanker 0.5 has been a long time in the making, as it represents a large refactor of the underlying code to make everything much faster, more stable, and easier to extend and implement new functionality. This is especially noticeable in the start up time.

There is also a new transaction grid which allows for sorting, a CSV import option in Tools (so you can import transactions from your banks initially)


...and enhanced right-click actions for transactions which now work on multiple selected transactions:


See the included changelog for a full list of new things. Launchpad is also fairly well integrated into wxBanker, so to file a bug or do almost anything else, just use the help menu. The Spanish and French translations are complete but I could really use help everywhere else: https://translations.launchpad.net/wxbanker !


Check out https://wiki.ubuntu.com/wxBanker for more screenshots, or the wxBanker homepage at https://launchpad.net/wxbanker for more information including the users team/mailing list, and the translations team. If you are interested in the less exciting stable version, 0.4.1.0 can be found in the Ubuntu repos as of Jaunty, available in about 18 languages.

Enjoy, and let me know of any issues or comments you have, and thanks in advance for translations :)

Wednesday, February 25, 2009

Using the "finally" block in Python to write robust applications

This is the first post in my series of three on using XMLRPC to run tests remotely in python (such as javascript and selenium tests in web browsers) and get the results. If that doesn't concern you, this post is probably still relevant; I'd just like to cover the groundwork of making code that is stable and repeatable even in the face of [un]expected problems. Luckily for us, python has a wonderful "finally" block which can be used to properly clean up or "finish" regardless of Bad Things. Let's look at an example of a common problem this can solve:

getLock()
doStuff()
releaseLock()


We need exclusive access to a resource, so we get a lock. We do some stuff, and then release the lock. The problem is that if doStuff raises an exception, the lock never gets released, and your application can be in a broken state. You want to release the lock no matter what. So what you should do is:

getLock()
try:
  doStuff()
finally:
  releaseLock()


Now, short of a SIGKILL, the lock is going to be released. This is pretty basic, but it is impressive how robust the finally block is. You can "return" in the try block or even "sys.exit()" and the code in the finally block will still be executed.
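A quick demonstration of that robustness:

def demo():
    try:
        return "returned from try"
    finally:
        print("finally still ran")  # executes even though the try block returned

print(demo())  # prints "finally still ran", then "returned from try"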

I recently used this with XMLRPC to safely tell the remote machine to clean up if the local script ran into problems or even got a SIGINT from a keyboard interrupt. Here's a more elaborate example:

proxy = xmlrpclib.ServerProxy(remoteIP)
try:
  result = proxy.RunTests()
  if result is None:
    sys.exit(1)
  else:
    return result
except:
  sys.exit(2)
finally:
  proxy.CloseFirefox()


The remote machine ("proxy") is running some tests in firefox. While it does this it sets a lock so no one else can run the same tests. If something goes wrong, this lock needs to be reset and firefox needs to be closed so they can run again later. If it gets a result, return it. If it doesn't or something goes wrong, we still clean up but now we can exit with an error code. One of the neatest things about this for me was Ctrl+C'ing the script on my computer and watching the remote machine cleanly quit firefox and release the lock for another process to use.

This is great whenever you need to put something in a temporary state, or change the state after an operation no matter what happens. Think of locks, temporary files, memory usage, or open connections where it is important to close them. Conversely however, make sure you DON'T use an approach like this when it isn't appropriate:

for client in clientsToPing[:]:
  try:
    ping(client)
  finally:
    clientsToPing.remove(client)


This is potentially incorrect behavior, because if you failed to ping your client you may want to keep it on the list to try again next time. However, you also may only want to attempt this once and then the above approach would be correct!

In my next post I am going to turn more specifically to remote browser testing and explain how exactly to set up both ends of the connection. After that I'll finish by making a post on using twisted + SSL to retrieve posted results over HTTPS.

Monday, February 23, 2009

Ensuring that you test what your users use

Recently I've come across two pitfalls when testing one of my python applications. In two different cases the tests will run fine in my checkout, but fail miserably for anyone else (because the application is broken). What was happening?

1) I had a new file which is required to run, but I forgot to 'vcs add' it. Because the file existed in my sandbox, everything was fine. But no one else was getting this file, so they couldn't even run the application. This one is somewhat easy to detect because a 'vcs st' should show that file with unknown status. In that way, ensuring a clean status before running the tests can help avoid this. However this won't work well in an automated fashion because there are often unversioned files, and you typically want to run the tests before committing anyway.

2) A time or two I thought I had completely removed/renamed a dependency but forgot to clean up an import somewhere along the line. Even though the original .py file was gone, a .pyc file by the old name still existed, which allowed the lingering import to work. Again however, for anyone else getting a fresh checkout or release, this file would not be available and the app would be unusable.

How can you avoid having problems like this? Well, from a myopic viewpoint you could have your test suite delete any .pyc files before running. Then to address the first issue, you could also test that a 'vcs st' has no unknown files, and explicitly ignore any unversioned files you expect. But still, other things could creep up. And while having another machine as your "buildbot" would avoid the first issue, you are still vulnerable to the second. To really make sure you are testing with the same stuff that you release, you need to be testing releases. In other words, you need to be putting your version through your shipping process, whatever that is, and then testing the final product.

So now that I've realized this is what I should be doing, I'm not quite sure what the simplest and easiest way to do it in an automated fashion is. For python, perhaps this could be achieved by getting a fresh export/checkout of the code in a temporary directory, adding that directory to sys.path, and importing and running the tests. I am sure this is a common problem; is there a common solution?
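One rough sketch of that idea (the 'bzr export' call and the 'tests' module name below are placeholders for whatever your project uses):

import os, subprocess, sys, tempfile, unittest

# Export a pristine copy of the branch (no stray .pyc or unversioned files),
# make it importable, and run the test suite against it.
export_dir = os.path.join(tempfile.mkdtemp(), "export")
subprocess.check_call(["bzr", "export", export_dir])  # or "git archive", etc.
sys.path.insert(0, export_dir)

import tests  # placeholder: your project's test package/module
unittest.main(module=tests)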

Saturday, February 21, 2009

Delicious, cheap, and easy whole wheat wraps!

Some of my bloodpact blogging friends have written recipes, and I thought I would add one of my favorite and simplest recipes to the mix. Wraps are a great food-delivery device, be it for eggs or veggies, meat or rice and beans. It also happens to be easy to make your own healthy wraps without any special tools, for roughly 10 cents apiece! Best of all, you can use as few as two ingredients if you so desire.

the product will make you happy.

Here's what you'll need.
  • 1C whole wheat flour (or white)
  • 1/4C cold water
  • 1/4 teaspoon salt (recommended)
  • 1 tablespoon olive oil (optional)
  • 1/4 teaspoon baking powder (optional)
  • seasonings such as oregano, onion powder, or herbs (optional)
Mix all the ingredients together in a bowl, adding extra water 1 tablespoon at a time until it forms a cohesive ball. Knead it for a minute or two and then divide into 2-4 smaller balls, depending on if you want large, medium, or small wraps. If you are patient, put them back in the bowl and cover with a towel for about 10 minutes.

Now roll them out. For this I recommend rolling between two non-stick surfaces, such as flexible cutting mats (a dollar store or grocery store should have 3 packs for $3-5), silicone baking sheets (also can be had for a few bucks), or something similar. So put a dough ball on a non-stick surface, then optionally put another on top. Now roll it out with a rolling pin if you have one, or a bottle of wine/oil/beer if you don't. It is easiest if you flatten it out by hand as much as you can first! It may take a couple tries to get them as flat (and as such wide) as you like, but you will definitely improve. Or you could just get a tortilla press online, though I have yet to give in.

Now heat a pan on the stove to medium-high heat. Once it seems up to heat, throw a wrap on the pan (no greasing necessary). After 15-30 seconds you should be able to jiggle the pan and have the wrap freely slide around, and this is useful for ensuring it doesn't burn. Give it about 1-2 minutes on that side, until you start to see a bubble or two, then flip it over for another 1-2 minutes. Now set it on a towel to cool, and repeat for the rest of your future wraps! As they are delicious warm, I like to use one right after I make them.

I love making these wraps because it is much cheaper than buying them, can be as healthy as I want, and makes eating them much more enjoyable knowing that I hand-crafted each one. Common uses are eggs with veggies in the morning or your typical taco fare. You can also make mini flatbread pizzas, or sandwich some cheese and veggies in between two for an extra tasty treat! Let me know what you think!

Friday, February 20, 2009

Finding new albums by your favorite bands

The other week I felt a little disconnected from recent music. I was sure that some of my favorite artists had released new albums that I wasn't aware of, but I wasn't sure how to be notified of it. I use last.fm when listening to music most of the time, and have been for about 5 years, so I already have a long and dynamic list of my favorite artists, many of which I haven't been keeping up to date with. There are also many places that offer feeds of recent albums, including a private torrent site, Waffles. As a curious programmer I decided to bridge the gap here and write a little proof of concept script that would grab my top last.fm artists and query Waffles to see what was new.

About an hour and 100 lines of python later, I had a working proof-of-concept. Right now it only supports Last.fm as a favorites source, and Waffles as a release source, but expanding it by following the same interface should be fairly straightforward. It is up for anyone to branch at https://launchpad.net/nutunes. For any programmer it should be pretty easy to make a favorite source which reads from a text file and use other services like allmusic.com, amazon, or perhaps even iTunes to get a list of recent releases for a given artist. Feel free to branch and create a merge proposal for integrating with other services. As the code is less than 100 lines, I hope it more or less documents itself. To use it just run nutunes.py and it will prompt for your last.fm username, and Waffles credentials.
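As a taste of the Last.fm half, fetching a user's top artists is roughly this simple. Note that the endpoint, parameters, and response layout below are from memory and should be checked against the current API documentation, and the API key is a placeholder:

import json, urllib

API_KEY = "your-api-key"  # placeholder; register your own with Last.fm
params = urllib.urlencode({"method": "user.gettopartists", "user": "some_user",
                           "api_key": API_KEY, "format": "json", "limit": 25})
response = json.load(urllib.urlopen("http://ws.audioscrobbler.com/2.0/?" + params))
artists = [artist["name"] for artist in response["topartists"]["artist"]]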

How are other people staying on top of new music from all the bands they're interested in? Heck, maybe Last.fm already has this option, but I sure didn't notice it, and learning and experimenting is fun!

Eye tracking and UI framework / window manager integration

Eye tracking is the technique of watching the user's eyes with a camera and figuring out where on the screen he or she is looking. While some computer users with disabilities use this technology as their primary input device, it hasn't become very popular. However I think that with webcams being integrated into the majority of new laptops, and multi-core processors with some cycles to spare for image processing becoming ubiquitous, eye tracking deserves to become more popular.

I don't believe the technology is accurate enough (yet) to replace your mouse, but it could still improve usability in a few ways. Imagine having the equivalent of onMouseIn and onMouseOut events on widgets when writing a user interface, but for where the user is looking instead. Applications could leverage onLookIn and onLookOut events at the widget level and open a whole new realm of functionality and usability. Videos and games could pause themselves when you look away, or bring up certain on-screen displays when you look at certain corners of the screen. If an application sees you are studying a certain element for a period of time, it may ask if you need help.

It would also be interesting to see eye tracking leveraged on the window manager level. Most people use focus follows click to focus windows, and some enjoy focus follows mouse, but imagine focus follows (eye) focus! Using multiple monitors would become much easier if your keyboard input was automatically directed to the application, or even specific field, which you were looking at. Eye gestures, like mouse gestures, could be potentially useful as well, such as glancing off-screen to move to the virtual desktop in that direction.

Apple and Linux both seem to be in a good position to implement something like this. Apple has control of both the hardware and the software including the OS, and has been integrating cameras in laptops for a while. As a result they are in a great position to pioneer this field and really have something unique to bring to the table in terms of a completely new user experience. However in the open-source world, Linux is also in a decent spot to do this as the UI frameworks and window managers are all patchable and most webcams are supported out of the box.

Eye tracking has the potential to enable us to use computers in ways that were previously impossible. What are your thoughts on eye tracking? Does it have a future in the computing world and where can it take us? And how long will it be before we will take this technology for granted? :)

Thursday, February 19, 2009

Webhooks and feeds as complementary technologies -OR- How webhooks can enable a collective intelligence

Yesterday I wrote about my observation that feeds seemed sort of like the precursor to webhooks, but that each had distinct advantages. Adam left a comment confirming my thoughts on their pros and cons, but then pointed out how they can be used together to get the best of both worlds. I really liked the implications and wanted to expand upon how webhooks could enable the next generation of feed readers, and further, really lead us towards a more collective intelligence.

The way things work now is that you have an aggregator, such as Google Reader, which polls all of your feeds every once in a while (although in this specific case surely doing some caching behind the scenes for users with the same feeds). This is suboptimal for two reasons. First, you don't get instant updates. Statistically, you will on average receive an update pollFrequency/2 minutes after it is posted. If you want to be able to respond to something in a quicker fashion, this may not cut it. Second is that the polling is causing unnecessary load on the server.

Now let's try it with just webhooks. You inform all the event producers you are interested in about your aggregator callback, and you get instant updates for all of them, with no wasted polling. However when your aggregator is off, you aren't receiving updates! This means you can miss updates, and you have no way to catch up.

Combining these two however, we can solve the problems of each technology with the other and pick up none of their downfalls. Use webhooks to tell your event producers about your aggregator as was done previously in the webhooks model. But now, the producer is also supplying a feed. This means that when your aggregator is up it will receive instant updates and doesn't need to poll. However when you start it up after having been down, it can use the feed to catch back up; no missed events!

I think this new model has the potential to improve aggregators, as well as make them more usable for applications where speed is important. It could also have a much greater impact though. Twitter is a good example of this I think, wherein you could tweet about things you need fast feedback on such as a meal choice at a restaurant, the best way to do something you are working on, or perhaps even more urgent things such as needing a ride. All of these things could and surely are done in the traditional model, but with the push revolution they become more useful as quicker responses are more likely. People will become more likely to produce things requiring (potentially much) faster feedback, and this feeds into itself as people become more likely to respond, knowing that their responses are more relevant because less time has passed. I think it is an evolution that, while initially potentially sounding subtle and unimportant, can help lead us into a more collective intelligence that we couldn't imagine living without once we have it.

Tuesday, February 17, 2009

Are feeds the pre-cursor to webhooks?

Recently I've been reading Timothy and Jeff talk about webhooks. Webhooks are essentially an amazingly simple way to be notified about arbitrary events on the web. In this model, any event producer allows you to supply a URL, which it will post to on each future action with the relevant details, whatever they may be. Then the other day when I was using Google Reader, something struck me: it felt a lot like webhooks, but turned on its head.

Anything that offers a feed such as RSS or Atom can be plugged into Google Reader; things like blogs and their comments, twitter searches, commits, downloads, bugs, and build results. As I started plugging more and more diverse things into Reader, I realized that it was basically like the "pull" equivalent of webhook's "push" nature. Instead of telling all these event producers where to contact me, I'm telling Reader where to learn about all the recent events.

I may be thinking too shallowly, but in the webhooks world Reader would be the service offering the interface. Then, instead of all these different things offering feeds, you could just plug Reader's hook into them and be notified instantly. Currently, for example, when I ask a question on a blog post, I'll throw the comments feed for that post into Reader so I don't have to keep checking back on the site; Reader will bring the potential answers to me. With webhooks though, I would reverse this and provide the service with the URL of my event consumer.

It seems like, as technology and the internet often does, feeds are evolving into what users need them to be. Services are seeing that people want to follow and be kept up to date without having to check back on hundreds of different sites. That's way too much time and information, especially when it all looks different. However, by plugging the feeds of all those things into an aggregator, we gain a central notification place for all these events, and it becomes much more manageable.

So will webhooks replace the current paradigm that I'm using here, or complement it? They seem to each have their pros and cons. Feeds allow a history, and you won't miss an update because your aggregator was down; it will catch it on the next poll. However webhooks are instant and can be more efficient as you don't have the need for polling at all, but if the producer loses your hook, you're out of the loop.

So an overflow of interesting events occurring on the web necessitated a standard way to view them, and we got feeds. Are webhooks the next step of this evolution, or something else entirely?

UPDATE: Mark Lee responds.

Monday, February 16, 2009

Cracking On-Screen Keyboards with Visual Keyloggers

A few financial sites including HSBC and the US Treasury have recently added an extra measure of security to their site. Instead of simply requiring a username and password, an on-screen keyboard was added, requiring you to "type" in a second password with your mouse:



The logic behind this is that if a user's computer becomes compromised with a keylogger, the attacker could only obtain the username and primary password. The secondary password would remain uncompromised as it doesn't involve keypresses. This didn't seem too useful to me however, so for my "Image Understanding" class I decided to see if it was possible to create a "visual keylogger" which could capture this secondary password. It wasn't too difficult, and essentially demonstrated that the extra password was more inconvenience than security. Let me outline the basic process.

In order to do this, you need to be able to capture the contents of the screen at certain intervals. It seems like a fair assumption that if you (as the attacker of a compromised system) can capture keyboard input, you can also grab screenshots. The goal is to turn a sequence of these screenshots of someone typing with an on-screen keyboard into a single string output equivalent to the password typed.

First we want to record the position of the mouse at each shot. This would normally be a trivial function by asking the OS; however, in my case I was writing this for an Image Understanding class and had to use the sequence of images as my sole input. As such, I used a basic templating approach to locate the mouse by a few of its key features. This was surprisingly robust; however, asking the OS for the mouse position is an easier, even more robust, and more likely attack vector in real life.

Now we need to figure out when the user clicked a key. Any keyboard used for a password purpose is going to give some form of feedback when a key is clicked, such as an asterisk in a password field, so the user knows if they have successfully clicked a key. The easiest way then to notice this is to subtract the color values of each screenshot from the previous one, giving you a new image with non-zero pixel values for each changed pixel. Among other things like cursor movement and web animations, the aforementioned asterisk feedback is going to be present in this image.
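As a rough sketch of that step (using PIL here; screenshot capture and the exact feedback pattern are simplified away):

from PIL import Image, ImageChops

def changed_box(prev_path, curr_path, threshold=30):
    # Subtract consecutive screenshots; pixels that changed noticeably
    # show up as bright spots in the difference image.
    prev = Image.open(prev_path).convert("L")
    curr = Image.open(curr_path).convert("L")
    diff = ImageChops.difference(prev, curr)
    mask = diff.point(lambda p: 255 if p > threshold else 0)
    return mask.getbbox()  # None if nothing changed, else (left, top, right, bottom)

If that changed region overlaps the password field, you can treat the frame as a click.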



For each new image then, subtract and look for this feedback. If it's there, that's a key press! Combine this with the position of the mouse and you know where the user clicked. Now it gets slightly tricky. You know where they clicked, but if you grab that section of the screen, you'll get something like this:



because the mouse had to be over the key to click it. This is rather easily worked around, however, by going backwards in your mouse position cache until it is a certain threshold away from the clicked position, and grabbing the key image at that point.

After the user enters the complete password, you are going to be left with an array of keyboard images. For any human, this is quite sufficient. For my class however, it was not, and it would not be for any large-scale operation where automation is desired. What we need to do is clean it up by throwing away any pixels under a certain darkness threshold, then cropping the result:
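Roughly, that cleanup might look like the following with PIL (the threshold value is an arbitrary guess):

from PIL import Image

def clean_key_image(path, threshold=100):
    img = Image.open(path).convert("L")
    # Keep only dark pixels (the glyph), mapping everything else to white.
    binary = img.point(lambda p: 0 if p < threshold else 255)
    # getbbox() bounds the non-zero (white) pixels, so invert to bound the glyph.
    box = binary.point(lambda p: 255 - p).getbbox()
    return binary.crop(box) if box else binary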



Ta-da! Now we have something that any OCR (optical character recognition) algorithm should be able to chomp through in its sleep cycles. If you are writing for a specific keyboard, you can also just have an array of what each key looks like in binary form and compare to get the answer.

And there you have it! With the combination of a few basic computer vision techniques, we can expand a keylogger to understand input from visual keyboards and render this security annoyance useless. A fun note is that the order/position of the keys is irrelevant. The US treasury website uses an on-screen keyboard as well, but shuffles the keys each attempt. As is hopefully obvious from this algorithm, there is no assumption of a keyboard layout; the keys could shuffle every single click and it wouldn't matter.

Sunday, February 15, 2009

Lightweight personal finance just got easier with wxBanker in Jaunty!

wxBanker, your (hopefully) favorite lightweight personal finance application, has recently been accepted into Jaunty! Aren't familiar with it? Check out the screenshots! It took a few months of getting over the debian packaging learning curve, and about as much work getting and responding to the reviews from MOTUs, but I did it. I plan on doing a quick point release this week and releasing wxBanker 0.4.0.3, which will sport updated translations in 15 languages (thanks translators!) as well as a minor bug fix or three. Once I get that out and into Jaunty, I'll turn my complete focus (I hope) to the 0.5 series. If you'd like to translate wxBanker to your own language or improve a few of the lacking existing translations (Bosnian, Dutch, and Portuguese particularly), I'd love it! Head over to https://launchpad.net/~wxbanker-translators and join the fun!

The 0.5 series is a refactor and a bit of a painful one, as I didn't have a great handle on how to efficiently and smoothly refactor. I could probably do a much better job managing it now, but that's how experience works. All in all it is a much cleaner structure and led to more and better tests, which should allow more agile development and protect against future regressions. Some of the main upcoming features I'd like to get in are transaction tagging, recurring transactions, reporting, online syncing (via mint.com), and csv imports. I'd love for at least a few of those to make it into 0.5, and there is good progress on some of them including csv imports thanks to an impressive branch from Karel. On the other hand 0.5 has already been ongoing for about 3 months and I might cut a release into a PPA and get feedback while I add new features, sorting out any 0.5 issues and leading to a robust 0.6 release.

Overall I have learned a TON from this project, in no small part thanks to Launchpad. wxBanker started out as a terminal application which stored everything in a pickled linked tuple that I used for myself and added features as I needed them. Eventually I added a GUI (in wxPython) and registered the project on Launchpad, to get free hosted version control as well as more formal bug tracking (instead of a text file :). The combination of Launchpad and [wx]Python being cross-platform made it accessible to everyone, and took it from a project used only by myself to a project available in 15 languages with code contributions from multiple people.

So in conclusion, thanks everyone, enjoy wxBanker 0.4.0.3 in Jaunty, and look forward to future versions in my PPA. If you're not on Jaunty yet, you can install 0.4 from my PPA, which I'll be updating to 0.4.0.3 as soon as I release it. I'd love for any of your contributions, suggestions, questions, or criticisms to end up on Launchpad. I'd love to make it as usable and intuitive as possible, so anything that is unclear or confusing would be awesome to hear.

Saturday, February 14, 2009

Playing Spelunky in Ubuntu with Wine and Compiz

Spelunky is an awesomely addictive indie game, which I highly recommend checking out. It is sort of like Mario meets Nethack. As a side-scrolling, cave-adventuring character, you must venture deeper into the cave, defeating enemies, saving damsels, and collecting money and other fun items. A unique aspect that it takes from roguelike games is that there is really no concept of saving; each "play" typically lasts only a few minutes (much less at first), but you can get shortcuts to different zones to skip ahead once you get better. Also like Nethack, it is a surprisingly rich, detailed, and polished game for all its simplicity.



The unfortunate thing about Spelunky is that it only runs on Windows. My initial attempts at running it in Wine proved unsuccessful, but after a while (and playing it on a friend's machine, which gave me enough interest to keep trying), I figured out just the right combination of tricks to make it quite playable in Ubuntu. Here's what you need to do:

  • download Spelunky, make sure you are using compiz, and install wine and compizconfig-settings-manager.
  • set wine to run in an emulated desktop window. To do this, run "winecfg", go to the Graphics tab, and check "Emulate a virtual desktop". The size isn't really important; I use 800x600.
  • extract Spelunky and create a file (if it doesn't exist) called "settings.cfg" and put these contents in it:

1
1
0
0
1
15
15


These are the default settings, tweaked to the only configuration that allows it to run in Wine. You can't change the settings from within the game since it only runs this way, so I had to have my friend change settings on his Windows box and watch which numbers changed. Thankfully for you I have done the hard work.

  • Run Spelunky! Double-clicking it should probably work, but if you want it to play nicely with other applications using audio at the same time, force it to use pulseaudio by running "padsp wine spelunky_0_99_5.exe". It should show a configuration window, but you can't really interact with it. Just click in it and hit enter.
  • Now you can play Spelunky, but it is at a 1x zoom level, the only one that plays nicely with Wine. This isn't very usable! Run "ccsm" (System -> Preferences -> CompizConfig Settings Manager) and enable "Enhanced Zoom Desktop" from the second group, "Accessibility". Now you can zoom in by holding your Super key (Windows/Apple/Ubuntu key) and scrolling your mouse wheel in or out. Using this technique you can make Spelunky effectively full screen, and if you set it right in the corner, you will forget you are doing anything strange within a few minutes.

Okay, that's it! Some initial setup is involved but once you've gotten it to work the first time, all you need to do in the future is run Spelunky and zoom in. As for help playing Spelunky, navigate over to the tutorial area once in the game, and you'll learn everything you need to know! Let me know if it works for you and how much you love Spelunky (it's worth it, I promise!) and don't forget to check out more fun indie games on tigdb.

Friday, February 13, 2009

Setting up a fingerprint reader with ThinkFinger in Ubuntu 8.10

If your laptop has a fingerprint reader installed in it, there's a decent chance you can set it up very easily in Ubuntu to login and [gk]sudo. Since the manpage isn't particularly helpful, I'll guide you through setting it up with the ThinkFinger library, which is compatible with most popular readers installed in Lenovo/Thinkpads, Dells, and Toshibas.
  1. Install the necessary libraries: sudo apt-get install thinkfinger-tools libpam-thinkfinger
  2. Integrate thinkfinger with PAM (Pluggable Authentication Modules): sudo /usr/lib/pam-thinkfinger/pam-thinkfinger-enable
  3. Now acquire your fingerprint: run tf-tool --acquire. If you get an error here (not a failed swipe, you just need to swipe better), running it with sudo might be necessary. If you still get an error that thinkfinger can't interact with your reader, it probably isn't supported, sorry! Otherwise, keep swiping your finger until you get two successful swipes.
  4. Finally, make sure it worked: run tf-tool --verify and swipe your finger. Try this a few times, and if it doesn't have a good success rate, do another acquire (the previous step), perhaps slower and more intentionally.

Now you can log in by swiping your finger at the password prompt, and more usefully in my opinion, swipe your finger instead of entering the root password at terminal and graphical password prompts. This is one of those little things that, once you get used to it, is hard to ever live without. Check it out:


By the way, while there may be valid security concerns with fingerprint readers, don't listen to the critics who say you can just breathe on it to get a swipe. 2D fingerprint scanners may work this way, but laptop fingerprint readers take a reading in both space and time. Try using tf-tool --verify and finding out for yourself; you can blow and breathe on your fingerprint reader all day without getting it to even recognize a scan, let alone a failed one.

When It's Good to be Bad

Jonah wrote a post about how awful the band Brokencyde is, and I mostly agree with the points mentioned. However I would also like to propose that if you aren't particularly talented, it is more profitable to be epically bad than it is to be just moderately bad or even mediocre. I also think it is more entertaining and useful to society as a whole, thus being the utilitarian venture to embark upon if you aren't traditionally talented. So let's begin my defense.

I suspect the magnitude of profitability can perhaps be simplified as the multiplication of two factors: the number of people exposed to your product and the probability that a random person would purchase your product if exposed to it. In this model, a very ubiquitous band that people love is the biggest winner; lots of people are exposed to the product and a good percentage buy it. Mediocre bands do alright but not nearly as well. The percentage of people willing to purchase is on the same order, perhaps half as much or so, but the exposure is WAY less. The market is saturated with mediocre to decent bands and no one has time to find or listen to them all. As a result, these bands aren't nearly as ubiquitous, leading to sales and thus profits which are orders of magnitude less. As you get worse and worse, your market is more and more saturated, you are less and less interesting, and profits continue to drop, in a roughly linear to inversely square fashion. But it doesn't approach zero; once you cross a certain threshold of awfulness something magical happens:



You become interesting again! The market actually becomes LESS saturated as it becomes challenging to be worse than that threshold. You have become so awful that you are fascinating and captivating, entertaining and hilarious! Sure, the chance that a random person buys your product has now dropped by an order of magnitude or two, but your exposure increased by many more. You aren't quite playing with the big dogs, but you can sell leaps and bounds more than the average guy. All for being notably worse than most other bands.

So clearly it can make sense selfishly and financially to be a Brokencyde or a William Hung, but are you harming society in the process? I don't think so. Surely a few people will legitimately be offended and wish such bands didn't exist, but I think the majority of us are at least entertained by their existence, which makes us laugh or smile, have an interesting discussion with friends, or at least have a great gag gift (another unique sales niche these bands get in on). As a result, the individual Brokencydes of the world increase the overall happiness of society more than an individual "average" band. Sure they're bad, but would we (or they) want it any other way?

Thursday, February 12, 2009

Typing on the Toilet

Yes, I'm blogging from the toilet, but there's no need for your mind to be in the gutter. I just moved in to a new apartment and haven't set up internet yet, and as it turns out the bathroom is the only place that gets an unencrypted wireless signal.

Overall I accomplished quite a bit today. I worked a 10-hour day at work, came home and made myself a nice dinner, and did a complete move from start to finish, including all packing and cleaning at the first place, and unpacking into the other. This whole moving process took only 2.5 hours as well, so that's got to be _some_ kind of world record. Granted, the two places are a few hundred feet apart, but I still feel accomplished.

In the Ubuntu world, I just noticed in Jaunty that when you switch backgrounds it does a rather sexy fade transition from the old background to the new one. If you haven't seen it yet I would definitely recommend checking it out in a VM with Jaunty Alpha 4. Now all we need is more than two wallpapers that ship with Ubuntu! Install gnome-backgrounds by default, anyone? If you don't know about those, please install that package and check them out too, and subscribe to the linked bug report to get it more attention!

On a sadder note, recent Intrepid updates broke my fingerprint reader <--> PAM integration; did this happen to anyone else? There was one particular update asking me what I wanted to do about PAM settings and I clicked the recommended suggestion, to use the new ones, but now PAM doesn't understand that I have a fingerprint reader to use to login/[gk]sudo, et cetera. Granted, it is only a one-liner to re-configure this, but it seems quite suboptimal!

Okay, good night/morning/afternoon everyone :)

Wednesday, February 11, 2009

Failing tests: When are they okay?

As a developer on a team, when (if ever) is it reasonable to check in something which breaks one or more tests and causes the build to fail?

The most important aspect of a build seems to be that it accurately represents the health of your product. In an ideal world, if all the tests pass, I should be comfortable deploying that code. In a perhaps more realistic world, I should either be comfortable deploying to a part of the system (small random %, special testers group, etc), or at least be comfortable sending it off to QA with the expectation that it will pass (and be surprised if it doesn't). Conversely if the build fails I shouldn't be suspecting that one of the tests must have randomly failed or that someone forgot to update a test to reflect application changes; I should be wondering what is wrong with the product.

But if you have set release dates, say every two weeks, is a broken build in week 1 a problem? Is it okay for developers to be using TDD and checking in the tests first, or checking in incomplete or broken code so that others can collaborate on it? In some ways it definitely seems reasonable to allow for this. After all, the release date is in the future and it is quite expected that features aren't completed yet or that bugs aren't fixed yet. You should be encouraged to commit early and often and you don't want to have to jump through hoops to collaborate.

However there seem to be disadvantages to this type of workflow. First of all, if a build is broken, it can't break. What am I talking about? If I check in my expected-to-fail test or my half-finished code and the build is failing, the next time someone unexpectedly breaks something, it isn't nearly as obvious. A passing test suite changing to a failing suite, in a good environment, should be blindingly visible. But what about a failing test suite continuing to fail, albeit in more ways? That's more subtle. From the second that happens you're accumulating debt and the faster you find it, the easier it will be to fix. But if you can't check in broken code, how do you easily collaborate with someone else on it over time?
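One possible compromise, offered here only as an illustration and not something the post prescribes: check the test in, but mark it as an expected failure, so the suite stays green and only flips to red when something unexpectedly breaks. With Python's unittest (2.7 or later; pytest's xfail marker works similarly), a sketch might look like this, where make_recurring_transfer is a made-up name for a not-yet-written feature:

import unittest

def make_recurring_transfer(amount):
    # Hypothetical, not-yet-implemented feature, used only for illustration.
    raise NotImplementedError

class TransferTests(unittest.TestCase):
    @unittest.expectedFailure  # remove once the feature is implemented
    def test_recurring_transfer(self):
        # Checked in ahead of the implementation; the suite stays green
        # because this failure is expected for now.
        self.assertEqual(make_recurring_transfer(100), "scheduled")

if __name__ == "__main__":
    unittest.main()

The debt stays visible because the runner reports the expected failures, but a genuinely new breakage still turns the build from green to red.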

Another problem that can arise is the accumulation of potential debt by replacing functional code with incomplete code. If you aren't able to make the timeline or the feature gets dropped for release, you now have a reverse merge on your hands. This could be particularly time consuming if others have been working in the same files that your work touches. On the other hand if you had been implementing it side-by-side and waiting to actually replace it until it worked, it would be no problem to ship the code in that state at any point if need be. Your deployment is more flexible and less error prone.
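To make the side-by-side idea concrete, here is a tiny sketch using a module-level flag; all of the names are made up for illustration and nothing here is specific to any real project:

def _import_transactions_v1(path):
    # Existing, working implementation (stubbed out here).
    return []

def _import_transactions_v2(path):
    # In-progress rewrite, checked in alongside the old code.
    raise NotImplementedError

USE_NEW_IMPORTER = False  # flip to True once the rewrite passes its tests

def import_transactions(path):
    if USE_NEW_IMPORTER:
        return _import_transactions_v2(path)
    return _import_transactions_v1(path)

Until the flag is flipped, the old path still ships, so dropping the feature at release time is a non-event rather than a reverse merge.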


So how can you balance these tensions in an agile environment? I'd love any feedback that anyone has to provide. What are the reasons for prioritizing a passing build that I missed, and what other drawbacks exist?

Tuesday, February 10, 2009

The Joy of Trivial Wrappers in Python

Python is a great language, but if you have ever tried to do anything web-related beyond a basic page fetch, it gets complicated quickly. What is a single action in your mind becomes multiple operations in Python.

Take for example POSTing some variables to a page. You have to import both urllib and urllib2, and know what lives in each of them. Use urllib.urlencode to encode your post variables, then pass them into urllib2.urlopen to get a connection object, then read that. Yikes! Oh, does the site require cookies? That's another import and three lines of code; I hope you like reading up on CookiePolicy classes!

Attempting to accomplish this task with built-in modules will likely result in something similar to:

import urllib, urllib2
from cookielib import CookieJar, DefaultCookiePolicy

# Set up cookie handling so the login cookie is kept for later requests.
cj = CookieJar(DefaultCookiePolicy(rfc2965=True))
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
urllib2.install_opener(opener)

# Encode the form variables (x and y hold the username and password),
# POST them, and read back the resulting HTML.
postVars = urllib.urlencode({"username": x, "password": y})
conn = urllib2.urlopen("http://example.com/login.php", postVars)
htmlResult = conn.read()



Compared to Java or C#, this is probably a terse solution. We are using Python however (for a reason), and that block of code sucks; that's not how anyone thinks. It is hard to remember, leads to copy-and-paste code, and isn't particularly readable. It also requires you to work with things you probably don't care about such as cookie policies, openers, and url encoding. You just want to send a page a message!

After forgetting between projects and having to re-discover how to implement this functionality a few times over already, I finally decided to write something to remember it for me. Suddenly we can write:

import web

web.enablecookies()
htmlResult = web.post("http://example.com/login.php", {"username": x, "password": y})



The web module is quite short and not even remotely impressive (you could write what I've exposed here in 5 or 6 lines), but it takes something I found tedious and verbose and turns it into something simple. It adapts the broken-down functionality of these libraries to the more abstract level at which I think. Everyone thinks (and works) differently, and surely for some people it WOULD make sense (and be necessary) to open connections and read from them (if at all) byte by byte.
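For the curious, here is roughly what a wrapper exposing those two calls could look like; this is just a sketch based on the usage above, not necessarily how the actual web module is written:

# web.py -- a minimal sketch of a convenience wrapper around urllib/urllib2.
import urllib
import urllib2
from cookielib import CookieJar

def enablecookies():
    # Install a global opener so cookies persist across requests.
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(CookieJar()))
    urllib2.install_opener(opener)

def post(url, variables):
    # POST a dict of form variables to url and return the response body.
    return urllib2.urlopen(url, urllib.urlencode(variables)).read()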

My interest in posting this has less to do with this specific example, and more to do with finding out what other "thought adapters" people have written to make something easier, more readable, or more pleasant. I have a few of these and pull them as I need them for various projects. What about you?

Sunday, February 8, 2009

AWN dock (and Extras) 0.3.2 released! \o/

Avant Window Navigator has released version 0.3.2 today. This includes the release of the core dock, "awn", and all the applets and plugins, "awn-extras". About 130 bug fixes and feature requests were closed in this release, including a few entirely new applets! One of my favorite new applets, moderately pointless I admit, is the Animal Farm applet, which displays a cute animal that gives you a fortune on a click, thereupon changing to a different random animal. Below is a shot of 10 of them running :)


Other fun applets include a new customizable notification tray applet which supports transparency (with GTK 2.15+ in Jaunty), a flexible web comics applet, a new themeable clock, a simple to-do list, as well as plugins for Remember The Milk and Tomboy. Don't forget that great applets like Pandora, weather, calendar, and shinyswitcher (a desktop switcher) already exist and have been improved as well.


Awn-manager has also gotten a lot of love since the last release; managing themes and launchers should provide a much better user experience. Tons of bugs have been squashed in awn-manager, and most changes will be reflected immediately; no need to restart AWN!

For more detailed information please check out the blog post by Mark Lee, one of the main developers. To get it, check out the PPA, and don't forget to Digg it! I'll leave you with some more screenshots, all of which, including the ones above, are licensed under the WTFPL.


Saturday, February 7, 2009

Installing new languages and running applications in them

A while ago I promised to explain how to run applications in a different locale. This is quite useful as a developer, so that you can more robustly test your localization code. It can also be useful to translators who are translating to a locale which is not their default. Maybe you are learning a new language and want some extra practice in specific applications. Or, you may just want to see what your favorite application looks like in Russian or Hindi :)

There are basically only two steps to this rather simple process.

1. Install the desired languages. Go to System -> Administration -> Language Support. Scroll down the list, checking the "Support" box on the right for any language you want on your system. Once you are done, click Apply and the necessary files will be downloaded for you. A logout and login is recommended by the application after this, and while not strictly necessary, I recommend it as well (I've had a rare instance or so of segfaults when trying to use the locales before restarting the session). If you want to follow along, ensure Russian is one of your choices.



2. Run the application under a different locale. You need to figure out the locale code that you want to run. In a terminal, run "locale -a" without the quotes, and you will see a list of all locales available on your system. If I want Russian, the one I am looking for in this list is "ru_RU.utf8". It is usually fairly obvious which one you want. Now, again in a terminal, just add "LC_ALL=ru_RU.utf8" before the application you want to run. If we want a Russian calculator for example, we would execute "LC_ALL=ru_RU.utf8 gcalctool". Ta-da!
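If you'd rather do this from a script or a test harness than a terminal (handy for smoke-testing an application under several locales), a minimal Python sketch of the same idea, reusing the gcalctool and ru_RU.utf8 examples from above, is:

import os
import subprocess

# Equivalent to running "LC_ALL=ru_RU.utf8 gcalctool" from a terminal.
env = os.environ.copy()
env["LC_ALL"] = "ru_RU.utf8"
subprocess.call(["gcalctool"], env=env)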




This is a great way as a developer to make sure your applications are correctly detecting locales. I'd love to hear what you think, and whether there are any other reasons to do this that I missed!

Friday, February 6, 2009

How Windows Vista, Digg, and Ubuntu Landed me a Sweet Job

A lot of people criticized Windows Vista when it first came out, and I was one of them. Another large group of people also don't believe there's money in open-source. However for me, the existence (and negatives) of Vista and the awesomeness of Ubuntu landed me a sweet job. How?

Around the time that Vista came out, I was using a laptop with a 1.7GHz Pentium M processor and 1GB of RAM. I got Vista (Business edition) for free through school, so I threw it on. I quickly grew to love the new start menu and many of the improved usability features it had. Unfortunately, it ran like CRAP. It was so slow and unusable on my machine's specs that it was unbearable. I couldn't justify purchasing a new laptop at that time since Windows XP ran perfectly fine, and really, an OS SHOULD be able to run fine on those specs. But I also didn't want to go back to XP and lose the features that I liked from Vista.

Around that same time I was also browsing Digg and noticed a release announcement for this thing called Ubuntu, Feisty Fawn to be precise. Everyone seemed to be raving about it, and I thought since it was free, I might as well give it a try and see if IT could run decently on my machine and let me do everything I wanted. As it turns out, it ran quite well and either supported everything I wanted out of the box, or was flexible enough to allow me to do it myself! Even better, it shipped with and supported the applications I used on Windows but previously had to install and keep up-to-date myself, like Firefox, Thunderbird, and Pidgin.

Over time I started getting more into contributing to Ubuntu, first by finding bug reports matching problems I had and adding more information, then more general bug triaging help via BugSquad. Eventually I joined the BugControl team and also started contributing to projects like Avant Window Navigator (AWN). At one point the bug bot that announces new bugs in #ubuntu-bugs-announce went down so I wrote a new one (EeeBotu) which lives on to this day happily (I presume) announcing bugs. When I heard that community members could be sponsored to the next UDS (Ubuntu Developers Summit) in California, I excitedly applied and even more excitedly was accepted to attend courtesy of Canonical.

Around THIS period of time I had been applying to various jobs, one of them at a fun startup in California, Genius.com. They had recruited at my college, the Rochester Institute of Technology in Rochester, NY, and had picked some candidates, including myself, to be flown out for second interviews. Then the economic downturn came, however, and they decided not to fly anyone out and to re-evaluate new hires at a later time.

This was understandable, but I thought that maybe since I was going to be out in California for UDS anyway, they might want to take me up on a free interview. As it turns out they did. I was offered a job, accepted it, and after about a month I can say that it is a pretty sweet job!

So because Vista sucked (at least initially), I gave Ubuntu, which I heard about on Digg, a try; I got involved and was sponsored to attend UDS in California, where I was able to interview at the company I currently work for.

So just remember, there's a positive side to every negative (thanks Microsoft!), and there IS money in open-source, at least indirectly (a huge thanks to Canonical!). Has Digg found a new business model?

In other news, I've joined a pact with 9 other friends to write a blog post a day for a month starting today, so if all goes well you will be hearing many (hopefully) interesting or fun things from me!

Thursday, January 29, 2009

Gnome Do 0.8 released; awesomeness ensues!

Gnome Do 0.8 has just been released! If you don't know anything about Do, I HIGHLY recommend that you check it out. It is a magical launcher inspired by Quicksilver that is super powerful, plugin-rich, intuitive, and rather polished.

The 0.8 release added a bunch of bling in the form of smooth animations and more attractive interfaces. One of the most interesting aspects about the release, however, is the addition of a Dock interface called Docky:




At first glance it appears to be just yet another Linux dock and a random feature for Do to add. However the dock is actually automatically and dynamically populated based on your most launched applications in Do. You can also add and remove launchers by dragging them to and from the dock. This would be cool enough, but it doesn't stop there because Docky is a first-class Do interface where you can perform all your actions!




Oh yeah, and it features a silky smooth parabolic zoom. Instead of repeating what the developers have already said better themselves, let me link you to their blog posts. If you want to get Do, check out the first link, which is the official release announcement (or skip right to the PPA :). For more in-depth information, I highly recommend the post by jassmith, the main Docky developer. Happy Do-ing, and don't forget to Digg it!

Release announcement: http://do.davebsd.com/release.shtml
jassmith: http://jassmith.wordpress.com/2009/01/29/gnomedo080release/
pengdeng: http://b.pengdeng.com/2009/01/do-08-rock-out-with-your-dock-out.html