Thursday, July 30, 2009

Launchpad is now an automatic, magical translation factory!

I've been using Launchpad to host my personal finance application wxBanker for a few years now. The thing I was hearing most often was that it wasn't localized; people wanted currencies to look the way they should in their country, and the application to be in their language. Let me explain how Launchpad helped me provide translations for my application, and how much of an utterly slothful breeze it has recently become.

Image courtesy of shirt.woot.com
Normally, to handle translations, an application has to wrap its strings with gettext, create a template, find translators and give them the template, collect the translation files back, and integrate them into the project. This is painful, and it's why many applications aren't localized and shut out most of the world as a result. One of the amazing features of Launchpad, however, is Rosetta, which brings translators TO your translations via a simple web interface, and then makes those translations available to the developer. With Rosetta, translators don't need to understand gettext or translation file formats; they just need to know two languages!
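To make the first step concrete, here is a minimal sketch of what "wrapping strings with gettext" looks like in Python. The domain name "wxbanker" and the "locale" directory are illustrative assumptions, not necessarily wxBanker's actual layout:

```python
# A minimal gettext setup sketch; "wxbanker" and "locale" are
# assumed names for the translation domain and catalog directory.
import gettext

# install() puts _ into builtins; with no catalog present it
# falls back to returning the original string unchanged.
gettext.install("wxbanker", localedir="locale")

# Wrapped strings become translatable entries in the template.
print(_("Welcome to your bank account"))
```

Tools like xgettext or pygettext then scan the source for these `_()` calls to produce the .pot template that gets uploaded to Rosetta.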



So that's what a translator sees. Notice how Launchpad even shows how other applications translated the same string. Once you generate a template and upload it, you can assign a translation group to your project, such as "Ubuntu Translators", so that your strings will be translated by volunteers on the Ubuntu project; if your project isn't relevant to Ubuntu, you can use the more generic Launchpad Translators group. Now all you have to do is wait for some translations, then download them and check them in to your code. Not too bad, right?

It isn't, but Launchpad has recently made it so much better. They started by adding an option to automatically import translation templates from your project. This means as you are developing, all you have to do is regenerate the template and commit, and new strings will show up for translators in Rosetta and be translated automatically (from the developer's perspective). Then today, they announced the other side of this, which is automatically committing those translations back into your code on a daily basis. This means that all I have to do is commit templates as I change strings, and Launchpad handles everything else. This is a profound tool for developers.

What's the next step? Well, from a developer's perspective, the translation template is a tool to give to the translators, or in this case to Launchpad. In the future, Launchpad could eliminate this step by generating the template itself from the code (this is what developers are doing by hand, after all), so that once the initial i18n/l10n framework is set up, truly all you have to do is commit code as normal, and Launchpad magically commits translations back.

All this work Launchpad is doing gives developers more time to develop while still having localized applications at a very minimal cost. This is continuous translation integration, and boy is it cool!

Monday, July 6, 2009

Simple timing of Python code

Often when I am writing in Python, I find that I want to see how long a particular function call or set of statements is taking to execute. Let's say I have the following code that gets executed frequently:

for i in range(10000000):
    x = 934.12 ** 32.61 * i / 453.12 ** 0.23

and I want to know how long it takes to execute to see if it is slowing down my app and should be optimized. Previously I would surround it as such:

import time; start = time.time()
for i in range(10000000):
    x = 934.12 ** 32.61 * i / 453.12 ** 0.23
print time.time() - start

This will print out the duration in seconds of that code segment, but it is more work and typing than I want, and leaves more to clean up later. I realized that the new "with" statement in Python could probably help me out. Let's create a little timer class that cooperates with it:

class Timer():
    def __enter__(self): self.start = time.time()
    def __exit__(self, *args): print time.time() - self.start

Now all we have to do is:

with Timer():
    for i in range(1000000):
        x = 934.12 ** 32.61 * i / 453.12 ** 0.23

You can also try:

with Timer():
    time.sleep(1.5)


For these, 0.28738 and 1.50169 are what I get, respectively. While something like this couldn't really replace application-wide profiling via a module like cProfile, it can be an extremely useful and quick way to see if your prototype is scalable or not. I usually end up having a debug.py or helpers.py file in my larger projects with little tools like this, and I'll probably end up adding this one as well.
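Along the same lines, here's a variant I might drop into such a helpers.py, written as a generator-based context manager from the standard library's contextlib; the label parameter is my own addition, not something from the class above:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label="block"):
    # Record the start time, yield control to the with-block's body,
    # then report the elapsed wall-clock time with a label.
    start = time.time()
    try:
        yield
    finally:
        print("%s took %.5f seconds" % (label, time.time() - start))

with timed("sleep test"):
    time.sleep(0.1)
```

The try/finally ensures the timing is printed even if the timed code raises an exception, which the bare Timer class above also handles via __exit__.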

Let me know if you are doing something similar, or if I've reinvented something that already exists. I'd also love to hear from people profiling their Python code and what techniques they are using, as I am just starting to learn about it.