24 hours a day, 7 days a week, 365 days per year...

April 18, 2015

Congratulations to Endless Computer

So the Endless Computer Kickstarter just reached its funding goal of 100K USD. A big heartfelt congratulations to the whole team; I am looking forward to receiving my system. For everyone else out there, I strongly recommend getting in on their Kickstarter: not only do you get a cool-looking computer with a really nice Linux desktop, you are also helping forward a company that has the potential to take the Linux desktop to the next level. And be aware that the computer is, at the end of the day, a standard computer (yet a very cool-looking one), so if you want to install Fedora Workstation on it you can :)

Dream Road

Right at the moment I’m writing this blog post, the Endless Kickstarter campaign page looks like this:

With 26 days to spare

I’m incredibly humbled and proud. Thank you all so much for your support and your help in bringing Endless to the world.

The campaign goes on, though; we added various new perks, including:

  • the option to donate an Endless computer to Habitat for Humanity or Funsepa, two charities that are involved in housing and education projects in developing countries
  • the full package — computer, carabiner, mug, and t-shirt; this one ships everywhere in the world, while we’re still working out the kinks of international delivery of the merch

Again, thank you all for your support.

April 17, 2015

Red Hat joins Khronos

So Red Hat is now formally a member of the Khronos Group, which many of you probably know as the shepherd of the OpenGL standard. We haven’t gotten all the little bits sorted yet, like getting our logo on the Khronos website, but our engineers are signing up for the various Khronos working groups etc. as we speak.

So the reason we are joining is all the important changes that are happening in graphics and GPU compute these days, and our wish to have more direct input on the direction of some of these technologies. Our efforts are likely to focus on improving the OpenGL specification by proposing some new extensions to OpenGL, and of course providing input and help with moving the new Vulkan standard forward.

So well known Red Hat engineers such as Dave Airlie, Adam Jackson, Rob Clark and others will from now on play a much more direct role in helping shape the future of 3D Graphics standards. And we really look forward to working with our friends and colleagues at Nvidia, AMD, Intel, Valve and more inside Khronos.

ODF Plus Ten Years

It’s time for another five-year update on ODF for spreadsheets. Read the initial post from 2005 and the 2010 update for context. Keep in mind that I only have an opinion on ODF for spreadsheets, not text documents.

TL;DR: Better, but ODF still not suitable for spreadsheets.

So what’s new? Well, basically one thing: we now have a related standard for formulas in ODF spreadsheets! This is something that obviously occurred 5-10 years too late, but better late than never. The Wikipedia article on OpenFormula is a fairly amusing example of the need to justify and rationalize mistakes that seems to surround the OpenDocument standard.

OpenFormula isn’t bad as standards go. It has a value system, operators, and a long list of functions, for example. Nice. Where it does have problems is in the many choices it allows implementations. For example, it allows a choice of whether logical values are numbers or their own distinct type. That would not have been necessary if spreadsheets had been considered in the original standard — at that time OO could have bitten the bullet and aligned with everyone else.

Back to the standard proper. What has happened in the past five years? In a word, nothing. We still have a standard whose aim was to facilitate interoperability, but isn’t achieving it.

There are actually two flavours of the standard: strict and extended. “Strict” has a well-defined syntax complete with an xml schema. Extended is strict plus add-your-own tags and attributes. No-one uses strict because there are common things that cannot be represented with it: error values, for example, or a simple line graph with a regression line and a legend.

When the Gnumeric team needs to add something outside “strict” we first look to see if, say, LO has already defined a syntax we can use. We only invent our own when we have to, and we try to read any LO extension that we can.

The OO/LO approach, however, appears to be to ignore any other producer and define a new extension. This is part of the “ODS by definition is what we write” mindset. The result is that we end up with multiple extensions for the same things.

So extensions are a free-for-all mess. In fact it is so big a mess that the schema for Gnumeric’s extensions that was hacked up a week ago appears to be the first. Let me rephrase that: for the past ten years no-one in the ODS world has been performing even basic document validation on the documents produced. There are document checkers out there, but they basically work by discarding anything non-strict and validating what is left.

There are also inherent performance problems with ODF. Many spreadsheets contain large areas of identical formulas. (“Identical” does not mean “textually identical” in ODF syntax but rather in the R1C1 syntax where “the cell to the left of this” always has the same name.) ODF has no concept of shared formulas. That forces reparsing of different strings that produce identical formulas over and over again. Tens of thousands of times is common. That is neither good for load times nor for file sizes.
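To make the R1C1 point concrete, here is a small sketch of the idea (my illustration, not Gnumeric code; the helper name is made up). A reference rendered relative to the formula's own cell produces the same text for every row, which is what makes a shared-formula representation possible:

```python
def rel_ref(formula_row, formula_col, ref_row, ref_col):
    """Render a cell reference in relative R1C1 form, as seen
    from the cell containing the formula (0-based coordinates)."""
    dr, dc = ref_row - formula_row, ref_col - formula_col
    row_part = f"R[{dr}]" if dr else "R"
    col_part = f"C[{dc}]" if dc else "C"
    return row_part + col_part

# "=A1" entered in B1 and "=A2" entered in B2 look different as text,
# but both mean "the cell to my left":
print(rel_ref(0, 1, 0, 0))  # RC[-1]
print(rel_ref(1, 1, 1, 0))  # RC[-1] -- the same string, hence shareable
```

A format with shared formulas needs to parse that string once; ODF instead forces one parse per cell.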

A more technical problem with ODF is that the size of the sheet is not stored. One consequence is that you can have two different spreadsheets that compute completely different things but save to identical ODF files. At least one of them will be corrupted on load. That is mostly a theoretical concern, but the lack of size information also makes it harder to defend against damaged (deliberately or otherwise) input. For example, if a file says to colour cell A12345678 red we have no way of telling whether you have a damaged file or a very tall spreadsheet.

Gnumeric continues to support ODF, but we will not be making it the primary format.


Truthiness in Python is occasionally confusing. Obviously, False is false and True is true, but beyond that, what then?

None is always false, though this doesn’t mean that False == None, which is a mistake I made early in my Python career. I was confused by how a nonexistent list and an empty list were both falsey, and somewhere in my mind I thought that they were both None as well. Not so much.

>>> a = None
>>> bool(a)
False
>>> b = []
>>> bool(b)
False
>>> bool(a is None)
True
>>> bool(b is None)
False

A stylistic note here: since None is a singleton (i.e. there exists only one instance of it), the proper syntax is foo is None, rather than foo == None. But I digress.

The empty values of data structures are always falsey. Hence:

>>> bool([])
False
>>> bool("")
False
>>> bool({})
False

And perhaps most confusingly:

>>> bool(0)
False
>>> bool(1)
True
>>> bool(2)
True
>>> bool(-31.4)
True

I mean, this makes sense because we know that 0 is false and 1 is true… but if you think about it, this also means that 0 is the empty value of an int (which means that 0 is false, but every other value of int or float is true). This doesn’t mean much in Python, of course, but I’ve been playing with Go lately, in which you have to initialize your variables before you can do anything with them, and suddenly the idea of an empty value makes a lot more sense (and the empty value for an int is indeed zero).
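To make the "empty value" idea concrete in Python terms (my example, not from the original post): calling each built-in type's constructor with no arguments yields that type's empty value, and every one of them is falsey, while anything non-empty is truthy:

```python
# Each type's no-argument constructor yields its "empty value" -- all falsey.
for empty in (int(), float(), str(), list(), dict(), set(), tuple()):
    assert not bool(empty)

# Anything non-empty (or non-zero) is truthy -- even "0" and [0].
for full in (2, -31.4, "0", [0], {0: 0}):
    assert bool(full)

print("all checks passed")
```

Note the last two cases: the string "0" and the list [0] are truthy, because truthiness looks at emptiness, not at what the container holds.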

Conversely, every non-empty value of a data structure is true. That means that a string with stuff in it, a dict with stuff in it, a list with stuff in it, etc. is true no matter what the stuff is. And so:

>>> hip = False
>>> bool(hip)
False
>>> bool([hip, hip])
True

Proving conclusively, as we all knew, that hips don’t lie.


Extra credit: do you know what ["hip", "hip"] is?

…(wait for it)…

It’s a hip hip array.

(Womp womp.)



April 16, 2015

GNOME Builder - 3.16.2

Builder 3.16.2 has arrived!

I released 3.16.0 a couple weeks ago without much fanfare. Despite many months of 16-hour days and weekends, it lacked some of the features I wanted to get into the "initial" release. So I didn't stop. I kept pushing through to make 3.16.2 the best that I could.

Over the next couple of weeks I plan on writing more detailed posts on the technology. I couldn't do that while building it because I lack the ability to multitask on that level. So let's take a visual look through Builder 3.16.2. In future posts we'll dive into the various components that make up LibIDE.

Here we have the new project selector. It is shown when you start Builder. I assume long-term we'll skip this screen and jump you right back into your old project.

project selector

This is the new project dialog. Currently you can create a project from an existing one, or check out from git. I'd like to have a workflow for quickly cloning from and possibly github.

new project dialog

While Builder is focused on writing GNOME software (and therefore autotools), you can create a project manually by selecting a directory. Unfortunately, you won't get fancy build features in doing so. Builder is abstracted in such a way that we can add additional build systems. If you have a build system you care about, we accept most patches.

open project

With a couple of minor patches to libgit2-glib, we got project cloning working in time. Please file bugs as you find them. We are likely to hit lots of corner cases with authentication.

clone project

Here is the editor. It looks incredibly simple, and that took a lot of work. One thing I'm particularly proud of is how it feels like a single widget rather than being composed of lots of little ones. Relentless iteration was key here.

This contains Builder's custom style scheme for GtkSourceView. Also notice the grid background. That work got pushed upstream into GtkSourceView this cycle.

If you take a look at the macros, you'll notice that macro names and expansions are highlighted. Also, type names are highlighted. This all comes from parsing the file with clang and extracting information from the resulting AST.


You can quickly switch buffers in the view stack. ls and buffers commands in the command bar will focus this (more on that later).

buffer list

If you are using autotools with C or C++, we can do a reasonable job of extracting CFLAGS to provide to clang. I went through painstaking effort to make that fast.

We provide you a drop down of symbols that we discovered in the file. Clicking on one will focus it in the editor.

You can switch between header and source quickly with F4.

symbols list

While we try to do the right thing with syntax formatting, sometimes we get it wrong. You can override the discovered settings with the editor tweak dropdown.

Note that we support editorconfig and modelines today. We also provide some gsettings to set defaults across all your projects on a per-language basis.

editor tweak

Clicking on the document title will give you options related to that document. This includes splits, preview, and save operations. More on preview later.

document menu

Splits are pretty universally required out of an editor these days. Builder was designed with that in mind.

Notice the highlighting on enums and function names below. Yeah, we do that too.

split views

Yup, we have a project tree. The venerable F9 toggles its visibility.

project tree

I rather like the new designs for creating new files. So I borrowed them for the project tree.

We try to be smart and expand the selected item while you are creating to give you all the context we can.

new file popover

You can even do a few things with what is in the tree. Clearly this is early on, we have lots more to do. No DnD support yet, sorry!

project tree menu

File rename. I prefer the popover design to editing in place with GtkCellRendererText. I know it's not much context, but it's something.

rename file

Honestly, I haven't spent that much time on preferences. But we do have some. I'd love for someone to come own this feature in Builder.


We have a few flavors of keybindings. I prefer Vim.

Late in the cycle, Alexander Larsson had an idea to do keybindings using the new "gtk-key-bindings" CSS property. It was no walk in the park, but it does work. Our Vim is implemented in CSS.

I often get asked why not just use NeoVim. I have a few reasons. First, I don't want to maintain 2 editors. And then 3, because people will want emacs too. Second, anything that resembles the real vim codebase I'm not touching with a 10ft pole. Third, I want all the features we put into a particular mode to improve the other modes equally. It won't ever be perfect, but it's certainly functional.

If you think you can manage to merge GtkTextView<->NeoVim, please go do that prototype somewhere and then we can definitely look at using it. But wholesale using another widget is out of the question. I'm not abstracting "IBreakpoint" 4 times for 3 implementations and an interface. That's insane.


Type to search in preferences works.

preferences search

Diagnostics that support Fix-Its can be applied directly from the editor.

apply fixit

Global search is handy for opening files. I certainly don't use the project tree to open files. This is way faster.

You can also search for documentation.

I do expect this feature to iterate a lot in the future. We have lots of designs to work through.

global search

One thing I never liked about Vim was that Shift+K would take you out of Vim to read the manpage. We can just show you the documentation side-by-side.

I still consider this a crutch though; we should really be providing good documentation at your fingertips when you need it (such as during autocompletion).


We have live preview for markdown.

I think we need a new markdown parser that lets us inject the cursor position. Right now we don't have a way to keep the editor and preview lines matched up vertically.

Works well for small files though.

markdown preview

You have no idea what it took to generate yellow squiggly lines with GtkTextView. I had to hide data in the semi-public, albeit out-of-space GtkTextAppearance structure. I ended up stealing a few bytes here and there in the unusable padding between structures.

Errors are easy of course (just set PANGO_UNDERLINE_ERROR). But both that and setting the color is one of those 10+ year bugs. Clearly we should be less crazy in 4.0.

diagnostic warnings

Additions and changes are rendered in the gutter to the left based on colors defined in your style scheme.

buffer changes

We have a dark mode!

You can enable and disable it with nighthack and dayhack in the command bar. Let this be an example of how bad I am at naming things.

Also, we have a command bar! You can execute various GActions in the widget hierarchy as well as some vim commands.

command bar

The command bar also has autocompletion.

command bar completion

For the truly adventurous of you, you can enable the experimental clang autocompletion. It crashes a lot. I don't even run it. You are crazy, don't do it!

clang completion

Completion of clang proposals builds upon our snippet engine. That means you can tab through the various parameters, with parameter information shown to give you some context. Context is paramount.

clang snippets

The snippet engine is pretty powerful. You can have tab-stops, linked edit points, edit points which transform values from other edit points, or none at all.

Also, they come with superfluous animation.


So many of the details in Builder aren't visible. So I'm glossing over half the work that went into this release. I'll be expanding more on that in the future, in posts that are not quite so screenshot heavy.

Happy Hacking!

Processing for Kids

Liam (7) has been playing a little recently with Processing, mostly drawing shapes and moving them around. The Hello Processing interactive video tutorial is an excellent introduction, for kids too. Thanks to Jon Nordby for suggesting Processing.

Liam is gradually working through the Getting Started with Processing book, typing in example code and changing it as the book suggests. Previously he has used Scratch and he’s started using the Lego Mindstorms programming environment, which is surprisingly visually complicated. But Processing is a nice introduction to real text-based programming, where you must type everything perfectly correctly or the computer will complain with incomprehensible error messages.

So far this seems to be the closest modern-day equivalent to my childhood experiences of sitting down with a Sinclair ZX81, Spectrum, or BBC Micro and trying things out from a book on BASIC. The expectations are low so you can easily feel that you’ve achieved something significant.


The Processing IDE has a very simple and obvious UI, and a Processing hello-world can be just one line, without any platform initialization and without specifying anything about exactly where your output should appear:

line(15, 25, 70, 90);

By default there’s just one screen that you draw on, and all functions and types appear to be in a global namespace. So you can start making things happen without the distraction of boilerplate code and without figuring out where in that mess to put your own code. You don’t need to learn about objects, inheritance, or encapsulation, though of course you should later.

Writing an iPhone or Android app might seem more interesting to modern kids, but they’d have to wade through so much kibble just to get started, while always noticing how far they are from achieving anything like the existing apps they see.

Processing is actually Java. When you write code, that code then seems to be the contents of a method in a (derived?) PApplet class, though you don’t see that other than in some compiler error messages. The functions such as size(), stroke(), point(), ellipse(), color(), strokeWeight(), etc, are actually member methods of this class. You don’t need to import any Java classes to use this API.

Java is fairly forgiving, particularly for the simple examples that people will start with. And it offers a nice route into object-oriented programming without the lower-level pain of C or C++.

Instead of just writing a bunch of code to run and then stop, you can instead define (override) setup() and draw() functions that do what you’d expect. The draw() method can make use of member variables such as mouseX and mouseY (these are not parameters of draw()). Likewise, you can implement keyPressed() and make use of keyCode. So you get a simple introduction to making a program interactive by doing event-driven programming.

Processing is not perfect, and I think you’d feel the lack of a real API and IDE as soon as your project became more serious, but it’s a great introduction to real programming.

April 15, 2015

Sponsored by GNOME Foundation

On May 7-9 2015 I’ll attend GNOME.Asia. It will be held in Depok, West Java, Indonesia (30 km south of Jakarta). I’m going there because:

  1. I’d like to understand the difference between GUADEC attendees and GNOME.Asia attendees (their interests, why they go there)
  2. I want to represent GNOME Release Team (answer any questions)
  3. I’d like to host a bugsquad session together with Andre Klapper

It will be the first time I’ve ever attended a non-European conference, so I’m quite curious what it’ll be like. I’ve requested partial sponsorship from the GNOME Foundation, which was approved even though I missed some kind of deadline. I guess it helps that I only asked for partial :P

I’ve already bought tickets and I’ll travel for the first time with Emirates. I can check in 30kg of luggage and carry 7kg on board. The 30kg is twice what I’ve ever brought on a plane. I dislike having a heavy suitcase and I’m usually amazed that some people are at the airport with a massive number of suitcases. Especially considering that having just one big and good suitcase is not cheap. Bringing loads of stuff with you seems like a huge burden to me (buying suitcases and the impact it has when you travel). If you go to a warm country, you can do without a lot of clothing. I’m still wondering if there really is something heavy I should bring :-P

Be IP is hiring!

In case some readers of this blog would be interested in working with Open Source software and VoIP technologies, Be IP ( is hiring a developer. Please see for the job description. You can contact me directly.

April 14, 2015

High Leap

I’ve been working at Endless for two years, now.

I’m incredibly lucky to be working at a great company, with great colleagues, on cool projects, using technologies I love, towards a goal I care deeply about.

We’ve been operating a bit under the radar for a while, but now it’s time to unveil what we’ve been doing — and we’re doing it via a Kickstarter campaign:

The computer for the entire world

The OS for the entire world


It’s been an honour and a privilege working on this little, huge project for the past two years, and I can’t wait to see what another two years are going to bring us.

empowering kids to create

raul gutierrez:

The best toys - Tinkertoys, Lego, Play-Doh, Lincoln Logs - allowed us to build and rebuild almost endlessly. With my kids, I noticed that these kinds of toys have become increasingly rare. Lego bricks are sold primarily as branded kits. Instead of a pile of blocks that could become anything, they are now essentially disassembled toys. Instead of starting with a child’s imagination of what could be, play is now fixed on a single endpoint, predetermined by Lego’s designers.

more and more often i feel the same way about computers. compare this to alec resnick's article how children what?:

And so in the twenty-three years since the creation of the World Wide Web, "a bicycle for the mind" became "a treadmill for the brain." One helps you get where you want under your own power. Another’s used to simulate the natural world and is typically about self-discipline, self-regulation, and self-improvement. One is empowering; one is slimming. One you use with friends because it's fun; the other you use with friends because it isn't. One does things to you; one does things for you. And they certainly aren't about helping us to do things with them.

Hackweek 12: improving GNOME password management, day 1

This week is Hackweek 12 at SUSE.

My hackweek project is improving GNOME password management, by investigating password manager integration in GNOME.

Currently, I'm using LastPass which is a cloud-based password management system.

It has a lot of very nice features, such as:
  • 2 factor authentication
  • Firefox and Chrome integration
  • Linux support
  • JS web client with no install required, when logging in from an unknown system (I never needed it myself)
  • Android integration (including automatic password selection for applications)
  • open-source CLI client (lastpass-cli), allowing you to extract account-specific information
  • encrypted data (nothing is stored unencrypted server side)
  • strong-password generator
  • supports encrypted notes (not only passwords)
  • server based (clients sync) with offline operations supported
However, it also has several drawbacks:
  • closed-source
  • subscription based (required for Android support)
  • can't be hosted on my own server
  • doesn't integrate at all with GNOME desktop
I don't want to reinvent the wheel (unless it is really needed), which is why I spent my first day searching for the various password managers available on Linux, comparing their features (and testing them a bit).

So far, I found the following programs:
  • KeePass (GPL):
    • version 1.x written in C++, still supported, not actively developed
    • version 2.x written in C# (Windows oriented), works with Mono under Linux
    • UI feels really Windows-like
    • DB format change between v1 and v2
    • supports encrypted notes
    • password generator
    • supports plugins ( a lot are available)
    • supports OTP (KeeOtp plugin provides 2-factor auth through TOTP; HOTP built-in)
    • shared db editing
    • support yubikey (static or hotp)
    • 2 Firefox extensions available (KeeFox, PassIFox)
    • 3 android applications available (KeePass2Android supports an alternative keyboard; KeepShare supports an alternative keyboard + the a11y framework to fill android application forms, like LastPass)
    • Chrome extension available
    • JS application available
    • CLI available
    • big ecosystem of plugins and other applications able to process file format

  • KeePassX (GPL)
    • Qt4 "port" of KeePass (feels more like a Linux application than KeePass does)
    • alpha version for DB v2 support
    • missing support for OTP
    • missing support for KeePassHttp (required by the Firefox extensions to talk to the main application); support is being done in a separate branch by a contributor, not merged
    • releases are very scarce (latest release is April 2014 despite commits in git; very few people are contributing, according to git)
    • libsecret dbus support is being started by a contributor

  • Mitro:
    • the company developing it was bought by Twitter last year; project released under GPL, no development since January.

  • Password Safe (Artistic license):
    • initially written by Bruce Schneier 
    • beta version available on Linux
    • written in wxWidgets 3.0 / C++
    • yubikey supported
    • android application available, no keyboard nor a11y framework usage, only use copy/paste (but allows sync of db with owncloud and other cloud platforms)
    • CLI available
    • 3 different DB formats (pre-2.0, 2.0, 3.0)
    • password history
    • no firefox extension, and the "auto-type" built-in function is anything but intuitive
    • support merge of various db

  • Encryptr:
    • same zero-knowledge framework as SpiderOak
    • node-js based

  • Pass:
    • simple script on top of text files / gnupg and optionally git (used for history; can also be used to manage hosting the file)
    • learning curve is not easy (mostly CLI); gnupg needs to be set up before usage
    • one file per password entry, should make 
    • very basic Qt GUI available
    • basic FF extension available
    • basic android application available
Unfortunately, none of those applications properly integrates (yet) with GNOME (master password prompt, locking the keyring when the desktop is locked, etc.).

I've also looked at gnome-keyring integration with the various browsers:
  • Several extensions already exist, one is fully written in Javascript and is working nicely (port to libsecret is being investigated)
  • Chrome has already gnome-keyring and libsecret integration
  • Epiphany already works nicely with gnome-keyring
  • No password generator is available in Firefox / Chrome / Epiphany (nor GTK+ on a more generic basis)
Unfortunately, each browser stores metadata for password entries in gnome-keyring in a slightly different format (field names), causing duplication of password entries and preventing keyring data from being shared across browsers.

Conclusions for this first day of hackweek:
  • Keepass file format seems to be the format of choice for password manager (a lot of applications written around it)
  • The password manager which would best fit my requirements is KeePass, but it is written in C# (I don't want the Mono stack to come back on my desktops) and is too Windows oriented, so it is not an option.
  • KeePassX seems to be the way to go (even if it is Qt based) but it lacks a lot of features and I'm not sure if it is worth spending effort adding those missing features there.
  • Pass is extremely simple (which would make hacking around it pretty straightforward) but requires a lot of work around it (android, GUI) to make it integrate nicely with GNOME.
I haven't yet made up my mind which solution is the best, but I'll probably spend the following days hacking around KeePassX (or a new program to wrap KP db into libsecret) and Firefox gnome-keyring integration.

Comments welcome, of course.

The chore of tuning PIDs

Tuning PIDs is one of those things you really don’t want to do, but can’t avoid in the acrobatic quad space. Flying camera operators don’t usually have to deal with this, but the power/weight ratio is so varied in the world of acro flying that you’ll have a hard time avoiding it there. Having a multirotor “locked in” for doing fast spins is a must. Milliseconds count.

FPV Weekend

So what is PID tuning? The flight controller’s job is to maintain a certain position of the craft. It has sensors to tell it how the craft is angled and how it’s accelerating, and there are external forces acting on the quad: gravity, wind. Then there’s a human giving it RC orders to change its state. All this happens in a PID loop. The FC either wants to maintain its position or is given an updated position. That’s the target. All the sensors give it the actual current state. Magic happens here, as the controller gives orders to individual ESCs to spin the motors so we get there. Then we look at what the sensors say again. Rinse and repeat.

A PID loop is actually a common process you can find in all sorts of computer controllers. Even something as simple as a thermostat does this. You have a temperature sensor and you drive a heater or an air conditioner to reach and maintain a target state.

The trick to solid control is to apply just the right amount of action to get to our target state. If there is a difference between where we are and where we want to be, we need to apply some force. If this difference is small, only a small force is required. If it’s big, a powerful force is needed. This is essentially what the P means: proportional. In most cases, as a controller, you are truly unhappy if you are anywhere other than where you were told to be. You want to correct this difference fast, so you provide a high proportional value/force. However, in the case of a miniquad, momentum will keep pulling you along once you have reached your target point and stop applying force. At this point the difference appears again and the controller will start correcting the craft, pulling it back in the opposite direction. This results in an unstable state, as the controller will be bouncing the quad back and forth, never reaching the target state of “not having to do anything”. The P is too big. So what you need is a value that’s high enough to correct the difference fast, but not so high that momentum gets you oscillating around the target.

So if we found our P value, why do we need to bother with anything else? Well, sadly, pushing air around with props is a complicated way to remain stationary. The difference between where you are and where you want to be isn’t determined by the aircraft alone. There are external forces in play, and they change. We can get a gust of wind. So what we do is correct that P value based on the changed conditions. Suddenly we don’t have a fixed-P controller; we have one with a variable P. Let’s move on to how P is dynamically corrected.

The integral part of the controller corrects the difference that suddenly appears due to new external forces coming into play. I would probably do a better job explaining this if I enjoyed maths, but don’t hate me, I’m a graphics designer. Magic maths corrects this offset. Having just the proportional and integral parts of the corrective measure is enough to form a capable controller, perfectly able to provide a stable system.

However for something as dynamic as an acrobatic flight controller, you want to improve on the final stage of the correction where you are close to reaching your target after a fast dramatic correction. Typically what a PI controller would get you is a bit of a wobble at the end. To correct it, we have the derivative part of the correction. It’s a sort of a predictive measure to lower the P as you’re getting close to the target state. D gives you the nice smooth “locked in” feeling, despite having high P and I values, giving you really fast corrective ability.
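The three terms can be seen in a toy simulation. This is not flight-controller code, just a minimal 1D sketch with made-up gains: a “craft” with momentum chases a target position, and the damping from the D term is what kills the P-only oscillation described above.

```python
# Toy illustration of the PID idea above: a 1D "craft" with momentum
# tries to reach a target position. Gains here are invented for the
# demo, not real flight-controller values.

def simulate(kp, ki, kd, target=1.0, steps=400, dt=0.01):
    pos, vel, integral, prev_error = 0.0, 0.0, 0.0, target
    for _ in range(steps):
        error = target - pos                    # where we want to be vs. where we are
        integral += error * dt                  # I: accumulates persistent offset
        derivative = (error - prev_error) / dt  # D: predictive, damps the approach
        force = kp * error + ki * integral + kd * derivative
        prev_error = error
        vel += force * dt                       # momentum: force changes velocity,
        pos += vel * dt                         # velocity changes position
    return pos

# P alone keeps overshooting and ringing; adding I and D settles it.
print(abs(simulate(50, 0, 0) - 1.0))   # large: still oscillating around the target
print(abs(simulate(50, 5, 10) - 1.0))  # tiny: locked onto the target
```

Try lowering kd back towards zero and you can watch the “bouncing back and forth” behaviour reappear.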

There are three major control motions of a quad that the FC needs to worry about. Pitch, for forward motion, is controlled by spinning the two back motors faster than the two front motors, thus angling the quad forward. Roll is achieved exactly the same way, but with the two motors on one side spinning faster than the two on the other. The last motion is rotation about the Z axis: yaw. That is achieved through torque and the fact that the propellers and motors spin in different directions. Typically the front-left and back-right motors spin clockwise and the front-right and back-left motors spin counter-clockwise. Thus spinning up/accelerating the front-left and back-right motors will turn the whole craft counter-clockwise (counter motion).
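As a sketch (not actual FC firmware; sign conventions vary between firmwares), the motor mixing described above can be written as four sums, one per motor:

```python
# Sketch of the motor mixing described above, for a quad with
# front-left/back-right props spinning clockwise. Inputs are
# normalized demands; real firmware also scales and clamps these.

def mix(throttle, pitch, roll, yaw):
    """pitch>0: lean forward (rear motors faster); roll>0: roll right
    (left motors faster); yaw>0: turn counter-clockwise (CW motors faster)."""
    return {
        "front_left":  throttle - pitch + roll + yaw,  # CW prop
        "front_right": throttle - pitch - roll - yaw,  # CCW prop
        "back_left":   throttle + pitch + roll - yaw,  # CCW prop
        "back_right":  throttle + pitch - roll + yaw,  # CW prop
    }

m = mix(0.5, 0.1, 0.0, 0.0)               # pitch forward:
print(m["back_left"] > m["front_left"])   # rear motors spin up -> True
```

Note that each command is purely differential: the pitch/roll/yaw terms cancel across the four motors, so total thrust stays at the throttle demand.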

I prepared a little cheat sheet on how to go about tuning PIDs on the NAZE32 board. Before you start though, make sure you set the PID looptime as low as your ESCs allow. Usually ESCs accept pulses 400 times a second, which is equivalent to a looptime of 2500. More expensive ESCs can do 600Hz, and some, such as the minuscule KISS ESCs, can go as low as a looptime of 1200.

ESC refresh rate    NAZE32 looptime
286 Hz              3500
333 Hz              3000
400 Hz              2500
500 Hz              2000
600 Hz              1600
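For reference, the table is just the ESC update period expressed in microseconds. A quick sketch of the conversion (the table rounds the exact 3497, 3003 and 1667 to the conventional 3500, 3000 and 1600):

```python
# Looptime is the ESC update period in microseconds:
# looptime = 1,000,000 / refresh rate (Hz).

def looptime_us(refresh_hz):
    return round(1_000_000 / refresh_hz)

for hz in (286, 333, 400, 500, 600):
    print(hz, "Hz ->", looptime_us(hz))
```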

You do this in the CLI tab of baseflight:

set looptime=2500

I hope this has been as helpful for some of you as it was for me. :)

Quick Guide on PID tuning

April 10, 2015

PrefixSuffix revived

In 2002 I released a little GNOME app to rename files by changing the start or end of the filename, using gtkmm and gnome-vfs(mm). It worked well enough and was even packaged for distros for a while before the dependencies became too awkward.

I’ve never heard of a single person using it, but I still need the app now and then so I just updated it and put it on GitHub, changing it from gtkmm 2.4 to gtkmm 3, removing the bakery dependency, and changing from gnome-vfs to GIO. I also removed most signs of my 2002 code style.

It seems to work, but it really needs a set of test cases.

I’m sure that the performance could be vastly improved but I’m not greatly interested in that so far. Patches are welcome. In particular, I haven’t found an equivalent for the gnome_vfs_xfer_uri_list() function for renaming several files at once and I guess that repeated sequential calls to g_file_set_display_name_async() are probably not ideal.


the web we lost

Anil Dash (original post):

We've increasingly coupled our content and our expression to devices that get obsolete more and more quickly. And when you get to this sense of these new devices, formats get harder and harder to preserve and this is especially true when they're these proprietary or underdocumented formats. Because we've given up on formats. The reality is: those of us that cared about the stuff have lost. Overall we've lost. Very very few the consumer experiences that people use or the default apps that come with their devices work around open formats. There's some slight exceptions around photos, obviously JPEG is doing pretty well, HTML is doing okay, but the core interactions of a small short status update or the ability to tell somebody you like something, those things aren't formats or protocols at all. They're completely undocumented, they can be changed at any time. And just even the expectation that they would be interoperable, that is perhaps the most dramatic shift from the early days of the social web.

the problem with the web we have today isn't that it is worse than the web we had. it's actually better in most regards - except it's harder and more closed up. the opposite is what we need, otherwise people will keep on stumbling into seemingly open ad-supported spaces, not realizing what they are doing. until the day they decide they want to leave and can't. kinda like the hotel california:

You can check-out any time you like, but you can never leave!

Updating device firmware on Linux

If you’re a hardware vendor reading this, I’d really like to help you get your firmware updates working on Linux. Using fwupd we can already update device firmware using UEFI capsules, and also update the various ColorHug devices. I’d love to increase the types of devices supported, so if you’re interested please let me know. Thanks!

April 09, 2015

Headless encrypted boot with Fedora Server

Here is a recipe for using encrypted boot on a Fedora Server system that does not have a monitor or keyboard attached during normal use.

I’ll use Fedora 21 Server, and will have a dedicated encrypted volume group for data but leave the main operating system volume group unencrypted. The encryption key will be stored on a USB memory stick. When it is connected the system will boot normally; otherwise it will wait for a while for it to be connected and finally fall back to emergency mode.


When installing, create a separate volume group in which to put the actual data volumes. To do that, select “I will configure partitioning” when setting up disk storage.

On the next page, allow the installer to create the mount points automatically: we’ll edit them next.

Next, set up one of the mount points for the data volumes. In this example, I have used /home.

In the screenshot above, I’ve decreased the size for the / volume to make space for a new volume, then added one and set its mount point to /home. It is currently in the automatically-created fedora-server volume group.

Next, put this new volume in a volume group of its own by selecting /home and choosing “Create a new volume group …” in the Volume Group drop-down.


Give the new volume group a name (for example, “data”), and choose to have it encrypted by clicking on the checkbox.


Once you have added the volumes you want and clicked Done, you will be asked to set a passphrase.

The summary of changes shows in more detail what will happen. You can see that a separate partition, vda3, will be the LUKS (encrypted) device, acting as an LVM physical volume: that’s for our data volume group.

Continue with the installation as normal and reboot when prompted. Don’t forget your passphrase!

Prepare encryption key media

Now, find a USB memory stick, create an ext4 filesystem on it whose label is “Key”, and mount it at /mnt/key. For example:

# mkfs.ext4 -L Key /dev/sdb1
# mkdir /mnt/key
# mount /dev/sdb1 /mnt/key

Place the encryption passphrase in /mnt/key/data-key — note, it must not have a newline at the end. We can use read and printf for this: it serves the dual purpose of removing the newline and keeping the passphrase out of the shell’s history.

# read -r a ; printf %s "$a" > /mnt/key/data-key
(enter passphrase, press enter, then Ctrl-D)
# chmod 600 /mnt/key/data-key
# chcon -u unconfined_u -t unlabeled_t /mnt/key/data-key
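If you want to convince yourself that printf %s really omits the trailing newline (the whole point of the read/printf trick), compare byte counts on a scratch file:

```shell
# Sanity check on a scratch file: printf %s writes the passphrase bytes
# exactly, while echo appends a newline (which would corrupt the key).
a='correct horse battery'        # example passphrase, 21 characters
printf %s "$a" > /tmp/key-demo
echo "$a"      > /tmp/key-demo-echo
wc -c < /tmp/key-demo            # 21 bytes: exactly the passphrase
wc -c < /tmp/key-demo-echo       # 22 bytes: echo added a trailing newline
```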

Next, edit /etc/fstab and add a line describing the filesystem we just made. Here is the line to add:

LABEL=Key  /mnt/key  ext4  defaults,ro,x-systemd.device-timeout=60  0  0

While there, adjust the x-systemd.device-timeout value for /home from 0 (no timeout) to 60 (seconds). These two timeouts ensure that, rather than waiting indefinitely, the system will fall back to emergency mode if the USB device is not present.

Test the fstab changes by running umount /mnt/key and then mount /mnt/key. If all is well there will be no error messages.

Now that the passphrase is stored away we need to specify its location so it can be retrieved at boot. To do this, edit /etc/crypttab. Change “none” at the end of that line so that it contains the passphrase filename:

luks-... UUID=... /mnt/key/data-key

At boot time, the filesystem will be mounted automatically in order to read the passphrase.

Set up encrypted swap

We definitely want the swap partition to be encrypted, but it doesn’t need to be in the encrypted volume group. Why? Because a new key can be generated each boot and discarded on shutdown. To do that, edit /etc/crypttab again and add this line:

cryptswap  /dev/mapper/fedora--server-swap  /dev/urandom  swap

The /dev/mapper/ name must match the existing logical volume device mapper name: you can check it with “lsblk”.

To actually use that mapped device, edit /etc/fstab and change the swap line to use /dev/mapper/cryptswap:

/dev/mapper/cryptswap  swap  swap  defaults  0  0

No need to recreate initial ramdisk

At boot, the job of the initial ramdisk is to find and mount the root (/) filesystem. As we haven’t made any changes to that, only to /home, there is no need to run dracut.

Reboot and test it!

If the system is booted without the encryption media inserted, after a minute it will drop into emergency mode. The system will allow you to log in as root and make adjustments, but of course /home will not be available unless you unlock it from the command line with the passphrase using cryptsetup.

Bonus: Automatically unmount encryption key media after boot

As things stand, the Key filesystem will remain mounted all the time, which means the device can’t be safely removed while the system is running. Here is a systemd service file you can use to unmount the encryption passphrase USB storage device after it has fulfilled its purpose at boot time:

[Unit]
Description=Unmount encryption passphrase media at boot

[Service]
Type=oneshot
ExecStart=/usr/bin/umount /mnt/key

[Install]
WantedBy=multi-user.target


Save it as /etc/systemd/system/umount-key.service and enable it:

# systemctl enable umount-key.service

All done

That’s it! Hope you find this useful.

I’d love to have this work for whole-disk encryption, where the root filesystem is also encrypted, and I’d also love to be able to fall back to entering the passphrase at the console, rather than dropping into emergency mode. However it doesn’t seem to be possible just yet.

UPDATED: Fixed the read/printf line to handle escape characters better as suggested by Ralph.

April 08, 2015

Hughes Hypnobirthing

If you’re having a baby in London, you might want to ask my wife for some help. She’s started offering one to one HypnoBirthing classes for mothers to be and birth partners designed to bring about an easier, more comfortable birthing experience.

It worked for us, and I’m super proud she’s done all the training so other people can have such a wonderful birthing experience.

DFN Workshop 2015

As in the last few years, the DFN Workshop happened in Hamburg, Germany.

The conference was keynoted by Steven Le Blond, who talked about targeted attacks, e.g. against dissidents. He mentioned that he had already presented the content at the USENIX Security conference, which some consider excellent. He first showed how he used Skype to look up IP addresses of his boss, and how similar targeted attacks were executed in the past. Think Stuxnet. His main focus, though, was attacks on NGOs, specifically an attacker sending malicious emails to the victim.

In order to find out what attack vectors were used, they contacted over 100 NGOs to ask whether they were attacked. Two NGOs, which are affiliated with the Chinese WUC, which represents the Uyghur minority, received 1500 malicious emails, out of which 1100 were carrying malware. He showed examples of those emails and some of them were indeed very targeted. They contained a personalised message with enough context to look genuine. However, the mail also had a malicious DOC file attached. Interestingly enough though, the infrastructure used by the attacker for the targeted attacks was re-used for several victims. You could have expected the attacker to have their infrastructure separated for the various victims, especially when carrying out targeted attacks.

They also investigated how quickly the attacker exploited publicly known vulnerabilities. They measured the time of the malicious email sent minus the release date of the vulnerability. They found that some of the attacks were launched on day 0, meaning that as soon as a vulnerability was publicly disclosed, an NGO was attacked with a relevant exploit. Maybe interestingly, they did not find any 0-day exploits launched. They also measured how the security precautions taken by Adobe for their Acrobat Reader and Microsoft for their Office product (think sandboxing) affected the frequency of attacks. It turned out that it does help to make your software more secure!

To defend against targeted attacks based on spoofed emails he proposed to detect whether the writing style of an email corresponds to that of previously seen emails of the presumed contact. In fact, their research shows that they are able to tell whether the writing style matches that of previous emails with very high probability.

The following talk assessed end-to-end email solutions. It was interesting, because they created a taxonomy for 36 existing projects and assessed qualities such as their compatibility, the trust-model used, or the platform it runs on.
The 36 solutions they identified were (don’t hold your breath, wall of links coming): Neomailbox, Countermail, salusafe, Tutanota, Shazzlemail, Safe-Mail, Enlocked, Lockbin, virtru, APG, gpg4o, gpg4win, Enigmail, Jumble Mail, opaqueMail,,, Mailpile, Bitmail, Mailvelope, pEp, openKeychain, Shwyz, Lavaboom, ProtonMail, StartMail, PrivateSky, Lavabit, FreedomBox, Parley, Mega, Dark Mail, opencom, okTurtles, End-to-End,, and LEAP (Bitmask).

Many of them could be discarded right away, because they were not production ready. The list could be further reduced by discarding solutions which do not use open standards such as OpenPGP, but rather proprietary message formats. After applying more filters, such as that the private key must not leave the realm of the user, the list could be condensed to seven projects. Those were: APG, Enigmail, gpg4o, Mailvelope, pEp,, and

Interestingly, the latter two were not compatible with the rest. The speakers attributed that to the use of GPG/MIME vs. GPG/Inline and they favoured the latter. I don’t think it’s a good idea though. The authors attest pEp a lot of potential and they seem to have indeed interesting ideas. For example, they offer to sign another person’s key by reading “safe words” over a secure channel. While this is not a silver bullet to the keysigning problem, it appears to be much easier to use.

As we are on keysigning. I have placed an article in the conference proceedings. It’s about GNOME Keysign. The paper’s title is “Welcome to the 2000s: Enabling casual two-party key signing” which I think reflects in what era the current OpenPGP infrastructure is stuck. The mindsets of the people involved are still a bit left in the old days where dealing with computation machines was a thing for those with long and white beards. The target group of users for secure communication protocols has inevitably grown much larger than it used to be. While this sounds trivial, the interface to GnuPG has not significantly changed since. It also still makes it hard for others to build higher level tools by making bad default decisions, demanding to be in control of “trust” decisions, and by requiring certain environmental conditions (i.e. the filesystem to be used). GnuPG is not a mere library. It seems it understands itself as a complete crypto suite. Anyway, in the paper, I explained how I think contemporary keysigning protocols work, why it’s not a good thing, and how to make it better.

I propose to further decentralise OpenPGP by enabling people to have very small keysigning “parties”. Currently, the setup cost of a keysigning party is very high. This is, amongst other things, due to the fact that an organiser is required to collect all the keys, to compile a list of participants, and to make the keys available for download. Then, depending on the size of the event, the participants queue up for several hours. And then tick checkboxes on pieces of paper. A gigantic secops fail. The smarter people sign every box they tick so that an attacker cannot “inject” a maliciously ticked box onto the paper sheet. That’s not fun. The not-so-smart people don’t even bring their sheets of paper, or have them printed by a random person who happens to also be at the conference and, surprise, has access to a printer. What a gigantic attack surface. I think this is bad. Let’s try to reduce that surface by reducing the size of the events.

In order to enable people to have very small events, i.e. two people keysigning, I propose to make most of the actions of a keysigning protocol automatic. So instead of requiring the user to manually compare the fingerprint, I propose that we securely transfer the key to be signed. You might rightfully ask, how to do that. My answer is that we’ve passed the 2000s and that we bring devices which are capable of opening a TCP connection on a link local network, e.g. WiFi. I know, this is not necessarily a given, but let’s just assume for the sake of simplicity that one of our device we carry along can actually do WiFi (and that the network does not block connections between machines). This also prevents certain attacks that users of current Best Practises are still vulnerable against, namely using short key ids or leaking who you are communicating with.

Another step that needs to be automated is signing the key. It sounds easy, right? But it’s not just a mere gpg --sign-key. The first problem is that you don’t want the key to be signed to pollute your keyring. That can be fixed by using --homedir or the GNUPGHOME environment variable. But then you also want to sign each UID on the key separately. And this is where things get a bit more interesting. Anyway, to make a long story short: we’re not able to do that with plain GnuPG (as of now) in a sane manner. And I think it’s a shame.

Lastly, sending the key needs to be as “zero-click” as possible, too. I propose to simply reuse the current MUA of the user. That sounds easy, but unfortunately, it’s only 2015 and we cannot interact with, say, Evolution and Thunderbird in a standardised manner. There is xdg-email, but it has annoying bugs and doesn’t seem to be maintained. I’m waiting for a sane Email-API. I mean, Email has been around for some time now, let’s now try to actually use it. I hope to be able to make another more formal announcement on GNOME Keysign, soon.

the userbase for strong cryptography declines by half with every additional keystroke or mouseclick required to make it work

— attributed to Ellison.

Anyway, the event was good, I am happy to have attended. I hope to be able to make it there next year again.

Let’s start

So here it is: another development blog on the web.

Let’s be fair and realize that there are plenty of development blogs out there in the wild. So, why am I doing it – again?

Well, I do many many things. Things that I really like. And I feel it is time to share these things.


Let’s do it.

GNOME Calendar: poll results


As some of you are aware (or not), I did a public poll of the most wanted features for GNOME Calendar. It was a success, and 216+ votes were collected by the time I’m writing this post.

The results are detailed below.

1. Calendar management

The earlier versions of Calendar indeed had a very raw calendar management dialog. It was dropped a long time ago – in fact, more than 2 years ago – in favor of a new, redesigned dialog that never actually came. Some experimental work happened before 3.16, but it wasn’t complete nor stable enough to be merged.

GNOME Calendar 3.16 only allows changing the visibility of calendars

Since we’re talking about the topmost wanted feature to land on Calendar, we wouldn’t dare to make it like any other normal feature. No. Period.

This is what most people want from Calendar, and it’ll be done with all the care, attention to detail and love that the Calendar development team can give it. We’re keeping in constant, close contact with the Design Team to deliver high-quality mockups, and many experiments are going on.

We’ll be able to make it for the 3.18 release. Thanks to Allan Day and Lapo Calamandrei for their commitment. They are true heroes!

2. Week view

Calendar also had a week view that didn’t make it into 3.16. We didn’t have enough time nor design material to make the old week view reach GNOME quality standards. It wasn’t working correctly and had many design flaws (not only about aesthetics, but subtler things like multiday events, all-day events, et al).

Week view before the official release

We hope to have new mockups from the Design Team this cycle, and then we can craft it correctly. Again, this is very much a core feature that absolutely cannot be poorly done.

3. Alarms, attachments & attendees

This will happen with the major rework of the edit dialog. I personally recognize the importance of this item but, since Calendar is still a newcomer in GNOME’s house, it is not the highest-priority item.

4. Agenda view

No idea what it’ll look like yet, and it will almost surely land only in 3.20.

What to expect

I hope that you guys had a taste of Calendar with the preview release this cycle. Besides the features discussed above, we’ll have some other background improvements, stability fixes and many, many bugfixes.

Best regards!

GNU Cauldron 2015

This year the GNU Cauldron Conference is going to be held in Prague, Czech Republic, from August 7 to 9, 2015.

The GNU Cauldron Conference is a gathering of users and hackers of the GNU toolchain ecosystem.

Meaning that if you are interested in projects remotely related to the GNU C library, GNU Compiler Collection, the GNU Debugger or any toolchain runtime related project that has ties with the GNU system you are welcome!

If you are part of a Free Software project that uses the GNU toolchain and would like your voice to be heard, or want to hang out with other users and hackers of that space, you are even more than welcome! If you have crazy ideas you'd like to discuss over a nice beverage of your choice, please join!

You just have to send a nice note to saying that you are coming, and that would act as a registration. The number of seats is limited, so please do not drag your feet too much :-)

And if you want to present a talk, well, there is a call for papers under way. You just have to send your abstract to The exact call for papers can be read here.

So see you there, gals'n guys!

GNU Hackers Meeting 2011 in Paris

In case you are in the Paris area and don't know already, there is a GNU Hackers Meeting event being held from Thursday 25th to Sunday 28th August, 2011 at IRILL. If you are a GNU user, enthusiast, or contributor of any kind, feel free to come. I guess you can still drop an email to

For folks around on Wednesday (yeah, that's tomorrow), we are having a dinner around 8 PM at the Mussuwam, a Senegalese restaurant in Paris, near Place d'Italie. When you get there, just give them the secret password (which is 'GNU') and they'll show you where the rest of the crowd sits. Be sure to keep that password secret though. No one else should be in the know.

Happy hacking and I hope to see you guys there.

April 04, 2015

Automating translation of strings in OSM

I’ve been thinking a little bit about automating the translation of maps into multiple Indic languages ever since I saw the Kannada map at geoBLR in March.

I started some work on it today, and I have lots of interesting things to report. Right now I am mostly transliterating as opposed to translating but if a dictionary of common words/tags can be compiled, upgrading the script to translate instead of transliterating should be doable.

Here’s the algorithm I followed:
  1. Get the nodes within a bounding box from OSM using the Python wrapper for Overpass, overpy. This returns a collection of nodes and associated ID, tags, lat, lon and other attributes. This can also be repeated for ways by using the corresponding overpy query.
  2. Filter nodes that have tags
  3. From the result of the filter, identify nodes with Indic language tags – eg:[“name:kn”]
  4. Transliterate the string value for tag[“name:kn”] to another language – I used Tamil – and store it within tag[“name:ta”] – I used the Indic transliterator APIs from SILPA for this
  5. Create a new changeset and upload the result(node with tag[“name:ta”]) to OSM using osmapi

I did it only for one node:

  • Indic to Indic transliterations – ✓ The Indic transliterator APIs seem to convert quite effortlessly from one Indic language to another. Right now, support is available for Hindi, Tamil, Punjabi, Gujarati, Malayalam, Oriya, Bengali and Kannada. So, if a Kannada tag exists in OSM, the same text can be transliterated into multiple Indic languages using the naive algorithm I described above.


  • English to Indic transliterations – X: Though the Indic Transliterator works for English To Indic transliterations as well, it is not very useful. This is because only English words that are in the CMU dictionary are capable of being transliterated – which means that we can’t transliterate “Raajaajeenagar”, even if we had a custom tag for transliteration on OSM. On emailing the developer of the transliterator about extending the capabilities of English transliteration, I was told that extending the dictionary by adding additional words is one option. I am not sure of how feasible this is, or how much more optimal it is as compared to translating to one Indic language and transliterating+translating to the rest.
  • Translations of English Words – X – Right now, I am only able to transliterate words, but if a list of common words(I am guessing all the OSM tags, and other common words) could be compiled, and translated into all the Indic languages, the translation process can be automated quite easily. This would require the algorithm to have 2 additional steps
    1. From an Indic tag (i.e., an already-translated tag), we would have to identify portions that are in the translations list, and leave them out of the transliteration process.
    2. For the word(s) identified in step 1, we must find a translation in the translations list for the language we are translating into. This must then be suffixed or prefixed with the transliterated portion. I am guessing suffix will be the norm, while prefixes might occasionally be necessary.
  • Tracking node version numbers – X – Right now, I am unable to track the version attribute of a node tag using the overpy API. I entered the version number manually. Not sure if I am missing something. This is just a “need-to-figure-out” issue more than anything. This is very important for automatically updating a node to the server because if there’s a mismatch between the version number being passed to the API and the version number on the server, the API won’t work.
  • Which Indic language to begin transliterating from – Issues might arise if a language like Tamil – where the letter for ka, kha, ga, gha etc. is the same – is used to transliterate to Hindi. But if we use a language like Kannada or Hindi as the source, this issue can probably be resolved easily.

The script is on Github. Feel free to fork it, use it, work on it, edit it and suggest changes, different language, other possibilities, alternatives etc. Pull Requests very welcome. :)

This is my first time writing code in Python, so advice on improving code would be very welcome. Also, let me know if I’m missing something else, obvious or subtle.

April 02, 2015

JdLL 2015

Presentation and conferencing

Last week-end, in the Salle des Rancy in Lyon, GNOME folks (Fred Peters, Mathieu Bridon and myself) set up our booth at the top of the stairs, the space graciously offered by Ubuntu-FR and Fedora being a tad bit small. The JdLL were starting.

We gave away a few GNOME 3.14 Live and install DVDs (more on that later), discussed much-loved features, and hated bugs, and how to report them. A very pleasant experience all-in-all.

On Sunday afternoon, I did a small presentation about GNOME's 15 years. Talking about the upheaval, dragging kernel drivers and OS components kicking and screaming to work as their APIs say they should, presenting GNOME 3.16 new features and teasing about upcoming GNOME 3.18 ones.

During the Q&A, we had a few folks more than interested in support for tablets and convertible devices (such as the Microsoft Surface, and Asus T100). Hopefully, we'll be able to make the OS support good enough for people to be able to use any Linux distribution on those.

Sideshow with the Events box

Due to scheduling errors on my part, we ended up with the "v1" events box for our booth. I made a few changes to the box before we used it:

  • Removed the 17" screen, and replaced it with a 21" widescreen one with built-in speakers. This is useful when we can't set up the projector because of the lack of walls.
  • Upgraded machine to 1GB of RAM, thanks to my hoarding of old parts.
  • Bought a French keyboard and removed the German one (with missing keys), cleaned up the UK one (which still uses IR wireless).
  • Threw away GNOME 3.0 CDs (but kept the sleeves that don't mention the minor version). You'll need to take a sharpie to the small print on the back of the sleeve if you don't fill it with an OpenSUSE CD (we used Fedora 21 DVDs during this event).
  • Triaged the batteries. Office managers, get this cheap tester!
  • The machine's Wi-Fi was unstable, causing hardlocks (please test again if you use a newer version of the kernel/distributions). We tried to get onto the conference network through the wireless router, and installed DD-WRT on it as the vendor firmware didn't allow that.
  • The Nokia N810 and N800 tablets will be going to kernel developers who are working on Nokia's old Linux devices and upstreaming drivers.
The events box is still in Lyon, until I receive some replacement hardware.

The machine is 7 years old (nearly 8!) and only had 512MB of RAM; after the 1GB upgrade it was usable, and many people were impressed by the speed of GNOME on a legacy machine like that (probably more so than on a brand new one stuttering because of a driver bug, for example).

This makes you wonder what the use for "lightweight" desktop environments is, when a lot of the features are either punted to helpers that GNOME doesn't need or not implemented at all (old CPU and no 3D driver is pretty much the only use case for those).

I'll be putting a small SSD into the demo machine, to give it another speed boost. We'll also be needing a new padlock, after an emergency metal saw attack was necessary on Sunday morning. Five different folks tried to open the lock with the code read off my email, to no avail. Did we accidentally change the combination? We'll never know.

New project, ish

For demo machines, especially newly installed ones, you need some content to demo applications with. This is my first attempt at uniting GNOME's demo content for release notes screenshots with some additional content that's free to redistribute. The repository will eventually move to, obviously.


The new keyboard and mouse, monitor, padlock, and SSD (and my time) were graciously sponsored by Red Hat.

GNOME 3.16

GNOME 3.16 was released last week and is the result of more than 30,000 commits by over 1,000 people. I am always impressed by those numbers. Thank you all!


It comes with many improvements, including a revamped notification system. Allan detailed the features on his blog; go read it for the whole story: In case you didn’t notice….

On the Brussels front, Guillaume is so busy he forgot to organize a release party; he has since come back to his senses and it will happen, but still, we're losing ground here. So I went partying elsewhere, in this case in Lyon, during their annual free software event ("les JDLL"). We had a booth thanks to Bastien, and Mathieu was also there to help demonstrate and discuss GNOME.

GNOME booth at JDLL 2015

See you in six months!

Another release, another release video

GNOME 3.16 has been out for a week now, and so has the GNOME 3.16 release video. It’s always a great feeling to publish a production after having it in the works for a long time. Spread over 3 months, I have spent approximately 30 evenings in total on this thing, and I will gladly do it again! I learn a lot from making them.

During this period the engagement mailing list has given me valuable feedback on everything from manuscript to animation. Karen Sandler and Mike Tarantino have also been a great help, providing amazing voice-over work for this release video. High-five!

You might find a few new things in this release video compared to past release videos:

  • I have focused more on making the content shine, using animations only when they complement the content. A big thanks goes to GNOME for letting me borrow a HiDPI screen to make this possible.
  • I was out early with the first draft of the release video, which left the translation team some time before release to create translated subtitles for the video.

  • Another nice touch is that I had time to make a custom thumbnail for this release, so the video appears nicely on social media.

There’s been some nice hype around the release. I’ll share a few opinions with you:

Most polished and nicest looking desktop on Linux. Period.

GNOME 3.16, I wanna kiss you.
-Lucas Zanella

Would like to congratulate the +GNOME folks and everything else who has contributed to the project for this fantastic 3.16 release. It has become a tradition for every new Gnome release to be a lot better than the previous one and 3.16 continues it.

Oh, and some cool guys on YouTube featured the GNOME 3.16 release video in their show.

Huge credit goes to the GNOME Design Team for the awesome assets, Anitek for the awesome music, the engagement team for the awesome feedback, and the translation team for the awesome subtitles. Also thanks to everyone who helped me by fixing visual bugs early, so I could record the new improvements. GNOME 3.18 will be amazing.

Minis and FPV


I’ve now got enough time in the hobby to actually share some experiences that could perhaps help someone who is just starting.

Cheap parts

I like cheap parts just like the next guy, but in the case of electronics, avoid them. The frame is one thing: get the ZMR250. Yes, it won’t be nearly as tough as the original Blackout, but it will do the job just fine for a few crashes, and rebuilding aside, you can get about 4 of them for the price of the original before the plates give. But electronics are a whole new category. Cheap ESCs will work fine, until they smoke mid-flight. They will claim to handle 4S voltage fine, until you actually attach a 4S pack and blue smoke makes its appearance. Or you get a random motor/ESC sync issue. And for FPV, when a component dies mid-flight, it’s the end of the story, whether it’s the drive (motor/ESC), the VTX or the board cam.

No need to go straight to T-Motor, which usually means paying twice as much as a comparable competitor. But avoid the really cheap sub-$10 motors like RCX, RCTimer (although they make some decent bigger motors), and the generic Chinese eBay stuff. In the case of motors, paying $20 for a motor means it’s going to be balanced and the pain of vibration alleviated. Vibrations on minis don’t just ruin the footage due to rolling shutter; they actually mess up the IMU in the FC considerably. I like the SunnySky x2204s 2300kv for a 3S setup and the Cobra 2204 1960kv for a 4S. The rather cheap DYS 1806 also seem really well balanced.

Embrace the rate

Rate mode means giving up the auto-leveling of the flight controller and doing it yourself. I can’t imagine flying line of sight (LOS) in rate mode, but for first-person view (FPV) there is no other way. The NAZE32 has a cool mode called HORI that lets you do flips and rolls really easily, as it rebalances the craft for you, but flying HORI will never get you the floaty smoothness that makes you feel like a bird; the footage will always have a jerky quality to it. In rate mode, a tiny little 220 quad will feel like a 2-meter glider, yet will fit in between those trees. I was flying HORI when doing park proximity, but it was time wasted. Go rate straight from the ground; you will have way more fun.

Receiver woes

For the flying camera kites, it’s usually fine to leave stuff dangling. Not for minis: anything that can get chopped off by the mighty blades, will. These things are spinning so fast that antennas have no chance, and if your VTX gets loose, it will get seriously messed up as well. You would not believe what a piece of plastic can do when it’s spinning 26 thousand times a minute. On the other hand, you can’t bury your receiver antenna in the frame: carbon fibre is super strong, but also super RF-insulating, so you have to bring the antenna outside as much as possible. Those two requirements don’t quite go together, but the best practice I found was taping one of the antennas to the bottom of the craft and having the other stick out sideways on top. The cheapest and best way I found was using a zip tie to hold the angle and heat-shrinking the antenna onto it. It looks decent and holds way better than a straw or some such.

Next time we’ll dive into PID tuning, the most annoying part of the hobby (apart from looking for a crashed bird ;).

April 01, 2015

ChromeOS getting an update

I was among the first to order when Google released the Chromebook, an ultra-portable low-cost laptop that instantly connects you to the Internet. The idea behind the Chromebook is that we don't really need to store things locally anymore. Instead, we use the Cloud for email, documents, collaboration, video, and pretty much everything we do. So the Chromebook's goal is to get you online as quickly and easily as possible, and connect you to those Cloud services. As suggested by the name, the Chromebook comes pre-loaded with Google's Chrome web browser.

Google's Chromebook uses a desktop environment called "Aura." It presents a somewhat simplified desktop experience, where all applications actually run inside the Chrome web browser. For example, clicking a "YouTube" icon actually launches YouTube in a new Chrome browser tab. Frequently used programs can be added to the desktop, or you can browse other applications by clicking the Applications menu icon. Other desktop functions (clock, wireless network connections, battery indicator, etc.) are displayed in the lower-right corner. While there is no "bar" or "panel" like in Windows or Mac OS X, the Aura desktop appears to serve the same purpose in the way it displays icons and desktop functions.

It's no secret that I love the Aura desktop; I find it has good usability. My wife has a Chromebook and loves it. I use a Chromebook at work about half the time, and it's great. The Aura desktop has a simplified look that is both easy for new users and flexible for power users. The desktop is familiar to both Windows and Mac users, and functions more or less like the desktops on those platforms. Since (almost) everything on the Chromebook runs inside the browser, programs share consistent behavior. The Chrome designers have done a great job of making web apps easy to find and launch, and settings easy to search and update.

Overall, if the Aura desktop were available on a "stock" Linux distribution and had the ability to launch local programs like LibreOffice or a terminal, it would be a great desktop for most folks.

So I was particularly interested to read a recent article from Hot Hardware reporting that ChromeOS is getting a makeover with material design and Google Now support. I really should update my Chromebook at work to use the Beta channel, so I can try it out.

As reported by Hot Hardware: "New in this version is Chrome Launcher 2.0, which gives you quick access to a number of common features, including the apps you use most often (examples are Hangouts, Calculator, and Files). Some apps have also received a fresh coat of paint, such as the file manager, seen below. Google notes that this is just the start, so there will be more updates rolling out to the beta OS as time goes on."

And here are the screenshots:

It's hard to make a heuristic usability evaluation based on two screenshots, but I think I can make a few fair comments here:

As always, I like the clarity of the presentation. Even though Aura doesn't use a traditional "menu bar" or "application bar" like GNOME's top bar or XFCE's launcher, it's easy to see how to launch different applications. The first screenshot shows a number of Google app icons, and I only have issues with a few of them (Google Calendar's icon looks like a shopping bag to me, for example). In the second screenshot, you can see icons for Chrome and Files. In the lower-right, you can find the clock, wireless status, and battery. Overall, the icons are clear and accurately represent the action.

I love the clearly defined window "tab" in the Files app. When the app is in "windowed" mode, it is a convenient place to "grab" the window to relocate it, and provides icons to minimize, maximize, and close the window using standard icons.

The Google Now screenshot shows only content, so I'll focus on the second screenshot instead. The Files app provides a basic menu bar that I find quite usable. There's a clearly defined "breadcrumb" navigation trail in the upper-left, and the upper-right has a search menu and an application menu. I dislike the mixed use of the "three lines" menu and the "three dots" menu, because I don't know what each of them does. But overall, this is very clear.

I also appreciate the bright, happy colors used in Aura; no dark, moody colors here. Aura's use of bright blues at the top of the windows likely evokes a feeling of "sky," especially since the blue is limited to the top of the screen.

But oh my—the wallpaper! The wallpaper uses muted colors, but is still garish. I don't know if I should blame Google for this (was it the default desktop wallpaper?) or the reviewer (did they pick this particular wallpaper?) but either way, I would have preferred a happier, calmer image.

I'm pleased to see this example of good software usability. Although Aura isn't open source software (correction: it is; my bad. -jh), it's important to note positive examples of usability so we can learn from them: what worked well, and what could be improved.
images: Hot Hardware

March 31, 2015

Official GNOME SDK runtime builds are out

As people who have followed the work on sandboxed applications know, we have promised a developer preview for GNOME 3.16. Well, 3.16 has now been released, so the time is now!

I spent last week setting up a build system on the GNOME infrastructure, and the output of this is finally available at:

This repository contains the GNOME 3.16 runtimes, org.gnome.Platform, as well as a smaller one that is useful for less integrated apps (like games), called org.freedesktop.Platform. It also has corresponding development runtimes (org.gnome.Sdk and org.freedesktop.Sdk) that you can use to create applications for the platforms.

This is a developer preview, so consider these builds weakly supported. This means I will try to keep them somewhat updated if there are major issues, and that I will keep them API- and ABI-stable. I will probably also pick up at least some 3.16.x minor releases as they come out.

I also did the first official release of xdg-app. For easy testing this is available for Fedora 21 and 22 as a copr repo.

Testing the SDK

Using the repo above makes it really easy to test this. Just install the xdg-app package from copr, log out and back in (needed to update the environment for the session), then follow these instructions (as a regular user):

  1. Install the GNOME SDK public key into /usr/share/ostree/trusted.gpg.d (or alternatively, use --no-gpg-verify when you add the remote below).
  2. Install the basic GNOME and freedesktop runtimes:
    $ xdg-app add-remote --user gnome-sdk
    $ xdg-app install-runtime --user gnome-sdk org.gnome.Platform 3.16
    $ xdg-app install-runtime --user gnome-sdk org.freedesktop.Platform 1.0
  3. Optionally install some locale packs:
    $ xdg-app install-runtime --user gnome-sdk 3.16
    $ xdg-app install-runtime --user gnome-sdk 1.0
  4. Install some apps from my repository of test apps:
    $ xdg-app add-remote --user --no-gpg-verify test-apps
    $ xdg-app install-app --user test-apps org.gnome.gedit
    $ xdg-app install-app --user test-apps org.freedesktop.glxgears
  5. Run the apps! You should find gedit listed among the regular applications in the shell, as it exports a desktop file. But you can also run them manually like this:
    $ xdg-app run org.gnome.gedit
    $ xdg-app run org.freedesktop.glxgears
  6. I also packaged the latest GNOME Builder from git. It requires the full SDK, which takes a bit longer to download:
    $ xdg-app install-runtime --user gnome-sdk org.gnome.Sdk 3.16
    $ xdg-app install-app --user test-apps org.gnome.Builder

All of the above installs the apps into your home directory (in ~/.local/share/xdg-app). You can also run the commands as root and skip the --user arguments to do system-wide application installs.

Future work

With the basics now laid down to run current applications in a minimally isolated environment, the next step is to work more on the sandboxing aspects. This will require lots of work, both on the system side (things like kdbus), in the desktop (adding sandbox-aware APIs, making PulseAudio protect clients from each other, etc.) and in modifying applications.

If you’re interested in this, you can follow the work on the wiki.

Building your own apps

If you download the SDKs, you have enough tooling to build your own applications. There is some documentation on how to do this here.

I also created a git repository with the scripts I used to build the test applications above. It uses the gnome-sdk-bundles repository, which has some tooling and spec files to easily bundle dependencies with an application.

Building the SDK

If you ever want to build the SDK yourself, it is available at:

This repository contains the desktop-specific parts of the SDK, which are layered on top of a core Yocto layer. When you build the SDK, the latter will be automatically checked out and built from:

However, if you don’t want to build all of this, you can download the pre-built images from and put them in the freedesktop-sdk-base/images/x86_64 subdirectory of gnome-sdk-images. This can save you a lot of time and space.

Glimpse of FOSS ASIA

Last week I attended FOSS ASIA, one of the biggest open source conferences in Asia, held in the beautiful city of Singapore. The conference had 3 or 4 simultaneous tracks focused on DevOps, web programming and workshops; I mostly attended the Web and DevOps tracks.
I won't get into the details of each track, but I would like to mention the talk by Dr. Vivian Balakrishnan, "Singapore as a Smart Nation".

Dr. Balakrishnan's talk focused on open government data and how it can be used for better governance. He is Singapore’s Minister for the Environment and Water Resources, but before that he was an ophthalmologist, and he likes computers. He knows Python and Node.js, and I must say I was really impressed by his skills; I did not expect a politician of any country to know Python or Node. During his talk he mentioned that in Singapore, before you get into politics, you first need to gain some experience through a job or by working as a professional for a few years; only then do you get into active politics, unlike in other countries (I know about India, not sure about others).
He shared his vision for Singapore and their efforts to achieve it. I am looking forward to seeing a smart Singapore in the future.
I met several friends and had a good time at the Zanata, Fedora and systemd BoFs. Pravin and I gave a workshop on GNOME 101 and RPM packaging.

I would like to thank Harish Pillay for his generous support, and Red Hat for allowing me to attend the event. I would like to extend my gratitude to the FOSS ASIA organizers Hong Phuc Dang, Mario Behling, Harish Pillay, Roland Turner, Justin Lee and Darwin Gosal; I look forward to seeing them again!

March 30, 2015

Bringing sanity back to my T440s

As a long-time Thinkpad trackpoint user and owner of a Lenovo T440s, I always felt quite frustrated with the clickpad featured in this laptop, since it basically ditched all the physical buttons I had got so used to and replaced them with a giant, weird and noisy “clickpad”.

Fortunately, following Peter Hutterer’s post on X.Org Synaptics support for the T440, I managed to get a semi-decent configuration where I basically disabled any movement in the touchpad and used it as three giant soft buttons. It certainly took quite some time to get used to it and avoid making too many mistakes, but it was at least usable thanks to that.

Then, just a few months ago, I learned about the new T450 laptops and how they reintroduced the physical buttons for the trackpoint… and felt happy and upset at the same time: happy to know that Lenovo finally reconsidered their position and decided to bring back some sanity to the legendary trackpoint, but upset because I realized I had bought the only Thinkpad generation to ever feature such an insane device.

Luckily, I recently found someone selling the T450’s new touchpads with the physical buttons on eBay, and people in many places seemed to confirm that they would fit and work in the T440, T440s and T440p (just google for it), so I decided to give it a try.

So, the new touchpad arrived here last week and I did try to fit it, although I got a bit scared at some point and decided to step back and leave it for a while. After all, this laptop is 7 months old and I did not want to risk breaking it :-). But then I kept reading the T440s’s Hardware Maintenance Manual in my spare time and realized I was actually closer than I thought, so I decided to give it another try this weekend… and this is the final result:

T440s with trackpoint buttons!

Initially, I thought of writing a detailed step-by-step guide on how to do the installation, but in the end it all boils down to removing the system board so that you can unscrew the old clickpad and screw in the new one; just follow the steps in the T440s’s Hardware Maintenance Manual and you should be fine.

If anything, I’d just add that you don’t really need to remove the heatsink from the board, just unplug the fan’s power cord; and that you can actually do this without removing the board completely, just lifting it enough to manipulate the 2 hidden screws under it. Also, I do recommend disconnecting all the wires connected to the main board, as well as removing the memory module, the Wifi/3G cards and the keyboard. You can probably lift the board without doing that, but I’d rather follow those extra steps and avoid nasty surprises.

Last, please remember that this model has a built-in battery that you need to disable from the BIOS before starting to work with it. This is a new step compared to older models (therefore easy to overlook) and quite an important one, so make sure you don’t forget about it!

Anyway, as you can see the new device fits perfectly fine in the hole of the former clickpad and it even gets recognized as a Synaptics touchpad, which is good. And even better, the touchpad works perfectly fine out of the box, with all the usual features you might expect: soft left and right buttons, 2-finger scrolling, tap to click…

The only problem was that the trackpoint’s buttons would not work that well: the left and right buttons translated into “scroll up” and “scroll down”, and the middle button simply did not work at all. Fortunately, this is also covered in Peter Hutterer’s blog, where he explains that all the problems I was seeing are expected at this moment, since some patches in the kernel are needed for the 3 physical buttons to become visible via the trackpoint again.

But in any case, for those like me who just don’t care about the touchpad at all, this comment in the tracking bug for this issue explains a workaround to get the physical trackpoint buttons (middle button included) working well right now, simply by disabling the Synaptics driver and enabling psmouse configured to use the imps protocol.

And because I’m using Fedora 21, I followed the recommendation there: I simply added psmouse.proto=imps to the GRUB_CMDLINE_LINUX line in /etc/default/grub, then ran grub2-mkconfig -o /boot/grub2/grub.cfg, and that did the trick for me.

Then I went into the BIOS and disabled the “trackpad” option, so as not to get the mouse moving and clicking randomly, and finally enabled scrolling with the middle button by creating a file at /etc/X11/xorg.conf.d/20-trackpoint.conf (based on the one from my old x201), like this:

Section "InputClass"
        Identifier "Trackpoint Wheel Emulation"
        MatchProduct "PS/2 Synaptics TouchPad"
        MatchDriver "evdev"
        Option  "EmulateWheel"  "true"
        Option  "EmulateWheelButton" "2"
        Option  "EmulateWheelInertia" "10"
        Option  "EmulateWheelTimeout" "190"
        Option  "Emulate3Buttons" "false"
        Option  "XAxisMapping"  "6 7"
        Option  "YAxisMapping"  "4 5"
EndSection

So that’s it. I suppose I will keep checking the status of the proper fix in the tracking bug and eventually move back to the Synaptics driver once all those issues get fixed, but for now this setup is perfect for me, and definitely way better than what I had before.

I only hope that I haven’t forgotten to plug in a cable when assembling everything back. At least I can tell I haven’t got any screws left over, and everything I’ve tested seems to work as expected, so I guess it’s probably fine. Fingers crossed!

March 29, 2015

MPI-based Nested Cross-Validation for scikit-learn

If you are working with machine learning, at some point you have to choose hyper-parameters for your model of choice and do cross-validation to estimate how well the model generalizes to unseen data. Usually, you want to avoid over-fitting on your data when selecting hyper-parameters to get a less biased estimate of the model's true performance. Therefore, the data you do hyper-parameter search on has to be independent from the data you use to assess a model's performance. If you want to know what happens if you perform both tasks on the same data, have a look at the chapter The Wrong and Right Way to Do Cross-validation in the excellent book The Elements of Statistical Learning.

For instance, scikit-learn's Support Vector Regression class has at least two hyper-parameters, the penalty weight C and which kernel to use. Depending on the kernel, additional hyper-parameters are to be considered. Traditionally, people do an exhaustive grid search over a pre-defined set of values for each parameter and choose the setting that performed best. In fact, that is exactly what sklearn.grid_search.GridSearchCV does. In the end, what you get is the average score across the hold-out data with the best parameters. However, you don't want to report that number, because you essentially cheated by repeatedly using the hold-out data with different parameter settings to evaluate your model's performance, which is over-fitting too.

It is important that the numbers you report in the end were retrieved from data you only used once to measure performance. To avoid the pitfalls of GridSearchCV, you essentially have to nest GridSearchCV within another cross-validation such as StratifiedKFold. That way, the grid search only uses the training data of the outer cross-validation loop and results are reported on the test set, which was not used for the grid search.
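To make the nesting explicit, here is a minimal, library-free sketch of that control flow. The `fit_score(train, test, params)` callback is a placeholder assumption standing in for "fit a model with `params` on `train` and score it on `test`"; in practice, a GridSearchCV wrapped inside an outer StratifiedKFold plays these roles:

```python
def k_fold(indices, k):
    """Yield (train, test) splits of the given index list."""
    folds = [indices[i::k] for i in range(k)]
    for i, test in enumerate(folds):
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, test

def nested_cv(n_samples, param_grid, fit_score, outer_k=10, inner_k=3):
    """Grid-search on each outer training split only; report the winning
    setting's score on the untouched outer test fold."""
    outer_scores = []
    for train, test in k_fold(list(range(n_samples)), outer_k):
        # inner loop: model selection uses training data only
        best = max(
            param_grid,
            key=lambda p: sum(fit_score(tr, te, p)
                              for tr, te in k_fold(train, inner_k)) / inner_k,
        )
        # outer loop: performance estimate on data the grid search never saw
        outer_scores.append(fit_score(train, test, best))
    return outer_scores
```

The key property is visible in the code: the outer test fold never reaches the inner grid search, so the returned scores are honest performance estimates.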

Obviously, this can become computationally very demanding; e.g. if you do 10-fold cross-validation with 3-fold cross-validation for the grid search, you need to train a total of 30 models for each parameter configuration. Luckily, inner and outer cross-validation can be easily parallelized: GridSearchCV can do this with the n_jobs parameter. However, for large-scale analyses you want to use a cluster to process data in parallel.

This is where the Message Passing Interface (MPI) comes in. It is a standardized protocol for parallel computing. In MPI terms, all processes are organized in groups, which are managed by a communicator and each MPI process gets an ID or rank. Usually, the node with rank zero is used as the master that distributes the work and collects it again.

I implemented nested grid search for scikit-learn classes that distributes work using MPI. I'm not an expert in MPI, so there might be more efficient solutions, but it gets the job done for me. Using it is very similar to GridSearchCV:

from mpi4py import MPI
import numpy
from sklearn.datasets import load_boston
from sklearn.svm import SVR
from grid_search import NestedGridSearchCV
data = load_boston()
X = data['data']
y = data['target']
estimator = SVR(max_iter=1000, tol=1e-5)
param_grid = {'C': 2. ** numpy.arange(-5, 15, 2),
              'gamma': 2. ** numpy.arange(3, -15, -2),
              'kernel': ['poly', 'rbf']}
nested_cv = NestedGridSearchCV(estimator, param_grid, 'mean_absolute_error',
                               cv=5, inner_cv=3)
nested_cv.fit(X, y)
if MPI.COMM_WORLD.Get_rank() == 0:
    for i, scores in enumerate(nested_cv.grid_scores_):
        scores.to_csv('grid-scores-%d.csv' % (i + 1), index=False)

To run this example you execute mpiexec python and it uses all available MPI processes to distribute the work. Your final result is stored in the best_params_ attribute, which is a pandas data frame that contains the selected hyper-parameters, the average performance across all inner cross-validation folds (score (Validation)), and the performance on the outer test fold (score (Test)).

   score (Validation)      C     gamma kernel  score (Test)
1           -7.252490    0.5  0.000122    rbf     -4.178257
2           -5.662221  128.0  0.000122    rbf     -5.445915
3           -5.582780   32.0  0.000122    rbf     -7.066123
4           -6.306561    0.5  0.000122    rbf     -6.059503
5           -6.174779  128.0  0.000122    rbf     -6.606218
Complete results of the grid search are stored in the grid_scores_ attribute, which is a list of data frames, one for each outer cross-validation fold.

The code is available at and has the following dependencies.

Note that I have only tried this with Python 3.4. For further details, please check the inline documentation.

March 26, 2015

An API is only as good as its documentation.

Your APIs are only as good as the documentation that comes with them. Invest time in getting docs right. — @rubenv on Twitter

If you are in the business of shipping software, chances are high that you’ll be offering an API to third-party developers. When you do, it’s important to realize that APIs are hard: they don’t have a visible user interface and you can’t know how to use an API just by looking at it.

For an API, it’s all about the documentation. If an API feature is missing from the documentation, it might as well not exist.

Sadly, very few developers enjoy the tedious work of writing documentation. We generally need a nudge to remind us about it.

At Ticketmatic, we promise that anything you can do through the user interface is also available via the API. Ticketing software rarely stands alone: it’s usually integrated with e.g. the website or some planning software. The API is as important as our user interface.

To make sure we consistently document our API properly, we’ve introduced tooling.

Similar to unit tests, you should measure the coverage of your documentation.

After every change, every bit of the API (an endpoint, a method, a parameter, a result field, …) is checked and cross-referenced with the documentation, to make sure a proper description and instructions are present.
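As a rough illustration of what such a check computes, here is a sketch of a documentation-coverage metric. The endpoint structure (dicts with "description", "params" and "result_fields") is a made-up assumption for this example, not Ticketmatic's actual format:

```python
# Hypothetical documentation-coverage check. Real tooling would extract
# these items from the actual API definition rather than hand-built dicts.
def doc_coverage(endpoints):
    """Return the fraction of documentable items that have a description."""
    documented = total = 0
    for ep in endpoints:
        items = [ep.get("description", "")]
        items += [p.get("description", "") for p in ep.get("params", [])]
        items += [f.get("description", "") for f in ep.get("result_fields", [])]
        total += len(items)
        documented += sum(1 for text in items if text.strip())
    return documented / total if total else 1.0
```

Run after every change, a metric like this can gate a build in the same way unit-test coverage does.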

The end result is a big documentation coverage report, which we consider as important as our unit test results.

Constantly measure and improve the documentation coverage metric.

More than just filling fields

A very important thing was pointed out while circulating these thoughts on Twitter.

Shaun McCance (of GNOME documentation fame) correctly remarked:

@rubenv I’ve seen APIs that are 100% documented but still have terrible docs. Coverage is no good if it’s covered in crap. — @shaunm on Twitter

Which is 100% correct. No amount of metrics or tooling will guarantee the quality of the end result. Keeping quality up is a moral obligation shared by everyone on the team, and that can never be replaced with software.

Nevertheless, getting a slight nudge to remind you of your documentation duties never hurts.

Comments | @rubenv on Twitter

Surviving winter as a motorsports fan.

Winter is that time of the year when nothing happens in the motorsport world (one exception: the Dakar). Here are a few recommendations to help you through the agonizing wait:

Formula One

Start out with It Is What It Is, the autobiography of David Coulthard. It only goes up to the end of 2007, but it’s nevertheless a fascinating read: rarely do you hear a sportsman speak with such openness. A good and honest insight into the mind of a sportsman, and definitely not the politically correct version you’ll see on the BBC.

It Is What It Is

Next up: The Mechanic’s Tale: Life in the Pit-Lanes of Formula One by Steve Matchett, a former Benetton F1 mechanic. This covers the other side of the team: the mechanics and the engineers.

The Mechanic's Tale: Life in the Pit-Lanes of Formula One

Still feel like reading? Dive into the books of Sid Watkins, who deserves huge amounts of credit for transforming a very deadly sport into something surprisingly safe (or as he likes to point out: riding a horse is much more dangerous).

He wrote two books:

Both describe the efforts on improving safety and are filled with anecdotes.

And finally, if you prefer movies, two more recommendations. Rush, an epic story about the rivalry between Niki Lauda and James Hunt. Even my girlfriend enjoyed it and she has zero interest in motorsports.


And finally Senna, the documentary about Ayrton Senna, probably the most mythical Formula One driver of all time.


Le Mans

On to that other legend: The 24 hours of Le Mans.

I cannot recommend the book Le Mans by Koen Vergeer enough. It’s beautiful, it captures the atmosphere brilliantly and seamlessly mixes it with the history of this event.

But you’ll have to go the extra mile for it: it’s in Dutch, it’s out of print and it’s getting exceedingly rare to find.

Le Mans

Nothing is lost if you can’t get hold of it. There’s also the 1971 movie with Steve McQueen: Le Mans.

It’s everything that modern racing movies are not: there’s no CG here, barely any dialog and the story is agonizingly slow if you compare it to the average Hollywood blockbuster.

But that’s the beauty of it: in this movie the talking is done by the engines. Probably the last great racing movie that featured only real cars and real driving.

Le Mans


Motorcycles aren’t really my thing (not enough wheels), but I have always been in awe of the street racing that happens during the Isle of Man TT. Probably one of the craziest races in the world.

Riding Man by Mark Gardiner documents the experiences of a reporter who decides to participate in the TT.

Riding Man

And to finish, the brilliant documentary TT3D: Closer to the Edge gives a good insight into the minds of these riders.

It seems to be available online. If nothing else, I recommend you watch the first two minutes: the onboard shots of the bike accelerating on the first straight are downright terrifying.

TT3D: Closer to the Edge

Rounding up

By the time you’ve read/seen all of the above, it should finally be spring again. I hope you enjoyed this list. Any suggestions for things that belong on this list are greatly appreciated: send them over!


Hands-on usability improvements with GNOME 3.16

I downloaded the GNOME 3.16 live demo image and experimented with what the latest GNOME has to offer. My focus is usability testing, so I wanted to explore the live demo to see how the usability has improved in the latest release.

From my 2014 study of GNOME's usability, usability testing revealed several "hot" problem areas, including:

Changing the default font in gedit or Notes
Testers typically looked for a "font" or "text" action under the gear menu. Many testers referred to the gear menu as the "options" or "settings" menu, because they associated a "gear" icon with settings or preferences in Mac OS X or Windows. Testers assumed changing the font was a setting, so they looked for it in what they took to be a "settings" menu: the gear menu.
Bookmarking a location in Nautilus
Most testers preferred to just move a frequently-used folder to the desktop, so it would be easier to find. But GNOME doesn't have a "desktop" per se by default, and expects users to use the "Bookmark this Location" feature in Nautilus. However, this feature was not very discoverable; many testers moved the target folder into another folder, and believed that they had somehow bookmarked the location.
Finding and replacing text in gedit
When asked to replace all instances of a word with another word across a large text file, testers had trouble discovering the "find and replace text" feature in gedit. Instead, testers experimented with "Find" and then simply typed over the old text with the new text.
How does the new GNOME 3.16 improve on these problem areas? Let's look at a few screenshots:


GNOME 3.14 saw several updates to the gedit editor, which continue in GNOME 3.16:

The new gedit has a clean appearance with prominent "Open" and "Save" buttons: two functions that average users will access frequently.

A new "three lines" icon replaces the gear menu for the drop-down menu. This "three lines" menu icon is more common in other applications, including those on Mac OS X and Windows, so the new menu icon should be easier to find.

The "Open" menu includes a quick-access list, and a button to look for other files via the finder.

The preferences menu doesn't offer significant usability improvements, although the color scheme selector is now updated in GNOME 3.16.


The updated Nautilus features large icons that offer good visibility without becoming too overwhelming. The "three lines" menu is simplified in this release, and offers an easier path to bookmark a location.


I uncovered a few issues with the Epiphany web browser (aka "GNOME Web") but since I don't usually use Epiphany (I use Firefox or Google Chrome) I'm not sure how long these problems have been there.

Epiphany has a clean appearance that reserves most of the screen real estate to display the web page. This is a nice design tradeoff, but I noticed that after I navigated to a web page, I lost the URL bar. I couldn't navigate to a new website until I opened a new tab and entered my URL there. I'm sure there's another way to bring up the URL bar, but it's not obvious to me.

I'll also add that taking screenshots of Epiphany was quite difficult. For other GNOME applications, I simply hit Alt-PrtScr to save a screenshot of my active window. But the Epiphany web browser seems to grab control of that key binding, and Alt-PrtScr does nothing most of the time—especially when the "three lines" menu is open. I took several screenshots of Epiphany, and about half were whole-desktop screenshots (PrtScr) that I later cropped using the GIMP.

EDIT: If you click the little "down" triangle next to the URL, you can enter a new URL. I don't like this feature; it obscures URL entry. Basic functionality like this should not be hidden in a web browser. I encourage the Epiphany team to bring back the URL entry bar in the next release.

Other changes

Notifications got a big update in GNOME 3.16. In previous versions of GNOME 3, notifications appeared at the bottom of the screen. Now, notifications appear at the top of the screen, merged with the calendar. You might consider this a "calendar and events" feature. The notifications are unobtrusive; when I plugged in my USB fob drive, a small white marker appeared next to the date and time to suggest a new notification had arrived. While I haven't reviewed notifications as part of my usability testing, my heuristic evaluation is that the new notifications design will improve the usability around notifications. I believe most users will see the new "calendar and events" feature as making a lot of sense.

However, I do have some reservations about the updated GNOME. For one, I dislike the darker colors seen in these screenshots. Users don't like dark desktop colors. In user interface design, colors also affect the mood of an application. As seen in this comparison, users perceived the darker colors used in Windows and GNOME as moody, while the lighter colors used in Mac OS X suggest an airy, friendly interface. This may be why users at large perceive the GNOME desktop to have poor usability, despite usability testing showing otherwise. The dark, moody colors used in GNOME provoke feelings of tension and insecurity, which influence the user's perception of poor usability.

I'm also not sure about the blue-on-grey effect to highlight running programs or selected items in the GNOME Shell. In addition to being dark, moody colors, the blue-on-grey is just too hard to see clearly. I would like GNOME to update the default theme to use lighter, airier colors. I'll reserve a discussion of colors in GNOME for a future article.

Overall, I'm very pleased with the usability improvements that have gone into the new GNOME release. Good job, everyone!

I look forward to doing more usability testing in this version of GNOME, so we can continue to make GNOME great. With good usability, each version of GNOME gets better and easier to use.

2015-03-26 Thursday

  • Mihai posted a nice blog with a small video of LibreOffice Online in action - hopefully we'll have a higher-resolution version that doesn't feature some bearded idiot next time.
  • Out to the Dentist for some drilling action.

Building a SNES emulator with a Raspberry Pi and a PS3 gamepad

It’s been a while since I did this, but some people have been asking me lately how exactly I did it, and I thought it could be nice to write a post answering that question. Actually, it’s a nice thing for me to have anyway, at least as “documentation”, so here it is.

But first of all, the idea: my personal and very particular goal was to have a proper SNES emulator plugged into my TV, based on the Raspberry Pi (simply because I had a spare one), that I could control entirely with a gamepad (no external keyboards, no ssh connection from a laptop, nothing).

Yes, I know there are other emulators I could aim for and even Raspberry specific distros designed for a similar purpose but, honestly, I don’t really care about MAME, NeoGeo, PSX emulators or the like. I simply wanted a SNES emulator, period. And on top of that I was quite keen on playing a bit with the Raspberry, so I took this route, for good or bad.

Anyway, after doing some investigation I realized all the main pieces were already out there; all that was needed was to put them together, so I went ahead and did it. These are the HW & SW ingredients involved in this recipe:

  • A Raspberry Pi, running Raspbian
  • A PS3 DualShock gamepad
  • The QtSixA project (sixpair and the sixad daemon), to pair the gamepad
  • The PiSNES emulator
  • A few SNES ROMs
  • A TV with an HDMI input

Once I got all these things around, this is how I assembled the whole thing:

1. Got the gamepad paired and recognized as a joystick under /dev/input/js0 using the QtSixA project. I followed the instructions here, which explain fairly well how to use sixpair to pair the gamepad and how to get the sixad daemon running at boot time, which was an important requirement for this whole thing to work as I wanted it to.

2. I downloaded the source code of PiSNES, then patched it slightly so that it would recognize the PS3 DualShock gamepad and allow me to define the four directions of the joystick through the configuration file, among other things.

3. I had no idea how to get the PS3 gamepad paired automatically when booting the Raspberry Pi, so I wrote a small (and admittedly stupid) script that basically waits for the gamepad to be detected under /dev/input/js0, and then launches snes9x.gui to choose a game from the list of available ROMs. I placed it under /usr/local/bin/snes-run-gui, and it looks like this:



#!/bin/bash

# Where PiSNES is installed (adjust to your setup)
BASEDIR=/opt/pisnes

# Wait for the PS3 gamepad to be available
while [ ! -e /dev/input/js0 ]; do sleep 2; done

# The DISPLAY=:0 bit is important for the GUI to work
DISPLAY=:0 $BASEDIR/snes9x.gui
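If shell isn’t your thing, the same wait-then-launch logic can be sketched in Python. This is just an illustrative equivalent, not part of PiSNES; `wait_for_device` is a name of my own:

```python
import os
import time

def wait_for_device(path, poll_interval=2.0, timeout=None):
    """Poll until `path` (e.g. /dev/input/js0) exists.

    Returns True once the path shows up, or False if `timeout`
    seconds elapse first (timeout=None waits forever).
    """
    deadline = None if timeout is None else time.monotonic() + timeout
    while not os.path.exists(path):
        if deadline is not None and time.monotonic() >= deadline:
            return False
        time.sleep(poll_interval)
    return True

# Usage, equivalent to the shell script above:
#   if wait_for_device("/dev/input/js0"):
#       os.environ["DISPLAY"] = ":0"   # the GUI needs a display
#       os.execvp("snes9x.gui", ["snes9x.gui"])
```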

4. Because I wanted that script to be launched on boot, I simply added a line to /etc/xdg/lxsession/LXDE/autostart, so that it looked like this:

@lxpanel --profile LXDE
@pcmanfm --desktop --profile LXDE
@xscreensaver -no-splash
@/usr/local/bin/snes-run-gui

By doing the steps mentioned above, I got the following “User Experience”:

  1. Turn on the RPi by simply plugging it in
  2. Wait for Raspbian to boot and for the desktop to be visible
  3. At this point, both the sixad daemon and the snes-run-gui script should be running, so press the PS button in the gamepad to connect the gamepad
  4. After a few seconds, the lights in the gamepad should stop blinking and the /dev/input/js0 device file should be available, so snes9x.gui is launched
  5. Select the game you want to play and press the ‘X’ button to run it
  6. While in the game, press the PS button to get back to the game selection UI
  7. From the game selection UI, press START+SELECT to shutdown the RPi
  8. Profit!

Those steps were enough to get the gamepad paired and working with PiSNES, but unfortunately my TV was a bit tricky, and I needed to make a few more adjustments to the boot configuration of the Raspberry Pi, which also took me a while to figure out.

So, here are the contents of my /boot/config.txt file, in case it helps somebody else out there, or simply as a reference (more info about the contents of this file in RPiConfig):

# NOOBS Auto-generated Settings:

# Set sdtv mode to PAL (as used in Europe)
sdtv_mode=2

# Force sound to be sent over the HDMI cable
hdmi_drive=2

# Set monitor mode to DMT
hdmi_group=2

# Overclock the CPU a bit (700 MHz is the default)

# Set monitor resolution to 1280x720p @ 60Hz
hdmi_mode=85
As you can imagine, some of those configuration options are specific to the TV I have it connected to (e.g. hdmi_mode), so YMMV. In my case I actually had to try different HDMI modes before settling on one that simply worked, so if you are ever in the same situation, you might want to apt-get install libraspberrypi-bin and use the following commands as well:

 $ tvservice -m DMT # List all DMT supported modes
 $ tvservice -d edid.dat # Dump detailed info about your screen
 $ edidparser edid.dat | grep mode # List all possible modes

In my case, I settled on hdmi_mode=85 simply because that’s the one that worked best for me; it stands for the 1280x720p @ 60 Hz DMT mode, according to edidparser:

HDMI:EDID DMT mode (85) 1280x720p @ 60 Hz with pixel clock 74 MHz has a score of 80296
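Since edidparser prints one such line per supported mode, picking the highest-scoring one by hand gets tedious. A small script can do it for you; this is just a sketch assuming every interesting line follows the format shown above, and `best_mode` is my own name, not part of any tvservice/edidparser API:

```python
import re

# Matches edidparser lines such as:
# "HDMI:EDID DMT mode (85) 1280x720p @ 60 Hz with pixel clock 74 MHz has a score of 80296"
MODE_RE = re.compile(r"mode \((\d+)\) (\S+) @ (\d+) Hz.*has a score of (\d+)")

def best_mode(lines):
    """Return (mode_number, description, score) of the highest-scoring mode, or None."""
    best = None
    for line in lines:
        m = MODE_RE.search(line)
        if m is None:
            continue  # skip lines that aren't mode reports
        mode, desc, hz, score = m.groups()
        if best is None or int(score) > best[2]:
            best = (int(mode), "%s @ %s Hz" % (desc, hz), int(score))
    return best
```

Feed it the lines produced by `edidparser edid.dat | grep mode` and it tells you which hdmi_mode to try first.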

And that’s all, I think. Of course, there’s a chance I forgot to mention something, because I did this in the random slots of spare time I had back in July, but that should be pretty much it.

Now, simply because this post has been too much text already, here is a video showing off how this actually works (never mind how good or bad I am at playing!):

Video: Raspberry Pi + PS3 Gamepad + PiSNES

I have to say I had great fun doing this and, even if it’s a quite hackish solution, I’m pretty happy with it because it’s been so much fun to play those games again, and also because it’s been working like a charm ever since I set it up, more than half a year ago.

And even better… turns out I got it working just in time for “Father’s Day”, which made me win the “best dad in the world” award, unanimously granted by my two sons, who also enjoy playing those good old games with me now (and beating me on some of them!).

Actually, that has certainly been the most rewarding thing about all this, no doubt about it.

March 25, 2015

Python for remote reconfiguration of server firmware

One project I've worked on at Nebula is a Python module for remote configuration of server hardware. You can find it here, but there are a few caveats:
  1. It's not hugely well tested on a wide range of hardware
  2. The interface is not yet guaranteed to be stable
  3. You'll also need this module if you want to deal with IBM (well, Lenovo now) servers
  4. The IBM support is based on reverse engineering rather than documentation, so who really knows how good it is

There's documentation in the README, and I'm sorry the API is kind of awful (it suffers rather heavily from me writing Python while knowing basically no Python). Still, it ought to work. I'm interested in hearing from anybody with problems, anybody who's interested in getting it on PyPI, and anybody who's willing to add support for new HP systems.
