24 hours a day, 7 days a week, 365 days per year...

September 20, 2014

Fri 2014/Sep/19

  • I finally got off my ass and posted my presentation from GUADEC: GPG, SSH, and Identity for Beginners (PDF) (ODP). Enjoy!

  •, the awesome gardening website that Alex Bailey started after GUADEC 2012, is running a fundraising campaign for the development of an API for open food data. If you are a free-culture minded gardener, or if you care about local food production, please support the campaign!

September 19, 2014

What the GNOME release team is doing


At the release team BoF at this year’s GUADEC, I said I would write a blog post about the whats and hows and ifs of release team work. I’m a little late with this, but here it is: a glimpse into the life of a GNOME release team member.

We are in the end phase of the development cycle, when release team work really kicks into high gear.

Blocker bugs

Since the .90 release, we’ve been tracking blocker bugs.  The way this works is that we set the GNOME Target field to the next release. During the course of the development cycle, we’ve had 50 bugs that were marked with

GNOME Target=3.14

at some point. Today, we’re down to a single one, which will hopefully be gone before the weekend is over.

In order to draw our developers’ attention to these bugs, we’re sending out regular reports to desktop-devel-list (see them here, here and here). Andre has taken over this task this cycle.

We don’t have formal criteria for what bugs to mark as blockers; we mostly go by the instinct and experience of bug-zapping veterans like Andre Klapper. My own criteria for setting this flag mostly come down to these questions:

Is it a crash in a core component?
Is it very visible or annoying?
Will it affect many users?

For finding bugs that should be blockers, I regularly scan all incoming bugs in Bugzilla. On an average day this query finds between 100 and 200 bugs – with some practice, one can get through that list in 15 minutes.

We also get ‘nominations’ for blockers from maintainers and developers, which is very helpful.

Development Releases

The duty of ‘doing’ releases is distributed among all of the current, active release team members.

Our development schedule has a pretty well-established cadence of development releases: we do a number of early development snapshots roughly a month apart (3.13.1, 3.13.2, 3.13.3 and 3.13.4 this cycle), followed by the beta releases, which are two weeks apart (3.13.90, 3.13.91 and 3.13.92 this cycle).

The end product of each development release is a set of jhbuild modulesets and a forest of symlinks to the tarballs for individual modules that are included in the release.

Most of the mechanics of creating the modulesets by rewriting our regular modulesets (which point at git repositories, not tarballs), and creating those symlinks on the server are handled by scripts that have been passed down through the generations.
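The symlink forest can be sketched in miniature like this (a toy illustration only — the module names, versions and paths below are made up, and the real release scripts passed down through the generations do considerably more):

```shell
# For each module pinned in the candidate moduleset, link its versioned
# tarball into the release's sources directory. `touch` stands in for
# the tarball already uploaded to the server.
set -e
mkdir -p release/3.13.92/sources
while read module version; do
    tarball="${module}-${version}.tar.xz"
    mkdir -p "tarballs/${module}"
    touch "tarballs/${module}/${tarball}"
    # Relative link: sources/ is three levels below the top directory.
    ln -s "../../../tarballs/${module}/${tarball}" \
          "release/3.13.92/sources/${tarball}"
done <<'EOF'
glib 2.41.5
gtk+ 3.13.9
EOF
ls -l release/3.13.92/sources
```

The jhbuild modulesets for the release then point at these stable tarball paths instead of at git repositories.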

The time-consuming aspect of creating a release is that it usually takes several attempts to create candidate modulesets, hunt down missing releases (or doing them ourselves – which is sadly necessary for a number of ‘weakly maintained’ modules), and do a full jhbuild using the final modulesets. As a consequence, while our official release day is always Monday, the release typically happens on Wednesday or Thursday of the same week.


Towards the end of a development cycle, the release team also starts to plan for the next cycle.

This includes creating the schedule, taking a look at annoying bugs that we should tackle in the next cycle, figuring out if there are project-wide goals that we should push, and collecting input on new modules and features that people want to work on for the next release.

Since we are at this point right now, let me end with this request:

Please let us know what features are on your module’s roadmap for the next cycle!

Announcing Shotwell 0.20 and Geary 0.8

We’ve released Geary 0.8 and Shotwell 0.20 today and I’m pretty excited about getting these out the door to our users.  Both releases include important fixes and some great new features.

Geary 0.8

While Geary 0.8 has a slew of new features and improvements, I would say the most visible for our users (compared to 0.6) are the following:

  • Robert Schroll’s redesign of the mail composer.  Not only does it look a lot sharper and more modern than before, it also operates inline in the main window—that is, you type your reply right below the email you’re responding to.  This means replying to a conversation is a more natural operation than opening a separate window or switching to a new view.  You can still pop the composer out into a separate window, just press the Detach button and you’re on your way.
  • Gustavo Rubio’s hard work to get signature support into Geary.  Now Geary will automatically insert a signature of your design into an email, whether new or replying to another.  This is one of the most-requested features for Geary, so it’s good to get this in.
  • I’ve put in some hard work on improving database speed and IMAP connection stability.  There’s still a couple of kinks here and there, but I feel like 0.8 is a big step forward in making Geary the kind of application you can leave on for days at a time without worrying about it slowing down, crashing, or losing its connection to the server.

In other words, if you’re a Geary user, you really should upgrade.

That said, here’s a more formal list of improvements:

  • Major redesign of email composer, now presented inline in main window
  • Composer will automatically add signature to emails
  • Saving drafts to server can be disabled
  • Improved interface, now using GtkHeaderBar and modern widgets
  • Database speed optimizations to reduce lags and improve read times
  • Improved connection handling and reestablishment
  • Show attachments lacking a Content-Disposition
  • Important bug fixes
  • Updated translations

The tarball for Geary 0.8 is available here.  Visit the Geary home page for more information.

Shotwell 0.20

Shotwell 0.20 has a more modest set of improvements, but it’s still growing and developing.  In particular, new photo sharing plugins were added and stability fixes have been included:

  • Support for and Gallery 3 photo services
  • Set background image for lock screen
  • Better detection of corrupt images during import
  • Important stability bug fixes
  • Updated translations

The tarball for Shotwell 0.20 is available here.  Visit the Shotwell home page for more information.

3.14 On Its Way

I recently put the finishing touches to the GNOME 3.14 release notes, which means that the next GNOME release is getting pretty close now. I never cease to be excited by new GNOME releases, nor to be amazed by our community’s ability to discernibly improve the GNOME 3 user experience release on release. I’m definitely at the point where I want to be running 3.14 all the time – it’s obviously a lot better than the previous version.

You’ll have to wait for the release notes to get all the details about what’s in the new release. Nevertheless, I thought I’d give a sneak peek at some of my personal favourite features.

Often with new releases we focus on the big new features – obvious bits of new UI that do cool stuff. One of the interesting things about this release, though, is that many of the most significant changes are also the most subtle. There’s a lot of polish in 3.14, and it makes a big difference to the overall user experience.

New Animations

It’s quite a while since Jakub first posted his motion mockups for the applications view. Since then we’ve been steadily and progressively iterating towards those mockups. With 3.14 we’ve finally got there, and it was worth the wait. The most noticeable effect is the new “swarm” animation, but also a lot of other subtle touches, such as when you browse application folders, or when you launch applications. We’ve also reworked the animations for when windows are opened and closed.

Animations might seem like unimportant window dressing, but it’s surprising how significant they can be for the user experience. They are the glue which binds the different parts of the UX together. By smoothing the transition between views, windows and applications, they make the entire experience feel responsive, fluid and more pleasurable to use. (And they actually make everything feel a lot faster, too. But don’t tell anyone.)

Google Images in Photos

Photos 3.14

GNOME’s Photos app has been steadily maturing over the past couple of releases, and it is turning into the dependable core app that we want it to be. The big news for 3.14 is that Photos will now pick up your Google photos, so any images you’ve uploaded with Picasa, Android, or posted on Google+ will be immediately available there. This is obviously incredibly convenient for users of Google services, and I know I’m looking forward to being able to quickly browse my online photos from within GNOME 3.

Rewritten Adwaita

GTK+ 3 Widget Factory 3.14

Jakub and Lapo have been tireless during the 3.14 cycle, and have completely rewritten Adwaita (the GNOME 3 GTK+ theme). This was a huge undertaking – around 8,000 lines of CSS have been reduced to about 3,300 lines of SASS. This was primarily intended to improve the maintainability of the theme. As such, there hasn’t been a dramatic change in the theme. What has happened, though, is that every aspect of the theme has been gone over with a fine-toothed comb.

There are some more noticeable changes. Progress bars have got thinner. Spinners look different (so much better). Switches are a bit different. However, the more exciting thing for me is that pretty much every part of the theme has changed in a subtle way. Everything feels crisper, sharper, and a bit lighter. There are also a lot of subtle animations now (thanks to CSS animation support in GTK+), adding to the feeling of polish.

Search More Things

System search has been one of the most successful parts of GNOME 3, in my opinion. The number of applications feeding results to system search has continued to increase with 3.14, with two really useful new additions. The first is Clocks, which will return search results for world cities. A search is all it takes to find the time in a place anywhere in the world.


The second new search provider in 3.14 comes from the Calculator. As you might expect, this allows you to perform simple calculations straight from the search box. It’s pretty exciting to see system search in GNOME 3 become so versatile, and the great thing about it is that it’s always a single keystroke away.


Go Go GTK+ Inspector

GTK+ Inspector 3.14

I don’t usually write about developer-focused features when I preview GNOME releases, but I can’t talk about 3.14 without mentioning GTK+ Inspector. If you work with GNOME technologies – as I do – this tool is something of a revelation. It’s amazing how quickly it becomes a core part of how you work. I’m already wondering how I ever lived without it.

The inspector isn’t just useful. It is also a lot of fun, and makes it easy to experiment. If you’re not a developer, or you don’t regularly work on GTK+ apps, I’d still recommend trying it out. Just hit Ctrl+Shift+I and have a play around.

Waiting Time

This is just a small selection of the features that are coming in the new release. To learn about everything else that’s coming, you’ll have to wait for the release notes. 3.14 should be out next week.

GStreamer with hardware video codecs on iOS

In the last few days I spent some time on getting GStreamer to compile properly with the Xcode 6 preview release (which as of today is available as a stable release), and making sure everything still works with iOS 8. This should be the case now with GIT master of cerbero.

So much for the boring part. But more important, iOS 8 finally makes the VideoToolbox API available as public API. This allows us to use the hardware video decoders and encoders directly, and opens lots of new possibilities for GStreamer usage on iOS. Before iOS 8 it was only possible to directly decode local files with the hardware decoders via the AVAssetReader API, which of course only allows rather constrained GStreamer usage.

We already had elements (for OS X) using the VideoToolbox API in the applemedia plugin in gst-plugins-bad, so I tried making them work on iOS too. This required quite a few changes, and in the end I rewrote big parts of the encoder element (which should also make it work better on OS X btw). But with GIT master of GStreamer you can now directly use the hardware codecs on iOS 8 by using the vtdec decoder element or the vtenc_h264 encoder element. There’s still a lot of potential for improvements but it’s working.
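For illustration, a pipeline that decodes and re-encodes a local H.264 file with the two elements named above might look something like this (the file names are placeholders, the demuxer and parser elements depend on your container format, and this of course needs an iOS 8 or OS X machine running GStreamer from GIT master):

```shell
# Hardware-decode an H.264 file with vtdec and re-encode it with
# vtenc_h264 (illustrative; adjust demuxer/muxer to your media).
gst-launch-1.0 filesrc location=input.mov ! qtdemux ! h264parse ! \
    vtdec ! videoconvert ! vtenc_h264 ! h264parse ! \
    qtmux ! filesink location=output.mov
```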


If you compile everything from GIT master, it should still be possible to use the same application binary with iOS 7 and earlier versions. Just make sure to use “-weak_framework VideoToolbox” for linking your application instead of “-framework VideoToolbox”. On earlier versions you just won’t be able to use the hardware codecs.
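Concretely, with a hand-rolled clang invocation that might look like the following (the file names and framework list are illustrative; in an Xcode project you would instead mark the framework as Optional in the build settings):

```shell
# Weakly link VideoToolbox: the binary still launches on iOS 7, where
# the framework's public symbols simply resolve to NULL at runtime.
clang -o myapp main.m \
    -framework Foundation \
    -weak_framework VideoToolbox
```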

September 18, 2014

And now for some hardware (Onda v975w)

Prodded by Adam Williamson's fedlet work, and by my inability to get an Android phone to display anything, I bought an x86 tablet.

At first, I was more interested in buying a brand-name one, such as the Dell Venue 8 Pro Adam has, or the Lenovo Miix 2 that Benjamin Tissoires doesn't seem to get enough time to hack on. But all those tablets are around 300€ at most retailers, and have a smaller 7- or 8-inch screen.

So I bought a "not exported out of China" tablet, the 10" Onda v975w. The prospect of getting a no-name tablet scared me a little. Would it be as "good" (read bad) as a PadMini or an Action Pad?


Well, the hardware's pretty decent, and feels rather solid. There's a small amount of light leakage on the side of the touchscreen, but not something too noticeable. I wish it had a button on the bezel to mimic the Windows button on some other tablets, but the edge gestures should replace it nicely.

The screen is pretty gorgeous and its high DPI triggers the eponymous mode in GNOME.

With help of various folks (Larry Finger, and the aforementioned Benjamin and Adam), I got the tablet to a state where I could use it to replace my force-obsoleted iPad 1 to read comic books.

I've put up a wiki page with the status of hardware/kernel support. It doesn't contain all my notes just yet (sound is working, the touchscreen will work very very soon, and various "basic" features are being worked on).

I'll be putting up the fixed-up Wi-Fi driver and more instructions about installation on the Wiki page.

And if you want to make the jump, the tablets are available at $150 plus postage from Aliexpress.

Evince annotations: almost there

The delay on drawing annotations was fixed and, together with some poppler patches by jaliste, we managed to get it working properly. Check it out:

There are still glitches here and there, but we are working on it so that the feature is available on evince's next release (fingers crossed!). After we get highlighting done, the other text markup annotations should be fairly straightforward.

Long live gnome-common? Macro deprecation

gnome-common is shrinking, as we’ve decided to push as much of it as possible upstream. We have too many layers in our build systems, and adding an arbitrary dependency on gnome-common to pull in some macros once at configure time is not helpful — there are many cases where someone new has tried to build a module and failed with some weird autotools error about an undefined macro, purely because they didn’t have gnome-common installed.

So, for starters:

What does this mean for you, a module maintainer? Nothing, if you don’t want it to. gnome-common now contains copies of the autoconf-archive macros, and has compatibility wrappers for them.

In the long term, you should consider porting your build system to use the new, upstreamed macros. That means, for each macro:

  1. Downloading the macro to the m4/ directory in your project and adding it to git.
  2. Adding the macro to EXTRA_DIST in
  3. Ensuring you have ACLOCAL_AMFLAGS = -I m4 ${ACLOCAL_FLAGS} in your top-level
  4. Updating the macro invocation in; just copy out the shim from gnome-common.m4 and tidy everything up.

Here’s an example change for GNOME_CODE_COVERAGE → AX_CODE_COVERAGE in libgdata.
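As a sketch of what step 4 looks like, once the relevant ax_*.m4 file is in your m4/ directory the change usually boils down to a one-line swap (macro names are from the autoconf-archive; the coverage case mirrors the libgdata change linked above):

```diff
 dnl Code coverage support
-AX_CODE_COVERAGE
```

Note that the upstream macro may define slightly different output variables (CODE_COVERAGE_* rather than GNOME_CODE_COVERAGE_*), so any uses of those in your Makefiles may need the same treatment.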

This is the beginning of a (probably) long road to deprecating a lot of gnome-common. Macros like GNOME_COMPILE_WARNINGS and GNOME_COMMON_INIT are next in the firing line — they do nothing GNOME-specific, and should be part of a wider set of reusable free software build macros, like the autoconf-archive. gnome-common’s support for legacy documentation systems (DocBook, anyone?) is also getting deprecated next cycle.

Comments? Get in touch with me or David King (amigadave). This work is the (long overdue) result of a bit of discussion we had at the Berlin DX hackfest in May.

A Londoner on Voting YES

There is a lot of misinformation in the press about the YES vote. Some people think it has to do with blind patriotism and xenophobia, but the reality seems quite the opposite once people inform themselves better about the politics of the question of whether Scotland should go YES or "No thanks".

I guess before I go on, I should "disclose" that my mother was Glaswegian Scots-Irish, but I honestly do not identify as Scottish and never have. My grandfather died when I was 9.

Technically, I am half Argentinian, half English in terms of where my folks were born. I was born in the UK, but I do not "feel" English, nor Argentine either, and really, the Union Jack to me represents something other than me as well...

I do however "feel" like a Londoner. I identify as a Londoner. I am a Londoner, I suppose... As a Londoner, I find that my friends tend to be black, white or brown, but certainly none of them are red, white and blue (not at times when they are feeling well, anyway!)

Moving Up

I have lived in Scotland since 2009, and in the time since, I have come to observe some stark differences between the capital here and the one I grew up in. Some good, some bad. Either way, it is largely because of those differences I have seen during my time in Scotland that I will be voting YES today.

YES - Scotland Banner


The first thing I noticed when I moved up here was that everyone was white. That was a bit of a culture shock for me, especially at times when I would (very rarely) hear people at bus stops whingeing about immigrants. What immigrants? Who is taking all the jobs now? What, all 5 of them? More to the point, what jobs have they been taking?

That is a difference you notice between the capitals. In Scotland, there aren't many jobs generally, really. Not decent ones, anyway... In London, the availability of decent jobs does not seem to be such a striking issue for the people who live there.

Many Scottish university graduates seem to end up having to go to Birmingham, London or Manchester, which seems a shame... So, as the time draws near, I have had to wonder why there are no jobs in Scotland, or rather, why there are no jobs offering decent pay and terms in what is clearly such a wealthy nation.

Scotland's Wealth of Opportunity: £5 Billion - Aerospace Defence,  £32 Billion - Rural and Highland Economy, £1500 Billion - Oil and Gas, £10 Billion - Tourism, £9.3 Billion - Chemical Industries, £2.3 Billion - Economic Impact of the Historic Environment, £7 Billion - Financial Services Industry, £17 Billion - Construction Industry

A lot might blame the recession or the "national debt" and call it "a sign of the times", but I doubt that is solely to blame for the phenomenon.


Let's look at Scottish transport networks, for a moment:

  • The fastest train from Edinburgh to London takes 4 hours, and there's one every half hour.
  • The fastest train from Edinburgh to Inverness takes 3:24, and there is one every hour.
  • Edinburgh is 405 miles (a seven-hour drive) from London.
  • Edinburgh is just 155 miles (a three-hour drive) from Inverness.

Now, maybe my maths is rusty, but does that not seem a bit... well, crap? More so because that is just one of a myriad of infrastructural deficits I have seen in the Scottish transportation "system" to date.

The Capital

Let's have a more local look at the Lothian Bus map for Edinburgh:

Lothian Buses - Map

The first thing to note is that it is practically impossible to get from one end of Edinburgh to the other without having to bus via the city centre. Also note the distinct lack of night buses.

There are no tubes in Edinburgh, so I cannot talk about the state of those. There are, however, two train stations. One at Edinburgh Waverley (city centre) and one at Haymarket. The one at Morningside was closed years ago (back in the 1960s, according to Wikipedia) and is not in use for passenger travel anymore.


Getting a GP to refer you to a specialist was pretty easily done when I first landed here, which made the service notably different from how things are in London, where you have to fight tooth and nail to see any kind of specialist.

In London, the NHS is pretty s*** all round, to be honest. I saw a woman giving birth in the A&E waiting room of Lewisham hospital when I was just seven years old and this is just one of several incidents I have borne witness to in London which to me, suggest a certain lack of quality in the service provided down there.

Tory Cu#ts

Ahem, typo of course... I meant to say cuts, naturally... Unfortunately, since the Tories got in, I have noticed a clear and steep decline in the Scottish healthcare system in the capital. The first (and only) ADHD clinic in Scotland was pretty much shut down shortly after the elections, for one. I work with disabled people, so I have witnessed first hand, in various ways, how the changes can impact people. One example is my friend Ryan Bryden, who is a C5 tetraplegic. If you have a moment, please have a read of Ryan's story (and feel free to help him out if you have anything to spare). Ryan is just one of many in Scotland who are solely reliant on the NHS and charitable support during what can only be described as the slashing of health and welfare services under a Conservative government which Scotland did not vote for, and which seems to most severely impact those who are most vulnerable while leaving those least in need unscathed...

Edinburgh and Scottish Culture

Some things of note, about Edinburgh and Scottish culture that I prefer to that of my home town:

  • People form an orderly queue at the bus stop
  • People form an orderly queue at the bar
  • Pretty much everyone knows a boxer, is related to one, or is one themselves
  • It's a lot less of a hazard prone trip to the shops (speaking as a female)
  • Homophobia does not seem to be nearly as common here (again, speaking as a female - correlation between this point and the above one, I reckon)
  • Passers-by will intervene if someone is being given trouble by someone else
  • Crack cocaine is not a visible presence on the streets of the capital
  • When people are being friendly, it's because they want to be friendly rather than because they are about to attack you (i.e. in Scotland when a person wants to attack you, there tends to be fair warning ;-))
  • People tend to respect their elders
  • People don't tend to vote Tory or UKIP (which, for the record, implies that Scotland has not been nurturing the culture of hysterical insanity about people on benefits, immigrants and those on disability that England seems to be nurturing at present)
  • People are more politically aware (go figure ^)
  • Bus drivers wait until the elderly are sat down to start driving the bus
  • People move out of the way to make room at the disabled spots on buses rather than "tutting" and frowning at the mere prospect
  • The Edinburgh Fringe
  • The Edinburgh Film Festival
  • The history was not bombed away during WWII
  • The Scottish Mining Museum
  • Ginger people are treated just like everyone else (they have a hell of a time in England)

Oh and ... It's one of the most beautiful places, in the world:

Scottish Landscape

What If...?

There is an atmosphere of fear about what "might" happen if Scotland gains its independence, which makes little sense to me. I can concede that there are a lot of unknowns in a YES vote; the truth is we simply cannot say how it will pan out. Only the 2016 elections can decide that, really... Yet whilst Scotland may not benefit from a YES, it's very unlikely to improve significantly with a "no thanks", or it simply would not be in the state that brought it to this referendum in the first place!

It follows that Scotland's chances of economic growth seem greatly increased if it has a government which is accountable to its people (now, there's a surprise). As things are, Scotland does not have that at all, to my mind. The core infrastructural deficits here do seem to be responsible for the vast pockets of poverty in this very wealthy country, simply because Westminster controls Scotland's tax revenues and is seemingly uninterested in putting those taxes back into nurturing the growth of Scottish industry, services and infrastructure.

So, whilst I am a Londoner, I feel I can proudly vote YES today for Scottish autonomy tomorrow, and I hope the rest of the country does too, so it has a fighting chance. The rest is up to Scotland, (hopefully) not Westminster. Time will tell...

Besides all that stuff, it would be absolutely glorious to see David Cameron's face, if YES were to get the majority vote :-)

September 17, 2014

What’s in a job title?

Over on Google+, Aaron Seigo in his inimitable way launched a discussion about people who call themselves community managers. In his words: “the “community manager” role that is increasingly common in the free software world is a fraud and a farce”. As you would expect when casting aspersions on people whose job is to talk to people in public, the post generated a great, and mostly constructive, discussion in the comments – I encourage you to go over there and read some of the highlights, including comments from Richard Esplin, my colleague Jan Wildeboer, Mark Shuttleworth, Michael Hall, Lenz Grimmer and other community luminaries. Well worth the read.

My humble observation here is that the community manager title is useful, but does not affect the person’s relationships with other community members.

First: what about alternative titles? Community liaison, evangelist, gardener, concierge, “cat herder”, ombudsman, Chief Community Officer, community engagement… all have been used as job titles to describe what is essentially the same role. And while I like the metaphors used for some of the titles like the gardener, I don’t think we do ourselves a service by using them. By using some terrible made-up titles, we deprive ourselves of the opportunity to let people know what we can do.

Job titles serve a number of roles in the industry: communicating your authority on a subject to people who have not worked with you (for example, in a panel or a job interview), and letting people know what you did in your job in short-hand. Now, tell me, does a “community ombudsman” rank higher than a “chief cat-herder”? Should I trust the opinion of a “Chief Community Officer” more than a “community gardener”? I can’t tell.

For better or worse, “Community manager” is widely used, and more or less understood. A community manager is someone who tries to keep existing community members happy and engaged, and grows the community by recruiting new members. The second order consequences of that can be varied: we can make our community happy by having better products, so some community managers focus a lot on technology (roadmaps, bug tracking, QA, documentation). Or you can make them happier by better communicating technology which is there – so other community managers concentrate on communication, blogging, Twitter, putting a public face on the development process. You can grow your community by recruiting new users and developers through promotion and outreach, or through business development.

While the role of a community manager is pretty well understood, it is a broad enough title to cover evangelist, product manager, marketing director, developer, release engineer and more.

Second: The job title will not matter inside your community. People in your company will give respect and authority according to who your boss is, perhaps, but people in the community will very quickly pigeon-hole you – are you doing good work and removing roadblocks, or are you a corporate mouthpiece, there to explain why unpopular decisions over which you had no control are actually good for the community? Sometimes you need to be both, but whatever you are predominantly, your community will see through it and categorize you appropriately.

What matters to me is that I am working with and in a community, working toward a vision I believe in, and enabling that community to be a nice place to work in where great things happen. Once I’m checking all those boxes, I really don’t care what my job title is, and I don’t think fellow community members and colleagues do either. My vision of community managers is that they are people who make the lives of community members (regardless of employers) a little better every day, often in ways that are invisible, and as long as you’re doing that, I don’t care what’s on your business card.


A follow-up to yesterday's Videos new for 3.14

The more astute (or Wayland-testing) amongst you will recognise mutter running a nested Wayland compositor. Yes, it means that Videos will work natively under Wayland.

Got to love indie films

It's not perfect, as I'm still seeing hangs within the Intel driver for a number of operations, but basic playback works, and the playback is actually within the same window and correctly hidden when in the overview ;)

Making of GNOME 3.14

The release of GNOME 3.14 is slowly approaching, so I stole some time from actual design work and created this little promo to show what goes into a release that probably isn’t immediately obvious (and a large portion of it doesn’t even make it in).

Watch on Youtube

I’d like to thank all the usual suspects that keep the wheels spinning, Matthias, Benjamin and Allan in particular. The crown goes to Lapo Calamandrei though, because the amount of work he’s done on Adwaita this cycle will really benefit us in the next couple of releases. Thanks everyone, 3.14 will be a great release*!

* I keep saying that every release, but you simply feel it when you’re forced to log in to your “old” GNOME session rather than jhbuild.

September 16, 2014

ACPI, kernels and contracts with firmware

ACPI is a complicated specification - the latest version is 980 pages long. But that's because it's trying to define something complicated: an entire interface for abstracting away hardware details and making it easier for an unmodified OS to boot diverse platforms.

Inevitably, though, it can't define the full behaviour of an ACPI system. It doesn't explicitly state what should happen if you violate the spec, for instance. Obviously, in a just and fair world, no systems would violate the spec. But in the grim meathook future that we actually inhabit, systems do. We lack the technology to go back in time and retroactively prevent this, and so we're forced to deal with making these systems work.

This ends up being a pain in the neck in the x86 world, but it could be much worse. Way back in 2008 I wrote something about why the Linux kernel reports itself to firmware as "Windows" but refuses to identify itself as Linux. The short version is that "Linux" doesn't actually identify the behaviour of the kernel in a meaningful way. "Linux" doesn't tell you whether the kernel can deal with buffers being passed when the spec says it should be a package. "Linux" doesn't tell you whether the OS knows how to deal with an HPET. "Linux" doesn't tell you whether the OS can reinitialise graphics hardware.

Back then I was writing from the perspective of the firmware changing its behaviour in response to the OS, but it turns out that it's also relevant from the perspective of the OS changing its behaviour in response to the firmware. Windows 8 handles backlights differently to older versions. Firmware that's intended to support Windows 8 may expect this behaviour. If the OS tells the firmware that it's compatible with Windows 8, the OS has to behave compatibly with Windows 8.

In essence, if the firmware asks for Windows 8 support and the OS says yes, the OS is forming a contract with the firmware that it will behave in a specific way. If Windows 8 allows certain spec violations, the OS must permit those violations. If Windows 8 makes certain ACPI calls in a certain order, the OS must make those calls in the same order. Any firmware bug that is triggered by the OS not behaving identically to Windows 8 must be dealt with by modifying the OS to behave like Windows 8.

This sounds horrifying, but it's actually important. The existence of well-defined[1] OS behaviours means that the industry has something to target. Vendors test their hardware against Windows, and because Windows has consistent behaviour within a version[2] the vendors know that their machines won't suddenly stop working after an update. Linux benefits from this because we know that we can make hardware work as long as we're compatible with the Windows behaviour.

That's fine for x86. But remember when I said it could be worse? What if there were a platform that Microsoft weren't targeting? A platform where Linux was the dominant OS? A platform where vendors all test their hardware against Linux and expect it to have a consistent ACPI implementation?

Our even grimmer meathook future welcomes ARM to the ACPI world.

Software development is hard, and firmware development is software development with worse compilers. Firmware is inevitably going to rely on undefined behaviour. It's going to make assumptions about ordering. It's going to mishandle some cases. And it's the operating system's job to handle that. On x86 we know that systems are tested against Windows, and so we simply implement that behaviour. On ARM, we don't have that convenient reference. We are the reference. And that means that systems will end up accidentally depending on Linux-specific behaviour. Which means that if we ever change that behaviour, those systems will break.

So far we've resisted calls for Linux to provide a contract to the firmware in the way that Windows does, simply because there's been no need to - we can just implement the same contract as Windows. How are we going to manage this on ARM? The worst case scenario is that a system is tested against, say, Linux 3.19 and works fine. We make a change in 3.21 that breaks this system, but nobody notices at the time. Another system is tested against 3.21 and works fine. A few months later somebody finally notices that 3.21 broke their system and the change gets reverted, but oh no! Reverting it breaks the other system. What do we do now? The systems aren't telling us which behaviour they expect, so we're left with the prospect of adding machine-specific quirks. This isn't scalable.

Supporting ACPI on ARM means developing a sense of discipline around ACPI development that we simply haven't had so far. If we want to avoid breaking systems we have two options:

1) Commit to never modifying the ACPI behaviour of Linux.
2) Expose an interface that indicates which well-defined ACPI behaviour a specific kernel implements, and bump it whenever an incompatible change is made. Backward compatibility paths will be required if firmware only supports an older interface.

(1) is unlikely to be practical, but (2) isn't a great deal easier. Somebody is going to need to take responsibility for tracking ACPI behaviour and incrementing the exported interface whenever it changes, and we need to know who that's going to be before any of these systems start shipping. The alternative is a sea of ARM devices that only run specific kernel versions, which is exactly the scenario that ACPI was supposed to be fixing.

[1] Defined by implementation, not defined by specification
[2] Windows may change behaviour between versions, but always adds a new _OSI string when it does so. It can then modify its behaviour depending on whether the firmware knows about later versions of Windows.
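
Footnote [2] is the mechanism behind this whole contract: firmware probes the OS through the _OSI method, one string per versioned behaviour, and the OS answers yes for every contract it actually implements. Here is a toy sketch of that negotiation (the class and method names are hypothetical, and this is written in Java purely for illustration; real _OSI handling lives in AML and the kernel's C code, though the "Windows 20xx" strings are the real identifiers):

```java
import java.util.Set;

// Illustrative sketch only: the OS advertises every versioned behaviour
// contract it implements, and firmware branches on the answers.
public class OsiHandler {
    // Each string names a well-defined, versioned behaviour contract.
    private static final Set<String> IMPLEMENTED = Set.of(
            "Windows 2009",  // Windows 7 era behaviour
            "Windows 2012",  // Windows 8: e.g. the new backlight handling
            "Windows 2013"   // Windows 8.1
    );

    // Firmware evaluates _OSI("...") for each interface it knows about.
    public boolean osi(String interfaceName) {
        return IMPLEMENTED.contains(interfaceName);
    }

    public static void main(String[] args) {
        OsiHandler os = new OsiHandler();
        // Saying yes to "Windows 2012" commits the OS to Windows 8's
        // ordering, quirks, and permitted spec violations.
        System.out.println(os.osi("Windows 2012")); // true
        // Answering "Linux" says nothing about behaviour, which is why
        // the kernel refuses to identify itself that way.
        System.out.println(os.osi("Linux"));        // false
    }
}
```

Because the answer is per-interface rather than a single OS name, a new Windows version can change behaviour without breaking older firmware. The versioned-interface proposal in option (2) above is essentially asking Linux to provide the same kind of handle for its own ACPI behaviour on ARM.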


Flowhub Kickstarter delivery

It is now a year since our NoFlo Development Environment Kickstarter got funded. Since then our team, together with several open source contributors, has been busy building the best possible user interface for Flow-Based Programming.

When we set out on this crazy adventure, we still mostly had only NoFlo and JavaScript in mind. But there is nothing inherently language-specific in FBP or our UI, and so when people started making other runtimes compatible with the protocol we embraced the idea of full-stack flow-based programming.

Here is how the runtime registration screen looks with the latest release:

Flowhub Runtime Registration

This hopefully highlights some of what can be done with Flowhub right now. I know there are several other runtimes that are not yet listed there. We should have something interesting to announce in that space soon!

Live mode

The Flowhub release made today includes several interesting features apart from giving private repository access to our Kickstarter backers. One I'm especially happy about is what we call live mode.

The live mode, initially built by Lionel Landwerlin, enables Flowhub to discover and connect to pieces of Flow-Based software running in different environments. With it you can monitor, debug, and modify applications without having to restart them!

We made a short demo video of this in action with Flowhub, Raspberry Pi and an NFC tag.

Getting started

Our backers should receive an email today with instructions on how to activate their Flowhub plans. For those who missed the Kickstarter, there should be another batch of Flowhub pre-orders available soon.

Just like with Travis and GitHub, Flowhub is free for open source development. So, everybody should be able to start using it immediately even without a plan.

If you have any questions about Flow-Based Programming or how to use Flowhub, please check out the various ways to get in touch on the NoFlo support page.

Kickstarter Backer badge

A Wayland status update

It has been a while since I last wrote about the GNOME Wayland port; time for another status update on GNOME 3.14 on Wayland.


So, what have we achieved this cycle?

  • Keyboard layouts are supported
  • Drag-and-Drop works (with limitations)
  • Touch is supported

The list of applications that work ‘natively’ (i.e. with the GTK+ Wayland backend) is looking pretty good, too.  The main straggler here is totem, where we are debugging some issues with the use of subsurfaces in clutter-gtk.

We are homing in on ‘day-to-day usable’.  I would love to say the Wayland session is “rock-solid”, but I just spent an hour trying to track down an ugly memory leak that ended my session rather quickly. So, we are not quite there yet, and more work is needed.

If you are interested in helping us complete the port and take advantage of Wayland going forward, there is an opening in the desktop team at Red Hat for a Wayland developer.

Update: After a bit of collective head-scratching, Jasper fixed the memory leak here.

Introducing Probe

We’ve all heard of the best practices regarding layouts on Android: keep your view tree as simple as possible, avoid multi-pass layouts high up in the hierarchy, etc. But the truth is, it’s pretty hard to see what’s actually going on in your view tree in each UI traversal (measure → layout → draw).

We’re well served with developer options for tracking graphics performance—debug GPU overdraw, show hardware layers updates, profile GPU rendering, and others. However, there is a big gap in terms of development tools for tracking layout traversals and figuring out how your layouts actually behave. This is why I created Probe.

Probe is a small library that allows you to intercept view method calls during Android’s layout traversals, e.g. onMeasure(), onLayout(), onDraw(), etc. Once a method call is intercepted, you can either do extra things on top of the view’s original implementation or completely override the method on the fly.

Using Probe is super simple. All you have to do is implement an Interceptor. Here’s an interceptor that completely overrides a view’s onDraw(). Calling super.onDraw() would call the view’s original implementation.

public class DrawGreen extends Interceptor {
    private final Paint mPaint;

    public DrawGreen() {
        mPaint = new Paint();
        mPaint.setColor(Color.GREEN);
        mPaint.setStyle(Paint.Style.FILL);
    }

    @Override
    public void onDraw(View view, Canvas canvas) {
        canvas.drawPaint(mPaint); // no super.onDraw(): drawing fully replaced
    }
}

Then deploy your Interceptor by inflating your layout with a Probe:

Probe probe = new Probe(this, new DrawGreen(), new Filter.ViewId(;
View root = probe.inflate(R.layout.main_activity, null);

Just to give you an idea of the kind of things you can do with Probe, I’ve already implemented a couple of built-in interceptors. OvermeasureInterceptor tints views according to the number of times they got measured in a single traversal, i.e. the equivalent of overdraw, but for measurement.

LayoutBoundsInterceptor is equivalent to Android’s “Show layout bounds” developer option. The main difference is that you can show bounds only for specific views.

Under the hood, Probe uses Google’s DexMaker to generate dynamic View proxies during layout inflation. The stock ProxyBuilder implementation was not good enough for Probe because I wanted to avoid using reflection entirely after the proxy classes were generated. So I created a specialized View proxy builder that generates proxy classes tailored for Probe’s use case.

This means Probe takes longer than your usual LayoutInflater to inflate layout resources. There’s no use of reflection after layout inflation though. Your views should perform the same. For now, Probe is meant to be a developer tool only and I don’t recommend using it in production.
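
The interception mechanism itself is a familiar proxy pattern. For plain interfaces, the JDK can do it without DexMaker; this sketch (all names made up, and it deliberately keeps reflection at call time, which is exactly the cost Probe's generated proxies avoid) counts invocations much like OvermeasureInterceptor counts measure passes:

```java
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: count method calls on an interface via a JDK
// dynamic proxy, the same interception idea Probe applies to views.
public class CountingProxy {
    public interface Drawable {
        void draw();
    }

    public static final Map<String, Integer> counts = new HashMap<>();

    public static Drawable wrap(Drawable target) {
        return (Drawable) Proxy.newProxyInstance(
                Drawable.class.getClassLoader(),
                new Class<?>[] { Drawable.class },
                (proxy, method, args) -> {
                    // Record the call, then delegate to the original object.
                    counts.merge(method.getName(), 1, Integer::sum);
                    return method.invoke(target, args);
                });
    }

    public static void main(String[] args) {
        Drawable d = wrap(() -> { /* original draw() does nothing here */ });
        d.draw();
        d.draw();
        System.out.println(counts.get("draw")); // 2
    }
}
```

JDK proxies only work on interfaces, though; View is a class, which is why Probe has to generate subclass proxies with DexMaker instead.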

The code is available on GitHub. As usual, contributions are very welcome.

September 15, 2014

A few words to end this

This has gone on quite long enough.

Last Friday I wrote a post that was as painful to write for me as it was hurtful to others.

Unfortunately I felt, and still feel, that shining some light on our issues was necessary to protect open discussion and general inclusiveness in our community. I truly hope that stirring the waters here has led us to some long-needed introspection about how things are done around here.

I have just now closed comments on the post; a few days of discussion on this is quite enough. You’ll just have to take my word that I have not doctored any of the comments and did not discriminate against any commenters, regardless of whether or not I liked what they had to say.

The reason for this follow up, and the reason for it being a separate post, is that I have to stress how painful it was for me to level accusations against some really nice people, and if my words are in any way harmful to their overall reputation, then this by itself needs to be rectified at least so much as is possible from my side.

Firstly, for anyone who does not know Paolo Borelli: he does not have a hurtful bone in his body; really, he is among the nicest people I have met in GNOME. The undertones surrounding this situation are complex; there is a lot of pressure in the community to avoid any conflicts, and it’s sad to see people get pulled into this.

Paolo is actually the one who, you could say, “mentored” me over ten years ago now; he helped me a lot to understand how things work with IRC and the politics around being a maintainer in GNOME. I hope this serves to clarify how painful it was for me to bring his name into this.

Secondly, I’ve been exchanging emails with Alberto over the weekend; he is also a really nice guy who I would not have expected to take a stance. However, something that I failed to recognize in all of this is that Alberto, being the maintainer of Planet GNOME, was under extreme pressure from various people to remove Philip from Planet GNOME at the end of May. Of course, he had to take a position in a lose-lose battle and was already caught in the crossfire.

I do not envy Alberto’s position in all of this at all, and while we may disagree on some matters, he does not deserve to be painted in the light that I painted him in.

Paolo, Alberto and Emmanuele, you deserve, and have my deepest apologies for having dragged your names into this.

That said, the fact that there was so much pressure in the community to take a public stance against any and all forms of criticism regarding OPW and the direction of GNOME and our priorities, is a problem and I’m glad we got it out in the open to discuss it.

My blog will not be a venue for further discussion on this matter for the moment, I’ve contributed enough hours to this and we are going into a beta testing phase in one month and really need to focus on the work we are doing.


summing up 61

i am trying to build a jigsaw puzzle which has no lid and is missing half of the pieces. i am unable to show you what it will be, but i can show you some of the pieces and why they matter to me. if you are building a different puzzle, it is possible that these pieces won't mean much to you, maybe they won't fit or they won't fit yet. then again, these might just be the pieces you're looking for. this is summing up, please find previous editions here.

  • magic ink: information software and the graphical interface, by bret victor. today's ubiquitous gui has its roots in doug engelbart's groundshattering research in the mid-'60s. the concepts he invented were further developed at xerox parc in the '70s, and successfully commercialized in the apple macintosh in the early '80s, whereupon they essentially froze. twenty years later, despite thousand-fold improvements along every technological dimension, the concepts behind today's interfaces are almost identical to those in the initial mac. similar stories abound. for example, a telephone that could be "dialed" with a string of digits was the hot new thing ninety years ago. today, the "phone number" is ubiquitous and entrenched, despite countless revolutions in underlying technology. culture changes much more slowly than technological capability. the lesson is that, even today, we are designing for tomorrow's technology. cultural inertia will carry today's design choices to whatever technology comes next. in a world where science can outpace science fiction, predicting future technology can be a nostradamean challenge, but the responsible designer has no choice. a successful design will outlive the world it was designed for. highly recommended
  • visualisation and cognition: drawing things together, by bruno latour. it is not perception which is at stake in this problem of visualization and cognition. new inscriptions, and new ways of perceiving them, are the results of something deeper. if you wish to go out of your way and come back heavily equipped so as to force others to go out of their ways, the main problem to solve is that of mobilization. you have to go and to come back with the "things" if your moves are not to be wasted. but the "things" you gathered and displaced have to be presentable all at once to those you want to convince and who did not go there. in sum, you have to invent objects which have the properties of being mobile but also immutable, presentable, readable and combinable with one another. highly recommended (pdf)
  • cargo cult software engineering, the issue that has fallen by the wayside while we've been debating process vs. commitment is so blatant that it may simply have been so obvious that we have overlooked it. we should not be debating process vs. commitment; we should be debating competence vs. incompetence. the real difference is not which style is chosen, but what education, training, and understanding is brought to bear on the project. rather than debating process vs. commitment, we should be looking for ways to raise the average level of developer and manager competence. that will improve our chances of success regardless of which development style we choose
  • nude portraits, photography project by trevor christensen

September 14, 2014

Back to the kingdom

And not the kingdom of Spain, unfortunately (unfortunately because I miss it, and because it's still a kingdom). In a few months (not sure about specific dates yet, probably in early 2015) I will be moving back to the United Kingdom, this time to the larger metropolis, London. Don't panic, I will still be with Red Hat; there won't be a lot of changes on that front. In the meantime I will settle back in Gran Canaria and will be flying back and forth on a monthly basis.

I must note that when I made the decision to move to the Czech Republic, my plan was: "I do not have a plan"; I was just enjoying it and trying to make the best of it, without setting deadlines for when to move back to Spain. Red Hat has been a very welcoming company in which I feel just like home, Brno has been a very welcoming city, and this is definitely a part of Europe that is worth experiencing. I've met terrific people during this period, both inside and outside Red Hat.

There was, however, a little problem.

Something altered the mid-term plans. A few months before I moved, when the decision was already made, I met someone very special with whom I now want to share my life. After 16 months of carrying on a long-distance relationship, it was due time to find a place where we could be together. After months of planning and considering options, London presented itself as the spot to make the move, as she found a pretty good job there.

While I am going to miss sharing the office on a daily basis with awesome people, I am looking forward to this new chapter in my life.

Canary Wharf at Night | London, England, Niko Trinkhaus, (CC by-nc)

I want to note that I am deeply thankful to Christian Schaller for his tremendous amount of support during my stay in Brno and for working with me in figuring ways to balance my professional and personal life. I also wish him the best of luck with his new life in Westford, I'm certainly going to miss him.

On the other hand I guess this means I'll show up at the GNOME Beers in London more often :-)

Sun 2014/Sep/14

You can try to disguise it in any way you want, but at the end of the day what we have is a boys' club that suddenly cannot invest all of its money into toys for the boys' amusement but now also needs to spend it leveling the field for the girls to be able to play too. Less money for the toys the boys like, surely that upsets them -- after all, boys were having so much fun so far and now that fun is being taken away.

The fact that the fun in this case happens to be of a socially necessary technological nature (a free desktop, a free software stack, whatever you want to call it) doesn't make this any different. If you are objecting to OPW and your argument is that it hinders the technological advance of the GNOME project, well, admit it -- isn't the fact that you enjoy technology at heart (ie, you are the one having fun) one of the main reasons you're saying this?

Male chauvinism can take a thousand forms, and many of those forms are so well hidden and ingrained into our culture that they are terribly difficult to see, especially if you're a man and not the target of it. Once we are confronted with any of these forms, this might even give us a terrible headache -- we are in front of something we didn't even know existed -- and it can take a tremendous effort to accept they're here. But, frankly, that effort is long overdue, and many of us will refuse to be around those not wanting to make it.

September 12, 2014

I’m looking at you


I write to you all today on a solemn matter, one which I fear will be forgotten and ignored if nobody starts some discussion on this.

Earlier this week, some of you may have noticed that for a very short time there was a rather angry post by Philip Van Hoof; he sounded quite frustrated and disturbed, and the title of his post basically said to please remove him from the Planet GNOME feeds.

Unfortunately this blog post was even deleted from his own blog, so there is nothing to refer to here; it was also gone so fast that I have a hunch many Planet GNOME readers did not get a chance to see what was going on.

What I want to highlight in this post is not this frustrated angry post by Philip, but rather the precursor which seems to have led us to this sad turn of events.

Let’s make things better

In late May this summer, Philip submitted the post “Let’s make things better“. This post has also been deleted from his own blog, I’m not sure for what reasons. I’m keeping the link alive here in case Philip feels inspired enough to reactivate that post (it would help for people to see this in perspective, as people who have not read that entry may suspect it contained rudeness or bad language or sweeping accusations or something, which simply was not the case at all).

Yes, a lot of you readers know about that post, and many of you would probably prefer I don’t bring it up, but the problem is that many people just don’t know what happened. The result of him deleting his post is also that people don’t get any chance to verify the false claims of indecency which were aimed at him for writing a very sensible post.

What I can say, is that the post did not use any distasteful language, he was not rude and did not single anyone out or blame anyone, he just said some really sensible things which happened to annoy a certain few members of our community.

I think the critical part, which made people react irrationally to his prose, ran something like:

Maybe if we spent a little less time on outreach, and a little bit more on development…

And went on from there; he was basically arguing that our efforts on sustaining programs such as OPW are not a part of our mission, and that maybe our attention would be better spent writing excellent software (I’ll be happy if the post re-appears so people can read it in its entirety, as I don’t have a copy anymore).

I think, given the turn of events, this recent post by Philip requesting to be removed was a final attempt to try to do something good for a community that just keeps telling him that his views are wrong, dirty, and need to be censored. He got a lot of flak from the community at large for absolutely no good reason at all; if anyone needs to be ashamed, it’s us, as a community, for failing him.

I’m looking at you

It’s generally bad form to name people in public, but the wider GNOME community needs to know what is really going on in this case, and they will not have the evidence to judge for themselves without references. That said, these are only a couple of excerpts from the circus of public shaming which followed Philip’s perfectly reasonable blog post.


Paolo Borelli makes a response to someone who quoted Philip’s blog in a positive light on a public mailing list, and he goes out of his way to mention his public opposition to Philip speaking his mind:

However you also started off by citing Philip’s blog post and honestly I found that post wrong and disturbing

Taken in context of the mail thread, it looks as though the original poster is to be considered lucky to be taken seriously in any measure, just for referring to the said blog post, which puts a little scrutiny on our GNOME identity as an outreach foundation.

Paolo, really? I would never have expected this behavior. Do you really feel it’s necessary to call Philip’s call to reevaluate our position on these matters “wrong and disturbing”?

We have a long history, you and I; I thought I knew you better than that.


Alberto Ruiz takes it a step further, again taking a public stance against Philip:

“I’ve been asked to remove your blog by several people and I’ve reached the conclusion that it would be a really bad idea because
it would set the wrong precedence and it would shift the discussion to the wrong topic (censorship yadda yadda). Questioning OPW should be allowed.

The problem with your post is that if not questioned by other people (as many have done already) it would send the wrong message to the public and prospect GSoC, OPW and general contributors. Your blog was the wrong place to question and your wording makes it clear that you have misunderstandings about how the community works.”

Alberto, I’m disappointed in you. There is no censorship on Planet GNOME, you know that, I know that, and aside from one silly “upskirt” incident in the history of Planet GNOME, this has never caused any issues.

Moreover, it is simply not your call, or anyone’s call, to decide that a long-time member of our community’s politely and concisely formed opinion be censored from Planet GNOME just because it disagrees with what some of the other members think.

It is not your call to say that people should not be questioning things on Planet GNOME, especially since that is EXACTLY where it will be heard. Have you considered that he takes this issue very seriously and has decided, as is his right, to raise the matter for open public discussion? Public discussion on the direction of GNOME is what we do in GNOME; we are the foundation and contributors, and public discussion needs to happen about critical matters in order for us, the public, to make good decisions about the future of GNOME.


Finally, Emmanuele Bassi: I know his recent post was pretty “out there”, but anyone would expect him to be frustrated after the treatment this community has given him; the public shaming and insolence this community has shown him by taking such an opposed stance against his expressing himself would be enough to drive anyone nuts.

Don’t you think, though, that his post was a last-effort attempt to be heard and be a positive influence for change in GNOME?

Do you really think this immediate response to a frustrated blog post was the correct way to defuse the situation?

Really, we should do better to protect our own. Philip obviously had a rough time in the last couple of months; his blog post was not an excuse to quickly sweep him under the rug, but a challenge to call people to action and actually openly discuss change.

If we don’t have people like Philip who are at least willing to fight for our ability to openly discuss things, then I fear the worst for this community in the long run.

Moral of the story, guys… please get a grip. I’m really not impressed with how people have responded to Philip this summer. It could have equally been any of you, and if you had something important to share, I would be equally disappointed if the community had so aggressively shouted you down.

And no, I was never a proponent of the CoC effort, but please, guys, at least try to remember the first rule: assume that others mean well.

All the best.



Today someone pointed out that since the original post at the end of May is missing, no one can form an opinion of their own. I did not have access to it at the time, but another commenter was kind enough to paste a copy:

Matthew gets that developers need good equipment.
Glade, Scaffolding (DevStudio), Scintilla & GtkSourceView, Devhelp, gnome-build and Anjuta also got it earlier.
I think with GNOME’s focus on this and a bit less on woman outreach programs; this year we could make a difference.
Luckily our code is that good that it can be reused for what is relevant today.
It’s all about what we focus on.
Can we please now go back at making software?
ps. I’ve been diving in Croatia. Trogir. It was fantastic. I have some new reserves in my mental system.
ps. Although we’re very different I have a lot of respect for your point of view, Matthew.

I'm Leaving My Job At The Wikimedia Foundation

(Music for this entry: "You Can't Be Too Careful" by Moxy Früvous; "Level Up" by Vienna Teng; "Do It Anyway" by Ben Folds Five; "Teenagers, Kick Our Butts" by Dar Williams.)

I've regretfully decided to leave the Wikimedia Foundation, and my last day will be September 30th.

I've worked at WMF since February 2011, so I've seen the Foundation grow from 70 to 214 people. It's the best job I've ever had and I've grown a lot. And my team and my bosses are tremendously supportive. In April I summarized my work achievements from the past four years and I remain proud of them. Most recently, I'm proud of co-mentoring Frances Hocutt, who's about to turn her energies to Growstuff API development (with help from your donations).

But I want to redefine myself and grow in new directions, as a maker and activist. Wikimedia has 13 years of legacy code and thousands of vocal stakeholders, and WMF has one office, in San Francisco. I'm a junior-level developer (I'm a much better software engineer than I am a coder) but don't want to move to San Francisco, where we (understandably) prefer to have junior devs onsite. And I'd like to try out what it's like to get better at making software, to have more of a blank slate and perhaps less of a public spotlight, to work face-to-face with a team here in New York City, and to exclude destructive communication from my life (yes, there's some amount of burnout on toxic people and entitlement). One of the things I admire about Wikimedia's best institutions is our willingness to reflect and reinvent when things are not working. I need to emulate that.

I remain on the board of directors of the Ada Initiative, which aims to close the gender gap in Wikimedia and other open culture/source projects. (Please donate.) And I don't see any way I could stop being a Wikimedian and pursuing the mission. You'll see me as User:Sumanah out on the wikis.

After I wrap things up at Wikimedia Foundation, I'll be privileged to spend six weeks at Hacker School, concentrating on learning how to crank out websites and fiddling with web security, and then in late November I'll be meeting other South Asian geek feminist women at AdaCamp Bangalore. Aside from that I'm open to new opportunities, especially in empowering marginalized groups via open technology.

"Level Up" by Vienna Teng. ("If you are afraid, come out.") And heck, why not, a Kira Nerys fanvid I love, set to "Shake It Out" by Florence + The Machine. ("So tonight I'm gonna cut out and then restart.")

New gedit beta release for OS X

We have been hard at work since the last announcement. Thanks to help from people testing the previous release, we found a number of issues (some not even OS X related) and managed to fix most of them. The most significant fixes are related to focus/scrolling issues in gtk+/gdk, rendering of window border shadows, and context menus. We now also ship the terminal plugin, had fixes pushed into pygobject to make multiedit not crash, and fixed the commander and multiedit plugin rendering. For people running OS X, please try out the latest release [1], which includes all these fixes.

Can’t see the video? Watch it on YouTube:


Thu 2014/Sep/11

September 11, 2014

Understanding Conservancy Through the GSoC Lens

[ A version of this post originally appeared on the Google Open Source Blog, and was cross-posted on Conservancy's blog. ]

Software Freedom Conservancy, Inc. is a 501(c)(3) non-profit charity that serves as a home to Open Source and Free Software projects. Such is easily said, but in this post I'd like to discuss what that means in practice for an Open Source and Free Software project and why such projects need a non-profit home. In short, a non-profit home makes the lives of Free Software developers easier, because they have less work to do outside of their area of focus (i.e., software development and documentation).

As the summer of 2014 ends, Google Summer of Code (GSoC) coordination work exemplifies the value a non-profit home brings its Free Software projects. GSoC is likely the largest philanthropic program in the Open Source and Free Software community today. However, one of the most difficult things for organizations seeking to benefit from such programs is the administrative overhead necessary to take full advantage of them. Google invests heavily in making it easy for organizations to participate — such as by handling the details of stipend payments to students directly. Still, to take full advantage of any philanthropic program, the benefiting organization has some work to do. For its member projects, Conservancy is the organization that gets that logistical work done.

For example, Google kindly donates $500 to the mentoring organization for every student it mentors. However, these funds need to go “somewhere”. If the funds go to an individual, there are two inherent problems. First, that individual is responsible for taxes on that income. Second, funds that belong to an organization as a whole are now in the bank account of a single project leader. Conservancy solves both those problems: as a tax-exempt charity, the mentor payments are available for organizational use under its tax exemption. Furthermore, Conservancy maintains earmarked funds for each of its projects. Thus, Conservancy keeps the mentor funds for the Free Software project, and the project leaders can later vote to make use of the funds in a manner that helps the project and Conservancy's charitable mission. Often, projects in Conservancy use their mentor funds to send developers to important conferences to speak about the project and recruit new developers and users.

Meanwhile, Google also offers to pay travel expenses for two mentors from each mentoring organization to attend the annual GSoC Mentor Summit (and, this year, it's an even bigger Reunion conference!). Conservancy handles this work on behalf of its member projects in two directions. First, for developers who don't have a credit card or otherwise are unable to pay for their own flight and receive reimbursement later, Conservancy staff book the flights on Conservancy's credit card. For the other travelers, Conservancy handles the reimbursement details. On the back end of all of this, Conservancy handles all the overhead annoyances and issues in requesting the POs from Google, invoicing for the funds, and tracking to ensure payment is made. While the Google staff is incredibly responsive and helpful on these issues, the Googlers need someone on the project's side to take care of the details. That's what Conservancy does.

GSoC coordination is just one of the many things that Conservancy does every day for its member projects. If there's anything other than software development and documentation that you can imagine a project needs, Conservancy does that job for its member projects. This includes not only mundane items such as travel coordination, but also issues as complex as trademark filings and defense, copyright licensing advice and enforcement, governance coordination and mentoring, and fundraising for the projects. Some of Conservancy's member projects have been so successful in Conservancy that they've been able to fund developer salaries — often part-time but occasionally full-time — for years on end to allow them to focus on improving the project's software for the public benefit.

Finally, if your project seeks help with regard to handling its GSoC funds and travel, or anything else mentioned on Conservancy's list of services to member projects, Conservancy is welcoming new applications for membership. Your project could join Conservancy's more than thirty other member projects and receive these wonderful services to help your community grow and focus on its core mission of building software for the public good.

Kernel Development Beginner

Yesterday Vignesh asked me if I could give some guidance to a college junior of mine who wants to get started with kernel programming. Having been a filesystem developer at Novell for a while now, I thought I could share some things I have learned. I wrote a somewhat long reply, which I am reproducing below (with minor edits for clarity) in the hope that it may be useful to someone.

Since it was originally intended to be a mail, it is a little more verbose than a blog post. My advice is based on the situation at my college when I studied there a decade ago. Things have probably changed, so the recommendations may need tweaking based on context.


The most important quality you need to cultivate if you want to do any kernel-space programming is "patience" (or persistence, if you will). Though it is a good quality for any large-scale project, it is a fundamental requirement for kernel programming. It is very easy to see progress and make an impact on userspace projects, but even simple changes to the kernel core take a lot of time to get accepted and often require multiple rewrites. But fear not: plenty of people have conquered this mountain, and it is not something to be worried about.

The starting steps will be:

1) Learn how to use git. We were (are?) not taught to use a version control system at our college, and it is such a fundamental thing. So start using git for college assignments and get the hang of it.

2) Start writing a lot of C programs and get comfortable with pointers, memory allocation and threading. You can start by implementing things like stacks, queues, trees, etc. (whatever you study in data structures) in a simple, thread-safe way. Do not focus on how you can visualize these data structures but on how you can effectively implement their functionality and thread safety. Use pthreads for threading. Do not use any library (like GLib) that gives you convenient data structures (like strings); implement each of these things on your own. (But when you are writing code for a product, always use a standard library instead of re-inventing the wheel.)

Write these C programs on Linux and compile them with gcc. In our college days we were using Turbo C on Windows, and I hope things have changed. Use a Linux distro (Fedora, Debian, openSUSE, Gentoo, etc.) exclusively; do not use Windows (at least for a while), to make yourself familiar with the sysadmin and shell-scripting side of Linux, which will come in handy.

3) Grab a (any) book on operating systems theory and read it. The dinosaur book by Silberschatz et al. is a good start.

4) Without hesitation, buy Robert Love's Linux Kernel Development. It is one of the best beginner resources; start reading it in parallel with the OS book. It is easier to read than the previous one and more practical, while the previous one is more theoretical and adds more depth. Handle (3) and (4) in parallel, without blocking on the other activities.

5) After you are done with (1) and (2), and feel sufficiently confident with C and pointers, grab the Linux kernel sources from and try to build them yourself. should help. Learn how to install and boot into the kernel you have built.

6.1) Subscribe to Kernel Newbies mailing list and read every mail, *even* if you do not understand most of it.

6.2) Watch:

6.3) Subscribe to RSS feeds.

After this, you should be able to write and send trivial fixes, such as documentation or staging fixes. Once you have done this and gotten the hang of the process, you will know how to send patches for any part of the kernel.
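As a concrete starting point for the data-structure exercise in step (2), here is a minimal sketch of a thread-safe integer stack using pthreads. The names and structure are mine, and error handling is kept deliberately minimal:

```c
#include <pthread.h>
#include <stdlib.h>

/* A minimal thread-safe integer stack: a singly linked list whose
 * head is protected by a mutex.  This is an exercise sketch, not
 * production code. */
typedef struct node {
    int value;
    struct node *next;
} node_t;

typedef struct {
    node_t *top;
    pthread_mutex_t lock;
} int_stack;

static void stack_init (int_stack *s)
{
    s->top = NULL;
    pthread_mutex_init (&s->lock, NULL);
}

static void stack_push (int_stack *s, int value)
{
    node_t *n = malloc (sizeof *n);
    n->value = value;
    pthread_mutex_lock (&s->lock);   /* protect the shared head pointer */
    n->next = s->top;
    s->top = n;
    pthread_mutex_unlock (&s->lock);
}

/* Returns 0 and stores the popped value in *out, or -1 if empty. */
static int stack_pop (int_stack *s, int *out)
{
    pthread_mutex_lock (&s->lock);
    node_t *n = s->top;
    if (!n) {
        pthread_mutex_unlock (&s->lock);
        return -1;
    }
    s->top = n->next;
    pthread_mutex_unlock (&s->lock);
    *out = n->value;
    free (n);
    return 0;
}
```

Compile with `-lpthread`. Once this works, a good follow-up exercise is replacing the mutex with a lock-free compare-and-swap loop.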

By this time, you will have found your areas of interest in the kernel (filesystems, memory management, I/O scheduling, CPU scheduling, etc.). You will then have to dig deeper into those particular areas, by:
a) subscribing to the individual mailing lists (such as fs-devel, etc.)
b) reading the bug reports for the individual component
c) finding the literature that is relevant for your subsystem (Mel Gorman's book on Linux memory management, etc.)

Three other non-technical things that I would recommend are:

1) Create a new email address and use that for all your open source activities. That way you do not miss any important updates from your friends.

2) Kernel programming will not make you big money in the short or medium term (at least in India). If your motivation is not excellence in engineering but becoming popular or rich (which is not wrong, btw), then you should focus on some other area of programming (developing apps, websites, solving user problems, making meaning, etc.).

It will often take months (or even years) before you make a significant contribution that is not merely a memory leak or bug fix. Be prepared for that. But since you have age, energy, time (once you get married and/or have kids you will understand) on your side, it is not that difficult.

Many people try kernel programming and then quit because they do not have the patience and perseverance. It may also happen that they find a more interesting technology at its nascent stage (like Distributed Computing, Artificial Intelligence, Containers, NLP, etc.). It is not wrong to quit midway :) Any time spent on kernel programming will immensely benefit you as a programmer, even when you are doing userspace programming. This holds true not just for kernel programming but for any large-code-base/systems programming (like compilers, glibc, WebKit, Chrome, Firefox, etc.)

3) Be more aware of the GSoC community in colleges around you.

All the best.

September 09, 2014

Work Underway

<p>This is my first full week working on Builder. I'm still in the process of pulling things together like wiki pages and bugzilla. This will improve in the coming weeks.</p> <ul> <li><a href="">Wiki</a></li> <li><a href="">Bugzilla</a></li> <li><a href="">Source Code</a></li> </ul> <p>Yesterday involved a bunch of work on the editor workspace's core commands. Stuff like Save, Save As, Open, closing tabs, modified state tracking and such. I also did some more work on the snippet system with regards to variables in tab-stops. Longer term, I'd love for us to be sharing more code with Gedit for the editor. A bunch of the code is written in a way that keeps that in mind.</p> <p>Icons were switched to symbolic variants. We don't have all the icons we need made yet, so some are temporary. Art requests will be put together soon. There will be lots of fun icons to make for classes, functions/methods, enums, and such. Send me an email if you'd like to work on this.</p> <p>Builder includes some fun animations building on what Gedit does for split-views. While this code doesn't reuse gedit-multi-notebook, it is very much inspired by it and I hope to merge them soon. (There were some technical difficulties here). Notice in the video below how we notify the user of the location of the drop-zone.</p> <p>The search highlighting stuff I worked on last year has been ported to the new GtkTextView::draw_layer API that landed this cycle. That means it can use the pixel cache, reducing the amount of drawing we do.</p> <p>I've been testing these animations on my old ThinkPad x300 to ensure they are smooth. They look fluid everywhere I've tested. Of course, screencapture can't quite keep up with the frame-rate.</p> <p>The snippet language was heavily based upon <a href="">snipMate</a>, which I've been using for years. 
It looks something like this:</p> <pre> snippet foo ${$filename|stripsuffix|functify|capitalize} (${1});$0 </pre> <p>In a file named "foo-item.h", this would result in "FOO_ITEM ();" with the first tab-stop between the parentheses and the final cursor position after the semicolon.</p> <p>You can even reference existing tab-stops from each other and see everything update as you type.</p> <pre> snippet foo ${1:hello world} ${2:$1|capitalize} $0 </pre> <p>Anyway, here is a short video of where things are.</p> <p>Lots to do. I hope to have some updates on how we are going to integrate lots of GNOME projects in the process.</p> <iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe> <p><a href="">Watch on Youtube</a></p>
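For illustration, here is a rough C sketch of what that filter chain might do, under my own interpretation of the filters (the real implementations live in Builder's snippet engine): stripsuffix drops the file extension, functify maps non-alphanumeric characters to underscores, and capitalize upper-cases the result.

```c
#include <ctype.h>
#include <string.h>

/* Expand "foo-item.h" into "FOO_ITEM" by applying, in order:
 * stripsuffix, functify, capitalize.  These interpretations are
 * illustrative, not Builder's actual code.  `out` must be at least
 * strlen(filename)+1 bytes. */
static void expand_filename (const char *filename, char *out)
{
    size_t i, len = strlen (filename);
    const char *dot = strrchr (filename, '.');

    if (dot)                                 /* stripsuffix: drop ".h" */
        len = (size_t) (dot - filename);

    for (i = 0; i < len; i++) {
        char c = filename[i];
        if (!isalnum ((unsigned char) c))
            c = '_';                         /* functify: '-' -> '_' */
        out[i] = (char) toupper ((unsigned char) c);  /* capitalize */
    }
    out[len] = '\0';
}
```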

GNOME apps in three dimensional space

The release of GNOME 3.14 is getting closer and closer, and I’m trying my best to have the video ready for release. The manuscript is still open for revision but is at its final stages. Voice-over should finish around next week or so. And in the meantime, I am testing a new workflow in Blender.

Alexander Larsson sent me a Chromebook Pixel a few weeks ago. What is really cool about a Pixel for me is its 2560x1700px screen. All my source material for the GNOME 3.14 release video can now be recorded in high resolution. That opens up some cool animation possibilities.

The picture below is a snapshot from a video, which as you might notice is very high resolution. This is all possible thanks to GNOME’s Hi-DPI support (which rocks!). What you can see in the background is a very green wallpaper. That’s a virtual “green screen” which I will remove from the video material afterwards with Blender (see below).

And this is where the magic happens. Because the video is such high resolution, I can import it as a texture onto an animatable plane in 3D space. With low-resolution video this would turn the video into a blurry mess, but the high-resolution video looks unaffected. It gives me some new animation possibilities. The screenshot below is a demonstration.

09-09 crazy effects

I’m definitely excited to make this release video. :)


September 08, 2014

2014-09-08: Monday

  • Mail chewage, tried to re-arrange tasks into some sort of sensible order with Andras, Kendy & Tim's help. Lunch. Reviewed and merged a rather nice string speedup which was a victim of a previous String to OUString conversion.
  • Lots of partner / customer interaction backlogged from last week. Wrote an LXF column. Sam put up a nice LibreOffice from Collabora team picture from the conference.

OpenGlucose: Again

Made progress this weekend on OpenGlucose. The GUI is still ugly but it has the info I want.

Important things on my wish-list, when I have time:

  1. Handling the units. My only device uses mg/dL, but other countries use mmol/L. Since I’m living in Canada, where they use mmol/L, I should grab a new device with those units so I can compare logs and figure out how to detect which unit a device is configured for.
  2. Make printable reports; I’ve heard doctors like that. OTOH, I shouldn’t encourage using an unofficial app for medical purposes.
  3. Support other FreeStyle devices. I’m pretty sure they all have the same kind of format so most of the parser should be reusable, I hope. I should be able to get a spare FreeStyle Freedom Lite in a few weeks.
  4. Publish an ubuntu package on a PPA.
  5. CSV export.
  6. Ideas?

I’m curious: has anyone else who owns an InsuLinx tried OpenGlucose yet? Or even tried to support another device?



Drawing Web content with OpenGL (ES 3.0) instanced rendering

This is a follow up article about my ongoing research on Web content rendering using aggressive batching and merging of draw operations, together with OpenGL (ES 3.0) instanced rendering.

In a previous post, I discussed how relying on the Web engine’s layer tree to figure out non-overlapping content (layers) of a Web page, would (theoretically) allow an OpenGL based rasterizer to ignore the order of the drawing operations. This would allow the rasterizer to group together drawing of similar geometry and submit them efficiently to the GPU using instanced rendering.

I also presented some basic examples and comparisons of this technique with Skia, a popular 2D rasterizer, giving some hints on how much we can accelerate rendering if the overhead of the OpenGL API calls is reduced by using the instanced rendering technique.

However, this idea remained to be validated for real cases and on real hardware, especially because of the complexity and pressure imposed on shader programs, which now become responsible for de-referencing the attributes of each batched geometry and rendering them correctly.

Also, there are potential API changes in the rasterizer that could make this technique impractical to implement in any existing Web engine without significant changes in the rendering process.

To try to keep this article short and focused, today I want to talk only about my latest experiments rendering some fairly complex Web elements using this technique, and leave the discussion about performance to future entries.

Everything is a rectangle

As mentioned in my previous article, almost everything in a Web page can be rendered with a rectangle primitive.

Web pages are mostly character glyphs, which today’s rasterizers normally draw by texture mapping a pre-rendered image of the glyph onto a rectangular area. Then you have boxes, images, shadows, lines, etc; which can all be drawn with a rectangle with the correct layout, transformation and/or texturing.

Primitives that are not rectangles are mostly seen in the element’s border specification, where you have borders with radius, and different styles: double, dotted, grooved, etc. There is a rich set of primitives coming from the combination of features in the borders spec alone.

There is also the Canvas 2D and SVG APIs, which are created specifically for arbitrary 2D content. The technique I’m discussing here purposely ignores these APIs and focuses on accelerating the rest.

In practice, however, these non-rectangular geometries account for just a tiny fraction of the typical rendering of a Web page, which allows me to effectively call them “exceptions”.

The approach I’m currently following assumes everything in a Web page is a rectangle, and all non-rectangular geometry is treated as exceptions and handled differently on shader code.

This means I no longer need to ignore the ordering problem since I always batch a rectangle for every single draw operation, and then render all rectangles in order. This introduces a dramatic change compared to the previous approach I discussed. Now I can (partially) implement this technique without changing the API of existing rasterizers. I say “partially” because to take full advantage of the performance gain, some API changes would be desired.
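As a rough illustration of this ordered batching, a rasterizer might append one instance record per draw operation, with a type tag telling the shader how to interpret each rectangle. All names here are illustrative, not from any real API:

```c
#include <stddef.h>

/* Sketch of the "everything is a rectangle" batch.  Every draw
 * operation appends one instance record, in order, so overlapping
 * content renders correctly; non-rectangular geometry is tagged so
 * the fragment shader can treat it as an exception. */
enum prim_type { PRIM_RECT, PRIM_GLYPH, PRIM_ROUNDED_BORDER };

typedef struct {
    float x, y, w, h;        /* layout of the rectangle */
    enum prim_type type;     /* how the shader interprets this rect */
} rect_instance;

typedef struct {
    rect_instance items[1024];
    size_t count;
} batch;

/* Append one instance; -1 means the batch is full and should be
 * flushed with a single instanced draw call. */
static int batch_push (batch *b, rect_instance inst)
{
    if (b->count >= 1024)
        return -1;
    b->items[b->count++] = inst;
    return 0;
}
```

On flush, the whole `items` array would be uploaded once and drawn with a single instanced call (e.g. glDrawArraysInstanced), instead of one API call per element.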

Drawing non-rectangular geometry using rectangles

So, how do we deal with these exceptions? Remember that we want to draw only with rectangles so that no operation could ever break our batch, if we want to take full advantage of the instanced rendering acceleration.

There are 3 ways of rendering non-rectangular geometry using rectangles:

  • 1. Using a geometry shader:

    This is the most elegant solution, and looks like it was designed for this case. But since it isn’t yet widely deployed, I will not make much emphasis on it here. But we need to follow its evolution closely.

  • 2. Degenerating rectangles:

    This basically turns a rectangle into a triangle by degenerating one of its vertices. Then, with a set of degenerated rectangles, one could draw any arbitrary geometry, as we do today with triangles.

  • 3. Drawing geometry in the fragment shader:

    This sounds like a bad idea, and it is definitely a bad idea! However, given the small and limited amount of cases that we need to consider, it can be feasible.

I’m currently experimenting with 3). You might ask why, since it looks like the worst option. The reason is that going for 2), degenerating rectangles, seems overkill at this point, lacking a deeper understanding of exactly what non-rectangular geometry we will ever need. Implementing generic rectangle degeneration for just a tiny set of cases would have been a bad initial choice and a waste of time.

So I decided to explore first the option of drawing these exceptions in the fragment shader and see how far I could go in terms of shader code complexity and performance (un)loss.

Next, I will show some examples of simple Web features rendered this way.


The setup:

While my previous screencasts were recorded on my work laptop with a powerful Haswell GPU, one of my goals back then was to focus on mobile devices. Hence, I started developing on an Arndale board I happen to have around. The details of the exact setup are out of scope for now; I will just mention that the board runs a Linaro distribution with the official Mali T604 drivers from ARM.

My Arndale board

Following is a video I assembled to show the different examples running on the Arndale board (and my laptop at the same time). This time I had to record using an external camera instead of screen-casting to avoid interfering with the performance, so please bear with my camera-in-hand video recording skills.

This video file is also available on Vimeo.

I won’t talk about performance now, since I plan to cover that in future posts. Suffice it to say that the performance is pretty good, comparable to my laptop in most of the examples. Also, there are a lot of simple, well-known optimizations that I have not done, because I’m focusing on validating the method first.

One important thing to note is that when drawing is done in a fragment shader, you cannot benefit from multi-sample anti-aliasing (MSAA), since sampling occurs at an earlier stage. Hence, you have to implement anti-aliasing yourself. In this case, I implemented a simple distance-to-edge linear anti-aliasing, and to my surprise the end result is much better than the 8-sample MSAA I was trying on my Haswell laptop before, and it is also faster.

On a related note, I have found that MSAA does not give me much when rendering character glyphs (the majority of the content), since they already come anti-aliased from FreeType2, while MSAA slows down the rendering of the entire scene on every single frame.

I continue to dump the code from this research into a personal repository on GitHub. Go take a look if you are interested in the prototyping of these experiments.

Conclusions and next steps

There is one important conclusion coming out from these experiments: The fact that the rasterizer is stateless makes it very inefficient to modify a single element in a scene.

By stateless I mean they do not keep semantic information about the elements being drawn. For example, lets say I draw a rectangle in one frame, and in the next frame I want to draw the same rectangle somewhere else on the canvas. I already have a batch with all the elements of the scene happily stored in a vertex buffer object on GPU memory, and the rectangle in question is there somewhere. If I could keep the offset where that rectangle is in the batch, I could modify its attributes without having to drop and re-submit the entire buffer.

The solution: Moving to a scene graph. Web engines already implement a scene graph but at a higher level. Here I’m talking about a scene graph in the rasterizer itself, where nodes keep the offset of their attributes in the batch (layout, transformation, color, etc); and when you modify any of these attributes, only the deltas are uploaded to the GPU, rather than the whole batch.
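A minimal sketch of that idea, with plain memory standing in for the GPU buffer: each node remembers the byte offset of its attributes in the batch, so an update touches only that slice. In real code the memcpy would be a glBufferSubData call; all names here are mine:

```c
#include <string.h>

/* Attributes of one rectangle as laid out in the batched buffer. */
typedef struct {
    float x, y, w, h;
} rect_attrs;

/* A scene-graph node: it owns its attributes and knows where they
 * live inside the batch, so changing them uploads only a delta. */
typedef struct {
    size_t offset;       /* byte offset into the batch buffer */
    rect_attrs attrs;    /* current attribute values */
} scene_node;

/* Upload only this node's slice of the batch (a glBufferSubData
 * stand-in), instead of re-submitting the entire buffer. */
static void node_sync (const scene_node *node, unsigned char *vbo)
{
    memcpy (vbo + node->offset, &node->attrs, sizeof node->attrs);
}
```

Moving a rectangle between frames then becomes: mutate `attrs`, call `node_sync`, done; the rest of the batch is untouched.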

I believe a scene graph approach has the potential to open a whole new set of opportunities for acceleration, specially for transitions and animations, and scrolling.

And that’s exciting!

Apart from this, I also want to:

  • Benchmark! set up a platform for reliable benchmarking and perf comparison with Skia/Cairo.
  • Take a subset of this technique and test it in Skia, behind current API.
  • Validate the case of drawing drop shadows and multi-step gradient backgrounds.
  • Test in other different OpenGL ES 3.0 implementations (and more devices!).

Let us not forget the fight we are fighting: Web applications must be as fast as native. I truly think we can do it.

Introducing dspec

With all the recent focus on baseline grids, keylines, and spacing markers from Android’s material design, I found myself wondering how I could make it easier to check the correctness of my Android UI implementation against the intended spec.

Wouldn’t it be nice if you could easily provide the spec values as input and have them rendered on top of your UI for comparison? Enter dspec, a super simple way to define UI specs that can be rendered on top of Android UIs.

Design specs can be defined either programmatically through a simple API or via JSON files. Specs can define various aspects of the baseline grid, keylines, and spacing markers such as visibility, offset, size, color, etc.

Baseline grid, keylines, and spacing markers in action.

Given the responsive nature of Android UIs, the keylines and spacing markers are positioned in relation to predefined reference points (e.g. left, right, vertical center, etc) instead of absolute offsets.

The JSON files are Android resources which means you can easily adapt the spec according to different form factors e.g. different specs for phones and tablets. The JSON specs provide a simple way for designers to communicate their intent in a computer-readable way.

You can integrate a DesignSpec with your custom views by drawing it in your View‘s onDraw(Canvas) method. But the simplest way to draw a spec on top of a view is to enclose it in a DesignSpecFrameLayout, which can take a designSpec XML attribute pointing to the spec resource. For example:


I can’t wait to start using dspec in some of the new UI work we’re doing in Firefox for Android now. I hope you find it useful too. The code is available on GitHub. As usual, testing and fixes are very welcome. Enjoy!

September 07, 2014

Final report: via points support landed in Gnome Maps!

This is my final report for Google Summer of Code 2014. I'm a bit late, but since the last weeks put me in a heavy review phase, I decided to write the wrap-up post only once my changes to Gnome Maps were committed.

By the way, finishing GSoC does not mean my contribution is finished; indeed, I'm planning to continue contributing to this project as I have free time.

Results achieved

Gnome Maps finally supports via points. All my code was committed to the master branch and considering that we've got a code freeze exception, I'm really happy to announce that it will be in Gnome 3.14!

Finally via points!

Turn point marker bubble appears on instruction selection
In detail:
  • It's possible to add and remove via points. Every point that composes the query has a marker that can be dragged.
  • On instruction selection, a turn point is shown on the map and a bubble with the instruction appears.
  • There is information about the estimated time to reach the final destination and the distance of the route; moreover, every instruction has info about its length.
  • There are new icons for destinations (thanks to Andreas Nilsson).
  • A spinner is shown while the route loads (thanks to Jonas Danielsson).
The main thing I dropped was via point reordering. GTK drag and drop is not as simple as it seems, so I plan to add this feature in the next release.


I really would like to thank the whole Gnome Maps team, in detail:
  • Mattias Bengtsson for mentoring me
  • Jonas Danielsson, Damiàn Nohales and Zeeshan Ali for reviewing my code
  • Andreas Nilsson for design, mockup and graphics
By the way, I could never forget the nights spent coding during GUADEC with Rishi Raj Singh and Mattias.

2014-09-07: Sunday

  • To NCC, good to catch up with all & sundry. Lunch. Out for a run with N. across the heath - fun; lazed in the garden in the sun suitably.
  • Practised quartet a little, starting to sound quite good having moved lots of the blacker notes from E. to H. David over for a fine roast dinner.

Systemd in GNOME 3.14 and beyond

Plan to get rid of ConsoleKit in GNOME 3.14

Before the start of the GNOME 3.14 cycle, Ryan Lortie announced his intention to make most GNOME modules depend on a logind-like API. The API would just implement the bits that are actually used. According to Ryan, most GNOME modules only use a selection of the logind functionality. He wanted to document exactly what we depend on and provide a minimal API. Then we could write a minimal stub implementation for e.g. FreeBSD as we’d know exactly what parts of the API we actually need. The stub would still be minimal; allow GNOME to run, but that’s it.

Not done for GNOME 3.14. Needs urgent help.

As I didn’t see the changes being made, I asked Ryan about it during GUADEC. He mentioned that he had underestimated the complexity of doing this. Further, his interests changed. Result: we still have support for ConsoleKit in 3.14, though functionality-wise the experience without logind (and similar) is probably getting worse and worse.

Systemd user sessions

In the future, I see systemd user sessions more or less replacing gnome-session. The most recent discussions on desktop-devel-list indicated that something like gnome-session would still stay around, but as those discussions were quite a while ago, this might have changed. We’re doing this because systemd in concept does what gnome-session does anyway, but better. Further, we could theoretically have one implementation across desktop environments. I see this as the next generation of the various XDG specifications.

Coming across as forcing vs “legacy”

From what I understood, KDE will also make use of user sessions, logind, etc. However, they seem to do this by calling the existing software “legacy” and putting everything into something with a new name. Then eventually things will break, of course. Within GNOME we often try to make things really clear for everyone, e.g. by using wording such as “fallback”. It makes clear where our focus is and what will likely happen. I guess KDE is more positive. It might still work, provided someone spends the effort to make it work. In any case, the messaging done by KDE seems to be very good. I don’t see any backlash, though mostly similar things are occurring in both GNOME and KDE. There are a few exceptions; e.g. the KWin maintainer explicitly tries to make the logind dependency as avoidable as possible. I find the KDE situation pretty confusing though; it feels uncoordinated.

Edit: At least the user session bit in KDE is undecided. It was talked about and seemingly agreed between two well-known KDE people, see here, but it is still undecided. The same person who clarified this requested that I clarify that I’m not from KDE. I am not from KDE.

Appearance that things work fine “as-is”

In a lot of distributions there are still a lot of hacks to make display managers, window managers and desktop environments work with the various specifications and with software written many years ago. Various software still does not understand XDG sessions. It also does NOT handle ConsoleKit. Distributions add hacks to make this work, doing the ConsoleKit handling in a wrapper.

This is then often used in discussions around logind and similar software.

“My DM/WM/DE is simple and just works. There is no problem needing to be solved.”

There are various distributions whose goal is to make everything work; no regressions are allowed. If you use such a distribution, then given enough manpower, enough hacks will be added to ensure things work in the short term. However, those temporary hacks are hacks. E.g. if some software should support XDG sessions and it does not, eventually the problem lies with that software.

Looking at various distributions, I see that those temporary hacks are still in place. An especially funny one is Mageia, where XDG session support is second class. The XDG session files are generated from different configuration files. This results in fun times whenever an XDG session file changes. Each time this happens, the blame quickly lands on the upstream software: “Why are they changing their session files, they should just never change”. While the actual problem is that the upstream files are thrown away!

The support for unmaintained software has at various points resulted in preventable bugs in maintained software, while at the same time the maintained software is considered faulty. I find this tendency to blame utterly ridiculous.

Aggressive anti-advocacy

There are many people who have some sort of dislike for systemd. In the QA session Linus had at DebConf, he mentioned he appreciates systemd, but he does NOT like the bug handling. In various other forums I see people really liking systemd, but still having doubts about its scope.

Whether you like or dislike systemd, it is important to express the reason clearly and in a non-aggressive way. Unfortunately there are a few people who express their dislike in ways that’ll result in them being ignored completely. Examples are:

  • Failure to understand that a blank “you cannot rely on it” statement is not helpful

    If a project sees functionality within systemd that is useful, you’ll not get very far by stating that the project is bad for having used it. Nor by suggesting that there is some conspiracy going on, or that the project maintainer is an idiot. That’s unfortunately often the type of “anti-systemd advocacy” which I see.

  • Failure to provide any realistic alternatives

    Suggesting that systemd-shim is an alternative to logind, for instance. It’s a fork, and it took 6 months or so to catch up with the latest systemd changes. Further, it’s a fork whose purpose is to stay compatible, headed by Ubuntu (Canonical), who are going to use systemd anyway.
    The suggestions are often so strange that I have real difficulty summarizing them.

  • Continuous repeat of non-issues

    E.g. focussing on journald, or disliking udev or dbus and mistaking that personal dislike for a reason why everyone should avoid systemd.

  • Outright false statements

    E.g. stuff like “systemd is made only for desktops” or “all server admins hate it”. If you believe this to be true, I suggest you do your homework. That, or stay out of the discussions.

  • Suggesting doom and gloom

    According to some of the anti-advocacy, there are a lot of really bad things in systemd. A few examples: my machine supposedly corrupts its journal files continuously, my machine often doesn’t boot up, etc. As that’s not the case, such claims pretty much destroy any credibility these people might have had with me.

    Anyone trying systemd for the first time will also notice that it’ll just work. Resorting to this type of anti-advocacy will just backfire, because although systemd is NOT perfect, it does work just fine.

  • Lack of understanding that systemd is providing things which are wanted

    Projects have depended on systemd because it does things which are useful. You personally might not need a piece of functionality; the other person believes they do need it. Saying “I don’t” is not communication. At least ask why the other person believes the functionality is useful!

  • Lack of understanding that systemd is focussed on adding additional wanted functionality

    Systemd often adds new functionality, and a large part of that functionality might have been available before in a different way. That seems to be what most people worry about. But new functionality is usually added in response to some demand or need, and having a project listen to everyone’s needs is awesome!

  • Personal insults

    This I find interesting. The insults are not limited to e.g. Lennart; they are aimed at anyone who switched to systemd. Trying to get people to use something other than systemd by insulting them is a very bad strategy, especially if you lack any credibility with the very people you need and whom you are insulting.

  • Failure to properly articulate the dislikes

    There are too many blanket statements which apparently have to be taken as truths. Saying that something is just bad (udev, dbus, etc.) will be ignored if the other person doesn’t see it as a problem. “That systemd uses this widely used component is one of the reasons not to use it”: such a statement is not logical.

  • The huge issues aren’t

    Binary logging by journald, for example. Anti-advocacy turns this into one of the biggest problems, yet the immediate answer from anyone is going to be that you can still run syslog and log as you do now. If you advocated this as a huge issue, anyone trying to decide on systemd will quickly see that the huge issue is not an issue at all.
    The attempt is to make people not use systemd. In practice, if the huge issues aren’t issues, the anti-advocacy actually helps adoption: the biggest so-called problems are easy to dismiss, so anyone quickly gains confidence in systemd. Not what was intended!

  • Outright trolling

    For this I usually just troll back :-P
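On the journald point: keeping a classic syslog alongside the binary journal really is a one-line setting in /etc/systemd/journald.conf (a sketch; rsyslog or syslog-ng must be running to pick the messages up):

```ini
[Journal]
# Forward every journal message to a traditional syslog daemon,
# so the text logs under /var/log keep working exactly as before
ForwardToSyslog=yes
```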

What I suggest to anyone disliking systemd is to not make entire lists of easily dismissed arguments. Keep it simple (one argument is enough, IMO) and understandable, but also in line with the people you’re talking to. Understand whom you’re talking to. Anything technical can often be sorted out or fixed, so I suggest not focussing on that.

Once the reason against is clearly explained, focus upon what can be done to change things. Here the focus should be on gaining trust and on giving an idea of what can be done (in a positive way).

Don’t ignore people who dislike systemd

Due to having seen the same arguments at least 100 times, it’s easy to start ignoring anyone who doesn’t like systemd. I noticed someone saying on Google+ that systemd should not be used because Lennart is a brat. Eventually enough is enough and it is time to tell these people to STFU. But that goes against one part of the GNOME Code of Conduct: “assume people mean well”. Not believing that people mean well has bitten me various times.

Turns out, this person is concerned that his autofs-mounted home directories won’t be supported some time in the future. So this person does follow what Lennart writes. While it appeared to me that he was just repeating the anti-advocacy bit, he has a valid concern. I still think it is unacceptable to call people names, and said so, but it is equally important to ensure things remain possible.

Can a “not supported” still be made to work?

Systemd developers are quick to point out that something is not supported. E.g. a kernel other than Linux, or a libc other than glibc. Some use cases are not supported either. But there’s an important thing to know: would the use case be impossible, or would it just take way more effort?

The type of effort is also important. For a different kernel/libc, you’d need a developer with good insight into these things. For others, it might be possible by customizing things. I assume the autofs homedirs will always be possible, just not always taken into account.

If something is not supported but can be used anyway by an “ok” sysadmin, that means it’ll be possible for most people. “Not supported by systemd” therefore does not map one-to-one to “impossible”. If you want a different libc but you’re a sysadmin and not a developer, that’s quickly seen as impossible, while another “not supported” case is actually perfectly possible.

IMO it is good that not everything is supported. Ensure that whatever is supported works really well. But at the same time, I think more focus should be on ensuring people do understand that a “not supported” does not mean “cannot work”.

My opinion on systemd as a release team member

I like *BSD. I like avoiding unneeded differences, as this eases portability.

There are some interesting tidbits I’ve learned. Apparently OpenBSD has a GSoC student working on providing alternative implementations of hostnamed, timedated, localed and logind. I don’t think that’s enough, because the result needs to be fully maintained, and I further think that a logind alternative cannot be written together with the other bits in just a summer. Whatever comes of it, I think this will make it even easier to use systemd. That is not what some of the anti-advocacy intended. Oh well.

There seems to be another round of (temporary) increase of people disliking systemd. I’m pretty sure it’ll quiet down to normal levels again once Debian has systemd in a stable release for a few months.

Eventually they’ll notice that although systemd is not perfect, it just works. Unfortunately, this all doesn’t help with the concerns I still have.

What to do with ConsoleKit?

September 06, 2014

A Life Worth Living

So here enters your protagonist. I've left a good job simply for the satisfaction in doing what I think is important.

Let's be honest. I'm terrified. This is the most exciting thing I've ever done. I guess that is what is so attractive to me, adrenaline junkie and all. Will I make it a year? Will I finish what I'm setting out to? Will I let everyone down? Will people hate me because they don't agree with what I think is important? All of these questions, playing like tapes in the back of my consciousness.

The GNOME community has always felt like home to me. Some people leave their jobs and do the start-up thing. That's fun and all, but I'd rather just write software for my friends. Nothing brings me more satisfaction than contributing to this group of people. And like Luis said so many years ago, GNOME is about people.

Learning from Failure

The GNOME University effort was simply overwhelming. We had over 150 people request to participate! That number is still staggering to me. I couldn't keep up with my goals while working a full-time job. And worse, I feel like I let people down.

The reality is that our tooling is not where I think it should be. So there were a couple of options: one was to work really hard teaching people ways of solving problems that should be made obsolete; the other was to build the tools that remove friction from the process.

So I built some really good software at MongoDB while I figured out how I wanted to solve this. It became increasingly clear that I needed all of my time to focus. So here I am, scared as hell, but with relentless ambition.


My absolute first goal for this year is to find a new way of living and funding Free Software.

However, my second goal is to help us define how we want to build software for our platform. I know that once people see their first GtkWindow displayed on the monitor, they will be as hooked as I was. And things look a whole lot better now than Gtk+ 1.x with the Blueheart theme!


Builder is my attempt at a comprehensive developer workflow, from the first "I want to make an app" to shipping tarballs and binaries to users. This might even include interactive tutorials right inside of Builder, making it possible to guide new developers through the workflow.


I'll be blogging a lot more this year, as I think communication is vital to a healthy community. Additionally, it's time to get some sort of crowdfunding effort underway; I simply won't make it without some help. I'm going to try to pick up a few small contracting gigs as well to fund this. So if you work at a company that needs some quick help solving problems, let me know. Especially in the Gtk+ or MongoDB space.

Much Love,

Redmine workflow helpers for ubinam

Redmine issue editing is quite a complex task, with a fairly complex, huge, two-column form to edit (we also have several custom fields, which make the problem even worse).

In our Trac instance customized for ubinam, after adding our custom workflow, the end of the page offered some shortcuts for common status updates (reassignment, quick-fix, starting work on an issue, and other easy tasks) which left most of the ticket fields untouched and only juggled the resolution, status, and assignee.

The status-button Redmine plugin provided a great base: after the description and primary fields of the ticket, it shows links for quick status transitions. With it, you don't have to click to edit the issue, find the status field on the form, click to open it, select the status, and click submit to save the changes; instead, you change the status with one click. Our Trac-originated workflow had one status with multiple resolutions (fixed, invalid, duplicate, wontfix, worksforme), which makes for a more complex transition, as you have to update two fields; and the assigned status usually goes along with a new assignee, so that is not that easy either.

After checking the source and learning a bit of Ruby on Rails, I managed to change the links into Bootstrap buttons on the form, and added an assignee combobox (with a nice look, using the same data as the one on the edit form, thus no additional requests) with a built-in search box, thanks to the awesome Select2 component.
Of course, some status transitions also need a reason why you switched to that status. I could have added a dropdown with a text entry, but as the form already had a nice way to scroll to the comment form, why not use it? The rest of the form is not really helpful in this context, so with a bit of jQuery I have hidden it. Now, clicking a quick-status button either changes the status and submits the form (if no comment is required, like "test released") or changes the status and jumps to the comment form to give you the option to comment. Obviously, you could still use the traditional edit button, but why would you?
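The decision behind those quick-status buttons can be sketched in a few lines of plain JavaScript. This is only an illustration: the status names and the quickStatusAction helper are made up here, and the real plugin wires this logic into the Rails-rendered form with jQuery.

```javascript
// Statuses assumed (for this sketch) to require an explanatory
// comment before the form may be submitted.
const STATUSES_NEEDING_COMMENT = new Set(["closed", "rejected"]);

// A quick-status click either submits the form right away, or first
// jumps to the comment form so the user can explain the transition.
function quickStatusAction(status) {
  return STATUSES_NEEDING_COMMENT.has(status)
    ? "jump-to-comment"
    : "submit";
}

console.log(quickStatusAction("test released")); // → submit
console.log(quickStatusAction("closed"));        // → jump-to-comment
```

In the plugin itself, the "jump-to-comment" branch scrolls to the comment box instead of submitting, and the rest of the form stays hidden.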

But a picture is worth a thousand words, so here you go, instead of three thousand words:

The overall look of a ticket with the plugin, see the quick-status buttons
A complex status transition, setting the status and the resolution, and requiring a comment
Changing the assignee is easy and fast, select the user, and click reassign...
Again, this is a heavily customized version, but if there's enough interest, I will share the plugin, or even develop a more generic one not strictly tied to our workflow. So, let me see your +1s/comments/shares: if I get 30 of those, I'll share it in a GitHub repo.

Trac 1.0.1 to Redmine migration script

After sharing my experiences of migrating from Trac 1.0.1 to Redmine, some people have asked me to share the script I used.

Do you need the script?
(Public domain image)
I would prefer sharing the migration script by getting it into the Redmine source tree. I am willing to spend some more of my spare time getting the migration script into shape (currently it's too personalized for our project to be shared), but I'm not sure how many people would use it. To find out, I need you to +1/comment/share this post to express your interest. Even if this might look like shameless self-promotion, you'll have to believe me that it is only a way to find out in what form to share the script. If I see at least 30 people interested, I will do my best to share the migration script as soon as possible and get it into the Redmine source tree. If fewer than 30 people are interested, I will still share the script with them, but as a raw script in a public GitHub repo/gist, without proper testing and review from the Redmine team.

I have already asked the Redmine devs on IRC how they would prefer (and hopefully accept) a patch. They answered that they will accept the script, preferably as a separate migration script (the current one in the tree is probably for Trac 0.12, and Trac 1.0 has changed a lot), to avoid breaking the old script for those who still use it. This is also the easiest way, as it reduces the number of Trac-version checks in the migration script.

The Redmine developers have also asked me for a sample Trac DB dump, but my company's database is not public. If you are interested in the migration script, want to help, and have a public Trac database at hand (preferably with fewer than 1000 tickets), please share it. I have looked at the Trac users page for open-source projects, but only a few of them are using Trac 1.0.1. The database dump would be helpful for testing the migration script and writing some unit tests, to make sure everything works well.

Stay tuned, in my next post I will present the personalizations I have used to ease Redmine ticket updates without using the complex edit form, and if there's enough interest, I will share the plugin I customized with the people interested.

September 05, 2014

Gsoc ending!


So after some months, the GSoC project on gnome-shell animation improvements is ending. We managed to land all the code before .91, so it will be included in the 3.14 release as we wanted.

The work accomplished included improvements to the app picker transitions, the folder transitions and the overview transitions, plus some bugfixes and clean-ups we considered important to have.

You can see the visual part of the work in the following video:

I’m very happy all the work was landed, and I want to thank my mentor Florian Müllner for his effort and patience =)

September 04, 2014

My Wikimania 2014 talks

Primarily what I did during Wikimania was chew on pens.

Discussing Fluid Lobbying at Wikimania 2014, by Sebastiaan ter Burg, under CC BY 2.0

However, I also gave some talks.

The first one was on Creative Commons 4.0, with Kat Walsh. While targeted at Wikimedians, this may be of interest to others who want to learn about CC 4.0 as well.

The second one was on Open Source Hygiene, with Stephen LaPorte. This one is again Wikimedia-specific (and I’m afraid less useful without the speaker notes) but may be of interest to open source developers more generally.

The final one was on sharing; video is below (and I’ll share the slides once I figure out how best to embed the notes, which are pretty key to understanding the slides):