24 hours a day, 7 days a week, 365 days per year...

August 22, 2017

Last Project Phase and 3.26 Features

Repair and resize is available in the recent 3.25 release and needs at least UDisks 2.7.2. Currently Ext4, XFS and FAT are supported through libblockdev, and I hope to extend this list with NTFS soon. There were some race conditions when a resized partition is detected again by the kernel, and the FAT support through libparted is still a bit shaky.
Showing the proportion of used disk space in the slider was the last appearance change for the resize dialog.
I’ve written some retrospective words about the project at the end of its wiki page – thanks to all for this learning experience!

The new format dialog did not get merged yet and will come to master after the freeze.
Not yet implemented are the mockups for the whole UI where the partition list is shown. Jimmy Scionti and others in #gnome-design worked on my changes to Allan’s original mockup, and the direction seems to be stabilizing now.

It was nice to visit GUADEC, meet people in person and discuss various things in the days after as well. There are too many areas where I would also like to do something, but right now time is short, and the list of plans for Disks is also growing…

Development News

Both 3.25.90 and 3.25.91 have been released and I think that 3.26 will be a good improvement compared to 3.24. Please report issues you experience.

August 21, 2017

Ich bin ein Berliner

Well, no, not really but maybe I'll be able to claim that at some point because I'm moving to Berlin to join Kinvolk. I'm told that I'm changing countries and companies too often but that's not true. I was at Red Hat for 5 years and at Nokia before that for 5 years as well. The decision to move out of Finland was not exactly mine.

Regarding Pelagicore

I'm not so much leaving Pelagicore as I'm leaving the automotive industry, more specifically the software side of it. While the automotive industry is changing, and mostly for the good, I realized that it still is not a place for me. Things typically move very slowly in this industry and I realized that I don't have the required patience for it. Also, C++/Qt are big here, and while a year ago I thought of them as just another language and Open Source UI framework, I no longer think so. Since you can find a lot of rants from very experienced C++ developers on why C++ is a horrible language, I won't rant about that here.

My experience with Qt hasn't been that great either. Most of the documentation I found simply assumes you use Qt Creator, and despite my years of experience with D-Bus, it took me weeks to figure out how to make a few D-Bus calls from Qt (I have to admit, though, that the calls involved complex types). While Nokia made Qt a normal Open Source project by relicensing it under LGPLv2, the Qt company recently realized that it's losing a lot of money from people using Qt in products without paying them anything, so they relicensed it under GPLv3 and a commercial license (i.e. dual-licensed). I maintained the GENIVI Development Platform for 6 months, and because of this relicensing we were unable to upgrade to a newer Qt release; over time that had been becoming a major pain point in the project. To make things even less open-sourcy, they require all contributors to sign a CLA. I believe (and I think many would agree) that CLAs are bad for Open Source. So with all those things put together, I don't think of Qt as a typical Open Source project. Feel free to disagree, but that's how I feel, and hence I'm not keen on working with Qt in the future.

Having said all that, Pelagicore is an awesome company and probably the best place to be if you're fine with C++/Qt and want to be part of next-gen automotive. It might sound like I just contradicted myself, but not everyone thinks and sees the world like me. To each his own, and all that. Also, Pelagicore is hiring!

Why leave Gothenburg?

Gothenburg is a very lovely city and I'm going to miss it a lot for sure, even though I've only been here for a year. I still love the Swedish language, which I have been learning slowly over the year. However, I've not been happy with the cost of living here, especially veterinary costs. I have an old cat with multiple conditions, so I need to visit the vet every few months. The vet charges 700 SEK just to see him, and most often they don't even bother to read through his records beforehand.

Gothenburg is also currently not the best place to find accommodation. To get a first-hand contract, you register yourself on the Boplats website and keep applying to new listings, but the typical wait time is measured in years, not months or weeks. In practice, it's usually not a problem: most people just get a second-hand contract or a room in a shared flat to start with and then look for a more permanent solution. However, add a cat into the picture and things get very difficult again.

Kinvolk comes along

Because of the reasons stated above, I had been looking for exciting opportunities outside the automotive world in some nice location. I was focused on finding jobs that either involve the Rust language or at least offer a good chance of Rust being involved. Long story short, I ultimately got in touch with the Kinvolk folks. I already knew the company's founders, Chris, Alban and Iago. They are very good at what they do and fun folks to hang out with.

While Rust is not a big part of the work at Kinvolk currently, they (especially Chris) seem very interested in it. From what I know, the main languages at Kinvolk are C and Go. I don't mind coding in C anyway, and I've been missing the times when I did kernel programming in it. I have no experience with Go, but from what I hear, it's a pretty decent language.

So after interviews etc, when Kinvolk offered me a job, I couldn't resist accepting it. Berlin is an awesome city and it's hard to say "no" to moving there.

If you're looking for a great place to work at on Linux-related tech with some very competent Open Source developers, do consider applying at Kinvolk.


Speaking of great companies to work at, I recently got in contact with some folks from Cellink. It's a biotech company based in Gothenburg whose aim is to end animal testing. They plan to achieve this very noble goal through 3D printers that print human tissue. They already have two products that they sell to pharmaceutical companies in 30 countries across the globe. While they already have the bio and hardware sides of things covered, they are now looking to expand the software side, and to do that they need good software engineers, especially ones with Open Source experience.

Here is a video of a very nice intro to their awesome work from their CEO, Erik Gatenholm.

So if you're an Open Source developer and either live in or are willing to relocate to (especially) Gothenburg, Cambridge (USA) or the Bay Area, please contact me and I can connect you to the right people. Alternatively, feel free to contact them directly. I only want to help these folks achieve their very noble cause.

The GSoC wrap-up

And so, another Summer of Code has ended. During the past three months, I was working on Mutter, GNOME's compositor. My main goal was to make Mutter, which was written as an X11 window manager more than a decade ago and recently ported to also be a Wayland compositor, start a Wayland session without requiring an X server (Xwayland) to be present. That goal was accomplished, and Mutter is now able to start without Xwayland being started. Other goals included making Wayland clients start without X11 being present, which wasn't accomplished (or even started), because after the main goal was finished there was only a week left before this evaluation, and nothing could be accomplished in that time given the complexity of the task.

So, let's sum up what needed to be done during these past three months:

- The first task I worked on was splitting the X11-specific part out of the main display object, MetaDisplay. That code was moved into a new object, MetaX11Display, which is owned by MetaDisplay and is either created or not at startup, depending on how Mutter was started. A more detailed explanation can be found at [1].

- The second task involved getting rid of the MetaScreen object, which previously could have more than one instance but nowadays has only one. MetaScreen contained both X11-specific and non-X11 parts, so it was split between MetaDisplay and MetaX11Display accordingly. A more detailed explanation can be found at [1] and [2].

- The third set of tasks (yes, a set of tasks) involved getting X11-specific functionality out of the objects managed by MetaDisplay. These include workspace management, startup notification, the stack tracker and the bell (visual/sound notification). A more detailed explanation can be found at [3] and [4].

- The final task involved getting rid of GDK usage early in the code, and moving control of the GDK X11 display to MetaX11Display itself. This also involved (sadly) getting rid of some GtkSettings usage, which was used to update some properties when they changed in GTK. A more detailed explanation can be found at [4].

- After the final task was done, a --no-x11 switch was added to the Mutter command line, which can be used to start Mutter without starting Xwayland.

- Bonus tasks, which did not really need to be done in order for Mutter to work without X11, included renaming and improving the X11 error management code (the x11 trap code), moving workspace management code from MetaDisplay (previously MetaScreen) into a new object, MetaWorkspaceManagement, and getting rid of screen size tracking in MetaDisplay (again, previously MetaScreen) in favor of using MetaMonitorManager directly. A more detailed explanation can be found at [3] and [4].

During this effort, a lot of functions were removed or replaced by functions with different names/parameters. Any compositor using the libmutter library (gnome-shell is its best-known user) cannot be compiled against my Mutter branch. However, I did create an API changelog, which can be found at [5], for those who want to make use of my work. The work itself can now be found at [6] and is rebased against current git master as of the moment of writing. Instructions to try out the compositor code can be found at [4] as well.

This summarizes my work over these past three months. I'd like to thank both of my mentors, Jonas and Carlos, for helping me when I got stuck, providing feedback whenever it was necessary, and answering any of my questions calmly and kindly, however simple or stupid they might sound. Also, I'd like to thank Google and the GNOME Foundation for accepting me as a student developer and allowing me to participate in this year's Summer of Code.


August 20, 2017

GSoC/GUADEC: Wrapping Things Up

The Google Summer of Code is slowly but surely coming to an end and it’s time to start wrapping things up for the final evaluation. The documentation cards have been officially pushed to GNOME Builder’s master branch, and the last couple of days were spent tweaking the feature and going through code reviews.

I would also like to take a quick look back at the amazing GUADEC that was held in Manchester this summer and share some of my photos. I was so glad I could attend and put faces to the people I had only met online.


GUADEC kicked off at MMU’s Birley Campus with a series of talks varying from technical updates on tools to strengthening the community and the principles it stands for.


The social calendar was packed as well, making it very easy to meet and get to know a bit all the amazing people who have been part of the community for years. The high point of the events for me was GNOME’s 20th birthday celebration held in the Museum of Science and Industry. If you haven’t yet, don’t forget to check out GNOME’s birthday page.

The last days in Manchester were spent at the nearby Shed, which provided space to discuss and work on ideas surrounding GNOME.

Hope to see you all next year!


August 19, 2017

Apple laptops have become garbage

When OSX launched, it quite quickly attracted a lot of Linux users and developers. There were three main reasons for this:

  1. Everything worked out of the box
  2. The hardware was great, even sexy
  3. It was a full Unix laptop
It is interesting, then, that none of these things really hold true any more.

Everything works out of the box

I have an Android phone. One of the things one would like to do with it is to take pictures and then transfer them to a computer. On Linux and Windows this is straightforward: you plug in the USB cable, select "share pictures" on the phone and the operating system pops up a file dialog. Very simple.

In OSX this does not work. Because Android is a competitor to the iPhone (which makes Apple most of its money nowadays), it is in Apple's business interest not to work together with competing products. They have actively and purposefully chosen to make things worse for you, the paying customer, for their own gain. Google provides a file transfer helper application, but since it is not hooked into the OS, its UX is not very good.

But let's say you personally don't care about that. Maybe you are a fully satisfied iPhone user. Very well, let's look at something completely different: external monitors. In this year's EuroPython conference introductory presentation, the speaker took the time to explicitly say that if anyone presenting had a latest-model MacBook Pro, it would not work with the venue's projectors. Things have really turned on their heads, because up to a few years ago Macs were pretty much the only laptops that always worked.

This problem is not limited to projectors. At home I have an HP monitor that has been connected to many a different video source and has worked flawlessly. The only exception is the new work laptop. Connecting it to this monitor makes the system go completely wonky. On every connection it does an impressive impersonation of the dance floor of a German gay bar, with colors flickering and things switching on and off and changing size for about ten seconds or so. Then it works. Until the screen saver kicks in and the whole cycle repeats.

If this were not enough, every now and then the terminal application crashes. It just goes completely blank and does not respond to anything. This is a fairly impressive feat for an application that reached feature stability in 1993 or thereabouts.

Great hardware

One of the things I do in my day job is mobile app development (specifically Android). This means connecting an external display, mouse and keyboard to the work laptop. Since Macs have only two USB ports, they are already fully taken and there is nowhere to plug in the development phone. The choices here are to either unplug the mouse whenever you need to deploy or debug on the device, or use a USB hub.

Using dongles for connectivity is annoying, but at least with a hub one can get things working. Except no. I have a nice USB hub that I have used for many years on many devices and that works like a charm. Except on this work computer. Connecting anything through that hub causes something to break, so the keyboard stops working every two minutes. The only solution is to unplug the hub and then replug it. Or, more specifically, not to use the hub and instead live without an external mouse. This is even more ridiculous when you consider that Apple was the main pioneer driving USB adoption back in the day.

Newer laptop models are even worse. They have only USB-C connectors and each consecutive model seems to have fewer and fewer of them. Maybe their eventual goal is to have a laptop with no external connection slots, not even a battery charger port. The machine would ship from the factory pre-charged and once the juice runs out (with up to 10 hours of battery life™) you have to throw it away and buy a new one. It would make for good business.

After the introduction of the Retina display (which is awesome), the only notable hardware innovation has been the emojibar. It took the concept of function buttons and made it worse.

Full Unix support

When OSX launched, it was a great Unix platform. It is still pretty much the same as it was then, but by modern standards it is ridiculously outdated. There is no Python 3 out of the box, and Python 2 is several versions behind the latest upstream release. Other tools are even worse: Perl is 5.18 from 2014 or so, Bash is 3.2 with a copyright year of 2007, Emacs is from 2014 and Vim from 2013. This is annoying even for people who don't use Macs but just maintain software that supports OSX. Having to maintain compatibility with these sorts of stone-age tools is not fun.

What is causing this dip in quality?

There are many things one could say about the current state of affairs. However there is already someone who has put it into words much more eloquently than any of us ever could. Take it away, Steve:

Post scriptum

Yes, this blog post was written on a MacBook, but it is one of the older models, which were still good. I personally need to maintain a piece of software that has native support for OSX, so I'm probably going to keep using it for the foreseeable future. That being said, if someone starts selling a laptop with a RISC-V processor, a retina-level display and a matte screen, I'm probably going to be first in line to get one.

August 18, 2017

My first (and definitely not the last) GUADEC!

Hey folks!

I recently attended the GNOME Users and Developers European Conference (GUADEC) 2017, held in Manchester, UK. It was my first time in the UK and my first time at a conference, and needless to say, I had a wonderful time.

Core Conference Days

The core conference days were held at the Brooks Building in Manchester Metropolitan University. I attended a lot of great talks. Some of the talks I found helpful were:

  • History of GNOME by Jonathan Blandford
  • GNOME Build Strategies and BuildStream by Tristan Van Berkom
  • On mice, touchpads and other rodents by Peter Hutterer

I gave a lightning intern talk presenting my GSoC project – adding recurring events support to GNOME Calendar. My talk was accompanied by a live demonstration too!


Unconference Days

The unconference days were held at the Shed (MMU). There I took part in various workshops and Birds of a Feather (BoF) sessions. I had a sit-down with my mentor, Georges Stavracas, and we discovered quite a lot of bugs in GNOME Calendar and fixed some (though not all) of them.

Social Events


Lots of social events and fun activities were organised. The GNOME 20th anniversary party was one of the best parties I’ve been to yet. The decorations and ambiance were one of a kind. The newcomers dinner was also very engaging. We played indoor football at the MMU sports hall, along with lots of other fun games. And let’s not forget all the local pubs we visited throughout the conference. We also had a walking tour of Manchester, which was very informative.




The Peak District hike was an altogether different experience. We went on a 9 km trek up Lose Hill. The trek was so scenic and beautiful! I sincerely thank Allan Day for organizing and leading that awesome trek.



I was also a part of the volunteering team and made lots of friends on the team. The ‘hosts’ from Codethink were a very friendly bunch of people with whom I had a great time. Kudos to Sam Thursfield who took me rock climbing to the Manchester Climbing Centre.


It was really exciting to finally meet the GNOME contributors that I had previously chatted with online. Meeting my mentor Georges was really fun. I thank him for the work he’s put in mentoring my GSoC project.

I would like to thank the GNOME Foundation for sponsoring me to attend GUADEC. It would not have been possible for me to attend the event without the financial aid provided by the GUADEC travel committee. Thank you to all the organisers, volunteers and attendees for making this year’s GUADEC successful. See you next year in Almería!


Post-GUADEC distractions

Like everybody else, I had a great time at GUADEC this year.

One of the things that made me happy is that I could convince Behdad to come, and we had a chance to finally wrap up a story that has been going on for much too long: Support for color Emoji in the GTK+ stack and in GNOME.

Behdad has been involved in the standardization process around the various formats for color glyphs in fonts since the very beginning. In 2013, he posted some prototype work for color glyph support in cairo.

This was clearly not meant for inclusion; he was looking for assistance turning it into a mergeable patch. Unfortunately, nobody picked this up until I gave it a try in 2016. But my patch was not quite right, and things stalled again.

We finally picked it up this year. I produced a better cairo patch, which we reviewed, fixed and merged during the unconference days at GUADEC. Behdad also wrote and merged the necessary changes for fontconfig, so we can have an “emoji” font family, and made pango automatically choose that font when it finds Emoji.

After GUADEC, I worked on the input side in GTK+. As a first result, it is now possible to use Control-Shift-e to select Emoji by name or code.

This is a bit of an easter egg though, and only covers a few Emoji like ❤. The full list of supported names is here.

A more prominent way to enter Emoji is clearly needed, so I set out to implement the design we have for an Emoji chooser. The result looks like this:

As you can see, it supports variation selectors for skin tones, and lets you search by name. The clickable icon has to be enabled with a show-emoji-icon property on GtkEntry, but there is a context menu item that brings up the Emoji chooser, regardless.

I am reasonably happy with it, and it will be available both in GTK+ 3.92 and in GTK+ 3.22.19. We are bending the api stability rules a little bit here, to allow the new property for enabling the icon.

Working on this dialog gave me plenty of opportunity to play with Emoji in GTK+ entries, and it became apparent that some things were not quite right. Some Emoji just did not appear, sometimes. This took me quite a while to debug, since I was hunting for a rendering issue when, in the end, it turned out to be insufficient support for variation selectors in pango.

Another issue that turned up was that pango sometimes placed the text caret in the middle of Emoji, and Backspace deleted them piecemeal, one character at a time, instead of all at once. This required fixes in pango’s implementation of the Unicode segmentation rules (TR29). Thankfully, Peng Wu had already done much of the work for this; I just fixed the remaining corner cases to handle all Emoji correctly, including skin tone variations and flags.

So, what’s still missing? I’m thinking of adding optional support for completion of Emoji names like :grin: directly in the entry, like this:

But this code still needs some refinement before it is ready to land. It also overlaps a bit with traditional input method functionality, and I am still pondering the best way to resolve that.

To try out color Emoji, you can either wait for GNOME 3.26, which will be released in September, or you can get:

  • cairo from git master
  • fontconfig from git master
  • pango 1.40.9 or .10
  • GTK+ from the gtk-3-22 branch
  • a suitable Emoji font, such as EmojiOne or Noto Color Emoji

It was fun to work on this, I hope you enjoy using it! ❤

Shipping PKCS7 signed metadata and firmware

Over the last few days I’ve merged in the PKCS7 support into fwupd as an optional feature. I’ve done this for a few reasons:

  • Some distributors of fwupd were disabling the GPG code as it’s GPLv3, and I didn’t feel comfortable saying “just use no signatures”.
  • Trusted vendors want to ship testing versions of firmware directly to users without first uploading to the LVFS.
  • Some firmware is inherently internal use only and needs to be signed using existing cryptographic hardware.
  • The gpgme code scares me.

Did you know GPGME is a library based around screen scraping the output of the gpg2 binary? When you perform an action using the libgpgme APIs you’re literally injecting a string into a pipe and waiting for it to return. You can’t even use libgcrypt (the thing that gpg2 uses) directly as it’s way too low level and doesn’t have any sane abstractions or helpers to read or write packaged data. I don’t want to learn LISP S-Expressions (yes, really) and manually deal with packing data just to do vanilla X509 crypto.

Although the LVFS instance only signs files and metadata with GPG at the moment I’ve added the missing bits into python-gnutls so it could become possible in the future. If this is accepted then I think it would be fine to support both GPG and PKCS7 on the server.

One of the temptations for X509 signing would be to get a certificate from an existing CA and then sign the firmware with that. From my point of view that would be bad, as it would cause any firmware signed by any certificate in my system trust store to be marked as valid, when really all I want to do is check for a specific certificate (or a few) that I know will be providing certified working firmware. Although I could achieve this to some degree with certificate pinning, it’s not so easy if there is a hierarchical trust relationship or anything more complicated than a simple 1:1 relationship.

So that this is possible, I’ve created an LVFS CA certificate, and also a server certificate for the specific instance I’m running on OpenShift. I’ve signed the instance certificate with the CA certificate and am creating detached signatures with an embedded (signed-by-the-CA) server certificate. This seems to work well, and means we can issue other certificates (or CRLs) if the server ever moves or the trust is compromised in some way.

So, tl;dr (this should have been at the top of the page…): if you see /etc/pki/fwupd/LVFS-CA.pem appear on your system in the next release, you can relax. Comments, especially from crypto experts, are welcome. Thanks!

August 17, 2017

20 years strong

20 years ago, Miguel and Federico created GNOME.

We had an early party during GUADEC at the Manchester Museum of Science and Industry. I organized a local one in Strasbourg yesterday with the help of Marie-France, a student who was already behind the logistics of the Sympa hackfest. It was part of a string of similar events around the world.

Marie-France baked us an awesome cake. We had a love wall, a longtime tradition at GNOME events that the locals weren’t used to, but they sure appreciated it!

Stickers that I brought back from GUADEC got dispatched and brought smiles to the faces of attendees.

GUADEC 2017 Notes

With GUADEC 2017 and the unconference days over, I wanted to share a few conference and post-conference notes with a broader audience.

First of all, as others have reported, at this year’s GUADEC, it was great to see an actual increase in numbers of attendees compared to previous years. This shows us that 20 years later, the community as a whole is still healthy and doing well.

At the conference venue.

While the Manchester weather was quite challenging, the conference was well-organized and I believe we all had a lot of fun both at the conference venue and at social events, especially at the awesome GNOME 20th Birthday Party. Kudos to all who made this happen!

At the GNOME 20th Birthday Party.

As I reported at the GNOME Foundation AGM, the docs team has been slightly more quiet recently than in the past and we would like to reverse this trend going forward.

At the GNOME 20th Birthday Party.
  • We held a shared docs and translation session for newcomers and regulars alike on the first two days of the post-GUADEC unconference. I was happy to see new faces showing up as well as to have a chance to work a bit with long-time contributors. Special thanks go to Kat for managing the docs-feedback mailing list queue, and to Andre for a much-needed docs bug triage.

    Busy working on docs and translations at the unconference venue.
  • Shaun worked on a new publishing system that could replace the current library-web scripts, which require release tarballs to get the content updated. The new platform would be a Pintail-based website with (almost) live content updates.
  • Localization-wise, there was some discussion around language packs, L10n data installation and initial-setup, spearheaded by Jens Petersen. While in gnome-getting-started-docs, we continue to replace size-heavy tutorial video files with lightweight SVG files, there is still a lot of other locale data left that we should aim to install on the user’s machine automatically when we know the user’s locale preference, though this is not quite what the user’s experience looks like nowadays. Support for that is something that I believe will require more input from PackageKit folks as well as from downstream installer developers.
  • The docs team also announced a change of leadership, with Kat passing the team leadership to me at GUADEC.
  • In other news, I announced a docs string freeze pilot that we plan to run post-GNOME 3.26.0 to allow translators more time to complete user docs translations. Details were posted to the gnome-doc-list and gnome-i18n mailing list. Depending on the community feedback we receive, we may run the program again in the next development cycle.
  • The docs team also had to cancel the planned Open Help Conference Docs Sprint due to most core members being unavailable around that time. We’ll try to find a better time for a docs team meetup some time later this year or early 2018. Let me know if you want to attend, the docs sprints are open to everybody interested in GNOME documentation, upstream or downstream.
At the closing session.

Last but not least, I’d like to say thank you to the GNOME Foundation and the Travel Committee for their continuous support, for sponsoring me again this year.

Went to COSCUP 2017

I attended COSCUP 2017, which was held in Taipei from August 5 to 6. COSCUP stands for Conference for Open Source Coders, Users and Promoters; it is an annual conference held by Taiwanese open source community participants since 2006, and a major force of Free Software movement advocacy in Taiwan.

People of different careers, ages, areas and organizations went there to talk about their experience with FOSS. In particular, some organizations, such as WoFOSS and PyLadies in Taiwan, talked about how to make the community more diverse and how to respect women in a community.

I also gave a speech at this conference about how college women can easily get started in FOSS. I shared my experiences from Outreachy and GSoC and encouraged other women to join these programs. My speech attracted many college women, and I hope my story gave them some useful information and more confidence.



This year was my second time attending GUADEC, this time around in Manchester. It was a great experience this time too, because I got to meet again with the friends I made last year, and I got to make new friends as well.

During the core days I attended a lot of great talks and got to learn cool new things. Among these, I could mention learning about technologies that I didn’t know existed, like Emeus, improving my view of what a good design should look like, and discovering more about the history of GNOME. Since I am a GSoC student again this year, I also gave a lightning talk, and I’m happy to say that this time I was slightly less nervous about talking in front of a lot of people.

During the unconference days I had a bit of time to work on my project and talked about it with my mentors, Carlos Soriano and Carlos Garnacho. Also, I got some feedback on the work I did this summer from Allan Day.

Besides that, we also had a lot of fun. For instance, the 20th anniversary party was great and the venue was very cool as well: the Museum of Science and Industry in Manchester. There was a trip to the Peak District, which is quite close to Manchester, and even if at the top it was a bit foggy, we still got some great views.

Finally, I would like to thank the GNOME Foundation for making it possible for me to attend GUADEC this year, and I’m looking forward to the next one, which will be in Almería.






Correctness in Rust: building strings

Rust tries to follow the "make illegal states unrepresentable" mantra in several ways. In this post I'll show several things related to the process of building strings, from bytes in memory, or from a file, or from char * things passed from C.

Strings in Rust

The easiest way to build a string is to do it directly at compile time:

let my_string = "Hello, world!";

In Rust, strings are UTF-8. Here, the compiler checks that our string literal is valid UTF-8. If we try to be sneaky and insert an invalid character...

let my_string = "Hello \xf0";

We get a compiler error:

error: this form of character escape may only be used with characters in the range [\x00-\x7f]
2 |     let my_string = "Hello \xf0";
  |                              ^^

Rust strings know their length, unlike C strings. They can contain a nul character in the middle, because they don't need a nul terminator at the end.

let my_string = "Hello \x00 zero";
println!("{}", my_string);

The output is what you expect:

$ ./foo | hexdump -C
00000000  48 65 6c 6c 6f 20 00 20  7a 65 72 6f 0a           |Hello . zero.|
0000000d                    ^ note the nul char here

So, to summarize, in Rust:

  • Strings are encoded in UTF-8
  • Strings know their length
  • Strings can have nul chars in the middle

This is a bit different from C:

  • Strings don't exist!

Okay, just kidding. In C:

  • A lot of software has standardized on UTF-8.
  • Strings don't know their length - a char * is a raw pointer to the beginning of the string.
  • Strings conventionally have a nul terminator, that is, a zero byte that marks the end of the string. Therefore, you can't have nul characters in the middle of strings.

Building a string from bytes

Let's say you have an array of bytes and want to make a string from them. Rust won't let you just cast the array, like C would. First you need to do UTF-8 validation. For example:

fn convert_and_print(bytes: Vec<u8>) {
    let result = String::from_utf8(bytes);
    match result {
        Ok(string) => println!("{}", string),
        Err(e) => println!("{:?}", e)
    }
}

fn main() {
    convert_and_print(vec![0x48, 0x65, 0x6c, 0x6c, 0x6f]);
    convert_and_print(vec![0x48, 0x65, 0xf0, 0x6c, 0x6c, 0x6f]);
}

In lines 10 and 11, we call convert_and_print() with different arrays of bytes; the first one is valid UTF-8, and the second one isn't.

Line 2 calls String::from_utf8(), which returns a Result, i.e. something with a success value or an error. In lines 3-5 we unpack this Result. If it's Ok, we print the converted string, which has been validated for UTF-8. Otherwise, we print the debug representation of the error.

The program prints the following:

$ ~/foo
Hello
FromUtf8Error { bytes: [72, 101, 240, 108, 108, 111], error: Utf8Error { valid_up_to: 2, error_len: Some(1) } }

Here, in the error case, the Utf8Error tells us that the bytes are valid UTF-8 up to index 2, which is the first problematic index. We also get some extra information that lets the program know whether the problematic sequence was incomplete because it was truncated at the end of the byte array, or whether it is a complete invalid sequence in the middle.
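Those fields can also be inspected programmatically. A small sketch (note that Utf8Error::error_len() was only stabilized very recently, in Rust 1.20):

```rust
fn main() {
    let bytes = vec![0x48, 0x65, 0xf0, 0x6c, 0x6c, 0x6f];

    // from_utf8() fails here; unwrap_err() gives us the FromUtf8Error
    let err = String::from_utf8(bytes).unwrap_err();
    let utf8_err = err.utf8_error();

    // The first two bytes ("He") are valid UTF-8
    assert_eq!(utf8_err.valid_up_to(), 2);

    // Some(1): a complete-but-invalid one-byte sequence in the middle;
    // None would mean a sequence truncated at the end of the input
    assert_eq!(utf8_err.error_len(), Some(1));
}
```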

And for a "just make this printable, pls" API? We can use String::from_utf8_lossy(), which replaces invalid UTF-8 sequences with U+FFFD REPLACEMENT CHARACTER:

fn convert_and_print(bytes: Vec<u8>) {
    let string = String::from_utf8_lossy(&bytes);
    println!("{}", string);
}

fn main() {
    convert_and_print(vec![0x48, 0x65, 0x6c, 0x6c, 0x6f]);
    convert_and_print(vec![0x48, 0x65, 0xf0, 0x6c, 0x6c, 0x6f]);
}

This prints the following:

$ ~/foo
Hello
He�llo

Reading from files into strings

Now, let's assume you want to read chunks of a file and put them into strings. Let's go from the low-level parts up to the high level "read a line" API.

Single bytes and single UTF-8 characters

When you open a File, you get an object that implements the Read trait. In addition to the usual "read me some bytes" method, it can also give you back an iterator over bytes, or an iterator over UTF-8 characters.

The Read.bytes() method gives you back a Bytes iterator, whose next() method returns Result<u8, io::Error>. When you ask the iterator for its next item, that Result means you'll get a byte out of it successfully, or an I/O error.

In contrast, the Read.chars() method gives you back a Chars iterator, and its next() method returns Result<char, CharsError>, not io::Error. This extended CharsError has a NotUtf8 case, which you get back when next() tries to read the next UTF-8 sequence from the file and the file has invalid data. CharsError also has a case for normal I/O errors.
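As a quick sketch of the byte iterator, using an in-memory Cursor as a stand-in for a File (both implement Read):

```rust
use std::io::{Cursor, Read};

fn main() {
    // Cursor wraps a byte vector and implements Read, just like a File
    let reader = Cursor::new(vec![0x48, 0x69]);

    // bytes() yields Result<u8, io::Error> items, one byte at a time
    let bytes: Vec<u8> = reader
        .bytes()
        .map(|b| b.expect("I/O error"))
        .collect();

    assert_eq!(bytes, vec![0x48, 0x69]); // "Hi"
}
```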

Reading lines

While you could build a UTF-8 string one character at a time, there are more efficient ways to do it.

You can create a BufReader, a buffered reader, out of anything that implements the Read trait. BufReader has a convenient read_line() method, to which you pass a mutable String and it returns a Result<usize, io::Error> with either the number of bytes read, or an error.

That method is declared in the BufRead trait, which BufReader implements. Why the separation? Because other concrete structs also implement BufRead, such as Cursor — a nice wrapper that lets you use a vector of bytes like an I/O Read or Write implementation, similar to GMemoryInputStream.

If you prefer an iterator rather than the read_line() function, BufRead also gives you a lines() method, which gives you back a Lines iterator.

In both cases, whether you use the read_line() method or the Lines iterator, the error you get back can be of ErrorKind::InvalidData, which indicates that there was an invalid UTF-8 sequence in the line to be read. It can also be a normal I/O error, of course.
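A small sketch of both APIs, again using a Cursor as a file stand-in:

```rust
use std::io::{BufRead, BufReader, Cursor};

fn main() {
    // read_line() appends into a String you provide and returns the byte count
    let mut reader = BufReader::new(Cursor::new(b"first line\nsecond line\n".to_vec()));
    let mut line = String::new();
    let n = reader.read_line(&mut line).expect("read failed");
    assert_eq!(n, 11);                 // bytes read, including the newline
    assert_eq!(line, "first line\n");

    // lines() iterates over Result<String, io::Error>, with newlines stripped
    let reader = BufReader::new(Cursor::new(b"alpha\nbeta\n".to_vec()));
    let lines: Vec<String> = reader
        .lines()
        .map(|l| l.expect("read failed"))
        .collect();
    assert_eq!(lines, vec!["alpha", "beta"]);
}
```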

Summary so far

There is no way to build a String, or a &str slice, from invalid UTF-8 data. All the methods that let you turn bytes into string-like things perform validation, and return a Result to let you know if your bytes validated correctly.

The exceptions are in the unsafe methods, like String::from_utf8_unchecked(). You should really only use them if you are absolutely sure that your bytes were validated as UTF-8 beforehand.
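A sketch of the unchecked path; this is only sound because the bytes are known-valid ASCII:

```rust
fn main() {
    let bytes = vec![0x48, 0x69]; // "Hi": valid ASCII, therefore valid UTF-8

    // unsafe: we take responsibility for the validity of the bytes,
    // and no validation is performed at runtime
    let s = unsafe { String::from_utf8_unchecked(bytes) };
    assert_eq!(s, "Hi");
}
```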

There is no way to bring in data from a file (or anything file-like that implements the Read trait) and turn it into a String without going through functions that do UTF-8 validation. There is no unsafe "read a line" API that skips validation; you would have to build one yourself, but the I/O will probably be slower than validating the data in memory anyway, so you may as well validate.

C strings and Rust

For unfortunate historical reasons, C flings around char * to mean different things. In the context of GLib, it can mean

  • A valid, nul-terminated UTF-8 sequence of bytes (a "normal string")
  • A nul-terminated file path, which has no meaningful encoding
  • A nul-terminated sequence of bytes, not validated as UTF-8.

What a particular char * means depends on which API you got it from.

Bringing a string from C to Rust

From Rust's viewpoint, getting a raw char * from C (a "*const c_char" in Rust parlance) means that it gets a pointer to a buffer of unknown length.

Now, that may not be entirely accurate:

  • You may indeed only have a pointer to a buffer of unknown length
  • You may have a pointer to a buffer, and also know its length (i.e. the offset at which the nul terminator is)

The Rust standard library provides a CStr object, which means, "I have a pointer to an array of bytes, and I know its length, and I know the last byte is a nul".

CStr provides an unsafe from_ptr() constructor which takes a raw pointer, and walks the memory to which it points until it finds a nul byte. You must give it a valid pointer, and you had better guarantee that there is a nul terminator, or CStr will walk until the end of your process' address space looking for one.

Alternatively, if you know the length of your byte array, and you know that it has a nul byte at the end, you can call CStr::from_bytes_with_nul(). You pass it a &[u8] slice; the function will check that a) the last byte in that slice is indeed a nul, and b) there are no nul bytes in the middle.

The unsafe version of this last function is CStr::from_bytes_with_nul_unchecked(): it also takes a &[u8] slice, but you must guarantee yourself that the last byte is a nul and that there are no nul bytes in the middle.
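A small sketch of the checked constructor and its failure modes:

```rust
use std::ffi::CStr;

fn main() {
    // A valid C string: nul terminator at the end, no interior nuls
    let cstr = CStr::from_bytes_with_nul(b"hello\0").expect("invalid C string");
    assert_eq!(cstr.to_bytes(), b"hello"); // to_bytes() drops the nul

    // Missing terminator: rejected
    assert!(CStr::from_bytes_with_nul(b"hello").is_err());

    // Interior nul: also rejected
    assert!(CStr::from_bytes_with_nul(b"he\0llo\0").is_err());
}
```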

I really like that the Rust documentation tells you when functions are not "instantaneous" and must instead walk arrays, for example to do validation or to look for the nul terminator as above.

Turning a CStr into a string-like

Now, the above indicates that a CStr is a nul-terminated array of bytes. We have no idea what the bytes inside look like; we just know that they don't contain any other nul bytes.

There is a CStr::to_str() method, which returns a Result<&str, Utf8Error>. It performs UTF-8 validation on the array of bytes. If the array is valid, the function just returns a slice of the validated bytes minus the nul terminator (i.e. just what you expect for a Rust string slice). Otherwise, it returns an Utf8Error with the details like we discussed before.

There is also CStr::to_string_lossy() which does the replacement of invalid UTF-8 sequences like we discussed before.
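Both conversions can be sketched together:

```rust
use std::ffi::CStr;

fn main() {
    let valid = CStr::from_bytes_with_nul(b"Hello\0").unwrap();
    // to_str() validates UTF-8 and drops the nul terminator
    assert_eq!(valid.to_str().unwrap(), "Hello");

    let invalid = CStr::from_bytes_with_nul(b"He\xf0llo\0").unwrap();
    // Invalid UTF-8 is reported as a Utf8Error...
    assert!(invalid.to_str().is_err());
    // ...or replaced with U+FFFD by the lossy conversion
    assert_eq!(invalid.to_string_lossy(), "He\u{fffd}llo");
}
```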


Summary

Strings in Rust are UTF-8 encoded, they know their length, and they can have nul bytes in the middle.

To build a string from raw bytes, you must go through functions that do UTF-8 validation and tell you if it failed. There are unsafe functions that let you skip validation, but then of course you are on your own.

The low-level functions which read data from files operate on bytes. On top of those, there are convenience functions to read validated UTF-8 characters, lines, etc. All of these tell you when there was invalid UTF-8 or an I/O error.

Rust lets you wrap a raw char * that you got from C into something that can later be validated and turned into a string. Anything that manipulates a raw pointer is unsafe; this includes the "wrap me this pointer into a C string abstraction" API, and the "build me an array of bytes from this raw pointer" API. Later, you can validate those as UTF-8 and build real Rust strings — or know if the validation failed.

Rust builds these little "corridors" through the API so that illegal states are unrepresentable.

Shooting Your Foot in Rust

I’ve had a bit of difficulty getting this post done in a decent timeframe, as I have 4 papers on the go this semester, one of which I enrolled in 2.5 weeks late and had to scramble to catch up on. There were some other things I wanted to discuss here, but time constraints are pushing those to the next post. Nevertheless, onwards.

Before I am able to use Rust in GJS effectively, I’ve needed to create FFI bindings for Rust to use to call in to C libraries, such as GLib, GIRepository, and libffi; doing this required the use of both bindgen and gtk-rs/gir.

In both cases, these tools produce unsafe Rust; this is code that does one of the following:

  1. Dereferencing a raw pointer
  2. Calling an unsafe function or method
  3. Accessing or modifying a mutable static variable
  4. Implementing an unsafe trait

For the bindings I am using, the unsafety generally comes from points 1, 2, and 3. An example of this is from the libffi bindings (generated using bindgen):
extern "C" {
    pub fn ffi_prep_cif(cif: *mut ffi_cif, abi: ffi_abi,
                        nargs: ::std::os::raw::c_uint, rtype: *mut ffi_type,
                        atypes: *mut *mut ffi_type) -> ffi_status;
}

extern "C" is a marker that tells Rust that the function is defined externally and uses the C calling convention; as such it is regarded as an unsafe function. The function prototype follows. The function takes a variety of arguments; cif: *mut ffi_cif is a raw pointer to an ffi_cif, which has the following layout:

#[repr(C)]
#[derive(Debug, Copy)]
pub struct ffi_cif {
    pub abi: ffi_abi,
    pub nargs: ::std::os::raw::c_uint,
    pub arg_types: *mut *mut ffi_type,
    pub rtype: *mut ffi_type,
    pub bytes: ::std::os::raw::c_uint,
    pub flags: ::std::os::raw::c_uint,
}

#[repr(C)] marks this struct definition as having the same order, size, and alignment as the equivalent definition in C. This is important for anything being passed over the FFI boundary between Rust and C. There are some small restrictions here: Rust tuples and tagged unions (enum) don’t exist in C and should never be passed over the FFI, and drop flags (which Rust uses to track whether a value still needs to be freed) need to be accounted for.

Raw pointers in Rust are safe to copy, move, create, and borrow. But dereferencing one is classed as unsafe. Why? Rust can’t guarantee that the data pointed to is actually valid; that is a task for the implementor.
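For example:

```rust
fn main() {
    let x: u32 = 42;

    // Creating and copying raw pointers is perfectly safe...
    let p: *const u32 = &x;
    let q = p;

    // ...but dereferencing one must happen inside an unsafe block,
    // because Rust cannot prove the pointee is valid
    let y = unsafe { *q };
    assert_eq!(y, 42);
}
```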

You will see ::std::os::raw::c_uint and other variants of c_ types popping up; these are basic data types which are guaranteed to have the same size as their C counterparts.

The other oddity here is pub arg_types: *mut *mut ffi_type. arg_types is an array of pointers: the outer *mut is a mutable raw pointer to the first element of the array, and each element is itself a mutable pointer to an ffi_type. That is, a pointer to an array of pointers.

If you’re curious, ffi_type is defined as;

#[repr(C)]
#[derive(Debug, Copy)]
pub struct _ffi_type {
    pub size: usize,
    pub alignment: ::std::os::raw::c_ushort,
    pub type_: ::std::os::raw::c_ushort,
    pub elements: *mut *mut _ffi_type,
}
pub type ffi_type = _ffi_type;

Making unsafe Safe

Rust is supposed to be a safe language, right? It still is, even when you use unsafe code. Using unsafe code doesn’t disable all the safety checks; it only enables the use of some extra features which are unsafe (see points 1-4). The caveat is that the unsafe code must be contained within an unsafe { } block to enable these features, and it is up to the programmer to validate these blocks and make sure they actually are safe. If you do end up with problems, e.g. leaked memory, then you can be sure that the problem lies within an unsafe block.

An example is from my gi-girffi wrapper (this is a translation of the functions in girepository/girffi.c):

pub fn g_callable_info_get_ffi_return_type(callable_info: &mut GICallableInfo) -> Option<ffi_type> {
    let return_type;
    unsafe {
        // make the raw pointer a mutable reference, or bail out with None if it is null
        return_type = match g_callable_info_get_return_type(callable_info).as_mut() {
            Some(r) => r,
            None => return None,
        };
    }
    // ...
}

You will see that g_callable_info_get_return_type() is the only function within an unsafe block; it is a GLib function with the following signature definition:

pub fn g_callable_info_get_return_type(info: *mut GICallableInfo) -> *mut GITypeInfo;

It takes a raw mutable pointer to a GICallableInfo (which is a type alias for GIBaseInfo) and returns a raw mutable pointer to a newly allocated GITypeInfo. Since this is a new allocation of memory, and the only reference to it is this pointer, I convert it to a Rust mutable reference using as_mut(). This conversion also checks whether the pointer is null and returns None if so; it doesn’t check that the data is valid, however…

return_type is passed on to g_type_info_get_ffi_type(), which I’ve written in a similar fashion as a safe function; it takes ownership of the value (a move) and drops it once done. We then return the result of that call wrapped in an Option.

Regarding g_type_info_get_ffi_type: within this function I’ve added a manual call to drop return_type via g_free. At some point in the near future I may wrap some things like this with a manually defined Drop trait so that the data is correctly freed when dropped.
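As a sketch of what such a Drop wrapper could look like (CAllocated is a hypothetical name, and plain C malloc/free stand in here for the GLib allocator):

```rust
use std::os::raw::c_void;

extern "C" {
    fn malloc(size: usize) -> *mut c_void;
    fn free(ptr: *mut c_void);
}

// Hypothetical wrapper: owns a C allocation and frees it when dropped
struct CAllocated(*mut c_void);

impl Drop for CAllocated {
    fn drop(&mut self) {
        // Safe to free: we are the sole owner of this pointer
        unsafe { free(self.0) }
    }
}

fn main() {
    let ptr = unsafe { malloc(16) };
    assert!(!ptr.is_null());
    let _guard = CAllocated(ptr);
    // _guard goes out of scope here, and free() is called automatically
}
```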

But that’s a lot of unsafety

Yes, it is. But there are several aspects to all this:

  • we want to restrict all unsafe features/functions/operations to be within unsafe blocks - this means that if there are issues anywhere, then we have a good idea of where to start
  • we want to create a safe API over these unsafe aspects so that we can guarantee that the use of this API is safe throughout safe Rust use
  • and we want to ensure that contracts between unsafe and safe are fulfilled so that safe Rust continues to be safe - for example wrapping a binding in a safe function which guarantees that the contract of the unsafe function is fulfilled.

Having said all that, can you shoot your foot in Rust? Absolutely! That is why unsafe code is boxed in with unsafe keywords. If something is hinky, then you know where to start looking. It is up to the implementor to honour the contracts with Rust when producing a safe function that contains unsafe code.
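A minimal sketch of this pattern, with a made-up sum_raw function standing in for a C binding:

```rust
use std::slice;

// The unsafe building block: the caller must guarantee that ptr
// points to len valid, initialized u32 values
unsafe fn sum_raw(ptr: *const u32, len: usize) -> u32 {
    slice::from_raw_parts(ptr, len).iter().sum()
}

// The safe wrapper: a slice always carries a valid pointer and length,
// so the contract of sum_raw is fulfilled by construction
fn sum(values: &[u32]) -> u32 {
    unsafe { sum_raw(values.as_ptr(), values.len()) }
}

fn main() {
    assert_eq!(sum(&[1, 2, 3, 4]), 10);
}
```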

Pain Points


Note: I’m referencing stable rustc here

The biggest headache I’ve had so far is purely with deciding how to represent a C union in Rust. It looks like bindgen produces Rust code for unions by using a Rust struct and ::std::marker::PhantomData<T>. PhantomData is used as a marker of sorts in many instances, and in this case it is used to indicate ownership of the data. I don’t really understand it very well at this point; more info is available here and here.

Untagged union support is coming to stable rustc soon and is available in nightly Rust (RFC). I hope it lands in rustc 1.19 so that I can incorporate it into the gir-ffi bindings, and so the gtk-rs project can add support to the gir->rust binding producer. I worked on enabling proper union bindings in gir and the PR was accepted - ~it is only usable in nightly Rust as of yet, and is behind a feature-gate~.

Update: I am actively working to move the gir union support to default since untagged unions are now stable in Rust as of 1.19


Bit-fields feature in a few structs within the GNOME libs, and likely in a fair few other libraries. In particular there are a few structs that have mixed data, and these are the ones which don’t really have an ideal solution yet. An example of this is:

struct _GHookList
{
  gulong	    seq_id;
  guint		    hook_size : 16;
  guint		    is_setup : 1;
  GHook		   *hooks;
  gpointer	    dummy3;
  GHookFinalizeFunc finalize_hook;
  gpointer	    dummy[2];
};

where hook_size and is_setup are the bit-fields, and as such they change the size and alignment of the struct. For now the gir to Rust binding generator replaces the first instance of a bit-field with _truncated_record_marker: c_void and comments out the following fields. The programmer is expected to acknowledge which structs are truncated and write their code accordingly. A bit of extra work.

We now need a Rust RFC to be finished off to get C-style bit-fields into Rust (I will be taking this on as soon as I get the time).

Next post?

My next post will be about what I’ve learned from this project, and it may span a few posts as I try to clarify things for myself enough to write about them. In particular I want to highlight the pros and cons of this project, and I want to try to translate what I’ve learned in Rust back to the C++ codebase - this means analysing the use of pointers, switching to unique_ptr and the ownership model it presents, using references instead of pointers, and a few other things.

So far this has been an incredibly rewarding project for me, and I really hope I can share this knowledge in a way that is adequate for others to follow.

August 16, 2017


One of the things I like the most about GNOME is the annual conference called GUADEC. You get to see in person the folks you were chatting with on IRC. As I have mentioned in my previous posts, I am an accepted GSoC 2017 student this year, and thus, like other students, I was invited to give a lightning talk about my project with Pitivi, which consists of adding a plugin system, as I mentioned in other posts. I live in Peru, so the flight is really long. Actually, I found a great deal by flying first to Madrid and then taking another flight to Manchester. From Lima to Amsterdam (the stop on my way to Madrid) it was about 12 hours, from Amsterdam to Madrid about 3 hours, and from Madrid to Manchester about 2 hours. Yup! Almost 17 hours flying. But it was worth it.

I arrived in Manchester on July 27 at 12:20 p.m., and the weather surprised me with heavy rain. It is even more surprising when you live in a city where it does not rain. Then I had to go to the Manchester Metropolitan University, where many of the GNOME contributors would be hosted. When the bus stopped at the Manchester Metropolitan University I went to the first building I saw and asked how to get to Birley Fields, our accommodation. A man gave me directions and a map. After some minutes walking I got to the office that assigns rooms to new residents. There I met Mario, a guy from GNOME who is involved in Flatpak. It was funny to talk with him in English until we realized that we could both speak Spanish. After leaving my stuff in my room, I went out for a walk and ran into David King. It was incredible, because we hadn’t seen each other in almost three years. That day I also met hadess (Bastien Nocera), who helped me get a travel adapter. This was also the day of pre-registration, in a bar called KROBAR. I joined the GStreamer folks, whom I had met before at GUADEC 2014. Some of the guys came up with the idea that the GNOME logo needs a new design; I had talked about that before on the #gnome-design channel. I also met ystreet00 (Matthew Waters), who once helped me create a GStreamer plugin with OpenGL.

alt text

GNOME stickers

The next day the talks started. At the venue, one of the first things I did was buy a GNOME t-shirt. The talks I was most interested in were Alexander Larsson’s about Flatpak and Christian Hergert’s about GNOME Builder. I was very interested in hergertme’s talk because I have taken some ideas from GNOME Builder to apply in Pitivi. I don’t always use the application because I am not totally used to it, but now I am considering using it more instead of just coding in Gedit. That day I met Lucie Charvát, a Google Summer of Code student who is working on a nice feature of GNOME Builder that I always thought was missing. Finally, I met suhas2go (Suhas), another of the GSoC guys working on Pitivi like me. It was really awesome to meet him :) That day I also found Julita, who has put a lot of effort into spreading the word about GNOME in Peru. She introduced me to Rafał Lużyński, who is from Krakow. It was a great coincidence, because I was going to visit Warsaw and Krakow after GUADEC.

David King explaining GTK dialogs to me

David King explaining GTK dialogs to me

On the second day of talks I woke up very early, at about 4:00 a.m., and so it was for the rest of the days. I took this as an opportunity to continue with my GSoC project. One of the talks I was most interested in was the one about Wayland, a project I am interested in getting involved with because it seems challenging to me and because of the EVoC. Another talk I found pretty interesting, and that I think I will investigate more, was the one titled “Fantastic Layouts And Where To Find Them”. I promise to post about it as soon as I try it, because you can create different layouts in GTK+ with a very simple syntax that seems really easy to remember and understand. That day we had the party for the 20th anniversary of GNOME. There I met Federico Mena, who told me about how they started GNOME. It was awesome to listen to him; it was like traveling to the past, and I am very grateful for the work of this man. After the event finished, I met Christian Hergert in person and talked with him about libpeas and GSettings. After talking with him I was convinced that Pitivi should use GSettings instead of ConfigParser.

With Federico Mena on the screens

Selfie with Federico Mena on the 20th anniversary party

On the last day of the conference, three Pitivi contributors (Jeff, Suhas and I) were together. I showed Jeff a project I worked on during my university vacation: a GStreamer plugin I called gstblendersrc, which I hope to continue and finish after GSoC ends. Among the talks there was an open talk I was interested in, “Microsoft loves Linux”. I have never supported Microsoft, but they are good at business. My interest came mainly because, during my flight from Madrid to Manchester, I was reading a book named Conscious Capitalism by John Mackey (CEO of Whole Foods Market), who states that capitalism gives efficiency to non-profit organizations. By the way, I recommend the GNOME Board read this book. Then the group photo took place, followed by the lightning talks. I hope to be in Almería next year, where the next GUADEC will take place. Then there was the city tour, but unfortunately I lost the group. Anyway, it was a great opportunity to hack on Pitivi.

GUADEC 2017 t-shirt

GUADEC 2017 t-shirt

The next days were the workshops. Suhas and I were working on Pitivi. Suhas learns very fast, I think; when I did my first GSoC I had more problems. We sometimes helped each other, but not frequently. On the last day of the workshops I was looking for David King, because I was thinking about working on Cheese for my thesis, but I couldn’t find him. I was with Julita and Felipe Borges, and I told them about a project I have in mind to implement in Cheese: adding stickers over the detected faces of people. Felipe started to give me more ideas, like having a library of stickers fed by the community. He also told me that it could be possible to add watermarks in Cheese, so at GNOME presentation events people could take pictures with Cheese with the stickers, and the final picture would have a watermark of the Cheese or GNOME logo. Now I need to talk about it with some professors. That was my next-to-last day in Manchester. Felipe Borges showed me some pictures of the tour he took at the Manchester United stadium, so I went there on the last day of the workshops.

I took a photo of Suhas without telling him

I took a photo of Suhas without telling him anything


August 15, 2017

Happy birthday, GNOME!

The GNOME desktop turns 20 today, and I'm so excited! Twenty years is a major milestone for any open source software project, especially a graphical desktop environment like GNOME that has to appeal to many different users. The 20th anniversary is definitely something to celebrate!

I wrote an article about "GNOME at 20: Four reasons it's still my favorite GUI." I encourage you to read it!

In summary: GNOME was a big deal to me because when GNOME first appeared, we really didn't have a free software "desktop" system. The most common graphical environments at the time included FVWM, FVWM95, and alternatives like WindowMaker or XFCE, but GNOME was the first complete, integrated "desktop" environment for Linux.

And over time, GNOME has evolved as technology has matured and Linux users demand more from their desktop than simply a system to manage files. GNOME 3 is modern yet familiar, striking that difficult balance between features and utility.

So, why do I still enjoy GNOME today?

  1. It's easy to get to work
  2. Open windows are easy to find
  3. No wasted screen space
  4. The desktop of the future

The article expands on these ideas, and provides a brief history of GNOME throughout the major milestones of GNOME 1, 2, and 3.

My first Keynote at CONECIT 2017 in Tingo Maria

Yesterday, I opened the keynote session at CONECIT 2017 with a talk of an hour and a half. I presented some experiences I have had with HPC (High Performance Computing) in universities and at ISC 2016, to show what is going on in the world regarding HPC, not only in architecture but also in programming. The video is coming soon 🙂 It was a large audience of more than 1000 students and professionals in Computer Science and all the engineering schools in Peru. People participated with the questions I asked, and they also seem very interested in the topic. I want to thank all the people who helped me backstage; this is not only my effort, this is a community effort! Thanks Leyla Marcelo and Toto Cabezas, part of GNOME Lima! ❤ Thanks so much CONECIT 2017 – Tingo Maria 😀

Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: CONECIT, CONECIT 2017, CONECIT TGI, CONECIT Tingo Maria, fedora, GNO, GNOME, High Performance, HPC, HPC in the jungle, Julita Inca, Julita Inca Chiroque, KeyNote, Selva Peru

GUADEC 2017, part 4: Manchester, United Kingdom

Manchester is a puzzling city. On one hand you have lots of abandoned buildings, with black-painted facades and big chains with locks on the doors. Some old brick buildings seem to be falling to pieces.

On the other hand you have lots of new development. Many skyscrapers are being built. Sam told me that the city was thriving, especially with the economic situation in London making Manchester an attractive alternative.

Even though we had a great party at the Museum of Science and Industry, we didn’t really get to visit it. It’s a bit sad, as Manchester is a significant place in the history of computer science, with inventions such as the Baby and residents such as Alan Turing.

Manchester prides itself on the importance of music to the city, and if you dig a bit you’ll find a plethora of bands coming from there. It is a bit unfortunate that I didn’t get to be more exposed to the local musical heritage.

Of course this is England, you’ll be reminded of it by its black cabs and double deckers, but also by a few silly things. As another French dude said a while back:

Go and boil your bottoms, you sons of a silly person! I blow my nose at you, so-called “Arthur King,” you and all your silly English K-nig-hts.

Some places have two taps! Cold and hot water are separated. All power outlets also have individual switches. Foreigners can be deceived into thinking their laptop or phone is charging when it’s not.

Food was good, albeit expensive, even though I didn’t have that many “typical” dishes. The only ones that come to mind, apart from the awesome full English breakfasts at the venue, are the black pudding with lentils I had on the first day and the fish and chips I had with some friends before I had to leave for the airport.

Wrapping up GUADEC 2017

I’m now back home after attending GUADEC 2017 in Manchester, followed by a week of half-vacation traveling around the Netherlands and visiting old friends. It was a fantastic opportunity to meet others in the GNOME community once again; having gone to a few hackfests and conferences in the past two years, I now recognize many friendly faces that I am happy to get a chance to see from time to time.


Here’s what I attended during the conference; I’ll link to the videos and provide a sentence or two of commentary.

  • The GNOME Way, Allan Day (video) — for me, one of the two highlights of the conference, a great statement of what makes GNOME tick, and a great opener for its 20th birthday.
  • Limited Connectivity, Endless Apps, Joaquim Rocha (video) — although already familiar to me, it was a nice overview of the product that I work on at Endless.
  • State of the Builder, Christian Hergert — one of these days I will start using Builder as soon as I can find some time to get it to learn my preferred keybindings.
  • The Battle over Our Technology, Karen Sandler (video) — the second of the two conference highlights, a timely reminder of why free software is important.
  • Seamless Integration to Hack Desktop Applications, Simon Schampijer (video) — my coworker and fellow-person-whose-last-name-gets-pronounced-wrong Simon showed off one of the most empowering features that I have ever seen.
  • Progressive Web Apps: an Opportunity for GNOME, Stephen Pearce (video) — I have been reading a lot about progressive web apps recently and am both excited and skeptical. (Stephen also made a webkit game in GJS in just one day.)
  • Different Ways of Outreaching Newcomers, Julita Inca (video) — it was fantastic to see this legendary GNOME mentor and organizer speak in person.
  • Lightning talks by the GSoC and Outreachy interns (video) — I always admire the intern sessions because I would have soiled myself had I had to speak to a 300-person conference room back when I was an intern. Hopefully next year the interns will have a session earlier in the day so their audience is fresher though! Also a shout out to my coworkers Kate Lasalle-Klein and Robin Tafel who are not interns but also gave a lightning talk during this session about working together with the GNOME design team. (If you’re looking for it in the other lightning talk video, you’re not finding it because it was in this session.)
  • Fantastic Layouts and Where to Find Them, Martin Abente Lahaye (video) — a great introduction to Emeus, the constraint layout manager, with a surprise appearance from an airplane hack.
  • Replacing C Library Code with Rust: What I Learned, Federico Mena Quintero (slides) — I am mentoring a Summer of Code student, Luke, who is doing some investigation into converting parts of GJS into Rust, and this talk really helped me understand some things from his work that I’ve been struggling with.
  • Continuous: Past, Present, Future, Emmanuele Bassi (video) — this talk made me want to help out on that lonely, lonely build sheriff mountain.
  • A Brief History of GNOME, Jonathan Blandford (video) — I had seen it before, but an hour well spent.
  • GNOME to 2020 and Beyond, Neil McGovern (video) — by turns optimistic and pessimistic, the new GNOME executive director talked about the future.
  • What’s Coverity Static Analysis Ever Done for Us?, Philip Withnall (video) — my coworker and fellow-person-with-an-excellent-first-name Philip talked about static analysis, which I cannot wait to start using on GJS.
  • Building a Secure Desktop with GNOME Technologies, Matthew Garrett (video) — the excellent “raise your hand if your system is bugging you to install an update right now” line made this talk for me.
  • GNOME Build Strategies and BuildStream, Tristan Van Berkom (video) — not quite what I expected, but luckily I got a better idea of what BuildStream does from the unconference session.
  • Bringing GNOME Home to Ubuntu, Tim Lunn (video) — it was a pleasure to meet Tim in person, who did the SpiderMonkey porting work on GJS before me, and whose commits I have often referred to.
  • GitLab, Carlos Soriano — I’m really excited to kick Bugzilla out of my workflow as soon as I can.
  • Microsoft ❤️ Linux, Julian Atanasoae — If nothing else, Julian is brave to get up in front of this audience and sing the praises of Microsoft. I am skeptically optimistic: sure, Microsoft is doing some great things for open source (I even had a slide about some Microsoft tools in my talk), but on the other hand let’s not forget that they were still trying to undermine and destroy our community not too long ago.
  • How to Get Better Mileage out of Glade, Juan Pablo Ugarte (video) — Slides created in Glade, what more could you ask for?
  • Lightning talks (video) — The highlight for me of the second lightning talk session was Sri’s self-described “rant.” It was too bad that a few talks in the lineup didn’t get any time.

There were also so many talks programmed opposite the ones I decided to go see. It seemed like that happened more often than last year! (Either my interests have broadened, or the quality of the talks is increasing…) I will be watching many videos in the coming days, now that they have been released, but I was especially sad not to see the two talks on animations by Jakub Steiner and Tobias Bernard because they were opposite (and immediately after, respectively) my own talk!

And the video of my talk is now published as well, although like many people I find it excruciating to watch myself on video; the rest of you can watch it, I’ll watch this instead.


The unconference part of the conference (where people propose topics, get together with like-minded attendees in a conference room, and talk or work together) was held in a nice workspace. I had one session on GJS on Monday where we first discussed how the Ubuntu desktop team (I got to meet Ken VanDine, Iain Lane, and Chris Coulson, as well as connect with Tim Lunn again) was going to deploy Mozilla’s security updates to Javascript (and therefore GJS and GNOME Shell) in Ubuntu’s long-term support releases. Then Stephen Pearce joined and suggested a GJS working group in order to make development more visible.

Later I joined the GNOME Shell performance session where I rebased Christian Hergert’s GJS profiler code and showed Jonas Adahl how it worked; we profiled opening and closing the overview.

On the second day I joined the Continuous and Builder sessions. Builder was looking good on a giant TV set!

On the third day I attended the BuildStream session and I’m quite excited about trying it out for a development workflow while hacking on a component of a Flatpak runtime, which is a shaky task at best using the current Flatpak tools.

In the afternoon I technically “had another GJS session” though it’s my experience on the third unconference day that all the sessions melt into one. This time many people went on a hike in the afternoon. I was very sad to have missed it, since I love hiking, but I was having an allergy attack at the time which made it difficult for me to be outside. However, I spent the afternoon working on the GObject class stuff for GJS instead, and chatting with people.

Social events

This GUADEC conference had the best social event on Saturday night: a GNOME 20th birthday party, complete with cake, old farts Q&A panel, trivia quiz, raffle, and a blast from the past dance floor with music from back when GNOME started. There was even an afterparty way into the small hours … which I did not go to because my talk was in the first slot on Sunday morning!

Apart from that there were many opportunities to connect with people, from day 1 through 6. One thing I like about GUADEC is that the days are not stuffed full of talks and there is plenty of time to have casual conversations with people. One “hallway session” that I had, for example, was a conversation with Marco Barisione, talking about the reverse debuggers RR and UndoDB. Another was with Sri Ramkrishna, following on from his lightning “rant” on Sunday, about what kind of help beginning app writers are looking for, whether they can get it from tutorials or Stack Overflow, and what kinds of things get in their way.


Many thanks to the GNOME Foundation for sponsoring my attendance. I’m really glad to have been able to join in this year.


August 14, 2017

GIMP Motion: part 2 — complex animations

This is the second video presenting GIMP Motion, our plug-in to create animations of professional quality in GIMP. As previously written, the code is very much work-in-progress, has its share of bugs and issues, and I am regularly revising some of the concepts as we experiment with them on ZeMarmot. You are still welcome to play with the code, available in GIMP’s official source code repository under the same Free Software license (GPL v3 or later). Hopefully it will at some point, not too far away, be released with GIMP itself, once I deem it stable and good enough. The more funding we get (see the end of the article for our crowdfunding links), the faster that will happen.

Whereas the previous video introduced “simple animations”, which are mostly animations where each layer is used as a different final frame, this second video shows you how the plug-in handles animations where every frame can be composited from any number of layers: for instance a single layer for the background used throughout the whole animation, separate layers for a character, other layers for a second character, and layers for other effects or objects (for instance the snow tracks in the example at the end of the video).

It also shows how we can “play” with the camera, for instance in a full cut larger than the scene, where you “pan” while following the characters. In the end, we should be able to animate any effect (GEGL operations) as well: blurring the background or foreground, adding light effects (lens flares for instance), or just artistic effects, even motion graphics…
All this is still very much work-in-progress.

One of the most difficult parts is finding how to get the smoothest experience. Rendering dozens of frames, each composited from several high-resolution images and complex mathematical effects, takes time; yet one does not want to freeze the GUI, and the animation preview needs to be as smooth as possible as well. These are topics I have worked on and experimented with a lot, because they are among the most painful aspects of working with Blender, where we constantly had to render pieces of animation to see the real thing (the preview is terribly slow, and we never found the right settings even with a good graphics card, 32GB of memory, a good processor, and SSD hard drives).
One of the results of my work in GIMP core should be to make libgimp finally thread-safe (my patch is still awaiting review, yet it already works very well for us, as you can see if you check out our branch). This should be a very good step for all plug-ins, not just for animation.
It allowed me to work more easily with multi-threading in my plug-in, and I am pretty happy with the result so far (though I still plan a lot more work).

Another big area of work is a GUI that is as easy to use, yet as powerful, as possible. We have so many issues with other software where the powerful options are so complicated to use that we end up using them badly. That’s obviously a very difficult problem (which is why it is handled so badly in so much software; I am not saying that’s because they are badly done: the solution is just never as easy as one might think at first), and hopefully we will get something not too bad in the end. Aryeom is constantly reminding me of, and complaining about, the bugs and GUI or experience issues in my software, so I have no choice but to do my best. 😉


You’ll note also that we work on very short animations. We actually only draw a single cut at a time in a given XCF file. From GIMP Motion, we will then export images and work on cut/scene transitions and other forms of compositing in other software (usually the Blender VSE, though we have heard a lot of good things about Kdenlive lately, so we may give it a shot again; these 2 introduction videos were actually made in Kdenlive as a test). Since 2 cuts are, by definition, totally different viewpoints, there is not much point in drawing them in the same file anyway. The other reason is that GIMP is not made to work with thousands of high-definition layers. Even though GEGL in theory allows GIMP to work on images bigger than memory, this may not be the best idea in practice, in particular if you want fast renders (some people tried and were not too happy, so I tested for debugging’s sake: it is definitely not workable day-to-day). As long as GIMP core is made to work on images, this could be argued to be acceptable. Maybe if animations were to make it to core one day, we could start thinking about how to be smarter with memory usage.
On the other hand, cuts are usually just a few seconds long, which keeps the data for a single cut pretty reasonable in memory. Also note that working on and drawing animation films one cut at a time is a pretty standard workflow and makes complete sense (this is of course a whole different deal with live-action or 3D animation; I am really discussing the pure drawn-animation style here), so this is actually not that big of a deal for the time being.

To conclude, maybe you are wondering a bit about the term “cel animation”. Someday I guess I should explain more what cel animation, also often simply called “traditional animation”, was, and how our workflow is inspired by it. For now, just check Wikipedia, and you’ll already see how well animation cels fit the concept of “layers” in GIMP. 🙂

Have a fun viewing!

ZeMarmot team

Reminder: my Free Software coding can be supported in
USD on Patreon or in EUR on Tipeee. The more we get
funding, the faster we will be able to have animation
capabilities in GIMP, along with a lot of other nice
features I work on in the same time. :-)

It’s always fun to be at GUADEC

As usual I would like to thank the GNOME Foundation for sponsoring my trip to Manchester to enjoy such a wonderful conference, and for giving me the opportunity to present to the community the work I have been doing on Glade’s UI for the past few months.

You can find the slides at

Obviously they are made with Glade, something which might seem odd, but is it really?

modern-ui branch


So besides a PDF you will find a tarball with all the source files. By the way, you can use glade-previewer with PgUp and PgDown to switch pages:

$ glade-previewer -f --css talk.css --slideshow


And some random pictures from GUADEC…

Progress Report July 31st - August 13th

Some may notice that there's a week missing since my last progress report. That week was GSoC second evaluation week, and my mentors were notified in advance why I was absent during that week.

Since I last posted, I have completed my primary goal: making mutter start without Xwayland present. Note, however, that this is just what it says, "make mutter *start* without X11 present", and nothing else. Window management is a bit of a mess, and fixing that was set as my second goal, in case the primary one was accomplished much earlier in the SoC. So yes, with my latest work you can start, but not use, mutter without X11.

So, what remained to be done since the last time I posted? Not much. Workspace management had a lot of X11 specifics, such as getting the workspace layout, names and so on from X11 atoms. That was easily fixed by splitting the X11-specific code into meta-x11-display.c and making it run when or after MetaX11Display is created, either in meta_x11_display_new (), or in special signal handlers which are notified when something workspace-related changes. Similar work was done on StackTracker, which at startup synchronized the stack with the X server.

The last piece of code was related to GTK, or specifically, GDK usage. Previously, GTK was initialized early in the code, even before MetaDisplay was created. As such, it was possible to use GtkSettings to retrieve some settings and manage them via MetaPreferences. This was also used to get the primary screen and X display for MetaX11Display creation. After my changes, GDK initialization happens early in MetaX11Display initialization, and the GdkDisplay is now properly tracked, which makes creating and destroying MetaX11Display several times a bit easier, as well as retrieving the default screen and xdisplay.

But this comes at a cost: one cannot use GtkSettings anymore in the place it was used, and the gtk-shell-shows-app-menu GTK setting is no longer honored. One has to use the meta_prefs_set_show_fallback_app_menu () function, which was added right before usage of the GTK setting was dropped. Also, some X11-specific code that ran when mutter was started with --nested on X11 used GDK code, while GDK is not initialized when --no-x11 is passed. This was easy to fix by falling back to pure X11 calls rather than GDK wrappers.

This sums up the work that was required to get mutter to start without X11. But that was not all. As feedback on my previous patches, it was suggested that I split the workspace management code into a separate file and object: MetaWorkspaceManager. This was a simple copy/paste/rename effort, but the final patch was larger than all of the patches above! And finally, some legacy screen size tracking, which had been moved from MetaScreen to MetaDisplay, is now replaced with the fairly new MetaMonitorManager, with only one wrapper function remaining in place!

Sounds good, how do I try it?

The code is hosted at github, gsoc-v4 branch. It can be obtained and installed using

$ git clone -b gsoc-v4
$ cd mutter
$ ./ --prefix=$HOME/mutter-gsoc
$ make && make install
$ ln -s /usr/bin/Xwayland $HOME/mutter-gsoc/bin
$ export XDG_DATA_DIRS=$HOME/mutter-gsoc/share:/usr/share

It can be tested by either running the following from a terminal emulator

$ $HOME/mutter-gsoc/bin/mutter --nested --wayland --no-x11

Or the following from a VT

$ $HOME/mutter-gsoc/bin/mutter --display-server --wayland --no-x11

That's it. Don't expect much from it, except that it will start without spawning Xwayland and not crash!

In case you missed it, last week I wrote a blog post which outlines all the API changes that happened during the SoC; it can be found at

GNOME Logs Test Suite

Hello Everyone,

During the past weeks, I have been researching unit testing, as I am currently working on my third task for GSoC: writing a test suite for GNOME Logs. I would like to give you a brief background on the previous work already done on testing for Logs.

The previous work is mostly based on testing the Logs frontend using the dogtail and behave automated Python testing frameworks. You can see more about it here. In my task, I will mostly be working on testing the existing backend functionality in Logs.

I will now try to explain a little about the existing backend modules in Logs. The Logs backend currently consists of two modules:

  • GlJournal: This is the low level module that is responsible for interfacing with the system journal using sd-journal API. It does the work of querying for journal entries and traversing the system journal in the correct order.
  • GlJournalModel: This module is responsible for filtering and storing the journal entries according to some specified criteria. These stored journal entries are directly reflected in the frontend modules as well. Moreover, it also handles various attributes to be used in the frontend modules.

I have thought of some initial test scenarios for these modules and would like to share them here.

For GlJournalModel:
  • Whether setting the “sort-order” key gives the journal entries in correct order.
  • Whether searching works correctly for each of the individual journal fields.
  • Whether tokenized mode of search works properly.
  • Whether the compressed entries are being counted properly.
  • Whether the compressed header is being inserted at the correct position.
  • Whether similar entries are being grouped correctly when using event compression.

For GlJournal:
  • Whether the latest boot is returned properly or not.
  • Whether the latest 5 boots are returned properly.
  • If the data for journal entries is queried correctly.
  • If invalid journal entries are filtered properly (skipping journal entries that are missing necessary journal fields).
  • Whether setting matches on the journal works as intended.
  • Whether the system journal is traversed in a proper order.

In the coming weeks, I will be implementing these test cases using the GLib testing framework. Further, I am exploring the possibility of creating a dummy journal, so that predefined journal entries can be added to it for testing purposes.
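As a rough illustration of the event-compression scenarios above, here is how the grouping behaviour could be asserted. This Python sketch is my own simplification for readability; the real suite will be written in C against the GLib test framework, and the function below is not actual Logs code:

```python
def compress_entries(messages):
    """Group consecutive identical messages into (message, count) pairs,
    mimicking how similar entries are compressed under one header."""
    groups = []
    for msg in messages:
        if groups and groups[-1][0] == msg:
            # Same message as the previous entry: grow the compressed group.
            groups[-1] = (msg, groups[-1][1] + 1)
        else:
            # Different message: start a new group at this position.
            groups.append((msg, 1))
    return groups
```

A GLib-based test would make the same kind of assertion, e.g. with g_assert_cmpint() over the rows exposed by GlJournalModel.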

Apart from the test suite, I am happy to tell you that the patch with improvements for the event compression UI has been merged. Here are some screenshots:

Screenshot from 2017-08-14 22-09-14

As you can see above, the border colour has been changed so that there is a clear demarcation between the compressed header row and the compressed group. The selected entry is now highlighted with a blue background as shown below:

Screenshot from 2017-08-14 22-07-03

I would like to thank Jakub Steiner and Allan Day of GNOME Design Team for helping me in further improving the event compression UI and my mentors Jonathan Kang and David King for reviewing and merging the relevant patches.

Moreover, the patches regarding Logs shell search provider have received some further polish. You can follow the progress on it here.

That is all for now. See you in my next blog post 🙂

GNOME 3.26 Core Applications

Last year, I presented the GNOME 3.22 core applications: a recommendation for which GNOME applications have sufficiently-high general appeal that they should be installed out-of-the-box by operating systems that wish to ship the GNOME desktop the way that upstream intends. We received some complaints that various applications were missing from the list, but I was pretty satisfied with the end result. After all, not every high-quality application is going to have wide general appeal, and not every application with general appeal is going to meet our stringent design standards. It was entirely intentional that no email client (none met our standards), chat application (IRC does not have general appeal), or developer tools (most people aren’t software developers) were included. Our classification was, necessarily, highly-opinionated.

For GNOME 3.24, the list of core applications did not change.

For GNOME 3.26, I’m pleased to announce the addition of three new applications: GNOME Music, GNOME To Do, and Simple Scan. Distributions that choose to follow our recommendation should add these applications to their default install.

Music and To Do have spent the past year maturing. No software is perfect, but these applications are now good enough that it’s time for them to reach a wider audience and hopefully attract some new contributors. In particular, Music has had another major performance improvement that should make it more pleasant to use.

In contrast, Simple Scan has been a mature app for a long time, and has long followed GNOME design guidelines. I’m very happy to announce that development of Simple Scan has moved to GNOME infrastructure. I hope that GNOME will be a good home for Simple Scan for a long time to come.

The full list of core applications for GNOME 3.26 is as follows:

  • Archive Manager (File Roller)
  • Boxes
  • Calculator
  • Calendar (gnome-calendar, not california)
  • Characters (gnome-characters, not gucharmap)
  • Cheese
  • Clocks
  • Contacts
  • Disk Usage Analyzer (Baobab)
  • Disks (gnome-disk-utility)
  • Document Viewer (Evince)
  • Documents
  • Files (Nautilus)
  • Fonts  (gnome-font-viewer)
  • Help (Yelp)
  • Image Viewer (Eye of GNOME)
  • Logs (gnome-logs, not gnome-system-log)
  • Maps
  • Music
  • Photos
  • Screenshot
  • Software
  • Simple Scan
  • System Monitor
  • Terminal
  • Text Editor (gedit)
  • To Do
  • Videos (Totem)
  • Weather
  • Web (Epiphany)

We are now up to 30 core apps! We are not likely to go much higher than this for a while. What changes are likely in the future? My hope is that within the next year we can add Credentials, a replacement for Seahorse (which is not listed above due to quality issues), remove Disk Usage Analyzer and System Monitor in favor of a new Usage app, and hopefully remove Archive Manager in favor of better support for archives in Files. But it’s too early for any of that now. We’ll have to wait and see how things develop!

August 13, 2017

GSOC Keysign Bluetooth update and GUADEC 2017


With Bluetooth I ended up implementing two different ways to exchange keys, because beforehand it was not clear which one was better.

Use BT discovery and change BT name

With this approach, after a key has been selected, the Bluetooth name is changed to the key fingerprint. This is similar to how Avahi works, because we create a local server with the fingerprint as its name.

On the receiving side, the user needs to enter the fingerprint; then a Bluetooth discovery starts, searching for the right device.

We need to start a discovery because, in order to establish a Bluetooth connection, we need to know the MAC address of the device to connect to.

In short, these were the needed steps.

Pros:

  • No extra codes; we use the same key fingerprint already used with Avahi

Cons:

  • The BT name needs to be changed every time, a somewhat invasive approach
  • Relatively slow; the discovery to get the MAC address is not fast

Use the BT MAC address directly

Instead of using the key fingerprint, another way is to use the Bluetooth MAC address. This way the receiver already knows which device to connect to, and we can avoid performing a discovery.

The problem is that now we have yet another code to display.

After some discussions I ended up choosing to embed only the Bluetooth MAC address (the Bluetooth code) in the QR code. Doing so, we can continue to always display only one code to the user. This has the downside of limiting Bluetooth use exclusively to the QR code, but the reasoning is also that the QR code is safer than manually entering the code (I’ll explain that later), so this is also an attempt to push the use of QR codes.


Pros:

  • Faster, because we don’t need to start a discovery
  • No need to change the BT name

Cons:

  • Needs an extra code (the BT MAC)


Nothing is definitive yet, but after some testing and discussions, the second method is probably the preferred one.


A transfer with Bluetooth could happen with or without pairing.

If two devices are paired, the communication is encrypted; if they are not, the communication is in plain text. Even though encryption would be a nice extra feature, I avoided adding pairing to Keysign, because it requires additional user interaction, and pairing a new device for just a single connection seemed overkill. Also, if the user forgets to delete the paired device afterwards, it may even become a security problem.

So how can we be sure that the downloaded key has not been tampered with? With a hash-based message authentication code (HMAC). In the QR code we embed a message authentication code, the same one already used to check keys downloaded via Avahi. This way we can reuse that mechanism for Bluetooth as well and be reasonably sure that the received key has not been altered.
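To make the idea concrete, here is a minimal Python sketch of such an HMAC check (the function names and the choice of SHA-256 are mine for illustration; Keysign’s actual implementation may differ):

```python
import hashlib
import hmac

def make_transfer_mac(shared_secret: bytes, key_data: bytes) -> bytes:
    # MAC computed over the exported key; this is the kind of value that
    # gets embedded in the QR code alongside the Bluetooth MAC address.
    return hmac.new(shared_secret, key_data, hashlib.sha256).digest()

def verify_transfer(shared_secret: bytes, received_key: bytes,
                    expected_mac: bytes) -> bool:
    # Recompute the MAC over the received key and compare in constant
    # time; a mismatch means the key was altered in transit.
    actual = hmac.new(shared_secret, received_key, hashlib.sha256).digest()
    return hmac.compare_digest(actual, expected_mac)
```

Note that because an unpaired Bluetooth link is plain text, such a check only guarantees integrity, not confidentiality.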

Presenting the Bluetooth option

I chose the least intrusive way for the user: Bluetooth, if available, is automatically added to the existing transfer methods.

The advantage is that this approach requires no extra steps for the user and no extra buttons in the GUI.

Magic Wormhole

Refactoring to Inline Callbacks

Previously I implemented Magic Wormhole using the callback mechanism offered by Twisted.

After some discussions I decided to refactor the code to use inline callbacks. The advantage is that the code flow is now more linear and easier to follow and maintain.
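For illustration, here is a tiny self-contained Python sketch contrasting the two styles. The driver below is a toy, synchronous stand-in for Twisted’s Deferred machinery, not Twisted itself:

```python
# Callback style: each step lives in its own function, chained onto the
# previous result, so the control flow is scattered across callbacks.
def fetch_code(on_done):
    on_done("code-7")

def fetch_key(code, on_done):
    on_done(f"key-for-{code}")

def transfer_callback_style(results):
    fetch_code(lambda code: fetch_key(code, results.append))

# Inline-callbacks style: the same steps read top to bottom inside one
# generator; a small driver resumes it with each "asynchronous" result.
def inline_callbacks(gen_fn):
    def run(*args):
        gen = gen_fn(*args)
        value = None
        try:
            while True:
                operation = gen.send(value)  # yield hands back an operation
                value = operation()          # "wait" for it, feed result back
        except StopIteration as stop:
            return stop.value
    return run

@inline_callbacks
def transfer_inline_style():
    code = yield (lambda: "code-7")
    key = yield (lambda: f"key-for-{code}")
    return key
```

With real Twisted, the yielded values would be Deferreds and the decorator would be twisted.internet.defer.inlineCallbacks, but the readability gain is the same: the transfer logic reads linearly.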


This was my first GUADEC, and I must say it was amazing! The conference gave me the opportunity to talk to very nice people and attend beautiful talks. There were also wonderful social events (like the 20th GNOME birthday party and the walking tour, for example)!

I think that the talks that I liked most were:

  • The GNOME Way (Allan Day) and The History of GNOME (Jonathan Blandford): As a newcomer to the GNOME development, these two talks helped me to really understand the principles of the GNOME community.
  • Keynote: The Battle Over Our Technology (Karen Sandler): Very passionate and informative talk. Unfortunately explaining the importance of Free Software to other people is not easy.
  • Resurrecting dinosaurs, what can possibly go wrong (Richard Brown): I didn’t expect this kind of talk. He expressed some concerns and objections regarding Flatpak and the other application technologies. This talk made me think a lot about the current state, and what we should do to improve the situation.

I’d like to thank the GNOME Foundation for sponsoring me. This was a fantastic experience, and I hope that we can meet again next year in Almería.

gnome badge

August 12, 2017


A few days ago I attended this year’s GUADEC, which was held in Manchester. This was my third GUADEC and, like the previous ones, attending the conference gave me the opportunity to talk about both technical and ethical matters, hang out with old friends (even though some of them were unfortunately missing) and meet new ones. My general feeling is that each GUADEC is better than the previous one, and I think that is due to an ever tighter relationship with the members of the community. GUADEC is the event that keeps my motivation up: being able to talk in real life with people sharing the same concerns and ideas about software freedom helps me feel less alone.

The conference was very well organized. GUADEC 2017 was hosted at the Manchester Metropolitan University (MMU), and both the venue and the on-site accommodation are impressive. If I had to compare them with the infrastructure in my country I would be really embarrassed. Most of the approved talks were very interesting to me. Unfortunately some of them were given at the same time, so I had to choose which one to attend. Hopefully, all the talks have been recorded; I’m looking forward to seeing them online! The social events were amazing. The biggest was the Saturday party to celebrate GNOME’s 20th anniversary. Another event I really liked was the Tour of Manchester, which gave me the opportunity to quickly visit the center of Manchester and to discover the Alan Turing Statue at Sackville Gardens. At previous GUADECs I didn’t have the opportunity to visit the center of the hosting city, so I really appreciated the organized tour.


Speaking of the talks, the ones that I liked the most (and that I really encourage you to listen to when they become available online) are:

The GNOME Way, by Allan Day

In this talk Allan discussed what the principles of GNOME are, highlighting that GNOME isn’t just about code but includes different principles that make it unique, such as being an inclusive community. Thanks to this talk I now know about the GNOME Foundation Charter. An important sentence from this document is the following:

The foundation should not be exclusionary or elitist. Every GNOME contributor, however small his or her contribution, must have the opportunity to participate in determining the direction and actions of the project.

which shows how inclusive the GNOME community has been since the beginning. You can find a bloggified version of the talk here.

The Battle Over Our Technology, by Karen Sandler

This talk mainly highlights the importance of software freedom, giving some examples of how to explain to non-technical people why software freedom is an essential component of a free society and why they should care about it. I really liked this talk, mainly because I often find it difficult to explain what issues proprietary software may bring into our everyday life. When the video becomes available online I will watch it again for sure.

The History of GNOME, by Jonathan Blandford

A brief history of GNOME. It starts from the beginning of the first graphics systems, goes through each GNOME milestone, and ends by showing what the current opportunities are. The reason I found this talk really nice is that it shows how GNOME improved day by day and lists the big players that were interested in GNOME (for example, I didn’t know that the accessibility stack was sponsored by Sun Microsystems). If you can’t wait for the recorded version, the slides can be found here.

I would like to thank the GNOME Foundation for sponsoring me and the travel committee for their work. Without their support I wouldn’t have been able to attend this GUADEC!

Next GUADEC will be in Almería. I hope to be there next year!

Confirming Fedora and GNOME presence in INFOSOFT 2017

INFOSOFT is a tech event organized by AAII (the association of alumni in informatics and engineering) of the Pontificia Universidad Católica del Perú – PUCP.

This year it is going to happen in September and, as you can see on the official website, it is going to be a free event with prior registration until September 1st. It is open to everyone! Thanks to my trip and a little training at GUADEC, I will be able to talk about GTK, Python and WebKit, with the Web browser example.

And I will also talk about tools for writing papers, and about free, good-quality academic programs such as LaTeX and Octave.

Here you can see my schedule of workshops. I think it is important to show Linux, in my case with Fedora and GNOME, at tech conferences where, most of the time, other big companies have a strong presence.

Here is the list with some of the speakers. Thanks so much to the organizers for considering GNOME and Fedora in INFOSOFT 2017! I am excited to be part of this edition of INFOSOFT in the 100th year of PUCP! <3


Allan Day on The GNOME Way

If you don't read Allan Day's blog, I encourage you to do so. Allan is one of the designers on the GNOME Design team, and is also a great guy in person. Allan recently presented at GUADEC, the GNOME Users And Developers European Conference, about several key principles in GNOME design concepts. Allan has turned his talk into a blog post: "The GNOME Way." You should read it.

Allan writes in the introduction: "In what follows, I’m going to summarise what I think are GNOME’s most important principles. It’s a personal list, but it’s also one that I’ve developed after years of working within the GNOME project, as well as talking to other members of the community. If you know the GNOME project, it should be familiar. If you don’t know it so well, it will hopefully help you understand why GNOME is important."

A quick summary of those key principles:

1: GNOME is principled
"Members of the GNOME project don’t just make things up as they go along and they don’t always take the easiest path."

2: software freedom
"GNOME was born out of a concern with software freedom: the desire to create a Free Software desktop. That commitment exists to this day. "

3: inclusive software
"GNOME is committed to making its software usable by as many people as possible. This principle emerged during the project’s early years."

4: high-quality engineering
"GNOME has high standards when it comes to engineering. We expect our software to be well-designed, reliable and performant. We expect our code to be well-written and easy to maintain."

5: we care about the stack
"GNOME cares about the entire system: how it performs, its architecture, its security."

6: take responsibility for the user’s experience
"Taking responsibility means taking quality seriously, and rejecting the “works for me” culture that is so common in open source. It requires testing and QA."

Allan's article is a terrific read for anyone interested in why GNOME is the way it is, and how it came to be. Thanks, Allan!

GSoC Report 3

I was not able to visit GUADEC this year :’(

But hey, suhas2go got me some goodies, so kudos to him!

PS: Suhas I'm not paying you back for the tees

Gamepad mappings code got pushed after going through many review iterations. Here’s a demo that I would have given, had I come to the conference.


  • Show a list of controllers
  • Show the current configurations of the gamepads.
    It's so fun pressing random buttons on the controller and seeing the svg light up :P
  • And finally remap them with this immersive UI
    Well it's hard to show here, but I remapped my buttons to triggers and vice versa (for an amazing Tekken 3 experience).

    In any case, if the users mess up, they can reset back to default mappings.


  • Remapping the gamepad is a lengthy task, hence we plan on providing quick configurations for some common gamepads. As an example we could have a simple button swapper.
Thanks Garett for the quick designs!
  • Adrien suggests that we should ask the users if they want to share their gamepad mappings online so that we can enhance the database.

August 11, 2017


Another year, another GUADEC — my 13th to date. Definitely not getting younger, here. 😉

As usual, it was great to see so many faces, old and new. Lots of faces, as well; attendance has been really good, this year.

The 20th anniversary party was a blast; the venue was brilliant, and watching people going around the tables in order to fill in slots for the raffle tickets was hilarious. I loved every minute of it — even if the ‘90s music was an assault on my teenage years. See above, re: getting older.

The talks were, as usual, stellar. It’s always so hard to choose from the embarrassment of riches that is the submission pool, but every year I think the quality of what ends up on the schedule is so high that I cannot be sad.

Lots and lots of people were happy to see the Endless contingent at the conference; the talks from my colleagues were really well received, and I’m sure we’re going to see even more collaboration spring from the seeds planted this year.

My talk about continuous integration in GNOME was well-received, I think; I had to speed up a bit at the end because I lost time while connecting to the projector (not enough juice when on battery to power the HDMI-over-USB C connector; lesson learned for the next talk). I would have liked to get some more time to explain what I’d like to achieve with Continuous.

Do not disturb the build sheriff

I ended up talking with many people at the unconference days, in any case. If you’re interested in helping out the automated build of GNOME components and in improving the reliability of the project, feel free to drop by on IRC (or on Matrix!) in the #testable channel.

The unconference days were also very productive, for me. The GTK+ session was, as usual, a great way to plan ahead for the future; last year we defined the new release cycle for GTK+ and jump-started the 4.0 development cycle. This year we drafted a roadmap with the remaining tasks.

I talked about Flatpak, FlatHub, Builder, performance in Mutter and GNOME Shell; I wanted to attend the Rust and GJS sessions, but that would have required the ability to clone myself, or be in more than one place at once.

During the unconference, I was also able to finally finish the GDK-Pixbuf port of the build system to Meson. Testing is very much welcome, before we bin the Autotools build and bring one of the oldest modules in GNOME into the future.

Additionally, I was invited to the GNOME Release Team, mostly to deal with the various continuous integration build issues. This, sadly, does not mean that I’m one step closer to my ascendance as the power-mad dictator of all of GNOME, but it means that if there are issues with your module, you have a more-or-less official point of contact.

I can’t wait for GUADEC 2018! See you all in Almería!

Attended GUADEC 2017

Although I was still recovering from bronchitis and the English weather was not helping much, I really enjoyed this year’s GUADEC. The last three GUADECs suffered a bit from lower attendance, so it was great to see that the conference is bouncing back and attendance is getting close to 300 again.

What I value the most about GUADEC are the hallway conversations. A concrete outcome of them is that we’re currently working with Endless people on getting LibreOffice to Flathub. In the process we’d like to improve the LibreOffice flatpak so that it becomes a full replacement for the traditional packaged version: having Java available, having spell-checking dictionaries available, etc.

I also spent quite a lot of time with the Engagement team because they’re trying to build local GNOME communities and also make improvements in their budgeting. This is something I spent several years working on in the Fedora Project and we have built robust systems for it there. The GNOME community can get an inspiration from it or even reuse it. That’s why I’d like to be active in the Engagement team at least a bit to help bring those things into life.

August 10, 2017

Dev v Ops

In his talk at the 2017 GUADEC in Manchester, Richard Brown presented a set of objections to the current trend of new packaging systems — mostly AppImage, Snap, and Flatpak — from the perspective of a Linux distribution integrator.

I’m not entirely sure he managed to convince everybody in attendance, but he definitely presented a well-reasoned argument, steeped in history. I freely admit I went in not expecting to be convinced, but fully expecting to be engaged, and I can definitely say I left the room thoroughly satisfied, and full of questions on how we can make the application development and distribution story on Linux much better. Talking with other people involved with Flatpak and Flathub, we have already identified various places where things need to be improved, and how to set up automated tools to ensure we don’t regress.

In the end, though, all I could think of in order to summarise it when describing the presentation to people that did not attend it, was this:

Linux distribution developer tells application and system developers that packaging is a solved problem, as long as everyone uses the same OS, distribution, tools, and becomes a distro packager.

Which, I’m the first to admit, seems to subject the presentation to impressive levels of lossy compression. I want to reiterate that I think Richard’s argument was presented much better than this; even if the talk was really doom and gloom predictions from a person who sees new technologies encroaching in his domain, Richard had wholesome intentions, so I feel a bit bad about condensing them into a quip.

Of course, this leaves me in quite a bind. It would be easy — incredibly easy — to dismiss a lot of the objections and points raised by Richard as a case of the Italian idiom “do not ask the inn-keeper if the house wine is good”. Nevertheless, I want to understand why those objections were made in the first place, because it’s not the last time we are going to hear them.

I’ve been turning an answer to that question over in my head for a while now, and I think I finally came up with something that tries to rise to the level of Richard’s presentation, in the sense that I tried to capture the issue behind it, instead of just reacting to it.

Like many things in tech, it all comes down to developers and system administrators.

I don’t think I’m being controversial, or exposing some knowledge for initiates, when I say that Linux distributions are not made by the people that write the software they distribute. Of course, there are various exceptions, with upstream developers being involved (by volunteer or more likely paid work) with a particular distribution of their software, but by and large there has been a complete disconnect between who writes the code and who packages it.

Another, I hope, uncontroversial statement is that people on the Linux distribution side of things are mostly interested in making sure that the overall OS fits into a fairly specific view of how computer systems should work: a central authority that oversees, via policies and validation tools that implement those policies, how all the pieces fit together, up to a certain point. There’s a name for that kind of authority: system administrators.

Linux distributions are the continuation of system administration policies via other means: all installed Linux machines are viewed as part of the same shared domain, with clear lines of responsibility and ownership that trace from a user installation to the packager, to the people that set up the policies of integration, and which may or may not involve the people that make the software in the first place — after all, that’s what distro patches are for.

You may have noticed that in the past 35 years the landscape of computing has been changed by the introduction of the personal computer; that the release of Windows 95 introduced the concept of a mass marketable operating system; and that, by and large, there has been a complete disintermediation between software vendors and users. A typical computer user won’t have an administrator giving them a machine with the OS, validating and managing all the updates; instead of asking an admin to buy, test, and deploy an application for them, users went to a shop and bought a box with floppies or an optical storage — and now they just go to online version of that shop (typically owned by the OS vendor) and download it. The online store may just provide users with the guarantee that the app won’t steal all their money without asking in advance, but that’s pretty much where the responsibility of the owners of the store ends.

Linux does not have stores.

You’re still supposed to go ask your sysadmin for an application to be available, and you’re still supposed to give your application to the sysadmin so that they can deploy it — with or without modifications.

Yet, in the 25 years of their history, Linux distributions haven’t managed to convince the developers of

  • Perl
  • Python
  • Ruby
  • JavaScript
  • Rust
  • Go
  • PHP
  • insert_your_language_here

applications to defer all their dependency handling and integration to distro packagers.

They have just about managed to convince C and C++ developers, because the practices of those languages are so old and entrenched, the tools so poor, and because they share part of the same heritage; and TeX writers, for some weird reason, as you can witness by looking at how popular distributions package all the TeX Live modules.

The crux is that nobody, on any existing major (≥ 5% of market penetration) platform, develops applications like Linux distributors want them to. Nobody wants to. Not even the walled gardens you keep in your pocket and use to browse the web, watch a video, play a game, and occasionally make a phone call, work like that, and those are the closest thing to a heavily managed system you can get outside of a data center.

The issue is not the “managed by somebody” part; the issue is the inevitable intermediary between an application developer and an application user.

Application developers want to be able to test and have a reproducible environment, because it makes it easier for them to find bugs and to ensure that their project works as they intended; the easiest way to do that is to have people literally use the developer’s computer — this is why web applications deployed on top of a web browser engine that consumes all your CPU cores in a fiery pit are eating everybody’s lunch, and why software as a service even exists. The closest thing application developers have found to shipping their working laptop to their users without physically shipping hardware is to give them a read-only file system image that they have built themselves, or a list of dependencies hosted on a public source code repository that the build system will automatically check out prior to deploying the application.

The Linux distribution model is to have system administrators turned packagers control all the dependencies and the way they interact on a system; check all the licensing terms and security issues, when not accidentally introducing them; and then fight among themselves on the practicalities and ideologies of how that software should be distributed, installed, and managed.

The more I think about it, the less I understand how that ever worked in the first place. It is not a mystery, though, why it’s a dying model.

When I say that “nobody develops applications like the Linux distributions encourages and prefers” I’m not kidding around: Windows, macOS, iOS, Electron, and Android application development is heavily based on the concept of a core set of OS services; parallel installable blocks of system dependencies shipped and retired by the OS vendor; and a bundling system that allows application developers to provide their own dependencies, and control them.

Sounds familiar?

If it does, it’s because, in the past 25 years, every other platform (and I include programming languages with a fairly comprehensive standard library in that definition, not just operating systems) has implemented something like this — even in free and open source software, where this kind of invention mostly exists both as a way to replicate Linux distributions on Windows, and to route around Linux distributions on Linux.

It should not come as a surprise that there’s going to be friction; while for the past two decades architects of both operating systems and programming languages have been trying to come up with a car, Linux distributions have been investing immeasurable efforts in order to come up with a jet fueled, SRB-augmented horse. Sure: it’s so easy to run apt install foo and get foo installed. How did foo get into the repository? How can you host a repository, if you can’t, or don’t want to host it on somebody else’s infrastructure? What happens when you have to deal with a bajillion, slightly conflicting, ever changing policies? How do you keep your work up to date for everyone, and every combination? What happens if you cannot give out the keys to your application to everyone, even if the application itself may be free software?

Scalability is the problem; too many intermediaries, too many gatekeepers. Even if we had a single one, that’s still one too many. People using computers expect to access whatever application they need, at the touch of a finger or at the click of a pointer; if they cannot get to something in time for the task they have to complete, they will simply leave and never come back. Sure, they can probably appreciate the ease of installing 30 different console text editors, 25 IRC clients, and 12 email clients, all in various state of disrepair and functionality; it won’t really mean much, though, because they will be using something else by that time.

Of course, now we in the Linux world are in the situation of reimplementing the past 20 years of mistakes other platforms have made; of course, there will be growing pains, and maybe, if we’re careful enough, we can actually learn from somebody else’s blunders, and avoid falling into common traps. We’re going to have new and exciting traps to fall into!

Does this mean it’s futile, and that we should just give up on everything and just go back to our comfort zone? If we did, it would not only be a disservice to our existing users, but also to the users of every other platform. Our — and I mean the larger free software ecosystem — proposition is that we wish all users to have the tools to modify the software they are using; to ensure that the software in question has not been modified against their will or knowledge; and to access their own data, instead of merely providing it to third parties and renting out services with it. We should have fewer intermediaries, not more. We should push for adoption and access. We should provide a credible alternative to other platforms.

This will not be easy.

We will need to grow up a lot, and in little time; adopt better standards than just “it builds on my laptop” or “it works if you have been in the business for 15 years and know all the missing stairways, and by the way, isn’t that a massive bear trap covered with a tarpaulin on the way to the goal”. Complex upstream projects will have to start caring about things like reproducibility; licensing; security updates; continuous integration; QA and validation. We will need to care about stable system services, and backward compatibility. We will not be shielded by a third party any more.

The good news is: we have a lot of people that know about this stuff, and we can ask them how to make it work. We can take existing tools and make them generic and part of our build pipeline, instead of having them inside silos. We can adopt shared policies upstream instead of applying them downstream, and twisting software to adapt to all of them.

Again, this won’t be easy.

If we wanted easy, though, we would not be making free and open source software for everyone.

All Systems Go! 2017 Speakers

The All Systems Go! 2017 Headline Speakers Announced!

Don't forget to send in your submissions to the All Systems Go! 2017 CfP! Proposals are accepted until September 3rd!

A couple of headline speakers have been announced now:

  • Alban Crequy (Kinvolk)
  • Brian "Redbeard" Harrington (CoreOS)
  • Gianluca Borello (Sysdig)
  • Jon Boulle (NStack/CoreOS)
  • Martin Pitt (Debian)
  • Thomas Graf
  • Vincent Batts (Red Hat/OCI)
  • (and yours truly)

These folks will also review your submissions as part of the papers committee!

All Systems Go! is an Open Source community conference focused on the projects and technologies at the foundation of modern Linux systems — specifically low-level user-space technologies. Its goal is to provide a friendly and collaborative gathering place for individuals and communities working to push these technologies forward.

All Systems Go! 2017 takes place in Berlin, Germany on October 21st+22nd.

To submit your proposal now please visit our CFP submission web site.

For further information about All Systems Go! visit our conference web site.

Forward only binary patching

A couple of weeks ago I added some new functionality to dfu-tool, which is shipped in fwupd. The dfu-tool utility (via libdfu) now has the ability to forward-patch binary files, somewhat like bsdiff does. To do this it compares the old firmware with the new firmware, finding blocks of data that are different and storing the new content and the offset in a .dfup file. The reason for storing the new content rather than a binary diff (like bsdiff) is that you can remove non-free and non-redistributable code without actually including it in the diff file (which you might be doing if you’re neutering/removing the Intel Management Engine). This does make reversing the binary patch process impossible, but this isn’t a huge problem if we keep the old file around for downgrades.

$ sha1sum ~/firmware-releases/colorhug-1.1.6.bin
955386767a0108faf104f74985ccbefcd2f6050c  ~/firmware-releases/colorhug-1.1.6.bin

$ sha1sum ~/firmware-releases/colorhug-1.1.7.bin
9b7dbb24dbcae85fbbf045e7ff401fb3f57ddf31  ~/firmware-releases/colorhug-1.1.7.bin

$ dfu-tool patch-create ~/firmware-releases/colorhug-1.1.6.bin ~/firmware-releases/colorhug-1.1.7.bin colorhug-1_1_6-to-1_1_7.dfup
Dfu-DEBUG: binary growing from: 19200 to 19712
Dfu-DEBUG: add chunk @0x0000 (len 3)
Dfu-DEBUG: add chunk @0x0058 (len 2)
Dfu-DEBUG: add chunk @0x023a (len 19142)
Dfu-DEBUG: blob size is 19231

$ dfu-tool patch-dump colorhug-1_1_6-to-1_1_7.dfup
checksum-old: 955386767a0108faf104f74985ccbefcd2f6050c
checksum-new: 9b7dbb24dbcae85fbbf045e7ff401fb3f57ddf31
chunk #00     0x0000, length 3
chunk #01     0x0058, length 2
chunk #02     0x023a, length 19142

$ dfu-tool patch-apply ~/firmware-releases/colorhug-1.1.6.bin colorhug-1_1_6-to-1_1_7.dfup new.bin -v
Dfu-DEBUG: binary growing from: 19200 to 19712
Dfu-DEBUG: applying chunk 1/3 @0x0000 (length 3)
Dfu-DEBUG: applying chunk 2/3 @0x0058 (length 2)
Dfu-DEBUG: applying chunk 3/3 @0x023a (length 19142)

$ sha1sum new.bin
9b7dbb24dbcae85fbbf045e7ff401fb3f57ddf31  new.bin

Perhaps a bad example here: the compiler changed between 1.1.6 and 1.1.7, so lots of internal offsets changed and there are no partitions inside the image; but you get the idea. For some system firmware where only a BIOS default was changed, this can reduce the size of the download from megabytes to tens of bytes; the largest thing in the .cab file then becomes the XML metadata (which also compresses rather well). Of course, in this case you can also use bsdiff if it’s already installed — I’ve not yet decided whether it makes sense for fwupd to require tools like bspatch at runtime, as these could be needed by the firmware builder bubblewrap functionality, or whether they could just be included as statically linked binaries in the .cab file. Comments welcome.
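To make the chunk format above concrete, here is a small Python sketch of the idea (my own illustration; the function names and in-memory representation are hypothetical, not libdfu’s API). It records only the regions where the new image differs from the old one, plus the new size, and replays them on top of the old image — exactly the forward-only property described: the patch contains no old content, so it cannot be reversed.

```python
def patch_create(old: bytes, new: bytes):
    """Return (new_size, chunks) where chunks are (offset, data) pairs
    covering every region where `new` differs from `old`."""
    chunks = []
    i, n = 0, len(new)
    while i < n:
        # a byte "differs" if it lies past the end of `old` or is unequal
        if i >= len(old) or old[i] != new[i]:
            start = i
            while i < n and (i >= len(old) or old[i] != new[i]):
                i += 1
            chunks.append((start, new[start:i]))  # store only new content
        else:
            i += 1
    return len(new), chunks

def patch_apply(old: bytes, new_size: int, chunks) -> bytes:
    """Grow (zero-padded) or shrink `old` to `new_size`, then overlay
    each stored chunk at its offset."""
    buf = bytearray(old[:new_size].ljust(new_size, b"\x00"))
    for offset, data in chunks:
        buf[offset:offset + len(data)] = data
    return bytes(buf)
```

The real on-disk .dfup format additionally carries the old and new checksums (as shown by `patch-dump` above) so a patch applied to the wrong base image can be rejected.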


I haven’t been blogging much lately but I couldn’t miss this opportunity of telling you about GUADEC 2017 in the hope that it is going to encourage you to attend our next year edition in Almería, Spain.

Looking back at the six editions of GUADEC that I have attended so far, I can honestly say that we are getting better and better, edition after edition. You might disagree but it is quite clear to me that we are evolving in a very promising direction as a software project and as a community (despite the political turmoil that our world is under).

The GNOME Way™ has shined as a promising path towards a sustainable and progressive community, where “It is a rejection of technological elitism. It is an egalitarian version of openness” that enables us to move forward in an ethical way.

This way I can guarantee that your attendance is going to be not only a pleasant but enlightening experience.

GUADEC 2017 – Group photo

In this edition, as always, we had an excellent selection of talks presented by our community members. It was extremely hard having to pick a talk when there were multiple ones happening simultaneously.

On the morning of day one I was chairing the sessions in the Turing room (nice choice of names, alongside Hopper, btw), which limited my attendance of talks happening in the Hopper room. But then I would have been experiencing FOMO had I been chairing the other room instead. ;-)

My personal highlights are “Please Use GNOME Web” by Michael Catanzaro, and Christian Hergert rocking as always in his “State of the Builder” address.  Later that day our former executive director Karen Sandler was keynoting “The Battle Over Our Technology” where once more Software Freedom was in the spotlight.

After the afternoon break, I chaired the sessions in the Hopper room, which gave me the opportunity to be part of the monetization discussions related to GNOME Software and Flatpak, presented by Jorge Garcia and Richard Hughsie. The activities in this room were closed by Julita Inca reporting on her outreach activities in Peru.

The whole conference day ended with our traditional Interns Lightning Talks. As someone who has been on the other side, I can tell how anxious one must feel speaking in front of such a qualified audience. But the whole tension disappears into the air as soon as you see how receptive the GNOME community is to newcomers and their projects.

On day two I attended Jussi Pakkanen’s talk about Meson; I have been porting projects that I maintain to that build system, and the talk convinced me even more that this is the right choice. Unfortunately Nirbheek Chauhan couldn’t come; I hope his health is better now.

Carlos Garnacho and Florian Müllner talked about the future of our Shell (and handled the questions very well ;-).

This day I also watched Federico share his experiences of porting librsvg to Rust, and Carlos Garnacho talk about the future of Tracker.

The main attraction of the day, IMO, was Jonathan Blandford’s “The History of GNOME” talk. If you have just 30 minutes to watch GUADEC talks, I would recommend this one. It was a zeitgeist of the last 20 years of our project/community, with a good pinch of comedy and interaction with the living legends sitting in the audience.

Later everybody tied their ties to get serious for the AGM report. :-)

On the last conference day I skipped Philip’s JavaScript talk to see Jakub Steiner talking about transitions and to try to imagine where he would fit drones into the slides. :-)

Continuing in the design world, I was at the audience of Tobias Bernard’s “Building interfaces from the future” as well. #Inspiring

Matthew Garrett (I’m a big fan btw) attended GUADEC to share with us his expertise in security. And after it I jumped to Tristan’s Buildstream talk in the other room.

After lunch I rushed into the conference room to see Tim Lunn talk about Ubuntu’s return to GNOME, since I have nothing but good hopes for both projects and mostly for the users of free desktops.

Peter Hutterer traveled a long distance to tell us about mice! :p Followed by the GitLab conversation which sounded like a very promising closure for all the debates that took place before in emails and forums.

I then hopped into the other room to watch Wim Taymans give a freestyle talk about his exciting experiments developing what we now call PipeWire. To end the activities in the Hopper room, Carlos Garnacho confessed to the murder of GdkWindow in front of the audience.

The lightning talks were the cherry on top!

Other than the talks, we had social events which gathered us even closer by having beers and delicious food. A special highlight to the 20th anniversary party which was a fantastic surprise that got us all emotional and proud of our community.

During the unconference days I took advantage of being a few meters away from people I normally work with over the internet to have more discussions and gather insights about the stuff we hack on. I would like to thank Zeeshan Ali for the counselling regarding the future of Boxes.

All in all, I probably forgot to mention many other interactions and remarkable moments that I have experienced throughout the week in Manchester, but I guess you can figure everything else by reading all the other blog posts in Planet GNOME.

Last but not least, I would like to thank my employer Red Hat for sponsoring my trip and the GUADEC organizers for an awesome conference. See you all soon!

Status update, August 2017


In March I joined the team at eyeo GmbH, the company behind Adblock Plus, as a core developer. Among other things I'm improving the filtering capabilities.

While they are based in Cologne, Germany, I'm still working remotely from Montréal.

It is great to help make the web more user-centric.

Personal project:

I started working again on Niepce, currently implementing file import. I have also started to rewrite the back-end in Rust. The long-term goal is to move completely to Rust; this will happen in parallel with feature implementation.

This and other satellite projects are part of the grand plan I have for digital photography on Linux with GNOME.

'til next time.

GUADEC 2017 presentation

During GUADEC this year I gave a presentation called Replacing C library code with Rust: what I learned with librsvg. This is the PDF file; be sure to scroll past the full-page presentation pages until you reach the speaker's notes, especially for the code sections!

Replacing C library code with Rust - link to PDF

You can also get the ODP file for the presentation. This is released under a CC-BY-SA license.

For the presentation, my daughter Luciana made some drawings of Ferris, the Rust mascot, also released under the same license:

  • Ferris says hi
  • Ferris busy at work
  • Ferris makes a mess
  • Ferris presents her work

August 09, 2017

On Firefox Sync

Epiphany 3.26 is, unfortunately, not going to be packed with cool new features like 3.24 was. We’ve just been too busy working on improving WebKit this cycle. But there is one cool new thing: Firefox Sync support. You can sync bookmarks, history, passwords, and open tabs with other Epiphany instances as well as with both desktop and mobile Firefox. This is already enabled in 3.25.90. Just go to the Sync tab in Preferences and sign in or create your Firefox account there. Please test it out and report bugs now, so we can quash problems you find before 3.26.0 rather than after.

Some thank yous are in order:

  • Thanks to Gabriel Ivascu, for writing all the code.
  • Thanks to Google and Igalia for sponsoring Gabriel’s work.
  • Thanks to Mozilla. This project would never have been possible if Mozilla had not carefully written its terms of service to allow such use.

Go forth and sync!

GObject design pattern: attached class extension

I wanted to share a recurrent API design that I’ve implemented several times and found useful. I’ve coined it “attached class extension”. It is not a complete description like the design patterns documented in the Gang of Four book (I didn’t want to write 10 pages on the subject); it is more of a draft. Also, the most difficult part is coming up with good names, so comments welcome ;)


You want to add a GObject property or signal to an existing class, but modifying that class is not possible (because it is part of another module), and creating a subclass is not desirable.

Also Unknown As

“One-to-one class extension”, or simply “class extension”, or “extending class”.


First example: in the gspell library, we would like to extend the GtkTextView class to add spell-checking. We need to create a boolean property to enable/disable the feature. Subclassing GtkTextView is not desirable, because the GtkSourceView library already has a subclass (and it should be possible in an application to use both GtkSourceView and gspell at the same time[1]).

Before describing the “attached class extension” design pattern, another solution is described, to have some contrast and thus to better understand the design pattern.

Since subclassing is not desirable in our case, as always with Object-Oriented Programming: composition to the rescue! A possible solution is to create a direct subclass of GObject that takes, by composition, a GtkTextView reference (with a construct-only property). But this has a small disadvantage: the application needs to create and store an additional object. One example in the wild of such a pattern is the GtkSourceSearchContext class, which takes a GtkSourceBuffer reference to extend it with search capability. Note that there is a one-to-many relationship between GtkSourceBuffer and GtkSourceSearchContext, since it is possible to create several SearchContext instances for the same Buffer; note also that the application needs to store both the Buffer and the SearchContext objects. This pattern could be named “one-to-many class extension”, or “detached class extension”.

The solution with the “attached class extension” or “one-to-one class extension” design pattern also uses composition, but in the reverse direction: see the implementation of the gspell_text_view_get_from_gtk_text_view() function, it speaks for itself. It uses g_object_get_data() and g_object_set_data_full() to store the GspellTextView object in the GtkTextView. The GtkTextView object thus holds a strong reference to the GspellTextView; when the GtkTextView is destroyed, so is the GspellTextView. The nice thing with this design pattern is that an application wanting to enable spell-checking doesn’t need to store any additional object: the application normally already has a reference to the GtkTextView, so, by extension, it also has access to the GspellTextView. With this implementation, a GtkTextView can store only one GspellTextView, so it is a one-to-one relationship.

Other Examples

I’ve applied this design pattern several other times in gspell, Amtk and Tepl. To give a few other examples:

  • GspellEntry: adding spell-checking to GtkEntry. GspellEntry is not a subclass of GtkEntry because there is already GtkSearchEntry.
  • AmtkMenuShell, which extends GtkMenuShell to add convenience signals. GtkMenuShell is the abstract base class of GtkMenu and GtkMenuBar, and the convenience signals must work with any GtkMenuShell subclass.
  • AmtkApplicationWindow, an extension of GtkApplicationWindow to add a statusbar property. Subclassing GtkApplicationWindow in a library is not desirable, because several libraries might want to extend GtkApplicationWindow and an application needs to be able to use all those extensions at the same time (the same applies to GtkApplication).
  1. Why not implement spell-checking in GtkSourceView then? Because gspell also supports GtkEntry.

DebConf 17: Flatpak and Debian

The indoor garden at Collège de Maisonneuve, the DebConf 17 venue

I'm currently at DebConf 17 in Montréal, back at DebConf for the first time in 10 years (last time was DebConf 7 in Edinburgh). It's great to put names to faces and meet more of my co-developers in person!

On Monday I gave a talk entitled “A Debian maintainer's guide to Flatpak”, aiming to introduce Debian developers to Flatpak, and show how Flatpak and Debian (and Debian derivatives like SteamOS) can help each other. It seems to have been quite well received, with people generally positive about the idea of using Flatpak to deliver backports and faster-moving leaf packages (games!) onto the stable base platform that Debian is so good at providing.

A video of the talk is available from the Debian Meetings Archive. I've also put up my slides in the DebConf git-annex repository, with some small edits to link to more source code: A Debian maintainer's guide to Flatpak. Source code for the slides is also available from Collabora's git server.

The next step is to take flatdeb, my proof-of-concept for building Flatpak runtimes and apps from Debian and SteamOS packages, make it a bit more production-ready, and perhaps start publishing some sample runtimes from a cron job on a Debian or Collabora server. (By the way, if you downloaded that source right after my talk, please update - I've now pushed some late changes that were necessary to fix the 3D drivers for my OpenArena demo.)

I don't think Debian will be going quite as far as Endless any time soon: as Cosimo outlined in the talk right before mine, they deploy their Debian derivative as an immutable base OS with libOSTree, with all the user-installable modules above that coming from Flatpak. That model is certainly an interesting thing to think about for Debian derivatives, though: at Collabora we work on a lot of appliance-like embedded Debian derivatives, with a lot of flexibility during development but very limited state on deployed systems, and Endless' approach seems a perfect fit for those situations.

[Edited 2017-08-16 to fix the link for the slides, and add links for the video]