November 16, 2019

GNOME Patent Troll Defense Fund reaches nearly 4,000 donors!

A lot has happened since our announcement that Rothschild Imaging Ltd was alleging that GNOME is violating one of their patents. We wanted to provide you with a brief update on what has happened over the past few weeks.

Legal cases can be expensive, and the cost of a patent case can easily reach over a million dollars. As a small non-profit, we decided to reach out to our community and ask for financial support towards our efforts to keep patent trolls out of open source. More than 3,800 of you have stepped up and contributed to the GNOME Patent Troll Legal Defense Fund. We’d like to sincerely thank everyone who has donated. If you need any additional documentation for an employer match, please contact us.

Individuals aren’t the only supporters of this initial fundraiser. The Debian Project generously reached out with a donation and Igalia also donated to support our legal efforts.

There’s been a wonderful outpouring of support from the free and open source software communities. The Software Freedom Conservancy issued a statement. Meanwhile the Open Invention Network is lending a hand in the search for examples of prior art.

We set ourselves an ambitious fundraising goal of $1.1 million to support our defense. We expect the majority of this to be raised from corporate sponsorship, but we’re going to keep working for more individual and community donations. Please share our GiveLively donation page with your social media networks. If you’re a non-profit that has issued (or is interested in issuing) a statement of support, we’d love to hear from you.

If you want to receive updates on the case, please sign up for the GNOME Legal Updates List.

November 15, 2019

Creating a USB3 OTG cable for the Thinkpad 8

The Lenovo Thinkpad 8 and also the Asus T100CHI both have a USB3 micro-B connector, but using a standard USB3 OTG (USB3 micro-B to USB3-A receptacle) cable results in only USB2 devices working; USB3 devices are not recognized.

Searching the internet shows that many people have this problem and that the solution is to find a USB3 micro-A to USB3-A receptacle cable. This sounds like nonsense to me, as micro-B really is micro-AB and is supposed to use the ID pin to automatically switch between modes depending on the cable used; and this does work for the USB2 parts of the micro-B connector on the Thinkpad. Yet people do claim success with such cables (with a more square micro-A connector, instead of the trapezoid micro-B connector). The only problem is that such cables are not for sale anywhere.

So my guess was that these cables simply have the Rx and Tx superspeed pairs swapped on the USB3-only part of the micro-B connector, and I decided to cut open one of my USB3 micro-B to USB3-A receptacle cables and swap the superspeed pairs myself. Here is what the cable looks like when it is cut open:

If you are going to make such a cable yourself: to get to this point, I first removed the outer plastic insulation (part of it is still there on the right side of the picture). Then I folded the shield wires away until they were all on one side (the wires at the top of the picture). After this I removed the metal foil underneath the shield wires.

Removing the first 2 layers of shielding reveals 4 wires in the standard USB2 colors (red, black, green and white) and 2 separately shielded cable pairs. In the picture above the separately shielded pairs have been cut, giving us 4 pairs, 2 on each end of the cable; the shielding has been removed from 3 of the 4 pairs, and you can still see it on the 4th pair.

A standard USB3 cable uses the following color codes:

  • Red: Vbus / 5 volt

  • White:  USB 2.0 Data -

  • Green: USB 2.0 Data +

  • Black: Ground

  • Purple: Superspeed RX -

  • Orange: Superspeed RX +

  • Blue: Superspeed TX -

  • Yellow: Superspeed TX +

So to swap RX and TX we need to connect purple to blue / blue to purple and orange to yellow / yellow to orange, resulting in:

Note the wires are just braided together here, not soldered yet. This is a good moment to carefully test the cable. Also note that the superspeed wire pairs must be length-matched, so you need to cut and strip all 8 wires to the same length! If everything works you can put some solder on those braided-together wires, re-test after soldering, and then cover them with some heat-shrink tubing:

And then cover the entire junction with a bigger piece of heat-shrink tubing:

And you have a superspeed capable cable even though no one will sell you one.

Note that the Thinkpad 8 supports ACA mode, so if you get an ACA-capable "Y" cable or an ACA charging hub you can charge and use the Thinkpad 8's USB port at the same time. Typically ACA "Y" cables and hubs are USB2 only, so the superspeed mod from this blog post will not help with those. The Asus T100CHI has a separate USB2 micro-B connector just for charging, so you do not need anything special there to charge and connect a USB device at the same time.

Catching Java exceptions in Swift via j2objc

Java can be converted to Objective-C with the tool j2objc and then called from Swift code in an iOS app. I wrote this piece on how to handle Java exceptions from that Swift code.

Long-term betting on dependencies

Years ago, when I started planning my cocktail app, I looked at options for code re-use between Android and iOS. Critically, I knew it would take me a while to get the first platform release out so I was worried any tool I expected to use might be unmaintained by then.

I found the tool j2objc and it looked really promising:

  • open source
  • development funded by Google
  • it was being relied upon by Google for most of their apps on both mobile platforms (plus the Web)

So, I could always adapt the tool for my own needs if it did get abandoned. But the maintainers had significant motivation and resources to maintain the project so that seemed like a low risk anyhow.

I didn't imagine Android and iOS would change so much in the time it took to get my Android app completed. Both platforms even changed their primary development language in that span along with a lot of best practices and recommended components. Even though both platforms do a great job of keeping their documentation up-to-date and the most relevant pieces easy to find, learning to develop on one of these platforms is challenging. There still is a lot of outdated information out there (be it Stack Overflow posts or an incredible number of tutorials) and there are stacks and stacks of components to learn. Expectations for modern mobile apps are really high so the number of details to get right can be daunting.

Then to build your app for the other platform you get to start all over! :)

Thankfully, my bet on j2objc proved to be a good one. It's actively maintained by very helpful developers and works as expected. I've completed most of the risky work in porting the core of my app to iOS and any work I do on that core benefits the apps on both platforms.

There are very few compromises I have to make because language features in Java map surprisingly well to both Objective-C and Swift.

But one important exception remained. I'll cover that in a subsequent post.

November 14, 2019

fwupd and bolt power struggles

As readers of this blog might remember, there is a mode where the firmware (BIOS) is responsible for powering the Thunderbolt controller. This means that if no device is connected to the USB type C port the controller will be physically powered down. The obvious upside is battery savings. The downside is that, for a system in that state, we cannot tell if it has a Thunderbolt controller, nor determine any of its properties, like the firmware version. Luckily, there is an interface to tell the firmware (BIOS) to "force-power" the controller. The interface is a write-only sysfs attribute. The writes are not reference counted, i.e. two separate commands to enable the force-power state followed by a single disable will indeed disable the controller. For some time boltd and the firmware update daemon both directly poked that interface. This led to some interference, which in turn led to strange timing bugs. The canonical example goes like this: fwupd force-powers the controller, uevents are triggered and Thunderbolt entries appear in sysfs. The boltd daemon is started via udev+systemd activation. The daemon initializes itself and starts enumerating and probing the Thunderbolt controller. Meanwhile fwupd is done with its thing and cuts the power to the controller. That makes boltd and the controller sad, because they were still in the middle of getting to know each other.

boltctl force-power

boltctl power -q can be used to inspect the current force-power settings.

To fix this issue, boltd gained a force-power D-Bus API and fwupd in turn gained support for using that new API. No more fighting over the force-power sysfs interface. So far so good. But an unintended side-effect of that change was that bolt was now always being started, indirectly by fwupd via D-Bus activation, even if there was no Thunderbolt controller in the system to begin with. Since the daemon currently does not exit even if there is no Thunderbolt hardware1, you have a system daemon running, but not doing anything useful. This understandably made some people unhappy (rhbz#1650881, lp#1801796). I recently made a small change to fwupd which should do away with this issue: before making a call to boltd, fwupd now itself checks if the force-power facility is present. If not, it doesn't bother asking boltd and starting it in the process. The change is included in fwupd 1.3.3. Now both machines and people should be happy, I hope.
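The non-reference-counted writes are the crux of the bug, and reference counting is what the boltd D-Bus API effectively provides. A toy model of the difference (an illustrative sketch, not boltd's actual code):

```rust
// Toy model of force-power accounting: with reference counting, two enables
// followed by one disable keep the controller powered; the raw sysfs
// attribute would cut the power on the first disable.
struct ForcePower {
    count: u32,
    powered: bool,
}

impl ForcePower {
    fn new() -> Self {
        ForcePower { count: 0, powered: false }
    }

    // Each client (e.g. fwupd, boltd) enables force-power before using the controller...
    fn enable(&mut self) {
        self.count += 1;
        self.powered = true;
    }

    // ...and disables it when done; power is only cut when the last client leaves.
    fn disable(&mut self) {
        self.count = self.count.saturating_sub(1);
        if self.count == 0 {
            self.powered = false;
        }
    }
}
```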


  1. That is a complicated story that needs new systemd features. See #92 for the interesting technical details.

November 13, 2019

LAS 2019, Barcelona

The event

The Linux App Summit (LAS) is a great event that brings together a lot of Linux application developers from the bigger communities. It's organized by GNOME and KDE in collaboration, and it's a good place to talk about the Linux desktop, application distribution and development.

This year the event was held in Barcelona, which is not too far from my home town, Málaga, so I wanted to be there.

I sent a talk proposal and it was accepted, so I gave a talk about distributing services with Flatpak and the problems related to service deployment in a flatpaked world.

By clicking on this image you can find my talk in the event stream. The sound is not too good and my accent doesn't help, but there it is :D

It was a really great event, with really good talks about different topics: some technical talks, some talks about design, talks about language, about distribution, about the market and economics, and at least two about "removing" the system tray 😋

The talk about the "future" inclusion of payments in Flathub was really interesting, because I think this will give people a new incentive to write and publish apps on Flathub, and it could be a great step towards getting donations for developers.

Another talk that I liked was the one about the maintenance of Flatpak repositories. It's always interesting to know how things work, and this talk gave an easy introduction to ostree, Flatpak, repositories and application distribution.

Besides the talks, this event is really interesting for the people it brings together. I've been talking with a lot of people (not too many, because I'm a shy person), but I had the opportunity to talk a bit with some Fractal developers, and during a coffee chat with Jordan Petridis we had time to share some ideas about a cool new functionality that maybe we can implement in the near future, thanks to the Outreachy program and maybe some help from the GStreamer people.

I'm also very happy that I was able to spend some time talking with Martín Abente about Sugar Labs, the Hack computer and the different ways to teach kids with free software. Martín is a really nice person and I really enjoyed meeting him and sharing some thoughts.

The city

This is not my first time in Barcelona; I was here at the beginning of this year. But it is a great city, and I didn't have time to visit all the places the first time.

So I spent Thursday afternoon doing some tourism, visiting the "Sagrada Familia" and the "Montjuïc" fountain.

If you have not been to Barcelona and you have the opportunity to come, don't hesitate: it's a really good city, with great architecture to admire, really nice culture and people, and good food to enjoy.

Thank you all

I was sponsored by the GNOME Foundation, and I'm really thankful for this opportunity to come here, give a talk and share some time with the great people that make the awesome Linux and open source community possible.

I want to thank my employer, Endless, because it's really a privilege to have a job that allows this kind of interaction with the community, and my team, Hack, because I've missed some meetings this week and was not very responsive.

And I want to thank the LAS organizers, because this was a really good event. Good job, you can be very proud.

The GTK Rust bindings are not ready yet? Yes they are!

When talking to various people at conferences over the last year, a recurring topic was that they believed the GTK Rust bindings are not ready for use yet.

I don’t know where that perception comes from but if it was true, there wouldn’t have been applications like Fractal, Podcasts or Shortwave using GTK from Rust, or I wouldn’t be able to do a workshop about desktop application development in Rust with GTK and GStreamer at the Linux Application Summit in Barcelona this Friday (code can be found here already) or earlier this year at GUADEC.

One reason I sometimes hear is that there is no support for creating subclasses of GTK types in Rust yet. While that was once true, it is not anymore. But even more important: unless you want to create your own special widgets, you don't need it. Many examples and tutorials in other languages make use of inheritance/subclassing for the application's architecture, but that's because it is the idiomatic pattern in those languages. In Rust, however, other patterns are more idiomatic, and even for those examples and tutorials in other languages subclassing wouldn't be the one and only way to design applications.

Almost everything is included in the bindings at this point, so seriously consider writing your next GTK UI application in Rust. While some minor features are still missing from the bindings, none of those should prevent you from successfully writing your application.

And if something is actually missing for your use-case or something is not working as expected, please let us know. We’d be happy to make your life easier!


Some people are already experimenting with new UI development patterns on top of the GTK Rust bindings. So if you want to try developing a UI application but want something different from the usual signal/callback spaghetti code, also take a look at those.

November 12, 2019

CSS in librsvg is now in Rust, courtesy of Mozilla Servo

Summary: after an epic amount of refactoring, librsvg now does all CSS parsing and matching in Rust, without using libcroco. In addition, the CSS engine comes from Mozilla Servo, so it should be able to handle much more complex CSS than librsvg ever could before.

This is the story of CSS support in librsvg.


The first commit to introduce CSS parsing in librsvg dates from 2002. It was as minimal as possible, written to support a small subset of what was then CSS2.

Librsvg handled CSS stylesheets more by "piecing them apart" than by "parsing them". You know, when g_strsplit() is your best friend. The basic parsing algorithm was to turn a stylesheet like this:

rect { fill: blue; }

.classname {
    fill: green;
    stroke-width: 4;
}

Into a hash table whose keys are strings like rect and .classname, and whose values are everything inside curly braces.

The selector matching phase was equally simple. The code only handled a few possible match types, as follows: if it wanted to match a certain kind of CSS selector, it would ask, "what would this selector look like in CSS syntax?", build a string with that syntax, and compare it to the key strings it had stored in the hash table from above.

So, to match an element name selector, it would sprintf("%s", element->name), obtain something like rect and see if the hash table had such a key.

To match a class selector, it would sprintf(".%s", element->class), obtain something like .classname, and look it up in the hash table.

This scheme supported only a few combinations. It handled tag, .class, tag.class, and a few combinations with #id in them. This was enough to support very simple stylesheets.
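A minimal sketch of that string-based scheme (the types here are hypothetical simplifications; the real code was C and used GHashTable):

```rust
use std::collections::HashMap;

// Simplified sketch of the old string-based matcher: the stylesheet is a map
// from literal selector strings ("rect", ".classname", "rect.classname") to
// the raw declaration text between the curly braces.
fn match_declarations<'a>(
    stylesheet: &'a HashMap<String, String>,
    element_name: &str,
    element_class: Option<&str>,
) -> Vec<&'a str> {
    // Build the selector strings this element *could* match...
    let mut candidates = vec![element_name.to_string()];
    if let Some(class) = element_class {
        candidates.push(format!(".{}", class));
        candidates.push(format!("{}.{}", element_name, class));
    }

    // ...and keep the declaration text for every candidate the map knows about.
    candidates
        .iter()
        .filter_map(|key| stylesheet.get(key).map(|s| s.as_str()))
        .collect()
}
```

Anything beyond these hard-coded combinations (descendant selectors, attribute selectors, and so on) simply never matched.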

The value corresponding to each key in the hash table was the stuff between curly braces in the stylesheet, so the second rule from the example above would contain fill: green; stroke-width: 4;. Once librsvg decided that an SVG element matched that CSS rule, it would re-parse the string with the CSS properties and apply them to the element's style.

I'm amazed that so little code was enough to deal with a good number of SVG files with stylesheets. I suspect that this was due to a few things:

  • While people were using complex CSS in HTML all the time, it was less common for SVG...

  • ... because CSS2 was somewhat new, and the SVG spec was still being written...

  • ... and SVGs created with illustration programs don't really use stylesheets; they include the full style information inside each element instead of symbolically referencing it from a stylesheet.

From the kinds of bugs that librsvg has gotten around "CSS support is too limited", it feels like SVGs which use CSS features are either hand-written, or machine-generated from custom programs like data plotting software. Illustration programs tend to list all style properties explicitly in each SVG element, and don't use CSS.

Libcroco appears

The first commit to introduce libcroco, to do CSS parsing, dates from March 2003.

At the same time, libcroco was introducing code to do CSS matching. However, this code never got used in librsvg; it still kept its simple string-based matcher. Maybe libcroco's API was not ready?

Libcroco fell out of maintainership around the first half of 2005, and volunteers have kept fixing it since then.

Problems with librsvg's string matcher for CSS

The C implementation of CSS matching in librsvg remained basically untouched until 2018, when Paolo Borelli and I started porting the surrounding code to Rust.

I had a lot of trouble figuring out the concepts from the code. I didn't know all the terminology of CSS implementations, and librsvg didn't use it, either.

I think that librsvg's code suffered from what the refactoring literature calls primitive obsession. Instead of having a parsed representation of CSS selectors, librsvg just stored a stringified version of them. So, a selector like rect#classname really was stored with a string like that, instead of an actual decomposition into structs.

Moreover, things were misnamed. This is the field that stored stylesheet data inside an RsvgHandle:

    GHashTable *css_props;

From just looking at the field declaration, this doesn't tell me anything about what kind of data is stored there. One has to grep the source code for where that field is used:

static void
rsvg_css_define_style (RsvgHandle * ctx,
                       const gchar * selector,
                       const gchar * style_name,
                       const gchar * style_value,
                       gboolean important)
{
    GHashTable *styles;

    styles = g_hash_table_lookup (ctx->priv->css_props, selector);

Okay, it looks up a selector by name in the css_props, and it gives back... another hash table styles? What's in there?

        g_hash_table_insert (styles,
                             g_strdup (style_name),
                             style_value_data_new (style_value, important));

Another string key called style_name, whose value is a StyleValueData; what's in it?

typedef struct _StyleValueData {
    gchar *value;
    gboolean important;
} StyleValueData;

The value is another string. Strings all the way!

At the time, I didn't really figure out what each level of nested hash tables was supposed to mean. I didn't understand why we handled style properties in a completely different part of the code, and yet this part had a css_props field that didn't seem to store properties at all.

It took a while to realize that css_props was misnamed. It wasn't storing a mapping of selector names to properties; it was storing a mapping of selector names to declaration lists, which are lists of property/value pairs.

So, when I started porting the CSS parsing code to Rust, I started to create real types for each concept.

// Maps property_name -> Declaration
type DeclarationList = HashMap<String, Declaration>;

pub struct CssStyles {
    selectors_to_declarations: HashMap<String, DeclarationList>,
}

Even though the keys of those HashMaps are still strings, because librsvg didn't have a better way to represent their corresponding concepts, at least those declarations let one see what the hell is being stored without grepping the rest of the code. This is a part of the code that I didn't really touch very much, so it was nice to have that reminder.

The first port of the CSS matching code to Rust kept the same algorithm as the C code, the one that created strings with element.class and compared them to the stored selector names. Ugly, but it still worked in the same limited fashion.

Rustifying the CSS parsers

It turns out that CSS parsing is divided into two parts. One can have a style attribute inside an element, for example:

<rect x="0" y="0" width="100" height="100"
      style="fill: green; stroke: magenta; stroke-width: 4;"/>

This is a plain declaration list which is not associated to any selectors, and which is applied directly to just the element in which it appears.

Then, there is the <style> element itself, with a normal-looking CSS stylesheet

<style type="text/css">
  rect {
    fill: green;
    stroke: magenta;
    stroke-width: 4;
  }
</style>

This means that all <rect> elements will get that style applied.

I started to look for existing Rust crates to parse and handle CSS data. The cssparser and selectors crates come from Mozilla, so I thought they should do a pretty good job of things.

And they do! Except that they are not a drop-in replacement for anything. They are what gets used in Mozilla's Servo browser engine, so they are optimized to hell, and the code can be pretty intimidating.

Out of the box, cssparser provides a CSS tokenizer, but it does not know how to handle any properties/values in particular. One must use the tokenizer to implement a parser for each kind of CSS property one wants to support — Servo has mountains of code for all of HTML's style properties, and librsvg had to provide a smaller mountain of code for SVG style properties.

Thus started the big task of porting librsvg's string-based parsers for CSS properties into ones based on cssparser tokens. Cssparser provides a Parser struct, which extracts tokens out of a CSS stream. Out of this, librsvg defines a Parse trait for parsable things:

use cssparser::Parser;

pub trait Parse: Sized {
    type Err;

    fn parse(parser: &mut Parser<'_, '_>) -> Result<Self, Self::Err>;

What's with those two default lifetimes in Parser<'_, '_>? Cssparser tries very hard to be a zero-copy tokenizer. One of the lifetimes refers to the input string which is wrapped in a Tokenizer, which is wrapped in a ParserInput. The other lifetime is for the ParserInput itself.

In the actual implementation of that trait, the Err type also uses the lifetime that refers to the input string. For example, there is a BasicParseErrorKind::UnexpectedToken(Token<'i>), which one returns when there is an unexpected token. And to avoid copying the substring into the error, one returns a slice reference into the original string, thus the lifetime.

I was more of a Rust newbie back then, and it was very hard to make sense of how cssparser was meant to be used.
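To give an idea of the shape of such property parsers, here is a deliberately simplified sketch that replaces the cssparser types with plain strings (the StrokeWidth property and the error type are just illustrative stand-ins, not librsvg's real definitions):

```rust
// Hypothetical, simplified stand-in for the cssparser-based Parse trait:
// the real one takes a cssparser::Parser with two lifetimes, this sketch
// just takes a &str.
#[derive(Debug, PartialEq)]
struct StrokeWidth(f64);

#[derive(Debug, PartialEq)]
struct ParseError(String);

trait Parse: Sized {
    fn parse(input: &str) -> Result<Self, ParseError>;
}

impl Parse for StrokeWidth {
    fn parse(input: &str) -> Result<Self, ParseError> {
        // A real implementation would consume a number token from the
        // tokenizer; here we just parse the trimmed string as a float.
        input
            .trim()
            .parse::<f64>()
            .map(StrokeWidth)
            .map_err(|_| ParseError(format!("invalid stroke-width: {}", input)))
    }
}
```

The point is that every CSS property gets its own strongly typed value and its own parser, instead of carrying raw strings around.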

The process was more or less this:

  • Port the C parsers to Rust; implement types for each CSS property.

  • Port the &str-based parsers into ones that use cssparser.

  • Fix the error handling scheme to match what cssparser's high-level traits expect.

This last point was... hard. Again, I wasn't comfortable enough with Rust lifetimes and nested generics; in the end it was all right.

Moving declaration lists to Rust

With the individual parsers for CSS properties done, and with them already using a different type for each property, the next thing was to implement cssparser's traits to parse declaration lists.

Again, a declaration list looks like this:

fill: blue;
stroke-width: 4;

It's essentially a key/value list.

The trait that cssparser wants us to implement is this:

pub trait DeclarationParser<'i> {
    type Declaration;
    type Error: 'i;

    fn parse_value<'t>(
        &mut self,
        name: CowRcStr<'i>,
        input: &mut Parser<'i, 't>,
    ) -> Result<Self::Declaration, ParseError<'i, Self::Error>>;
}

That is, define a type for a Declaration, and implement a parse_value() method that takes a name and a Parser, and outputs a Declaration or an error.

What this really means is that the type you implement for Declaration needs to be able to represent all the CSS property types that you care about. Thus, a struct plus a big enum like this:

pub struct Declaration {
    pub prop_name: String,
    pub property: ParsedProperty,
    pub important: bool,
}

pub enum ParsedProperty {
    // one variant per supported CSS property
    // ...
}

This gives us declaration lists (the stuff inside curly braces in a CSS stylesheet), but it doesn't give us qualified rules, which are composed of selector names plus a declaration list.
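The body of parse_value() then boils down to a dispatch on the property name. A simplified sketch with plain strings instead of cssparser tokens (the two properties shown are just examples, not the full set):

```rust
// Hypothetical sketch of the dispatch inside parse_value(): match on the
// property name and delegate to the per-property parser.
#[derive(Debug, PartialEq)]
enum ParsedProperty {
    Fill(String),
    StrokeWidth(f64),
}

fn parse_property(name: &str, value: &str) -> Result<ParsedProperty, String> {
    match name {
        "fill" => Ok(ParsedProperty::Fill(value.trim().to_string())),
        "stroke-width" => value
            .trim()
            .parse::<f64>()
            .map(ParsedProperty::StrokeWidth)
            .map_err(|_| format!("invalid stroke-width: {}", value)),
        // Unknown properties produce a parse error, which the caller can
        // choose to ignore per the CSS error-handling rules.
        _ => Err(format!("unknown property: {}", name)),
    }
}
```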

Refactoring towards real CSS concepts

Paolo Borelli has been steadily refactoring librsvg and fixing things like the primitive obsession I mentioned above. We now have real concepts like a Document, Stylesheet, QualifiedRule, Rule, AtRule.

This refactoring took a long time, because it involved redoing the XML loading code and its interaction with the CSS parser a few times.

Implementing traits from the selectors crate

The selectors crate contains Servo's code for parsing CSS selectors and doing matching. However, it is extremely generic. Using it involves implementing a good number of concepts.

For example, this SelectorImpl trait has no methods, and is just a collection of types that refer to your implementation of an element tree. How do you represent an attribute/value? How do you represent an identifier? How do you represent a namespace and a local name?

pub trait SelectorImpl {
    type ExtraMatchingData: ...;
    type AttrValue: ...;
    type Identifier: ...;
    type ClassName: ...;
    type PartName: ...;
    type LocalName: ...;
    type NamespaceUrl: ...;
    type NamespacePrefix: ...;
    type BorrowedNamespaceUrl: ...;
    type BorrowedLocalName: ...;
    type NonTSPseudoClass: ...;
    type PseudoElement: ...;
}

A lot of those can be String, but Servo has smarter things in store. I ended up using the markup5ever crate, which provides a string interning framework for markup and XML concepts like a LocalName, a Namespace, etc. This reduces memory consumption, because instead of storing string copies of element names everywhere, one just stores tokens for interned strings.
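The interning idea itself is simple; a toy sketch (not markup5ever's actual implementation, which interns at compile time where possible):

```rust
use std::collections::HashMap;

// Toy string interner: each distinct string is stored once and referenced by
// a small integer token, so "is this element named rect?" becomes an integer
// comparison instead of a string comparison, and each name is stored once.
#[derive(Default)]
struct Interner {
    map: HashMap<String, u32>,
    strings: Vec<String>,
}

impl Interner {
    // Return the existing token for a string, or assign a new one.
    fn intern(&mut self, s: &str) -> u32 {
        if let Some(&id) = self.map.get(s) {
            return id;
        }
        let id = self.strings.len() as u32;
        self.strings.push(s.to_string());
        self.map.insert(s.to_string(), id);
        id
    }

    // Look up the original string for a token.
    fn resolve(&self, id: u32) -> &str {
        &self.strings[id as usize]
    }
}
```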

(In the meantime I had to implement support for XML namespaces, which the selectors code really wants, but which librsvg never supported.)

Then, the selectors crate wants you to say how your code implements an element tree. It has a monster trait Element:

pub trait Element {
    type Impl: SelectorImpl;

    fn opaque(&self) -> OpaqueElement;

    fn parent_element(&self) -> Option<Self>;

    fn parent_node_is_shadow_root(&self) -> bool;


    fn prev_sibling_element(&self) -> Option<Self>;
    fn next_sibling_element(&self) -> Option<Self>;

    fn has_local_name(
        &self,
        local_name: &<Self::Impl as SelectorImpl>::BorrowedLocalName
    ) -> bool;

    fn has_id(
        &self,
        id: &<Self::Impl as SelectorImpl>::Identifier,
        case_sensitivity: CaseSensitivity,
    ) -> bool;

    // ...
}


That is, when you provide an implementation of Element and SelectorImpl, the selectors crate will know how to navigate your element tree and ask it questions like, "does this element have the id #foo?"; "does this element have the name rect?". It makes perfect sense in the end, but it is quite intimidating when you are not 100% comfortable with webs of traits and associated types and generics with a bunch of trait bounds!

I tried implementing that trait twice in the last year, and failed. It turns out that its API needed a key fix that landed last June, but I didn't notice until a couple of weeks ago.


Two days ago, Paolo and I committed the last code to be able to completely replace libcroco.

And, after implementing CSS specificity (which was easy now that we have real CSS concepts and a good pipeline for the CSS cascade), a bunch of very old bugs started falling down (1 2 3 4 5 6).
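Specificity itself is a small, well-defined computation: count ids, classes and type names in a selector, and compare the resulting triples lexicographically. A rough sketch for simple selectors (a toy illustration, not librsvg's actual code, which gets specificity from the selectors crate):

```rust
// Toy specificity: (ids, classes, type names) as a tuple. Rust's derived
// lexicographic ordering on tuples matches the cascade's comparison rule:
// any id outweighs any number of classes, and so on.
// Crude heuristic parsing; only meant for simple selectors.
fn specificity(selector: &str) -> (u32, u32, u32) {
    let ids = selector.matches('#').count() as u32;
    let classes = selector.matches('.').count() as u32;
    // Count compound-selector parts that start with a letter as type names.
    let types = selector
        .split(|c: char| c.is_whitespace() || c == '>' || c == '+')
        .filter(|s| s.chars().next().map_or(false, |c| c.is_alphabetic()))
        .count() as u32;
    (ids, classes, types)
}
```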

Now it is going to be easy to implement things like letting the application specify a user stylesheet. In particular, this should let GTK remove the rather egregious hack it has to recolor SVG icons while using librsvg indirectly.


This will appear in librsvg 2.47.1 — that version will no longer require libcroco.

As far as I know, the only module that still depends on libcroco (in GNOME or otherwise) is gnome-shell. It uses libcroco to parse CSS and get the basic structure of selectors so it can implement matching by hand.

Gnome-shell has some code which looks awfully similar to what librsvg had when it was written in C:

  • StTheme has the high-level CSS stylesheet parser and the selector matching code.

  • StThemeNode has the low-level CSS property parsers.

... and it turns out that those files come all the way from HippoCanvas, the CSS-aware canvas that Mugshot used! Mugshot was a circa-2006 pre-Facebook aggregator for social media data like blogs, Flickr pictures, etc. HippoCanvas also got used in Sugar, the GUI for One Laptop Per Child. Yes, our code is that old.

Libcroco is unmaintained, and has outstanding CVEs. I would be very happy to assist someone in porting gnome-shell's CSS code to Rust :)

November 11, 2019

Mutter & GNOME Shell Hackfest

A couple of weeks ago, I was fortunate enough to attend the Mutter & GNOME Shell hackfest in Leidschendam.

The Mutter side has already been covered in detail by Carlos (in a day-by-day blog series) and Georges (here), so I will fill in the Shell side a bit:

  • We finally landed Marco’s big “actorization” cleanup, which does away with the delegate pattern used all over gnome-shell (a JS object with an associated actor property, and a monkey-patched _delegate property that links back to the JS object). This was originally planned for 3.34, but was postponed as it’s an intrusive change that requires many extensions to adjust, and we were already in deep freeze territory. Plus there was some bit in there that I disliked, which we finally resolved in a nicer way at the hackfest.
  • Georges continued his mission to replace ClutterClones. Clones are an evil hack that disable important drawing optimizations in the source actor, so this will be a very welcome change.
  • Someone at the hackfest ran into a common trap, where StBin shadows some properties of its ClutterActor (grand)parent class. This is a long-standing source of confusion, but so far we shied away from addressing it because of the API break. Alas, we already have breaking changes lined up for 3.36, so this is finally fixed.
  • Getting together face-to-face allows for much quicker code-review iterations, so we managed to squeeze in a couple of smaller tasks like a redesigned power-off section in the status menu, a cleanup of system dialogs, wiggling entries after entering a wrong password, shadows in window screenshots, as well as plain bug fixes.
  • There was a small discussion about moving extension management out of GNOME Software and providing a less misleading way of doing extension updates.
  • And last but not least, of course, some exciting discussions about the designers’ master plan for world domination that was already hinted at by Carlos and Georges.

Besides the productive bits, it is always good to put faces to IRC nicknames: it was nice to finally meet Niels and Jonas D. in person.

Big thanks to Hans for hosting the event, and Maria and Carlos for accommodation and company (with an additional shoutout to Ada and Lis!).

As I am (finally) writing this, I am on a train to Barcelona for the Linux application summit – hope to meet many of you there!

November 10, 2019

First Shortwave Beta

Earlier this year I announced Shortwave, the successor of Gradio. Now, almost 11 months later, I’m proud to announce the first public beta of Shortwave! 🎉

Shortwave is an internet radio player that lets you search for stations, listen to them and record songs automatically.

Automatic recording of songs

When a station is being played, everything gets automatically recorded in the background. You hear a song you like? No problem, you can save the song afterwards and play it with your favorite music player. Songs are automatically detected based on the stream metadata.
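Shortwave itself is written in Rust on top of GStreamer, but the underlying idea is simple: many internet radio streams interleave ICY metadata blocks (containing a StreamTitle tag) into the audio data, and a change of title marks a song boundary. A minimal, illustrative sketch in Python (the function names are mine, not Shortwave’s):

```python
import re
from typing import Optional, Tuple

def parse_icy_title(metadata: bytes) -> Optional[str]:
    """Extract the current song title from an ICY metadata block.

    Streams supporting ICY metadata interleave blocks like
    b"StreamTitle='Artist - Song';" into the audio data every
    `icy-metaint` bytes; the block is padded with NUL bytes.
    """
    text = metadata.rstrip(b"\x00").decode("utf-8", errors="replace")
    match = re.search(r"StreamTitle='([^']*)';", text)
    return match.group(1) if match else None

def detect_song_change(previous: Optional[str],
                       metadata: bytes) -> Tuple[bool, Optional[str]]:
    """Return (changed, title); changed is True when a new song starts."""
    title = parse_icy_title(metadata)
    if title is None:
        return False, previous
    return title != previous, title
```

A recorder would watch `detect_song_change` while buffering the stream, and close the current file whenever it reports a change.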

Adaptive interface

The interface of Shortwave is completely adaptive and works on all screen sizes. So you can use it on the desktop, but also on your Linux (not Android!) based smartphone. For this Shortwave uses the awesome libhandy library!


It’s possible to stream the audio playback to a network device, which implements the Google Cast protocol (e.g. Chromecast). So you can easily listen to your favorite stations e.g. from a TV.

Access to a huge database

Shortwave uses an internet service as its station database. It contains more than 25,000 stations. This ensures that you will find every radio station, whether a well-known or an exotic one. And if something is really missing, you can add your station here.

Where can I get it?

This is the first beta version of Shortwave. All basic features should work, but issues can appear. If something is wrong, please open an issue report here!

You can get it from Flathub (Beta). Install it with

flatpak install

or just click here.

November 08, 2019

Sprint 6: new Calendar icon, Flatpak portals in To Do, Privacy panel

The Sprint series comes out every 3 weeks or so. Focus will be on the apps I maintain (Calendar, To Do, Settings, and Mutter), but it may also include other applications that I contribute to.

This report is one Sprint late, since last Sprint was dominated by the GNOME Shell Hackfest. More on that below.


Calendar saw a fantastic amount of work. It was dominated by stability improvements, such as a large number of crashes being fixed. Calendar does not misinterpret the -d command line option as a shorthand for --date anymore. The search engine of GNOME Calendar received an important improvement that makes it more robust and less flickery.

Thanks to Jakub, Calendar also received a fresh new icon! Check this out:

And also a development icon:

The new development icon is only enabled when running a development Flatpak.

To Do

GNOME To Do unfortunately didn’t see a lot of action during this Sprint. It now uses the Background portal instead of adding a custom file to .config/autostart.


GNOME Settings, on the other hand, received some large chunks of cleanup! Thanks to Robert Ancell, the Network panel is in the process of being cleaned up to match modern practices, such as GtkTemplate and g_autoptr.

The Privacy panel was also split to a number of self-contained panels! Take a look:

Keep in mind that this is just an initial implementation. There are plans and ongoing research to improve the sidebar navigation, new Display panel mockups, improvements in the Applications panel, and much more, so these screenshots are only representative of the current point in time.

I’ve started working on updating Bastien’s branch that adds a new dialog to select the Alternate Characters key. Only a tiny fraction of the work remains, but I originally targeted it to be ready for next Sprint anyway.


I’m somewhat disappointed to see how fragile Scrum becomes when the team size is 1 and unexpected events happen. Even though I know Scrum is meant to be a team system, I admit I had higher expectations for it.

On the other hand, I’m positively surprised by how much it allows me to do when I’m following a regular routine. Even dedicating a mere 30 minutes of free time makes a lot of difference with the focus the Sprints bring. And because I can manage these projects on my own, and prioritize tasks, I believe it strikes some balance between stability improvements, cleanups, code reviews, new features, redesigns, and issue tracker management.

November 07, 2019

Rewriting large parts of Beast and Bse

Last Tuesday Beast 0.15.0 was released. This is most probably the last release that supports the Gtk+ Beast UI. We have most of the bits and pieces together to move towards the new EBeast UI and a new synthesis core in the upcoming months and will get rid of a lot of legacy code along the way. For a…

Journées Du Logiciel Libre in Lyon

Seven months ago I had the chance to attend JDLL, a really nice French-speaking FLOSS conference that has taken place every year since 1998 in Lyon. It was my fifth attendance (I went in 2010, 2011, 2014, and 2016) so I already knew what to expect, but it has grown a fair bit since my first one. Back then there were only two talk tracks, one workshop track and a dozen booths; now there are five tracks, one or two workshops at the same time depending on the time of the day, and a sports hall full of booths.

Speaking of booths, given the strong GNOME contributor presence in Lyon we have had a table there for a long time. Bastien took care of coordinating, registered for us and arranged for the event box to be shipped. We were four volunteers and took turns sitting behind the table and answering questions from visitors. The audience is quite different from most FLOSS conferences and many barely know what Linux is, so the most common question was what GNOME was.

Another difference from the usual conference is that there is a strong connection to other ethical concerns that are dear to the heart of many FLOSS enthusiasts. Not too far from ours was a booth for a worker union (targeted at people working in the software industry). Vegetarian friendly food was available. Some of the articles for sale here and there were pay what you want. The main theme for that edition (visible in some of the talks) was sustainable development.

I spent most of Saturday behind the booth and it was not too busy. In contrast Sunday was a packed day. The venue was opening at 10:30. We showed up, set up, and not too long after that it was already noon and time to have lunch. Adrien and I held a newcomer workshop at 13:00. We had three attendees and while we were not able to get them to the point of running an app they built themselves because of network issues, we managed to give them an extensive tour of the workflow, Builder and Gitlab. Hopefully they had everything they needed to get started by the end of the hour.

Right as the workshop ended at 14:00 we headed to another room where Adrien gave a talk showcasing the work done to get GNOME applications to work well on the Librem 5. At 15:00 I gave a talk about Matrix where I told people how awesome it is and why they should switch to it. Both talks were well attended and well received.

So why is this report being published so long after the event? Well, our talks were recorded and we were told the videos would be available soon, so I wrote the draft for this post and waited for the videos, then I forgot about it, and I just found out that the organizers recently managed to publish the recordings. You can find the video of my talk along with the slides on the dedicated page. Adrien’s is available here.

And if you fancy trying out Fractal, a Matrix client for the GNOME desktop, you can install it by clicking on the following button.

November 06, 2019

First responder: chest pain, evacuation, medical emergency, strange smell, person stuck in elevator, stuck elevator

Within the Netherlands each company is required by law to have first responders. These handle various situations until the professionals arrive: usually a (possible) fire, a medical situation, or an evacuation. Normally I’d post this on Google+ but as that’s gone I’m putting the details on this blog. I prefer writing it down so I can still read the details later on.

Beginning of July: chest pain

On a Monday in July I retrieve my pager from the security office. After chatting for a bit one mentions something about parking level 1. I head to my desk. Not 5 minutes later I hear via walkie talkie two security people discussing the need for assistance to bring “the bag” to parking level 1. This likely refers to the first aid backpack; it’s filled mostly with all kinds of stuff that’s never used, though it also contains an AED. The evasive way of asking for the bag probably means that they’re with the person and they’re trying to avoid any additional stress to the person. The conclusion is that the person is conscious and the security person fears a heart attack. I could either a) take an AED from my office floor and go directly to parking level 1, or b) first go to the security office, pick up the bag, then walk to parking level 1. Option b is slower but safer (though maybe I assumed too much), plus the backpack hides the AED. I confirm I’ll respond, then take a few moments to decide what to bring (phone, screwdriver, bright vest, jacket).

Arriving at parking level 1, the person is very hesitant to get any additional help. Security (first aid trained) is with the person and trying to keep him calm. The person looks off, though it’s difficult to pinpoint exactly what’s wrong. Status is chest pain, known heart troubles. We take a moment to convince the person that help is needed (to avoid stress), and finally we succeed.

Calling 112 was interesting and lasted 4 min 33 secs (!). I mention being at the address Hertekade 35. 112 responds that they’re aware of the person in the water at Prins Hendrikkade. Wtf! I mention it’s incorrect, followed by sharing the street name a few more times. However, the person in the water results in loads of first responders heading towards that location and right past our building. It’s pretty difficult to communicate. Eventually they ask me for the postal code. I know the one for Boompjes (3011 XB), not for Hertekade (3011 XV). Hopefully they’re the same. Turns out they’re not. Eventually the street name is known by 112. Followed by loads of questions. Eventually they mention they’re sending an ambulance. I share this via walkie talkie (“ambulance coming to Hertekade”). This is because at Hertekade we can easily get the stretcher into the building, while at Boompjes there’s not even a place to stand still. I overheard communication of people being directed towards Boompjes.

A few of the regulars are off on vacation. Others seem to not really respond, which is a bit off as I’m pretty sure they listen to the walkie talkie. We could send a pager, but then we’d have loads of people. Only a few are needed. I think we need to discuss that any person should always just “show up” in case they hear something no matter if they’re specifically requested or not.

The ambulance shows up and the stretcher is guided to parking level 1. The ambulance people start making jokes even before they arrive at the person; I gather they picked up on me being annoyed/stressed with the whole 112 call. Oops. I try to calm down. Further, it might be a good idea for the ambulance crew to play poker with regular people. ;)

The ambulance crew investigates the person and decides to take the person with them. It’s always mixed feelings when this happens; it’s not nice that the person is unwell, but it does make me feel good about our policy of calling 112 even when it might be a minor thing. This time they took a person, so it’s ok if another time they come for something trivial… though it seems it’s usually other people calling for non-ambulance related things.

As a consequence of this incident security checked the walkie talkie relay. The communication was a bit difficult despite trying a few walkie talkies. Secondly, we need to show the postal code in all the places we have the address for 112.

What happened after

Normally I do not hear (nor ask) about anything which happens after an incident. This is because of privacy; if they ever tell me, ok. I won’t ask though, as maybe they’d feel forced.
Various months after the above incident I heard that the person had gone home and was still at home. We needed to convince someone that it’s ok to call 112, while it actually turned out to have a really serious impact on their life.

Security mentioned they heard of another person, in another country, who also went to 112 with chest pain. That was around the same time as the incident I responded to. The other person’s condition was so serious that they weren’t allowed to leave that country (too big of a risk). As a result, that person was also out for months.

Yearly evacuation

End of September the yearly evacuation drill was held. I was on vacation during that time. Another very experienced first responder was also not in the office. The evacuation seemed to go pretty much ok. Apparently it took one group a while to figure out how to use the walkie talkie. I’m pretty happy with this; I’ve tried before to do a bit less during a drill so others could learn… but not acting is quite difficult.

The drill included two “fake” people who pretended to have a medical emergency. One pretended to have trouble walking, the other was shaking on the ground (seizure). These were handled well. We have quite a few first aid trained people who usually (and unfortunately) do not participate in the evacuation drill. Hopefully they’ll participate in future.

Medical emergency

On the day of this incident I was chatting with security. They mentioned that the goods elevator was out of order. Normally that would be very critical for the building, as the goods elevator is also the elevator used by the fire department. No working elevator apparently means the building cannot be used. A few years ago, the building set up another elevator to be usable by the fire department (emergency power and some other stuff). That other elevator is much more of a hassle due to its way less convenient location and lack of accessibility from the road. The goods elevator is perfect for the fire department and especially great for ambulances.

The medical emergency happened during lunch. I had my pager with me but not my walkie talkie. My pager didn’t go off so I missed the entire thing. The reception within the building is poor, so there are 11 signal boosters within the building to ensure good reception. As a result the pager range outside of the building is pretty impressive. It’s the first time I managed to not receive a page, despite having the pager on me. The range is easily 1 or 2 km.

A person in the building forgot to take some pills. As a result the situation deteriorated so much that a) taking the pills wouldn’t be enough and b) they could only be helped by 112. The person informed the first responders within the company. They gathered, called 112, then security as well. Security asked for assistance via walkie talkie. After not getting a quick response they sent a page. Another really active first responder works for the same company as the person needing 112. That first responder ran from lunch back to the building when he saw the pager going off. Meanwhile, the other first responders were pretty much ready.

Once the ambulance arrived they were taken to the relevant floor using the goods elevator. The elevator mechanic worked all morning to fix it; he fixed it and less than 10 minutes later security got the medical emergency call.

Security apparently had nothing to do and sent a picture from the goods elevator camera to me as well as to a first responder colleague (the same one who missed the evacuation drill). I guess this was mainly to make fun of us missing an incident, as we’re both quick to respond to any incident.

Strange smell

Via the walkie talkie I heard security talking about a strange smell. I heard one of the security guys going up to investigate. As I was not busy with work, plus it had been a while since I had to respond to an incident, I mentioned I was joining (why ask if you can tell).

We checked the area itself, various ventilation shafts, different floors, etc. There’s some renovation going on but security already checked those and the work was already stopped an hour before. We checked those floors again but couldn’t find anything. I tried a few tricks to

What might’ve happened is that the smell took quite a while to reach the other floor. Within a big building the air doesn’t really go from one floor to another. It might first go to the air filtration system, then eventually to another floor. Those floors might not be close to each other. To properly investigate you need knowledge of which floors are on the same air filtration system.

Security specifically mentioned multiple times to multiple people that they’re really happy to investigate, and that despite not finding anything people should please keep calling, even if they’re not sure. This is similar to us always calling 112 even when in doubt.

Person stuck in elevator

On October 31 (“Halloween”) the first responders had their yearly refresher training. This time I arranged space (2 combined meeting rooms) for it. The first responders are from various companies. By hosting the training at my company I get the benefit of additional first responders who have knowledge of our office layout. This matters because the layout can differ hugely per floor and/or company. I warned the trainer and most of the first responders that we would have a party starting at 16:00 and that it might get a bit loud.

The party was for Halloween (not common in my country). Late in the afternoon the first responders somehow got notified that someone was stuck in an elevator. Interestingly, one of the first things they did was try to contact me. I had forgotten my walkie talkie and didn’t notice. They also tried my phone, but with all the loud talking I didn’t notice that either. So a few went out of the meeting room and responded, thereby passing me.

The person was stuck on a level without any outside doors (this building has an “interesting” design). The person was also very stressed; they called various times within 5 minutes. Normally we’d stand outside and calm them down. In this case you couldn’t really reach them. In the past building security would override the elevator and move it manually (similar to an elevator mechanic). They’ve been trained on how to do so by elevator mechanics. But due to an incident security stopped doing this, mostly because they want to know who is responsible in case they make a mistake. The lack of any answer results in security limiting themselves to some basics.

Security raised a priority incident with building management, which included details about the stressed person inside. Building management immediately raised an incident with yet another company. Eventually that other company raised it with the elevator mechanics company. The elevator mechanics were quite unhappy with the amount of delay. For one it’s wasted time; secondly, they were initially quite close by.

Stuck elevator

On my way home one day, I noticed security at the elevators. Apparently an elevator was stuck. I joined security to respond. No notification was sent via either the pager or the walkie talkie. As I was leaving the office I also didn’t have any walkie talkie with me.

I joined security to the elevator maintenance room. Apparently there’s even an indicator light nowadays to determine if the elevator is exactly at a floor level. If the elevator is not exactly at a floor level there’s a chance that a person would try to get out and fall down the elevator shaft. Security had already checked the elevator and seen that nobody was inside (or at least nobody conscious).

Security first turned on the elevator shaft lights, then turned off the power. After that we went down to where the elevator was. The elevator doors were still close together (maybe 1 cm apart). Normally if there was a person inside they’d immediately respond to the doors “unlocking” by pushing the doors open. Strangely the inside lights were off. Turning off the power usually still keeps the lights inside the elevator on.

This was the day after a person was stuck in an elevator. Apparently the day before first this elevator got stuck, then later in the day the other elevator got stuck with the person inside. The security person who had the evening shift (and not a regular) was a bit nervous as a result and went over the procedure with the other (regular) security person.

November 05, 2019

Congress/Conference organization tasks list draft

Well, while checking my blog for references about resources related to conference organization, I found I didn’t have any link to this thing I compiled two years ago (!!??). So this post fixes that.

After organizing a couple of national and international conferences I compiled a set of tasks useful as a skeleton for your next conference. The list is neither absolutely exhaustive nor strictly formal, but it’s complete enough to be, I think, accurate and useful. In its current form this task list is published as Congress/Conference organization draft: «a simplified skeleton of a kanban project for the organization of conferences. It is specialized in technical and opensource activities, based on real experience».

I think the resource is still valid and useful. So feel free to use it and provide feedback.

task list screenshot

Now, thinking aloud, and considering my crush on EPF Composer, I seriously think I should model the tasks with it as a SPEM method and publish both the sources and a website. And, hopefully, create tools for creating project drafts in well known tools (Gitlab, Taiga itself, etc). Reach out to me if you are interested too :-)


Xwayland randr resolution change emulation now available in Fedora 31

As mentioned in an earlier blogpost, I have been working on fixing many games showing a small image centered on a black background when they are run fullscreen under Wayland. In that blogpost I was mostly looking at how to solve this for native Wayland games. But for various reasons almost all games still use X11, so instead I've ended up focusing on fixing this for games using Xwayland.

Xwayland now has support for emulating resolution changes requested by an app through the randr or vidmode extensions. If a client makes a resolution change request, this is remembered, and if the client then creates a window located at the monitor's origin and sized to exactly that resolution, then Xwayland will ask the compositor to scale it to fill the entire monitor.
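The matching rule described above can be sketched roughly as follows. This is an illustrative simplification in Python, not the actual Xwayland code (which is C), and the names are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Monitor:
    x: int       # monitor origin in the X11 coordinate space
    y: int
    width: int   # real (native) resolution; never actually changed
    height: int

def should_emulate_scaling(win_x: int, win_y: int, win_w: int, win_h: int,
                           monitor: Monitor,
                           emulated: Optional[Tuple[int, int]]) -> bool:
    """Decide whether the compositor should scale a window to fill the monitor.

    `emulated` is the (width, height) a client last requested through the
    randr or vidmode extensions, or None if no change was requested. The
    window qualifies only when it sits at the monitor's origin and matches
    the emulated resolution exactly.
    """
    if emulated is None:
        return False
    return (win_x, win_y) == (monitor.x, monitor.y) and (win_w, win_h) == emulated
```

The key design point is that the real display mode never changes; only windows that look like a fullscreened game at the fake resolution get viewport-scaled by the compositor.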

For apps which use _NET_WM_FULLSCREEN (e.g. SDL2, SFML or OGRE based apps) to go fullscreen, some help from the compositor is necessary. This is currently implemented in mutter. If you are a developer of another compositor and have questions about this, please drop me an email.

I failed to get this all upstream in time for Fedora 31 final. But now it is all upstream, so I've backported the changes and created an update with them. This update is currently in updates-testing; to install it, run the following command:

sudo dnf upgrade --enablerepo=updates-testing --advisory=FEDORA-2019-103a594d07

November 01, 2019

Toolbox — A fall 2019 update


Things have been moving fast in Toolbox land, and it’s time to talk about what we have been doing lately.

New home

Toolbox is now part of the containers organization on GitHub. We felt that the project had outgrown the prototype stage — going by the activity on the GitHub project it’s safe to say that there are at least a few thousand users who rely on it to get their work done; and we are increasingly working towards expanding the scope of the project to go beyond just setting up a development environment.

Housing the project in my personal GitHub namespace meant that I couldn’t share admin access with other contributors, and this was a problem we had to address as more and more people keep joining the project. Over the past year, we have developed a really good working relationship with the Podman team and other members of the containers organization, without whom Toolbox wouldn’t exist, so moving in under the same umbrella felt like a natural next step towards growing the project.

Migration to cgroups v2

Fedora 31 ships with cgroups v2 by default. The major blocker for cgroups v2 adoption so far was the lack of support in the various container and virtualization tools, including the Podman stack. Since Toolbox containers are just OCI containers managed with Podman, we saw some action too.

After updating the host operating system to Fedora 31, Toolbox will try to migrate your existing containers to work with cgroups v2. Sadly, this is a somewhat complicated move, and in theory it’s possible that the migration might break some containers depending on how they were configured. So far, as per our testing, it seems that containers created by Toolbox do get smoothly migrated, so hopefully you won’t notice.

However, if things go wrong, barring delicate surgery on the container requiring some pretty arcane knowledge, your only option might be to do a factory reset of your local Podman installation. Being a factory reset, this means you will lose all your existing OCI containers and images on your local system. This is a sad outcome for those unfortunate enough to encounter it. However, if you do find yourself in this quagmire then take a look at the toolbox reset command.

Note that you need to have podman-1.6.2 and toolbox-0.0.16 for the above to work.

Also, this is one of those changes where it bears repeating that online RPM package updates are fragile. They are officially unsupported on Fedora Workstation, and variants like CoreOS and Silverblue make them even harder. A cgroups v2 migration is only expected to work on a freshly booted system.
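If you want to check which cgroup hierarchy your freshly booted host ended up on, a common heuristic is to test for the cgroup.controllers file that only the unified (v2) hierarchy exposes at its mount point. A small sketch, assuming the conventional /sys/fs/cgroup mount point:

```python
import os

def cgroup_version(cgroup_root: str = "/sys/fs/cgroup") -> int:
    """Return 2 when the unified (v2) hierarchy is mounted, 1 otherwise.

    On cgroups v2 the mount point exposes a single `cgroup.controllers`
    file; on v1 it contains per-controller sub-hierarchies instead.
    """
    if os.path.exists(os.path.join(cgroup_root, "cgroup.controllers")):
        return 2
    return 1
```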


The last six months have seen a whole boatload of new features and improvements. Here are some highlights.

On Fedora Silverblue and Workstation, GNOME Terminal keeps track of the current Toolbox container, and just like it preserves the current working directory when opening a new terminal, it’s also able to preserve the Toolbox environment. This is quite convenient when hacking on a Silverblue system, because it removes the extra step of entering a toolbox after opening a new tab or window.

The integration with the host operating system has been deepened. Toolbox containers can now access virtual machines managed by the host’s system libvirt instance, and the host’s ulimits are preserved. The entirety of /dev is made available inside the toolbox as a step towards supporting the proprietary Nvidia driver to enable CUDA for AI/ML frameworks like TensorFlow.

The container’s /run/host now has big chunks of the host’s file hierarchy. This is handy for one-off use-cases which require access to parts of the host that aren’t covered by Toolbox by default.

Last but not least, Kerberos now works inside Toolbox containers. This will make it easier to contribute to Fedora itself from inside a toolbox.

Survey: making Getting Things GNOME sustainable as a productivity app for public good

Now that you’ve been introduced to the overall concept of Getting Things Done with the video in my previous blog post, let me show you the secret weapon of chaos warriors who want to follow that methodology with a digital tool they can truly own.

Your secret weapon: “Getting Things GNOME”

“Getting Things GNOME” is a native GNOME desktop application that is entirely free and open-source (licensed under the GPL) and runs locally on your computer.

Here’s what it looks like on my computer (and yes, I do have over 600 actionable tasks at all times lately):

GTG 0.3.1 showing my personal task list

Getting Things GNOME is one of the most well-made productivity apps of the decade. It has:

  • A mature codebase with almost a decade of work that has been put into it
  • A nice GTK interface (GTK2 in the stable version, GTK3 in the development version), with a flexible free-form text editor for handling inline @tags, and extended descriptions/notes below the title
  • The ability to quickly defer tasks (ex: “do it tomorrow”, “do it next week”, “do it next month”, etc.)
  • Natural language parsing (you can tag things while you type the title, and you can use dates such as “Tomorrow”, “Thursday” in addition to the standard “YYYY-MM-DD”)
  • “Work view” mode for displaying only “actionable” tasks so you can focus on your work
  • The notion of sub-tasks and dependencies
  • Tags, which can also be made hierarchical (ex: “@phone” can be a child of “@work”) and can have colors and icons
  • Search and “saved searches”
  • Plugins!
  • Works offline, owned by you with no restrictions, no licensing B.S., software “by the people for the people”
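The natural language date parsing mentioned above can be illustrated with a small sketch. This is not GTG’s actual parser, just a hedged approximation of the behavior described (keywords, weekday names, and ISO dates):

```python
from datetime import date, timedelta

WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday",
            "friday", "saturday", "sunday"]

def parse_due_date(text, today=None):
    """Parse a GTG-style fuzzy date string into a concrete date.

    Supports "today", "tomorrow", weekday names (meaning the next such
    day), and ISO "YYYY-MM-DD". Returns None for unrecognized input.
    """
    today = today or date.today()
    word = text.strip().lower()
    if word == "today":
        return today
    if word == "tomorrow":
        return today + timedelta(days=1)
    if word in WEEKDAYS:
        # Next occurrence of that weekday, always strictly in the future.
        delta = (WEEKDAYS.index(word) - today.weekday() - 1) % 7 + 1
        return today + timedelta(days=delta)
    try:
        return date.fromisoformat(word)
    except ValueError:
        return None
```

For example, typing “Thursday” on a Tuesday would set the due date two days out, while an unrecognized word simply leaves the task without a due date.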

This is what the Git (development) version of GTG’s UI currently looks like, showing my personal tasks (it needs some improvement, but it’s already impressive):

My tasks with the “kusanagi” tag, as shown by the development version’s GTK3 UI.

Here are some more (old) screenshots:

Historical context

Unfortunately, the previous maintainers never developed a business model to sustain development, and, after a long period of development hell, abandoned the project as they moved on to get various jobs.

This is understandable and I can’t fault them… however, I still need a working and maintained application that will continue working throughout the years as our technological landscape evolves (the Linux platform is pretty ruthless on that front). Otherwise, you risk having your Linux distribution suddenly stop packaging the app you dearly depend on.

If anyone was wondering if all Free and Open-Source software can continue as a side-project forever, this is a prime case study. This is what happens when software goes unmaintained.

Meanwhile, in the world of proprietary software and software-as-a-service, countless people are paying a big one-time, monthly or yearly licensing fee to use to-do software applications that they don’t own, and for which they can’t be 110% certain that the software respects their digital rights. Depending on these raises fundamental questions when it’s something as long-lived and all-encompassing as a personal productivity system: Can you trust that cloud service? Or that this proprietary app isn’t profiling what’s going on in your OS, or uploading your data if it sees certain key words? That the licensing fee won’t increase? That the whole thing won’t change when the parent company becomes part of a merger, acquisition or bankruptcy? That the app will keep functioning throughout the years as you upgrade your operating systems, that it will work on more than one or two “authorized” computers?

All these questions are relevant when you’re depending on proprietary software. And when you’re depending on an online/cloud service, you’re always at the mercy of that service eventually shutting down (or being acquired, merged, etc.). There are numerous examples of that happening, including this, this, this, this and that, to name only a few.

This is not The End?

It doesn’t have to be that way.

What if we could bring “GTG” back from the dead, finish the job, and sustain it forever more so that we can all benefit from it?

There are two ways this can be done (which are not mutually exclusive, I might add):

  • From the existing userbase, reform a completely new team of software developers willing to come together and “finish the job” to get a new release out of the door. I have obtained administration rights to the GitHub repository, so I can, in practice, play “project manager” and help volunteers get on board. We could have a status analysis and brainstorming session to establish a roadmap with the shortest path to releasability, and beyond, and then work together for the months and years to come to accomplish our goals.
  • Someone with enough freedom and time (it could be me, it could be someone else, it could be multiple people) gets paid to be the day-to-day maintainer(s): the one(s) doing the majority of the core “boring” work to get everything back into shape, and mentoring the part-time contributors who submit opportunistic (“drive-by”) patches and merge requests.

Both options are viable, but they require enough public interest to happen. This is why I’m running a survey, linked below, to evaluate the potential for such an initiative. Are you interested in GTG coming back in full force? Whatever your answer, let me know through this 5-minute survey. Your input is very much appreciated:

  • It is much more efficient to run this survey (and share results here) than to spend months preparing a campaign only to find out that there is or isn’t enough demand.
  • It also lets me raise awareness and hopefully assemble a team of new contributors (because you don’t just make them appear out of thin air); if enough volunteers show up (with a lot of time and passion to share) we can get started without much delay, get together and create momentum.

Help me evaluate how we can bring it back to life, with this 5-minute survey

The survey can be found here:

Filling this survey should take 4-6 minutes at most.

Let’s be clear: I’m not doing this for myself (just grabbing a proprietary app package is much easier and would let me move on to MUCH more lucrative opportunities), I would be doing this for the greater public good, because it breaks my heart to think that GTG would die when it’s such a great piece of software.

There is no sane FLOSS native desktop alternative for Linux users, and open-source software should be worth more money than proprietary software, not less: you are getting better value out of it, with an implicit guarantee that the software respects your rights and privacy, and that it will remain available forever as long as there is someone on the planet willing to maintain it.

On the other hand, spending time creating software costs money; the alternative is not caring and pursuing a lucrative career, so the software remains unmaintained and everybody loses. So I need to know that nursing GTG back to health would be worth the effort, that the application would be used by many (not just a handful) of people around the world. I seek “meaningful” work.

Help me determine if this is worth my (or anyone’s) time by filling the survey today, and please share it with those around you, and elsewhere on the interwebs. Thanks!

The post Survey: making Getting Things GNOME sustainable as a productivity app for public good appeared first on The Open Sourcerer.

g_get_os_info() and GLib 2.63.1

GLib 2.63.1 has been released. The final new API to mention in this mini-series of blog posts about what’s in 2.63.1 is g_get_os_info().

g_get_os_info() is a way to get identifying information about the OS. On Linux, this is gathered from /etc/os-release. On other OSs, it’s gathered using platform-specific APIs (on other Unixes, this means falling back to uname()).
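For context, /etc/os-release is a plain list of shell-style KEY=value assignments, with values optionally quoted. A minimal sketch of that kind of parsing, in Python purely for illustration (the real logic lives in GLib's C code and handles more edge cases), might look like:

```python
def parse_os_release(text):
    """Parse os-release-style content (KEY=value lines) into a dict."""
    info = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blanks, comments and anything that isn't an assignment.
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Strip one level of surrounding single or double quotes.
        if len(value) >= 2 and value[0] == value[-1] and value[0] in "\"'":
            value = value[1:-1]
        info[key] = value
    return info

sample = 'NAME="Fedora"\nPRETTY_NAME="Fedora 30 (Workstation Edition)"\nID=fedora\n'
print(parse_os_release(sample)["PRETTY_NAME"])  # Fedora 30 (Workstation Edition)
```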

It was written by Robert Ancell, with contributions from Ruslan Izhbulatov, Ting-Wei Lan and Simon McVittie; and it came out of proposals from Robert at GUADEC.

To use it, pass it a key, like G_OS_INFO_KEY_PRETTY_NAME, and it’ll return a newly-allocated string with the corresponding value for the current OS, or NULL if there was none. Different OSs support different sets of keys, and the amount of support and set of keys may change over time.

An example:

g_autofree gchar *os_name = g_get_os_info (G_OS_INFO_KEY_PRETTY_NAME);
g_print ("OS: %s\n", os_name != NULL ? os_name : "unknown");
/* Prints “OS: Fedora 30 (Workstation Edition)” for me */

October 31, 2019

2019-10-31 Thursday.

  • Call with MikeK, calc rendering code reading, ESC call, call with Marco, Felipe, mail & more code review.
  • Encouraged to see the Notepad++ release. Obviously it is uncomfortable for rulers to have another authority capable of questioning the authority of the state.

What I did in October

October in Galicia has a weather surprise for every week. I like it because every time the sun appears you feel like you gotta enjoy it – there might be no more until March.

I didn’t do much work on Tracker this month, besides bug triage and a small amount of prep for the 2.3.1 stable release. The next step for Tracker 3.0 is still to fix a few regressions causing tests to fail in tracker-miners.git. Follow the Tracker 3.0 milestone for more information!


In September I began teaching English classes again after the summer, and so I’ve been polishing the tool that I wrote to index old lesson plans.

It looks a little cooler than before:

Screenshot of Planalyzer app

I’m still quite happy with the hybrid GTK+/webapp approach that I’m taking. I began this way because the app really needs to be available in a browser: you can’t rely on running a custom desktop app on a classroom PC. However, for my own use running it as a webapp is inconvenient, so I added a simple GTK+/WebKit wrapper. It’s kind of experimental and a few weird things come out of it, like how clipboard selections contain some unwanted style info that WebKit injects, but it’s been pretty quick and fun to build the app this way.

I see some developers using Electron these days. In some ways it’s good: apps have strong portability to Linux, and are usually easy to hack on too due to being mostly JavaScript. But having multiple 150MB binary builds of Chromium dotted about my machine makes me sad. In the Planalyzer app I use WebKitGTK+, which is already part of GNOME and it works very well. It would be cool if Electron could make use of this in future 🙂


I was always interested in making cool visuals, ever since I first learned about the PC demoscene back in the 1990s, but I was never very good at it. I once made a rather lame plasma demo using an algorithm I copied from somewhere else.

And then, while reading the Create Digital Music blog earlier this year, I discovered Hydra. I was immediately attracted by the simple, obvious interface: you chain JavaScript functions together and visuals appear right behind the code. You can try it here right away in your browser. I’ve been out of touch with the 3D graphics world forever, so I was impressed just to see that WebGL now exists and works.

I’ve been very much in touch with the world of audio synthesizers, so Hydra’s model of chaining together GL shaders as if it was a signal chain feels very natural to me. I still couldn’t write a fragment or a vertex shader myself, but now I don’t need to, I can skip to the creative part!

So far I’ve only made this rather basic webcam mashup but you can see a lot more Hydra examples in the @hydra_patterns Twitter account.

I also had a go at making online documentation, and added a few features that make it more suitable to non-live coding, such as loading prerecorded audio tracks and videos, and allowing you to record a .webm video of the output. I’m not sure this stuff will make it upstream, as the tool is intended for live coding use, but we’ll see. It’s been a lot of fun hacking on a project that’s so simple and yet so powerful, and hopefully you’ll see some cool music videos from me in the future!

I was there at GNOME.Asia Summit 2019

I was at GNOME.Asia Summit 2019 in Gresik, Indonesia as a speaker. Since I am very new to the open source community, this was a great experience. I conducted the Newcomers workshop with my pal Gaurav Agarwal from India. We had our parallel class on day 0 (which is actually day 1) on the premises of the University of Muhammadhiya. My expenses to attend the event were sponsored by the GNOME Foundation. I'm so thankful for this great opportunity.

Let me run through my experience in Gresik and at the conference. Gaurav and I landed in Surabaya on the 10th of October 2019. Mr. Firdhous from the local community came to pick us up, and we already had a hotel booked in Gresik. They dropped us at Hotel Santika Gresik, which was fabulous. The next day I went to the conference and met people from the local and open source communities. I had my talk after lunch, and it went very well. We used our personal experience and the getting started with GNOME guide to steer the participants towards open source development.
On the first day we were taken to a beautiful hangout spot in a hilly area. I forgot the name of the place ;). All the speakers and organizers were there. We introduced ourselves and had a great dinner. People in Gresik were so nice.

The second day was the official start of the conference. It began with a beautiful traditional Indonesian dance, followed by the keynote speeches. There were parallel classes after the inauguration session ended. I had a hard time choosing which one to attend, because all the talks were delivered by some of the best people I've seen in the open source community so far. I attended Adhitiya's talk on power management. It was excellent and informative; I think it was my favorite of all the talks we had.
It was a tiring day, but to my pleasant surprise the organizers had made dinner plans for us. We were taken to a traditional Indonesian hotel, and all the speakers had dinner together with the local community volunteers. We had a lot of time to talk and joke with each other. It was some of the best time I had that week.

The last day was much the same as the second. We had parallel classes going on. I spent some time going through the stalls they had, and collected some awesome swag (laptop stickers, camera covers, etc.). At last we had the closing ceremony, where awards were given to participants for answering questions related to the talks from the previous days and that day. Finally we had the photo shoot.
It was hard for me to bid farewell. Although I was there for just three days, I made so many connections. I loved the vibe, and I got the urge to work with the open source community more and more.

October 30, 2019

2019-10-30 Wednesday.

  • Mail; testing, code reading and more patch merging. Sales & Marketing call, back to calc tile rendering oddities.
  • All-Saints band-practice; pleased to see Barak Obama's "People who do really good stuff have flaws" commentary on our messy and ambiguous world. Personally I have the flaws, but sadly they don't guarantee doing the really good stuff. I'd also love all (particularly old) media outlets to have a cultural shift to systematically provide a: "see this clip in its full context" button ~everywhere.

The goldsmith and the chaos warrior: a typology of workers

As I’ve spent a number of years working for various organizations, big and small, with different types of collaborators and staffers, I’ve devised a simple typology of workers that can help explain the various levels of success, self-organization, productivity and stress of those workers, depending on whether there is a fit between their work type and their work processes. This is one of the many typologies I use to describe human behavior, and I haven’t spent years and a Ph.D. thesis devising this; these are just some down-to-earth reflections I’ve had. Without much further ado, here’s what I’ve come up with so far.

The first type of worker is what I call the “chaos warrior”: this includes the busy managers, professional event organizers, executives, deal-with-everything assistants, researchers, freelancers or contractors.

In my view, “chaos warriors” are the types of workers who—from a systemic point of view—have to deal with constantly changing environments and demands, time-based deadlines, dependencies on other people or materials, multiple parallel projects, etc.

  • Chaos warriors, in their natural state, very rarely have the luxury of single-threaded work and interruption-free environments (though those certainly would be welcome, and chaos warriors sometimes have to naturally retreat to external “think spaces” to get foundational work done).
  • Chaos warriors don’t necessarily enjoy the chaos (some of them hate it, some of them crave it), but it’s part of the system they find themselves in, so they have to structure their workflow around it—or risk incompetence or burning out really fast. They have to become “organized” chaos warriors, otherwise they’re just chaos “victims”.
  • The in-between state, the somewhat-organized-but-not-zen chaos worker that many freelancers experience, is what I call “Calm Like a Bomb”.

Note that the chaos I am referring to is cognitive, not physical; firefighters, paramedics and ER nurses, the police and military, are “emergency” workers, not warriors of cognitive “chaos”. They are beyond the scope of what I’m covering here (and what’s coming in my next blog posts). They don’t need a “productivity system” to sort through cognitive overload, they deal with whatever comes forth as best as they can in any given situation.

The ultimate embodiment of a chaos warrior is the nameless heroine in the 4th DaiCon event opening animation from 1983: you can’t get a better representation of triumph amidst chaos than that!

The second category of workers is what I call the “goldsmith”—that is, people with a very specific role, who work in a single regular “employment” type of job, often with set hours, and possibly on-site (in an office/warehouse/shop/etc.).

This may include most office workers, public servants, software design & development folks who make a sharp separation between work and personal life, construction subcontractors working as part of a big real estate project, waiters and bartenders, technicians, retail sales & logistics, etc. I’m vastly simplifying and generalizing of course, but here I sketch the picture of someone who comes in in the morning, looks at the task list/assignments/inbox, works on that throughout the day, and then leaves their work life behind to enjoy their personal life; then the process resets on the next day.

  • “Pure” goldsmiths do not track work items outside of the workplace, and usually do not need to track personal items while at work (or aren’t allowed to). As such, in both settings, their mind is focused and clear. You arrive at context A, you work on context A’s items that are in front of you. You arrive at context B, you relax or deal with whatever has come up in your home “as it happens”. Arguably, from this standpoint of work-life separation, you could put some “emergency workers” in this category.
  • The goldsmith may have a simpler life, which is kind of a luxury, really: they can more easily have a tranquil mind, without the cognitive weight of hundreds of pending items and complex dependency chains governing their tasks. You do the job, you move on.
  • When they are asked to “produce” output in their area of expertise, those are often the type of workers that would benefit from a quiet, interruption-free work environment. There’s a reason why Joel Spolsky designed the FogCreek offices to allow developers to close the door and work in peace, instead of the chaos of open-space offices (that’s a story for another day).

Some specialized “creative” goldsmiths have a hard time separating work from personal life; even when they are home, they can’t help but think about potential creative solutions to the challenges they’re trying to solve at work. In that case, those may be “chaos warriors” in disguise.

In my view, personal productivity methodologies like GTD cater first and foremost to the “organized chaos warriors”, rather than the goldsmiths, who may have little use for all-encompassing cognitive techniques, or who may have tools that already structure their work for them.

Notably, in some industries like IT or the Free & Open-Source software sub-industry, we have done a pretty good job at externalizing (for better or for worse) the software developers and designers’ todo list as “bug/issue trackers”, and their assignments may often be linear and fairly predictable, allowing them to be “goldsmiths”. Most of the time, a software developer or designer, in their core duties, is going to deal with “whatever is in the issue list” (or kanban board), particularly in a team setting, and as such probably doesn’t feel the need to have a dedicated personal todo list, which might be considered duplication of information and management overhead. Input goes in (requirements, bug reports, feature requests), output goes out (a new feature, design, or fix). There are some exceptions to this generalization however:

  • When your issue tracker (bug inventory) is not actively managed (triaged, organized, regularly pruned), you eventually end up declaring “bugtracker bankruptcy”, or, like Benjamin Otte once said to me on IRC, “Whoever catches me first on IRC in the morning, wins.”
  • Some goldsmiths may have more complicated lives than just their job duties and might be interested in this approach nonetheless.
  • Sometimes, shared/public/open bug trackers are a tyranny on the mind, much like a popular email inbox: the demands are so numerous and complex (or unstructured) that they are not only externally imposed goals, they become imposed chaos—in which case the goldsmith may find that they need to extract a personal subset of the items from the “firehose” into a remixed, personalized, digestible task list for themselves.

Do you recognize yourself in one of these categories I’ve come up with? Or do you fit into some other category I might not have thought about? Did you find this essay interesting? Let me know in the comments!

If you’re a chaos warrior, or you fit any of the goldsmith’s “exceptions” (or if you’re interested in the field of personal productivity in general), you’ll probably be interested in reading my next article on (re)building the best free & open-source “GTD” application out there (but before that, if you haven’t read it already, check out my previous article on “getting things done”).

The post The goldsmith and the chaos warrior: a typology of workers appeared first on The Open Sourcerer.

Peruvian International Scientific Meeting: Sinapsis 2019

Last week, I traveled to Ghent to attend the Sinapsis 2019 conference, the fourth meeting of Peruvian scientists in Europe, held on October 23rd–25th, 2019. It was a very intense event, since every day started at 9 am and finished around 8 pm.

About the oral presentations

Of the almost 40 speakers, I want to highlight six presentations that held my attention from the moment they started speaking until the very end. In my humble opinion and understanding, I decided to write about the work of three male and three female scientists.

The first speaker pictured is Prof. Jorge Chau from the Leibniz Institute in Germany, with his talk “Studies of mesospheric and lower thermospheric turbulence and waves with novel multi-static MIMO specular meteor radars”. He gave a thoughtful and impassioned explanation of his work; this time I understood the maths and its application. My second favorite talk was given by Lucia Fitts Vargas of the University of Minnesota, on “Effects of disturbances and land use change on carbon stocks in six US states”. I liked her talk because she explained in a very simple way the presence of carbon in the trees of our jungle in Peru, and then gradually moved on to the complexities of her carbon stock research and the tools used in the USA. I was impressed by the research of Jacqueline Valverde Villegas from INSERM, Université de Montpellier, France, on HIV: “Aspectos genéticos e inmunológicos en la infección por el VIH/SIDA” (“Genetic and immunological aspects of HIV/AIDS infection”), and by the work of Juan Carlos Hurtado from the University of Barcelona: “Identificación de las causas de muerte en países de mediana y baja renta a través de la autopsia mínimamente invasiva” (“Identifying causes of death in middle- and low-income countries through minimally invasive autopsy”). I overheard positive reactions to the talk by Dr. Luis Dalguer on earthquake prediction in Switzerland: “Terremotos: su mecanismo físico, su predicción y prevención de desastres” (“Earthquakes: their physical mechanism, prediction, and disaster prevention”). Lastly, the talk by Lucila Menacho from the University of Engineering in Peru, “Study, construction, and applications of supercapacitors based on graphene”, was an interactive one that everyone in the room paid attention to. Congratulations to all; every topic was interesting. The speakers' order in the picture: Chau, Fitts, Valverde, Menacho, Hurtado, Dalguer.

About the poster presentations

54 posters were presented in two sessions. I had the opportunity to read a few of them, since I was also presenting a poster. I took pictures of the posters around me:

  1. “El puesto de venta de carne de pollo como fuente de contaminación de cepas con variabilidad clonal de Escherichia coli” (“The chicken meat stall as a source of contamination by clonally variable strains of Escherichia coli”) by Juan Raúl Lucas López from Barcelona.
  2. “Autoría femenina en la Revista Peruana de Medicina Experimental y Salud Pública: Análisis del periodo 1997-2017” (“Female authorship in the Revista Peruana de Medicina Experimental y Salud Pública: an analysis of 1997-2017”) by Reneé Pereyra Elías from Oxford University.
  3. “Gestión de lodos de fosas sépticas en pequeñas ciudades AltoAndinas (Saylla, Cusco, Perú)” (“Septic tank sludge management in small high-Andean cities (Saylla, Cusco, Peru)”) by Nathaly Mishel Salinas Pimentel from the Universitat de Barcelona.

Women’s presence

Around 15 of the 40 oral presentations were given by women, and 25 of the 54 posters were presented by women. The presence of Peruvian women strengthened their influence in science and research. Most of them were young scientists from different parts of Peru and from national universities. I am so glad to see how they learn and how they want to contribute to Peru with their knowledge and professionalism. I talked to most of them; they all have a bright future.

My participation

I have more than ten years of experience coordinating IT events, so I was able to give some suggestions for improvement, such as the position of the banners for pictures and videos, and the organization of the group photo. That moment was captured in the picture! Mine was poster number 02, so the ambassador of Peru in Belgium had the privilege of hearing my ideas about HPC and what I have found so far in Peru. I hope that within 10 years all Peruvian scientists will be able to use HPC in their research.


One day before the conference, we went together to the company AGP, where we saw onsite the process of creating new-generation impact-resistant glass. A countryside day was also arranged, where we enjoyed Peruvian food. I had not tried “canchita serrana” in years. I had a great time remembering some local Peruvian jokes with the people I met.

Group photo

This event planned a group photo every single day, which was unusual for me since I did not attend the previous editions of SINAPSIS. We all gave our best smiles in the pics.

Around the city

Ghent is a beautiful city and the season was OK for me. It was cold, but not as cold as in the UK at this time of year. Some sunny days were perfect for taking landmark pictures around the city. While I was walking to the conference, I saw some messages on the street that I liked:

The messages shown at the bottom seem too small in comparison to my face. They say:

  1. “You are not lost, you are here”
  2. “After all, strangers again”

Special thanks

I have to thank the people who made me feel so good at the event. Thanks to Carlota Roca for the support and the tourist night where I learned the prisoner story. Thanks to Lucia; she is so smart and funny, and I really enjoyed her company: she is an authentic person. Thanks to Rene Pereyra for trying to help me figure out what to do in my professional path. Thanks to Camacho for his understanding; he reminded me of the people from Piura, as he speaks like the people in my family. Thanks to Ximena and Janeth, who are young and inspirational as well. I did not get any picture with Luis Tay, who helped as an organizer.

Generally, it was a good experience. I had not seen another Peruvian in person in more than a year, and living again for a week with Wilson Valerio, Martin, Raul, Alisa, and others made me remember my roots and why I am in Europe. I tried the best chocolate!

Thanks to the organization for the partial financial support received. Thanks SINAPSIS!

October 29, 2019

Five-or-More Modernisation: It's a Wrap

As probably most of you already know, or recently found out, at the beginning of this week the GSoC coding period officially ended, and it is time for us, GSoC students, to submit our final evaluations and the results we achieved thus far. This blog post, as you can probably tell from the title, will be a summary of all of the work I put into modernising Five or More throughout the summer months.

My main task was rewriting Five or More in Vala, since this simple and fun game did not find its way into the list of those included in the Games Modernisation Initiative. This fun strategy game consists of aligning, as often as possible, five or more objects of the same shape and color, to make them disappear and score points.

Besides the Vala rewrite, there were also some other tasks included, such as migrating to Meson and dropping autotools, keeping the view and logic separated, and updating the UI to make the game fresher-looking and more relatable for the public. However, after thoroughly discussing the details with my mentor, Robert Roth (IRC: evfool), more emphasis was placed upon rewriting the code in Vala, since the GSoC program is specifically designed for software development. Still, slight UI modifications were integrated to match the visual layout guidelines.

Some of the tasks, namely porting to gettext, porting to Meson and dropping autotools happened earlier on, during the pre-GSoC time frame, in the attempt to familiarise myself with the project and the tasks that would come with it.

Afterward, I started with the easier tasks and advanced towards the more complex ones. At first, I ported the application window and the preferences menu and added callbacks for each of the preferences buttons. I then continued with porting the preview widget displaying the next objects to be rendered on the game board.

Next, it was time to channel my attention towards porting the game board, handling board colour changes and resizing the board alongside the window resize, by using the grid frame functionality inside the libgnome-games-support library.

The following target was implementing the actual gameplay logic, which consisted of a pathfinding algorithm based on A*, erasing all objects of the same shape and colour aligned in a row, column or diagonal from the board, if there were either five or more than five, and adding to the score whenever that happened. I also made the object movement possible with both clicking and keyboard keys, for more ease of use.
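The pathfinding part can be illustrated with a small, self-contained sketch of A* on a grid. This is illustrative Python, not the actual Vala implementation; the grid representation, function names and 4-directional movement are my own assumptions:

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* search on a 2D grid; grid[r][c] truthy means the cell is blocked.
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: admissible for 4-directional movement.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    tie = count()  # tie-breaker so the heap never compares cells/parents
    open_set = [(h(start), next(tie), 0, start, None)]
    came_from = {}          # cell -> parent cell (start maps to None)
    best_g = {start: 0}

    while open_set:
        _, _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue        # already expanded via a cheaper path
        came_from[cur] = parent
        if cur == goal:     # reconstruct by walking parents back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(
                        open_set,
                        (ng + h((nr, nc)), next(tie), ng, (nr, nc), cur))
    return None

# A 3x3 board with the middle row mostly blocked: the path must detour right.
board = [[0, 0, 0],
         [1, 1, 0],
         [0, 0, 0]]
print(astar(board, (0, 0), (2, 0)))
```

In the real game the same search decides whether a piece can slide from its cell to the one the player clicked, with occupied cells acting as obstacles.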

Finishing touches included adding the high scores tables via the libgnome-games-support library, displaying an option to change the theme of the game, adding a grid to be able to make out easier the actual cells in which different shaped and coloured objects should reside, as well as updating some information contained by the help pages.

Some features, however, could not be completed during the GSoC period. These included handling raster game themes, making significant UI updates, and some extra features I wanted to add that were not part of the original project description, such as gamepad support or sound effects. The first feature on this list, handling raster themes, was skipped at my mentor's suggestion, as the existing raster themes were low-resolution and did not scale well to large sizes and HiDPI displays.

For easier reading, I decided to also include this list of the tasks successfully done during GSoC:

  • Ported from autotools to Meson
  • Ported application window and preferences menu
  • Added callbacks for all buttons
  • Ported widget of next objects to be rendered on the game board
  • Handled the resize and colour change of the game board
  • Implemented the gameplay logic (pathfinding, matching objects and calculating scores)
  • Made gameplay accessible for both mouse clicks and keyboard controls
  • Added high scores via the libgnome-games-support library
  • Handled vector image theme changing
  • Made slight modifications to the UI to serve as a template and changed the spacing as recommended in the visual layout guidelines.
  • Updated documentation.

All code is available by accessing this merge request link.
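The match-and-erase step mentioned above (clearing runs of five or more equal objects along rows, columns or diagonals) boils down to scanning for runs. A minimal illustrative sketch follows; this is Python with hypothetical names, not the project's Vala code:

```python
def find_runs(board, min_len=5):
    """Return the set of cells belonging to a run of at least min_len
    equal, non-empty values along a row, column or diagonal."""
    rows, cols = len(board), len(board[0])
    matched = set()
    directions = ((0, 1), (1, 0), (1, 1), (1, -1))  # right, down, two diagonals
    for r in range(rows):
        for c in range(cols):
            v = board[r][c]
            if v is None:          # empty cell, nothing to match
                continue
            for dr, dc in directions:
                run = []
                nr, nc = r, c
                # Walk in one direction while the value keeps matching.
                while 0 <= nr < rows and 0 <= nc < cols and board[nr][nc] == v:
                    run.append((nr, nc))
                    nr, nc = nr + dr, nc + dc
                if len(run) >= min_len:
                    matched.update(run)
    return matched
```

The game would erase every cell in the returned set and add to the score; shorter runs (four or fewer) are simply left on the board.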

Five-or-More Modernisation: Now You Can Properly Play It

As Google Summer of Code is officially drawing to an end, all of my attention was focused towards making the Five or More Vala version feature-complete. As you probably already know from my previous blog post, the game was somewhat playable at that time, but it was missing some of the key features included in the old version.

So what’s new this time? First and foremost, you can surely notice the game board now sports a grid, which wasn’t there until now. On the same note, there are also animations used for clicking a piece on the board, for an improved gaming experience. For further accessibility, some header bar hints are available at different stages in the game: at the start of any new game, at the end of each game, as well as whenever there is no clear path between the initial position and the cell indicated by the user for the current move.

Overall final game look

By using libgnome-games-support, I was able to implement a high scores table, which gets updated each time the player achieves a score ranking in the top 10 for the chosen board size. The high scores for each of the three categories can also be viewed, as shown in the screencast below. Also, you can see I have done quite my fair share of testing the functionality and making sure everything worked as planned 😄.

High scores

Further changes include theme changing, although for the moment this only works for the vector themes available in Five or More, namely the ball and shape themes, as the implementation for raster images is not fully functional yet.

Changing theme

Also, the window size is now saved between runs, so each time the game starts it takes into consideration the last window size settings, including whether the window was fullscreened or not.

As for the most exciting change revealed in this blog post, it concerns playing Five or More using keyboard controls. Basically, the user can play the game by navigating with the keyboard arrows, the home, end, page up, page down and space, enter or return keys, as described in the wikipage section for using the keyboard.
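The keyboard navigation described above can be pictured as a mapping from key presses to cursor moves on the board. This is a hedged sketch only; the key names, board size and function names are illustrative assumptions, not the actual Vala code:

```python
# Hypothetical sketch: map key names to (row, col) deltas on the board.
BOARD_SIZE = 7

MOVES = {
    "Up":        (-1, 0),
    "Down":      (1, 0),
    "Left":      (0, -1),
    "Right":     (0, 1),
    "Home":      (0, -BOARD_SIZE),  # jump to the start of the row
    "End":       (0, BOARD_SIZE),   # jump to the end of the row
    "Page_Up":   (-BOARD_SIZE, 0),  # jump to the top of the column
    "Page_Down": (BOARD_SIZE, 0),   # jump to the bottom of the column
}

def move_cursor(pos, key):
    """Return the new (row, col) cursor position, clamped to the board.
    Space/Enter/Return would select the current cell rather than move."""
    if key not in MOVES:
        return pos
    dr, dc = MOVES[key]
    r = min(max(pos[0] + dr, 0), BOARD_SIZE - 1)
    c = min(max(pos[1] + dc, 0), BOARD_SIZE - 1)
    return (r, c)

print(move_cursor((3, 3), "Up"))    # (2, 3)
print(move_cursor((3, 3), "Home"))  # (3, 0)
```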

Playing Five-or-More using keyboard controls

If you have been following my GSoC journey closely, you probably remember me mentioning some extra features in my first blog post, such as adding gamepad support, sounds, or making some design changes. I feel optimistic about getting some of these features done in the weeks to come, post-GSoC. The groundwork for gamepad support has, I feel, already been laid by adding keyboard controls. Also, Five or More has already undergone some slight design changes, such as widget spacing and alignment.

My name is Jo and this is home now

After just over three years, my family and I are now Lawful Permanent Residents (Green Card holders) of the United States of America. It’s been a long journey.


Before anything else, I want to credit those who made it possible to reach this point. My then-manager Duncan Mak, his manager Miguel de Icaza. Amy and Alex from work for bailing us out of a pickle. Microsoft’s immigration office/HR. Gigi, the “destination services consultant” from DwellWorks. The immigration attorneys at Fragomen LLP. Lynn Community Health Center. And my family, for their unwavering support.

The kick-off

It all began in July 2016. With support from my management chain, I went through the process of applying for an L-1 intracompany transferee visa – a 3-5 year dual-intent visa, essentially a time-limited secondment from Microsoft UK to Microsoft Corp. After a lot of paperwork and an in-person interview at the US consulate in London, we were finally granted the visa (and L-2 dependent visas for the family) in April 2017. We arranged the actual move in July 2017, giving us a short window to wind up our affairs in the UK as much as possible, and run out most of my eldest child’s school year.

We sold the house, sold the car, gave to family all the electronics which wouldn’t work in the US (even with a transformer), and stashed a few more goodies in my parents’ attic. Microsoft arranged for movers to come and pack up our lives; they arranged a car for us for the final week; and a hotel for the final week too (we rejected the initial golf-spa-resort they offered and opted for a budget hotel chain in our home town, to keep sending our eldest to school with minimal disruption). And on the final day we set off at the crack of dawn to Heathrow Airport, to fly to Boston, Massachusetts, and try for a new life in the USA.

Finding our footing

I cannot complain about the provisions made by Microsoft – although they were not without snags. The 3.5 hours we spent waiting at immigration in Logan airport on the day, due to some computer problem, did not help us relax. Neither did the cat arriving at our company-arranged temporary condo before we did (with no food, or litter, or anything). Nor did the fact that the satnav provided with the company-arranged hire car didn’t work – and that when I tried using my phone to navigate, it shot under the passenger seat the first time I had to brake, leading to a fraught commute from Logan to Third St, Cambridge.

Nevertheless, the liquor store under our condo building, and my co-workers Amy and Alex dropping off an emergency run of cat essentials, helped calm things down. We managed a good first night’s exhausted sleep, and started the following day with pancakes and syrup at a place called The Friendly Toast.

With the support of Gigi, a consultant hired to help us with early-relocation basics like social security and bank accounts, we eventually made our way to our own rental in Melrose (a small suburb north of Boston, a shortish walk from the MBTA Orange Line); with our own car (once the money from selling our house in the UK finally arrived); with my eldest enrolled in a local school. Aiming for normality.

The process

Fairly soon after settling in to office life, the emails from Microsoft Immigration started, for the process to apply for permanent residency. We were acutely aware of the time ticking on the three-year visas – and we had already burned 3 months prior to the move. Work permits; permission to leave and re-enter; Department of Labor certification. Papers, forms, papers, forms. Swearing that none of us have ever recruited child soldiers, or engaged in sex trafficking.

Tick tock.

Months at a time without hearing anything from USCIS.

Tick tock.

Work permits for all, but big delays listed on the monthly USCIS visa bulletin.

Tick tock.

We got to August 2019, and I started to really worry about the next deadline – our eldest’s passport expiring, along with the initial visas a couple of weeks later.

Tick tock.

Then my wife had a smart idea for plan B, something better than the burned out Mad Max dystopia waiting for us back in the UK: Microsoft just opened a big .NET development office in Prague, so maybe I could make a business justification for relocation to the Czech Republic?

I start teaching myself Czech.

Duolingo screenshot, Czech language, “can you see my goose”

Tick tock.

Then, a month later, out of the blue, a notice from USCIS: our Adjustment of Status interviews (in theory the final piece before being granted green cards) were scheduled, for less than a month later. Suddenly we went from too much time, to too little.



The problem with the one month of notice is that we had one crucial missing piece of paperwork – for each of us, an I-693 medical exam issued by a USCIS-certified civil surgeon. I started calling around, and got a response from an immigration clinic in Lynn, with a date in mid October. They also gave us a rough indication of the medical exams and extra vaccinations required for the I-693, which we were told to source via our normal doctors (where they would be billable to insurance, if not free entirely). Any costs at the immigration clinic can’t go via insurance or an HSA, because they’re officially immigration paperwork, not medical paperwork. The total cost ends up being over a grand.

More calling around. We got scheduled for various shots and tests, and went to our medical appointment with everything sorted.


Turns out the TB tests the kids had were no longer recognised by USCIS. And all four of us had vaccination record gaps. So not only unexpected jabs after we promised them it was all over – unexpected bloodletting too. And a follow-up appointment for results and final paperwork, only 2 days prior to the AOS interview.

By this point, I’m something of a wreck. The whole middle of October has been a barrage of non-stop, short-term, absolutely critical appointments.

Any missing paperwork, any errors, and we can kiss our new lives in the US goodbye.

Wednesday, I can’t eat, I can’t sleep, and various other physiological issues. The AOS interview is the next day. I’m as prepared as I can be, but still more terrified than I ever have been.

Any missing paperwork, any errors, and we can kiss our new lives in the US goodbye.

I was never this worried about going through a comparable process when applying for the visa, because the worst case there was the status quo. Here the worst case is having to restart our green card process, with too little time to reapply before the visas expire. Having wasted two years of my family’s comfort with nothing to show for it. The year it took my son to settle again at school. All of it riding on one meeting.


Our AOS interviews are perfectly timed to coincide with lunch, so we load the kids up on snacks, and head to the USCIS office in Lawrence.

After parking up, we head inside, and wait. We have all the paperwork we could reasonably be expected to have – birth certificates, passports, even wedding photos to prove that our marriage is legit.

To keep the kids entertained in the absence of electronics (due to a no camera rule which bars tablets and phones) we have paper and crayons. I suggest “America” as a drawing prompt for my eldest, and he produces a statue of liberty and some flags, which I guess covers the topic for a 7 year old.

Finally we’re called in to see an Immigration Support Officer, the end-boss of American bureaucracy and… It’s fine. It’s fine! She just goes through our green card forms and confirms every answer; takes our medical forms and photos; checks the passports; asks us about our (Caribbean) wedding and takes a look at our photos; and gracefully accepts the eldest’s drawing for her wall.

We’re in and out of her office in under an hour. She tells us that unless she finds an issue in our background checks, we should be fine – expect an approval notice within 3 weeks, or call up if there’s still nothing in 6. Her tone is congratulatory, but with nothing tangible, and still the “unless” lingering, it’s hard to feel much of anything. We head home, numb more than anything.


After two fraught weeks, we’re both not entirely sure how to process things. I had expected a stress headache then normality, but instead it was more… Gradual.

During the following days, little things like the colours of the leaves leave me tearing up – and as my wife and I talk, we realise the extent to which the stress has been getting to us. And, more to the point, the extent to which being adrift without having somewhere we can confidently call home has caused us to close ourselves off.

The first day back in the office after the interview, a co-worker pulls me aside and asks if I’m okay – and I realise how much the answer has been “no”. Friday is the first day where I can even begin to figure that out.

The weekend continues with emotions all over the place, but a feeling of cautious optimism alongside.

I-485 Adjustment of Status approval notifications

On Monday, 4 calendar days after the AOS interview, we receive our notifications, confirming that we can stay. I’m still not sure I’m processing it right. We can start making real, long term plans now. Buying a house, the works.

I had it easy, and don’t deserve any sympathy

I’m a white guy, who had thousands of dollars’ worth of support from a global megacorp and their army of lawyers. The immigration process was fraught enough for me that I couldn’t sleep or eat – and I went through the process in one of the easiest routes available.

Youtube video from HBO show Last Week Tonight, covering legal migration into the USA

I am acutely aware of how much more terrifying and exhausting the process might be, for anyone without my resources and support.

Never, for a second, think that migration to the US – legal or otherwise – is easy.

The subheading where I answer the inevitable question from the peanut gallery

My eldest started school in the UK in September 2015. Previously he’d been at nursery, and we’d picked him up around 6-6:30pm every work day. Once he started at school, instead he needed picking up before 3pm. But my entire team at Xamarin was on Boston time, and did not have the world’s earliest risers – meaning I couldn’t have any meaningful conversations with co-workers until I had a child underfoot and the TV blaring. It made remote working suck, when it had been fine just a few months earlier. Don’t underestimate the impact of time zones on remote workers with families. I had begun to consider, at this point, my future at Microsoft, purely for logistical reasons.

And then, in June 2016, the UK suffered from a collective case of brain worms, and voted for self immolation.

I relocated my family to the US, because I could make a business case for my employer to fund it. It was the fastest, cheapest way to move my family away from the uncertainty of life in the UK after the brain-worm-addled plan to deport 13% of NHS workers. To cut off 40% of the national food supply. To make basic medications like Metformin and Estradiol rarities, rationed by pharmacists.

I relocated my family to the US, because despite all the country’s problems, despite the last three years of bureaucracy, it still gave them a better chance at a safe, stable life than staying in the UK.

And even if time proves me wrong about Brexit, at least now we can make our new lives, permanently, regardless.

October 28, 2019

Fantasy Premier League with AI - First 10 Gameweeks Review

What happens when you combine your love for football and programming? I am a huge fan of the English Premier League and its fantasy league game, which allows players to act as managers, creating their team and earning points based on the performance of their selected players on the real field.

Fantasy Premier League is the most famous fantasy game in the world, with over 7 million players (that’s more than New Zealand’s population), known as managers, trying out their managing skills and getting as many points as they can over 38 gameweeks spanning 10 months of football.

Now you might be asking what’s difficult about it. Just select the best players in the league and let ‘em do the rest. Things aren’t as easy as they seem. It’s a complex game with a set of rules to follow when creating a team, and then there are the Champions League, the Europa League, international breaks and, the most dreaded one, Pep’s Rotation. So to be a successful manager you have to make good decisions considering all the above factors, and also be able to foresee the future and create something that will give you a chance to leap in front of others.

The big question now is whether we can manage a team using Data Science and Artificial Intelligence. My previous managing experience says yes. The problem with us managers is that, while creating a team, we indulge in favoritism and sheep herding. What this means is we tend to choose players from our favorite teams, or players who only have hype. Such players usually do not add any value to the squad.

But in data we can trust. So what we do is get the data, process it, extract meaningful information from it, and see whether it works or not. After all, it is a game of chance, and since a machine doesn’t show patience, this AI works in collaboration with the manager to get the best results.
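To make the idea concrete, the selection step can be sketched roughly like this. This is a minimal illustration only: the player numbers, the weighting formula, and the helper names (`Player`, `score`, `suggest`) are all made up for the example, and are not the actual model or data behind the team in this post.

```python
# Hypothetical sketch of data-driven player ranking, NOT the real FPL model.
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    cost: float          # price in millions
    form: float          # average points over recent gameweeks
    fixture_ease: float  # 0 (hard) .. 1 (easy) upcoming fixtures

def score(p: Player) -> float:
    # Weight recent form by fixture difficulty, then normalise by cost,
    # so cheap in-form players can outrank hyped expensive ones.
    return (p.form * (0.5 + 0.5 * p.fixture_ease)) / p.cost

def suggest(players: list[Player], n: int) -> list[str]:
    # Return the names of the n best-scoring players.
    return [p.name for p in sorted(players, key=score, reverse=True)[:n]]

players = [
    Player("Pukki", 6.9, 7.0, 0.8),
    Player("Sterling", 12.2, 6.5, 0.4),
    Player("Abraham", 7.6, 6.0, 0.7),
]
print(suggest(players, 2))  # → ['Pukki', 'Abraham']
```

The real version would pull live stats from the FPL data feed and respect the game’s squad rules (budget, positions, max three players per club), but the core idea is the same: rank on data, not on hype.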

Week by Week Report

Gameweek 1

Ravgeet Dhillon's Fantasy AI Gameweek 1 Report

At the very start of the season, I had to get a team that could do well until the first international break. So I had to use the previous season’s data for the AI, and it gave the following team. I suffered a huge blow as Alisson was injured in the very first half, and the same happened with Holebas as well. This cost me at least 6 points. Other than that, I guess it did pretty well, in fact, better than my expectations.

Gameweek 2

Ravgeet Dhillon's Fantasy AI Gameweek 2 Report

I didn’t do any transfers here as I didn’t have any internet on the final day of the transfer deadline. And just look at the results. Disappointing! But there was a bright side as well: I got two free transfers for the next gameweek, so the next gameweek was going to be a good first test for my AI.

Gameweek 3

Ravgeet Dhillon's Fantasy AI Gameweek 3 Report

Look at that beauty. I had 2 free transfers, so my AI suggested two replacements: Rui Patricio in for Alisson as goalkeeper, and Teemu Pukki in for Raul Jimenez as forward. It was just a bad captaincy choice that cost me some points; otherwise, it was a good week for the business.

Gameweek 4

Ravgeet Dhillon's Fantasy AI Gameweek 4 Report

At this point, just one week before the international break, I thought of using a Wildcard and assembled a whole new squad. My AI did a great job, but it was again my fault for handing Teemu Pukki the armband, and I dropped points for a bad captaincy pick.

Gameweek 5

Ravgeet Dhillon's Fantasy AI Gameweek 5 Report

During the international break, I made some changes to my AI and gave it a pinch more thinking prowess. If you are interested in the technicalities of my AI, you can find a detailed blog post here. I took 2 hits because I knew the decisions made by the AI were reasonable and I could easily recover my lost points. Just another average gameweek.

Gameweek 6

Ravgeet Dhillon's Fantasy AI Gameweek 6 Report

This was the gameweek I had been waiting for. I was among the top 10% of managers. Just look at the bench. 19 points! Sterling didn’t play, so he was automatically substituted by Mark Noble, who gave me 5 crucial points. I think I could have made 100 points this gameweek. My AI gave me the best set of players, but I let it down by choosing the wrong captain and leaving points-worthy players on the bench.

Gameweek 7

Ravgeet Dhillon's Fantasy AI Gameweek 7 Report

My AI was asking me to remove Ashley Barnes, as he was only earning me appearance points. But I decided to show patience for a week, as he had easy fixtures ahead. This gave me two free transfers for the next week. Otherwise, it was a good week.

Gameweek 8

Ravgeet Dhillon's Fantasy AI Gameweek 8 Report

Finally, my patience ran out with Ashley Barnes. I decided to use my two free transfers, and my AI suggested bringing in two Chelsea players, Tammy Abraham and Fikayo Tomori. And look at that: 8 points from Abraham. That’s my AI. Intelligent!

Gameweek 9

Ravgeet Dhillon's Fantasy AI Gameweek 9 Report

So this was the gameweek after the international break. I had one free transfer and asked my AI whether I should go for it. My AI advised me to sell Raheem Sterling and bring in Sadio Mane. Being a Liverpool fan, I was really disappointed that we couldn’t beat Manchester United and that Mane was lackluster. But what about the other players? Ask any FPL manager in the world: it was the poorest gameweek in recent memory. Only 18 goals scored across 10 matches. The average points reflect the pain.

Gameweek 10

Ravgeet Dhillon's Fantasy AI Gameweek 10 Report

For this one, my AI suggested bringing in Rui Patricio in place of Tom Heaton, as Aston Villa had difficult fixtures ahead. But I wish I had had some Leicester City players in my squad, as they ran riot against Southampton, scoring 9 goals and keeping a clean sheet as well. It was a bad week at the back. The Pukki party is over now, and keeping Sergio Aguero is too risky due to Pep’s Rotation.


Ravgeet Dhillon's Fantasy AI Progress for first 10 game weeks

The graph above again shows that humans are inferior to the machine. My AI gave me the right choices, but I couldn’t decide on captains and bench players. The BEST line shows what my progress would have been if I had made the right decisions. Well, on one side I am happy as well, because my AI is something that I wrote and it’s doing well. The thing is, the big-buck players haven’t really fired in the last few gameweeks. So should the formation go from 3-4-3 to 3-5-2 or 4-4-2? Well, that’s a big question.

Lessons Learnt

Patience is an important thing in this game. Trust the AI, it’s guiding you towards the right path. Data is the secret. During the first phase, we had less data, so the suggestions were approximate. But the graph is an encouragement, as we are nearing the top-ranked manager.

What’s next

I will be doing another blog post about my progress around Christmas, but I would still really love to have suggestions from you guys. Should I do blog posts more often, or should I write one for each gameweek? Also, if you are interested in knowing the technicalities of the AI, just mention me on Twitter.

Lemme know if you have any doubt, appreciation or anything else that you would like to communicate to me. You can tweet me @ravgeetdhillon. I reply to all the questions as quickly as possible. 😄 And if you liked this post, please share it with your twitter community as well.

Friends of GNOME Update – October 2019

Welcome to the October 2019 Friends of GNOME Update!

A jack-o-lantern: an orange pumpkin with a GNOME foot carved into it and candle light coming through the foot.

GNOME on the Road

  • Molly de Blanc and Sri Ramkrishna were at All Things Open this past month. They both gave talks, ran a booth, and met lots of great people who were excited to learn about GNOME. They ran out of stickers.
  • Neil McGovern and Rosanna Yuen attended GNOME.Asia Summit, both delivering keynotes! While he was in Indonesia, Neil also delivered a keynote at openSUSE.Asia Summit.
  • Board member Carlos Soriano spoke at GitLab Commit about how GNOME uses GitLab.
  • Not quite on the road, but Neil was on FLOSS Weekly. You can watch the episode on their web site.

If you have a GNOME-related speaking engagement coming up, feel free to drop us a line!


Technical developments

  • There have been changes to buildbot in order to accommodate the latest release of the FreeDesktop SDK.
  • You can now find org.freedesktop.Sdk.Extension.golang for FreeDesktop SDK 19.08 and io.qt.qtwebkit.BaseApp for KDE runtime 5.13.

Read all about it!


  • Along with our friends at KDE, we’re organizing the Linux App Summit (LAS). LAS is taking place this year in Barcelona, Spain, from November 12 – 15th. Registration is open so sign up today.

Thank you!

As always, thank you for being a Friend of GNOME!

Photo courtesy of Britt Yazel under a CC-BY license.

October 24, 2019

GNOME Asia 2019

I feel very emotional and excited sharing my first ever GNOME + FOSS + international conference experience with you all.

So let’s get started.

Before starting my journey I was really nervous, as I had never traveled outside my homeland, I was travelling alone, I am a pure vegetarian, and I am someone who fears air travel 😥  (but not anymore :p)

Speaker 😎 , Loved the Design Team’s Work !

So, first and foremost, I would like to say a huge thanks to the local committee for making everything smooth and well planned for the guests and speakers. You folks did a lot of hard work, stayed up late into the night to make sure the next day went alright, and what not. The local committee was so helpful and gracious: they arranged an airport pickup from more than 45 km away, an awesome venue, lunch (taking care of my being vegetarian too), what seemed like a dinner party almost every day of the conference, the city tour, great swag, and lots of love and support.

It’s the people of Indonesia and the local team that made all my fear go away as soon as I entered Indonesia.

From the local team, I can’t remember all the names, but those I remember I would love to share: Mr. Andik, Mr. Ahmad Haris, Mr. Kukuh, Mr. Jordan, Mr. Mohammad Fadhil, Mr. Firdaus, and more. Personally, it now feels like I left a family back there, and I miss you all! Every one of you was really friendly and made us feel at home 😭 

Day 0,

There was the inauguration ceremony, and later on the Newcomers’ workshop 🙂 . I took some time to figure out the huge campus and missed a bit of the first half of the day 😦 . When I arrived, the next item on the schedule was the Newcomers’ workshop, organized by Felipe, me, and Sajeer Ahmad. Sadly Felipe wasn’t able to attend the conference, but he kindly took time out and was available online, helping us out the whole time.

Our workshop ran for 3 hours and was pretty much the only thing happening that day, apart from the other workshop running in parallel to ours.

The workshop had quite a good number of attendees, some of whom came from far away – 200+ km – which made me feel even more excited and honored about organizing it.

I talked about the GNOME application projects that are a good way to start, the newcomers wiki, an introduction to Git concepts, the role of IRC for a newcomer, our GitLab infrastructure, GNOME Builder 🥰 , code reviews, etc.

The audience was fairly new to GNOME technologies and FOSS development, and the internet was not so fast at that location, so it became a bit hard for us to solve issues then and there. So I took another approach and demonstrated how to solve bugs using my development setup, walking through some bugs that were already solved and easily visible in our GitLab. This also gave me a chance to explain how to submit patches on GitLab and what discussions look like there.

I also got a chance to explain Git to them, as my friend Exalm once pointed out during newcomers IRC discussions a long time ago that ‘it’s a common problem for newcomers to forget to fork projects’. So I took this chance and explained the concepts of forking, branching, merge requests, cloning, etc., which can give them a boost towards submitting their first patch 🙂

In the workshop I also made the students aware of programs like GSoC, the GNOME coding education challenge, Outreachy, etc. In the end I told them that for a student, FOSS not only offers rewarding opportunities like internships and jobs but, most importantly, a lot of learning. When you as a student get to work with highly skilled people who are experts in their fields, the learning you get is enormous. And for me personally, the most important thing is that I get to hang out with amazing folks 🙂

Sajeer has contributed on the GStreamer side, so he took the chance to explain to newcomers how they can contribute there. (More about that in his post :p )

So, in short, a lot happened in the Newcomers’ workshop 😉

Thanks a lot to Felipe, who kept mentoring us online and helped us from the very start.

I was really nervous without having him around, as this was my first ever workshop :/ , but in the end I feel good that it went better than we all imagined.

I met Andre Klapper, Kiki from the Mozilla foundation, Mr. Khairul Aizat and his lovely little son, Bin Li, Haris, Franklin, Mr. Enoki, and lots of amazing people, which was a really amazing and lovely experience for me. 🙂

Day 1,

On this day, I attended talks but mostly talked a lot with folks in the speakers’ room 😉

The talk I loved the most was “Running Linux on the PS4” by Mr. Iwan S.

In the speakers’ room, I looted Kiki’s Mozilla swag 😂 . She is really sweet and was kind enough to share her swag with everyone 🙂 . I plan to spread the swag among my local community.

The most inspiring thing I learnt from the whole trip was Mr. Iwan’s story. My eyes still shine like a baby’s listening to what he has achieved and how ready he is to give back to the community.

Mr. Iwan is a sponsor from a local Indonesian brand with products in footwear. But to my surprise, he was not just a sponsor; he was a speaker too. And he had a really amazing story.

He told me that he saves 30K USD per year just by adopting FOSS in his company, in departments like design, office suites, and operating systems. He also helped publish a design book for Inkscape in the local language, which was phenomenal.

Hearing in person that someone has saved thousands of dollars in their business is amazing 🤯 , but what’s super amazing is that the guy is right in front of you and is giving a lot back to the community, not just through sponsorship but by actively participating in the entire conference. Add his humility and kindness, and I am simply a fan now 😌

Now I feel even more proud working for FOSS 🥳 

Mr Iwan and Me :p

During the conference I certainly made lots of friends. I mostly enjoyed hanging out with Andre and Haris. Andre also shared some of his own farm-grown apples, which I will never forget – I have honestly never had such amazing apples. Thanks a lot for that, Andre!

Haris is an absolutely amazing guy. It never felt like he was someone I had met just 2 days before; instead it felt like he had been a friend for years. We talked about lots of things, from the community in Indonesia, his love for guitars, and him being a supreme leader :p , to his family, his company, open source… and him helping me figure out vegetarian options there.

Haris with his lovely daughter 😋

Day 3,

There were amazing keynotes and talks. I loved the talk by Haris and Mr. Khairul Aizat, where they explained the plan to have the next GNOME.Asia in Malaysia 😉

For me my biggest takeaway from the conference is,

FOSS is about people, community, freedom, and code. I personally really loved the people part of it. People like Haris, Jordan, Andik, Kukuh, and so many others from Indonesia are simply amazing; in fact, everyone there is so respectful, friendly, helpful, cheerful… I just don’t have words to describe it.

The hospitality of Indonesia is simply great! The FOSS community is great, and you will enjoy it a lot more when you hang around with people in person! 😇

(There is really a lot left for me to say, I just can’t cover everything up in words it was truly amazing experience for me and I see a lot of great potential and strength in FOSS community back in Indonesia! )

I would like to thank GNOME Foundation for sponsoring my travel. It was such a great pleasure to see everyone in person and I hope to meet everyone more and more!

VCR to WebM with GStreamer and hardware encoding

My family bought a Panasonic VHS video camera many years ago and we recorded quite a lot of things: holidays, some local shows, etc. I even got paid 5000 pesetas (30€, more than 20 years ago) a couple of times to record weddings in an amateur way. Since my father passed away less than a year ago, I have wanted to convert those VHS tapes into something that can survive better, technologically speaking.

For the job I bought a USB 2.0 dongle and connected it to a VHS VCR through a SCART to RCA cable.

The dongle creates a V4L2 device for video and is detected by PulseAudio for audio. As I want to see what I am converting live, I need to tee both audio and video to the corresponding sinks, while the other branch goes to the encoders, muxer and filesink. The command line for that is:

gst-launch-1.0 matroskamux name=mux ! filesink location=/tmp/test.webm \
v4l2src device=/dev/video2 norm=255 io-mode=mmap ! queue ! vaapipostproc ! tee name=video_t ! \
queue ! vaapivp9enc rate-control=4 bitrate=1536 ! mux.video_0 \
video_t. ! queue ! xvimagesink \
pulsesrc device=alsa_input.usb-MACROSIL_AV_TO_USB2.0-02.analog-stereo ! 'audio/x-raw,rate=48000,channels=2' ! tee name=audio_t ! \
queue ! pulsesink \
audio_t. ! queue ! vorbisenc ! mux.audio_0

As you can see, I convert to WebM with VP9 and Vorbis. Two interesting details: passing norm=255 to the v4l2src element so it captures PAL, and rate-control=4 (VBR) for the vaapivp9enc element; otherwise it uses CQP by default and the file size ends up being huge.

You can see the pipeline, which is beautiful, here:

As you can see, we’re using vaapivp9enc here, which is hardware accelerated. Having this pipeline running on my computer consumed more or less 20% of the CPU, with the machine absolutely relaxed, leaving me the necessary computing power for my daily work. This would not be possible without GStreamer and the GStreamer VAAPI plugins, which is what happens with other solutions whose instructions you can find online.

If for some reason you can’t find vaapivp9enc in Debian, you should know there are a couple of packages for the Intel drivers, and the one you should install is intel-media-va-driver. Thanks go to my colleague at Igalia Víctor Jáquez, who maintains gstreamer-vaapi and helped me solve this problem.

My workflow for this was converting all the tapes into WebM and then cutting them into the relevant pieces with PiTiVi, which runs on GStreamer Editing Services, both co-maintained by my colleague at Igalia, Thibault Saunier.

GUADEC 2019 - A brief update

I attended GUADEC 2019 in Thessaloniki, Greece held in August. It was a great experience this time as this was my second GUADEC (after missing two in a row).

Following were the talks that interested me the most:

  • Modernising Linux desktop app development
  • Environment friendly GNOME
  • Desktop secret management for future
  • Introducing user research to non-UXers
  • Is Linux desktop really dead?
  • About Maintainers and contributors (watched video post-GUADEC)
  • Product Metrics and privacy
  • Maintaining a Flatpak repository
  • Taking out the Garbage
  • Designing GNOME Mobile apps

Videos of GUADEC 2019 are available here.

There were many technical and non-technical hallway discussions with the devs who attended, addressing our day-to-day work and chasing reviews on outstanding merge requests :P During the one hackfest day I attended, the travel committee got together to resolve long-standing tickets and discussed future plans to make the travel-request pipeline a bit smoother and faster.

I would like to thank GNOME Foundation for sponsoring my travel. It’s always a pleasure to get together with the community in person.


October 23, 2019

GNOME Calculator and GTK’s Entry

Recently GNOME Calculator gained a library for math expression parsing and calculation called GCalc, developed as a parallel effort to the one used internally by the application.

While GCalc can take a string, create an object-oriented representation of it, and perform multi-precision calculations, this feature couldn’t be used directly by user-facing applications; so far only VDA is using it, for math expression parsing.

In order to expose GCalc’s features to user-oriented applications, a new library called GCi has now been added. GCi provides, for now, a controller for GTK entries.

The GTK Entry controller in GCi can add calculation features to your entry. Once you have created a GCi.EntryController and set its entry property to the entry in your UI, a secondary icon showing a calculator is added. This allows your users to write =8*2+1, hit Enter or click the secondary icon, and have the math expression replaced by its calculated result (and yes, you should use = in order to mark it as a math expression to replace).

Let’s fight back against patent trolls

The GNOME Foundation has taken the extraordinary step of not just defending itself against a patent troll but aggressively going after them. This is an important battle. Let me explain.

The initial reason for Rothschild to come after us is that they clearly believe the GNOME Foundation has money, and that they can shake us down and get some easy money with their portfolio of patents.

If we had lost, or given them the money, it would have made us a mark not just for Rothschild, but for every other patent troll who is probably watching this unfold. Worse, it would mean that all the other non-profits are fair game. We do not want to set that precedent. We need to send a strong message that if they attack one of us, they attack us all.

The GNOME Foundation manages infrastructure around the GNOME Project, which consists of an incredible amount of software built over a nearly 23-year period. This software is used in everything from medical devices, to consumer devices like the Amazon Kindle and Smart TVs, and of course the GNOME desktop.

The GNOME Project provides the tooling, software, and, more importantly, the maintenance and support for the community. Bankrupting the GNOME Foundation would deal these functions a terrible blow and cripple the important work we do. The companies that depend on these tools and software would be similarly hit. And that is just one non-profit foundation.

There are many others: Apache, the Software Freedom Conservancy, and the FSF, among others. They would be just as vulnerable as we are now.

What Rothschild has done is not just attack GNOME, but all of us in Free Software and Open Source: the toolchains we depend on, and the software we use. We can't let that happen. We need to strongly repudiate this patent troll, and not only defend ourselves but neuter them and make an example of them to warn off any other patent troll that thinks we are easy pickings.

Companies, individuals, and governments should give money so we can make a singular statement: not here, not now, not ever! Let's set that precedent. Donate to the cause. GNOME has a history of conquering its bullies, but we can't do that without your help.

An American President once said “They counted on us to be passive. They counted wrong.”

Donate now! 


October 22, 2019

Testing Indico opensource event management software

Indico event management tool

After organizing a bunch of conferences over the past years, I found that some communities had problems choosing conference management software. Every alternative had limitations in one way or another. Along the way I collected a list of open source alternatives, and recently I've become very interested in Indico. This project is created and maintained by CERN (yes, the people who invented the WWW too).

The most interesting reasons for me are:

Jornadas WMES 2019

With the help of Franc Rodríguez we set up an Indico testing instance. This system is ready to be broken, so feel free to experiment.

So this post is an invitation to any open source community wanting to test the feasibility of Indico for their future events. Please consider giving it a try.

Here are some items I consider relevant for you:

And some potential enhancements (I haven't fully checked whether these are currently available):

  • videoconf alternatives:
  • social networks integration
    • Twitter
    • Mastodon
    • Matrix
  • exports formats
    • pentabarf
    • xcal, etc
  • full GDPR compliance (it seems you just need to add the relevant information to your instance)
  • gravatar support
  • integration with the SSO used by the respective community (to be honest, I haven't checked the Flask-Multipass features)
  • maybe an easier invitation procedure: sending invitation links by email for full setup;
  • map integration (OSM and others).

For your tests you'll need to register at the site and contact me (see the bottom of this page) to be added as a manager for your community.

I think it would be awesome for many communities to share a common software product. Wouldn't it?

PS: Great news: next March CERN will host an Indico meeting!
PPS: Here you can check a fully configured event organized by the Libre Space Foundation people: Open Source CubeSat Workshop 2019.
PPPS: Now that I have your attention, check our Congress/Conference organization tasks list. It's free!

GNOME, and Free Software Is Under Attack

A month ago, GNOME was hit by a patent troll. We're fighting back, but we need money to fund the legal defense and counterclaim. I just donated, and if you use or develop free software, you should too.

October 18, 2019

GNOME Shell Hackfest 2019

This week, I attended the GNOME Shell Hackfest 2019 held in Leidschendam, The Netherlands. It was a fantastic event, in a fantastic city! The list of attendees was composed of key members of the community, so we managed to get a lot done: an impressive amount of achievements for only three days of hackfest, in fact.


My personal interests in this hackfest revolved more around Mutter than GNOME Shell. In fact, we managed to review, merge, or discuss a few interesting points:

  • Multiple frame clocks for multiple monitors; Jonas and Carlos talked about event delivery and painting routines, and my small contribution to the discussion was about the requirements for moving actors between monitors and their corresponding frame clocks. Looks like we managed to get a pretty solid picture of what should happen next!
  • Carlos opened merge requests for his ClutterSeat work, and it looks promising.
  • Landed a couple of improvements to geometric picking.
  • Carlos, Jonas and I are now officially maintainers of Mutter, in addition to Florian!

Some of my own work also had a share of love:

  • My graphene branch was merged! I’ll continue the CoglMatrix → graphene_matrix_t transition, but that’s a huge step already. Not only does graphene allow us to offload a lot from Cogl, but in the future it will probably play an important role in reducing CPU usage.
  • Another cool merge request making CoglJournal more useful was also merged. Though in the future, we might as well drop the journal entirely due to paint nodes.
  • During the flight back, I managed to plan out the full transition to paint nodes in Clutter, and actually write about a third of the code for it. It goes all the way from splitting the pick and paint code paths, to introducing new types of paint nodes, to post-transition plans such as paint node caching, diffing, etc. It will be a massive change, but the possibilities it will bring are well worth it. For GNOME 3.36, I expect to land at the very least half of what’s necessary for the paint node transition to happen.


During the past few months, I found myself involved in a couple of very big changes in GNOME Shell: the new lock screen, and a bigger design review of the overview.

What started as an attempt to upstream part of Endless OS’s features (drag-and-drop of icons into custom positions) triggered a bigger change in how other elements of the Shell are placed, and how they behave. More on the GNOME Shell front will be shared later; right now, it is too much of a moving target, and any exposition of what’s going on would only be misleading.

One small nicety that sparked during the hackfest was the wiggling effect for failed password attempts. What started as a 30-minute hack turned into a very nice effect; designers enjoyed it and polished a few parameters, and it was merged and will be part of 3.36. Nice!


This hackfest was very productive and I believe it had a big impact on both short and long term plans. I’d like to thank Carlos Garnacho for all his work organizing the event, and Hans de Goede for being our host. The hackfest ran absolutely smoothly, and we could focus on what matters because of your work!

Letting Birds scooters fly free

(Note: These issues were disclosed to Bird, and they tell me that fixes have rolled out. I haven't independently verified)

Bird produce a range of rental scooters that are available in multiple markets. With the exception of the Bird Zero[1], all their scooters share a common control board described in FCC filings. The board contains three primary components - a Nordic NRF52 Bluetooth controller, an STM32 SoC and a Quectel EC21-V modem. The Bluetooth and modem are both attached to the STM32 over serial and have no direct control over the rest of the scooter. The STM32 is tied to the scooter's engine control unit and lights, and also receives input from the throttle (and, on some scooters, the brakes).

The pads labeled TP7-TP11 near the underside of the STM32 and the pads labeled TP1-TP5 near the underside of the NRF52 provide Serial Wire Debug, although confusingly the data and clock pins are the opposite way around between the STM and the NRF. Hooking this up via an STLink and using OpenOCD allows dumping of the firmware from both chips, which is where the fun begins. Running strings over the firmware from the STM32 revealed "Set mode to Free Drive Mode". Challenge accepted.
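As a rough illustration, a firmware dump like this is usually driven by a short OpenOCD configuration. This is a sketch only: the interface/target script names, flash base address, and dump size below are assumptions that would need to match the actual hardware, not values from the scooter itself.

```tcl
# openocd.cfg -- sketch; adjust interface/target to the real setup
source [find interface/stlink.cfg]
transport select hla_swd
source [find target/stm32f1x.cfg]

init
halt
# Dump 256 KiB of flash from the usual STM32 flash base address
dump_image stm32-app.bin 0x08000000 0x40000
shutdown
```

The NRF52 dump would be the same idea with the Nordic target script instead, remembering that the SWD data and clock pads are swapped between the two chips.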

Working back from the code that printed that, it was clear that commands could be delivered to the STM from the Bluetooth controller. The Nordic NRF52 parts are an interesting design - like the STM, they have an ARM Cortex-M microcontroller core. Their firmware is split into two halves, one the low level Bluetooth code and the other application code. They provide an SDK for writing the application code, and working through Ghidra made it clear that the majority of the application firmware on this chip was just SDK code. That made it easier to find the actual functionality, which was just listening for writes to a specific BLE attribute and then hitting a switch statement depending on what was sent. Most of these commands just got passed over the wire to the STM, so it seemed simple enough to just send the "Free drive mode" command to the Bluetooth controller, have it pass that on to the STM and win. Obviously, though, things weren't so easy.

It turned out that passing most of the interesting commands on to the STM was conditional on a variable being set, and the code path that hit that variable had some impressively complicated looking code. Fortunately, I got lucky - the code referenced a bunch of data, and searching for some of the values in that data revealed that they were the AES S-box values. Enabling the full set of commands required you to send an encrypted command to the scooter, which would then decrypt it and verify that the cleartext contained a specific value. Implementing this would be straightforward as long as I knew the key.

Most AES keys are 128 bits, or 16 bytes. Digging through the code revealed 8 bytes worth of key fairly quickly, but the other 8 bytes were less obvious. I finally figured out that 4 more bytes were the value of another Bluetooth variable which could be simply read out by a client. The final 4 bytes were more confusing, because all the evidence made no sense. It looked like it came from passing the scooter serial number to atoi(), which converts an ASCII representation of a number to an integer. But this seemed wrong, because atoi() stops at the first non-numeric value and the scooter serial numbers all started with a letter[2]. It turned out that I was overthinking it and for the vast majority of scooters in the fleet, this section of the key was always "0".

At that point I had everything I needed to write a simple app to unlock the scooters, and it worked! For about 2 minutes, at which point the network would notice that the scooter was unlocked when it should be locked and send a lock command to force disable the scooter again. Ah well.

So, what else could I do? The next thing I tried was just modifying some STM firmware and flashing it onto a board. It still booted, indicating that there was no sort of verified boot process. Remember what I mentioned about the throttle being hooked through the STM32's analogue to digital converters[3]? A bit of hacking later and I had a board that would appear to work normally, but about a minute after starting the ride would cut the throttle. Alternative options are left as an exercise for the reader.

Finally, there was the component I hadn't really looked at yet. The Quectel modem actually contains its own application processor that runs Linux, making it significantly more powerful than any of the chips actually running the scooter application[4]. The STM communicates with the modem over serial, sending it an AT command asking it to make an SSL connection to a remote endpoint. It then uses further AT commands to send data over this SSL connection, allowing it to talk to the internet without having any sort of IP stack. Figuring out just what was going over this connection was made slightly difficult by virtue of all the debug functionality having been ripped out of the STM's firmware, so in the end I took a more brute force approach - I identified the address of the function that sends data to the modem, hooked up OpenOCD to the SWD pins on the STM, ran OpenOCD's gdb stub, attached gdb, set a breakpoint for that function and then dumped the arguments being passed to that function. A couple of minutes later and I had a full transaction between the scooter and the remote.
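A gdb script for that kind of breakpoint-and-dump loop might look roughly like this. The breakpoint address and argument register are placeholders: the real address came from reversing the firmware, and which register holds the buffer depends on the function's signature.

```
# sketch: attach to OpenOCD's gdb stub and log a function's argument
target extended-remote :3333

# hypothetical address of the "send to modem" function
break *0x08004a2c
commands
  silent
  # AAPCS passes the first arguments in r0-r3; assume r0 is the buffer
  x/64xb $r0
  continue
end
continue
```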

The scooter authenticates against the remote endpoint by sending its serial number and IMEI. You need to send both, but the IMEI didn't seem to need to be associated with the serial number at all. New connections seemed to take precedence over existing connections, so it would be simple to just pretend to be every scooter and hijack all the connections, resulting in scooter unlock commands being sent to you rather than to the scooter or allowing someone to send fake GPS data and make it impossible for users to find scooters.

In summary: Secrets that are stored on hardware that attackers can run arbitrary code on probably aren't secret, not having verified boot on safety critical components isn't ideal, devices should have meaningful cryptographic identity when authenticating against a remote endpoint.

Bird responded quickly to my reports, accepted my 90 day disclosure period and didn't threaten to sue me at any point in the process, so good work Bird.

(Hey scooter companies I will absolutely accept gifts of interesting hardware in return for a cursory security audit)

[1] And some very early M365 scooters
[2] The M365 scooters that Bird originally deployed did have numeric serial numbers, but they were 6 characters of type code followed by a / followed by the actual serial number - the number of type codes was very constrained and atoi() would terminate at the / so this was still not a large keyspace
[3] Interestingly, Lime made a different design choice here and plumb the controls directly through to the engine control unit without the application processor having any involvement
[4] Lime run their entire software stack on the modem's application processor, but because of [3] they don't have any realtime requirements so this is more straightforward


g_array_steal() and g_ptr_array_steal() in GLib 2.63.1

Another set of new APIs in the upcoming GLib 2.63.1 release allow you to steal all the contents of a GArray, GPtrArray or GByteArray, and continue using the array container to add more contents in future.

This is work by Paolo Bonzini and Emmanuel Fleury, and will be available in the soon-to-be-released 2.63.1 release.

Here’s a quick example using GPtrArray — usage is similar in GArray and GByteArray:

g_autoptr(GPtrArray) chunk_buffer = g_ptr_array_new_with_free_func ((GDestroyNotify) g_bytes_unref);

/* Some part of your application appends a number of chunks to the pointer array. */
g_ptr_array_add (chunk_buffer, g_bytes_new_static ("hello", 5));
g_ptr_array_add (chunk_buffer, g_bytes_new_static ("world", 5));


/* Periodically, the chunks need to be sent as an array-and-length to some other part of the program. */
GBytes **chunks;
gsize n_chunks;

chunks = g_ptr_array_steal (chunk_buffer, &n_chunks);
for (gsize i = 0; i < n_chunks; i++)
  {
    /* Do something with each chunk here, and then free it, since
     * g_ptr_array_steal() transfers ownership of all the elements
     * and the array to the caller. */
    g_bytes_unref (chunks[i]);
  }

g_free (chunks);

/* After calling g_ptr_array_steal(), the pointer array can be reused for the next set of chunks. */
g_assert (chunk_buffer->len == 0);