24 hours a day, 7 days a week, 365 days per year...

March 21, 2018

GTask and Threaded Workers

GTask is super handy, but it’s important you’re very careful with it when threading is involved. For example, the normal threaded use case might be something like this:

state = g_slice_new0 (State);
state->frob = get_frob_state (self);
state->baz = get_baz_state (self);

task = g_task_new (self, cancellable, callback, user_data);
g_task_set_task_data (task, state, state_free);
g_task_run_in_thread (task, state_worker_func);

The idea here is that you create your state upfront, and pass that state to the worker thread so that you don’t race accessing self-> fields from multiple threads. The “shared nothing” approach, if you will.

However, even this isn’t safe if self has thread usage requirements. For example, if self is a GtkWidget or some other object that is expected to only be used from the main-thread, there is a chance your object could be finalized in a thread.

Furthermore, the task_data you set could also be finalized in the thread. If your task data also holds references to objects which have thread requirements, those too can be unref’d from the thread (thereby cascading through the object graph should you hit this undesirable race).

Such can happen when you call g_task_return_pointer() or any of the other return variants from the worker thread. That call will queue the result to be dispatched to the GMainContext that created the task. If your CPU task-switches to that thread before the worker thread has released its reference, you risk that the worker thread holds the last reference to the task.

In that situation self and task_data will both be finalized in that worker thread.
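One possible mitigation, sketched below (this is my illustration, not from the post; the helper names are invented), is to make the task_data free function hand main-thread-only objects back to the default GMainContext instead of unreffing them in whatever thread happens to run the destructor:

#include <glib-object.h>

static gboolean
unref_in_main_cb (gpointer object)
{
  g_object_unref (object);
  return G_SOURCE_REMOVE;
}

/* Call this from your task_data free function for objects with
 * main-thread affinity; NULL selects the global default GMainContext. */
static void
unref_on_main_context (gpointer object)
{
  g_main_context_invoke (NULL, unref_in_main_cb, object);
}

This only works around the specific finalize-in-a-worker-thread hazard; it does not remove the underlying ordering race described above.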

Addressing this in Builder

We already have various thread pools in Builder for work items so it would be nice if we could both fix the issue in our usage as well as unify the thread pools. Additionally, there are cases where it would be nice to “chain task results” to avoid doing duplicate work when two subsystems request the same work to be performed.

So now Builder has IdeTask, which is very similar in API to GTask but provides some additional guarantees that would be very difficult to introduce back into the GTask implementation (without breaking semantics). We do this by passing the result and the thread's last ownership reference to the IdeTask back to the GMainContext at the same time, ensuring the last unref happens in the expected context.

While I was at it, I added a bunch of debugging tools for myself which caught some bugs in my previous usage of GTask. Bugs were filed, GTask has been improved, yadda yadda.

But I anticipate the threading situation to remain in GTask and you should be aware of that if you’re writing new code using GTask.

SVG Rendering and GSVGtk

For SVG rendering, we have a few options: librsvg, the most popular one, and Lasem, and maybe others. Both take an SVG file, parse it and render it onto a specified Cairo.Context.

GSVGtk uses librsvg by default, but I have started to experiment with a new renderer, written in Vala and using GSVG objects directly.

Using GSVG objects directly means you can create an SVG image programmatically, then pass it directly to GSVGtk's renderer to render it to the screen, to an image or to a PDF, without writing it out to a string and parsing it back in the renderer.

GSVGtk's renderer is not fully compliant with the SVG 1.1 specification; it is enough for basic shapes and text. It needs at least transformations to be a more useful renderer for a canvas user like libgtkcanvas.

GSVGtk's new renderer is in the render branch if you want to try it.

For some reason librsvg creates a translucent area, while GSVGtk's renderer does not, so in the video you can see how each shape is rendered clearly when it is drawn inside a Clutter.Actor. With this in shape, it is now possible to think about adding some controls for shape editing to GSVGtk, since invalidating each actor's drawing area now renders just the object it cares about, without a text intermediate format.

Modernization of games

Five or More

This year I have proposed a Google Summer of Code idea (we are in the student application period) for modernizing Five or More, a game left out of the last games modernization round, when most of the games were ported to Vala.

Five or More

Several people asked what this project was about, and some of them started contributing, so I had trouble finding tasks that would not be made obsolete by the full Vala rewrite. I started filing tasks, and they started solving them at a pace I had a hard time keeping up with in reviews. Here are the most important updates so far, already on master:
  • migration from intltool to gettext
  • migration from autotools to meson
  • split the monolithic five-or-more.c file into several components
  • migration from custom games score tracking to libgnome-games-support
There still are some more tasks open, Flatpak packaging is being worked on, and let's see if someone really applies for the modernization; then we will need some help from a designer, and casual gamers will simply have to wait for the next stable release to enjoy a game modernized inside-out, from the git hosting, through the build infrastructure and the programming language, all the way up to an improved UI/UX. But that is the future.


The current default theme (32x32 images), created by jimmac in 2000
I have been working a bit recently on Atomix (my first project migrated to GNOME GitLab, congratulations to everyone involved on that front). Besides the internal modernization (using gettext, using meson), I have been experimenting with graphics changes, to overcome the barely visible connections between atoms and to modernize the looks a bit. I have been experimenting with Kenney's public domain (CC0) images for now, and I have to say I'm quite pleased with the results (they still need some fine-tuning). As a picture is worth a thousand words, I'm posting pictures worth three thousand words: a screenshot of the current interface, a screenshot with my preferred theme, and one with the adjustments suggested by my wife.
Initially I was thinking about using "this many" colors, but... we are talking about a game; it cannot and probably should not strictly follow the color scheme used by applications. In my wildest dreams games would also use different theming for their components, something more playful, colorful, relaxing.

Light theme, darker background, lighter walls, more visible connections
Dark theme, lighter background, thinner walls, visible connections
What do you think? Which one of the three do you prefer and why?

Consider this a call for help: if you are a designer, or just want to have fun doing some artwork for a small game, contact me to fine-tune the current theme towards a more modern look (the old one is already grown up, being 18 years old).

Progress in monitoring

I did work a bit on various projects; I will try to summarize and ask for help, as any of these fronts could use it. Feel free to contact me if you are interested.

Let's start with the netstats (hard)work @antares has done (still under review for merging into libgtop master, merge request #1 on the libgtop GitLab): she investigated a lot to find the best way to get per-process network statistics into libgtop, something both Usage and System Monitor should benefit from. This is currently implemented as a root daemon using libpcap for capturing packets and summing their sizes, exposing a D-Bus interface. Congratulate her for the great job and the tremendous patience she has shown enduring all my reviews and nitpicking comments.
In the long term I would also like to support on Linux the gtop daemon used on the *BSDs, which we couldn't get to work, but Ben Dejean has already come up with a solution to our problems, and with his help I'm sure we will have a libgtop Linux daemon. For the internal network statistics calculation we are investigating alternatives to libpcap: an eBPF-based one (based on a suggestion from libgtop senior Ben Dejean, mostly running in-kernel, but also needing root), and another suggestion from Philip Whitnall using cgroups + iptables + netfilter, which could get away with fewer privileges. Any ideas, other options or help on that front would be welcome; these ideas are only sketched, so replacing the currently working libpcap-based one will take some work.

System Monitor using libdazzle cpu chart
On another front, I have tried improving System Monitor charting, namely transitioning from the custom charts implementation to the graphs from libdazzle, which are also used in Builder. This is a work in progress; anyone can check out the wip-charts branch of System Monitor. Currently the CPU chart has been migrated, still using the dazzle CPU chart implementation, meaning the colors are not customizable using the color pickers and there are no labels on the axes, nor labels with the current usage for each CPU, see the screenshot. It would be great to finish the implementation with a custom GObject class allowing for customization of the colors; then I would be happy to drop the custom graphing implementation. Thanks go to Christian Hergert for being helpful on libdazzle, with timely reviews and suggestions on libdazzle-related issues.

gksu is dead. Long live PolicyKit

Today, gksu was removed from Debian unstable. It was already removed 2 months ago from Debian Testing (which will eventually be released as Debian 10 “Buster”).

It’s not been decided yet if gksu will be removed from Ubuntu 18.04 LTS. There is one blocker bug there.


Continuous Integration in Librsvg, Part 1

The base setup

Rust makes it trivial to write any kind of tests for your project. But what good are they if you do not run them? In this blog series I am gonna explore the capabilities of Gitlab-CI and document how it is used in Librsvg.

First things first. What's CI? It stands for Continuous Integration; basically it makes sure that what you push to your repository continues to build and pass the tests. Even if someone committed something without testing it, or the tests happened to pass on their machine but not in a clean environment, we can know without having to clone and build manually.

CI also can have other uses, like enforcing a coding style or running resource heavy tests.

What’s Librsvg?

As the file puts it.

It’s a small library to render Scalable Vector Graphics(SVG), associated with the GNOME Project. It renders SVG files to Cairo surfaces. Cairo is the 2D, antialiased drawing library that GNOME uses to draw things to the screen or to generate output for printing.

Basic test case

First of all we will add a .gitlab-ci.yml file in the repo.

We will start off with a simple case: a single stage and a single job. A job is a single action that can be done. A stage is a collection of jobs. Jobs of the same stage can be run in parallel.

Minor things were omitted, such as the full list of dependencies. The original file is here.

stages:
  - test
opensuse:tumbleweed:
  image: opensuse:tumbleweed
  stage: test

  before_script:
    - zypper install -y gcc rust ... gtk3-devel

  script:
    - ./autogen.sh --enable-debug
    - make check

Lines 1 and 2 define our stages. If a stage is defined but has no jobs attached, it is skipped.

Line 3 defines our job, with the name opensuse:tumbleweed.

Line 4 will fetch the opensuse:tumbleweed OCI image from dockerhub.

In line 5 we specify that the job is part of the test stage that we defined in line 2.

before_script: is something like a setup phase. In our case we will install our dependencies.

after_script: accordingly is what runs after every job including failed ones. We are not going to use it yet though.

Then in line 11 we write our script: the commands that would have to be run to build librsvg, as if we were doing it from a shell. Indeed, the script: part is like a shell script.

If everything went well hopefully it will look like this.

Testing Multiple Distributions

Builds on opensuse based images work, but we can do better. We can test multiple distros!

Let’s add Debian testing and Fedora 27 builds to the pipeline.

fedora:latest:
  image: fedora:latest
  stage: test

  before_script:
    - dnf install -y gcc rust ... gtk3-devel

  script:
    - ./autogen.sh --enable-debug
    - make check

debian:testing:
  image: debian:testing
  stage: test

  before_script:
    - apt install -y gcc rust ... libgtk-3-dev

  script:
    - ./autogen.sh --enable-debug
    - make check

Similar to what we did for opensuse. Notice that the only things that change are the names of the container images and the before_script: commands, specific to each distro's package manager. This will work even better when we add caching and artifact extraction into the template. But that's for a later post.

We could refactor the above by using a template (YAML anchors). Here is how our file will look after that.

stages:
  - test

.base_template: &distro_test
  stage: test

  script:
    - ./autogen.sh --enable-debug
    - make check

opensuse:tumbleweed:
  image: opensuse:tumbleweed
  before_script:
    - zypper install -y gcc rust ... gdk-pixbuf-devel gtk3-devel
  <<: *distro_test

fedora:latest:
  image: fedora:latest
  before_script:
    - dnf install -y gcc rust ... gdk-pixbuf-devel gtk3-devel
  <<: *distro_test

debian:testing:
  image: debian:testing
  before_script:
    - apt install -y gcc rust ... libgdk-pixbuf2.0-dev libgtk-3-dev
  <<: *distro_test

And Failure :(. I mean Success!

* Debian test was added later

Apparently the librsvg test-suite was failing on anything other than opensuse. Later we found out that this was the result of freetype being a bit outdated on the system Federico used to generate the reference “good” result. In Freetype 2.8/2.9 there was a bugfix that affected how the test cases were rendered. Thankfully this wasn’t librsvg‘s code misbehaving but rather a bug only in the test-suite. After regenerating the reference results with a newer version of Freetype everything worked.

Adding Rust Lints

Rust has its own style formatting tool, rustfmt, which is highly configurable. We will use it to make sure our codebase style stays consistent. By adding a test to the Gitlab-CI we can be sure that merge requests will be properly formatted before reviewing and merging them.

There's also clippy! An amazing collection of lints for Rust code. If we had used it sooner it would probably have caught a couple of bugs occurring when comparing floating point numbers. We haven't decided yet which lints to enable/deny, so it has a manual trigger for now and won't be run unless explicitly triggered by someone. I hope that will change Soon™.

First we will add another test stage called lint.

stages:
  - test
  - lint

Then we will add 2 jobs, one for each tool. Both tools require the nightly toolchain of the compiler.

# Configure and run rustfmt on nightly toolchain
# Exits and builds fails if on bad format
rustfmt:
  image: "rustlang/rust:nightly"
  stage: lint

  script:
    - rustc --version && cargo --version
    - cargo install rustfmt-nightly --force
    - cargo fmt --all -- --write-mode=diff

# Configure and run clippy on nightly toolchain
clippy:
  image: "rustlang/rust:nightly"
  stage: lint
  before_script:
    - apt update -yqq
    - apt-get install -y libgdk-pixbuf2.0-dev ... libxml2-dev

  script:
    - rustc --version && cargo --version
    - cargo install clippy --force
    - cargo clippy --all
  when: manual


And that's it, with the only caveat that it would take 40-60 min for each pipeline run to complete. There are a couple of ways this could be sped up though, which will be the topic of part 2 and part 3.

** During the first experiments, rustfmt was set as a manual trigger (enabled by default later) and cross-distro tests were grouped into their own stage. But it's functionally identical to the setup described in the post.

Continuous Integration in Librsvg, Part 3

Caching stuff

Generally 5 min/job does not seem like a terribly long time to wait, but it can add up really quickly when you add a couple of jobs to the pipeline. First let's take a look at where most of the time is spent. Jobs currently are spawned in a clean environment, which means that each time we want to build the Rust part of librsvg, we download the whole cargo registry and all of the cargo dependencies. That's our first low-hanging fruit! Apart from that, another side effect of the clean environment is that we build librsvg from scratch each time, meaning we don't make use of the incremental compilation that modern compilers offer. So let's get started.

Cargo Registry

According to the cargo docs, the registry cache is stored in $CARGO_HOME (by default under $HOME/.cargo). Gitlab-CI though only allows you to cache things that exist inside your project's root (they do not need to be tracked by git). So we have to somehow relocate $CARGO_HOME to somewhere we can extract it from. Thankfully that's as easy as setting $CARGO_HOME to our desired path.

.test_template: &distro_test
  before_script:
    - mkdir -p .cargo_cache
    # Only stuff inside the repo directory can be cached
    # Override the CARGO_HOME variable to force its location
    - export CARGO_HOME="${PWD}/.cargo_cache"
    - echo foo

  cache:
    paths:
      - .cargo_cache/

What's new in our template above, compared to part 1, are the before_script: and cache: blocks. In before_script: we first create the .cargo_cache folder if it does not exist (cargo is probably smart enough not to need this, but ccache isn't! So better safe than sorry I guess). Then we export the new $CARGO_HOME location. Then in the cache: block we set which folder we want to cache. That's it, now our cargo registry and downloaded crates should persist across builds!

Caching Rust Artifacts

The only thing needed to cache the rustc build artifacts is to add target/ to the cache: block. That's it, I am serious.

  cache:
    paths:
      - target/

Caching C Artifacts with ccache

C and ccache on the other hand are a completely different story, sadly. It did not help that my knowledge of C and build systems approximates zero. Thankfully while searching I found another post from Ted Gould where he describes how ccache was set up for Inkscape. The following config ended up working for librsvg's current autotools setup.

.test_template: &distro_test
  before_script:
    # ccache Config
    - mkdir -p ccache
    - export CCACHE_BASEDIR=${PWD}
    - export CCACHE_DIR=${PWD}/ccache
    - export CC="ccache gcc"

    - echo foo

  cache:
    paths:
      - ccache/

I got stuck on how to actually call gcc through ccache since it depends on the build system you use (see export CC). Shout out to Christian Hergert for showing me how to do it!

Cache behavior

One last thing: we want each of our jobs to have an independent cache, as opposed to one shared across the pipeline. This can be achieved with the key: directive. I am not sure exactly how it works and I wish the docs would elaborate a bit more. In practice the following line will make sure that each job on each branch has its own cache. For more complex configurations I suggest looking at the gitlab docs.

  cache:
    # JOB_NAME - Each job will have its own cache
    # COMMIT_REF_SLUG = Lowercase name of the branch
    # ^ Keep different caches for each branch
    key: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"

Final config and results

So here is the cache config as it exists today on Librsvg's master branch. This brought the build time of each job from ~4-5 min, where we left it in part 2, down to ~1-1:30 min. Pretty damn fast! But what if you wanted to do a clean build, or rule out the possibility that the cache is causing bugs and failed runs? Well, if you happen to use gitlab 10.4 or later (and GNOME does) you can clear the cache from the Web GUI. If not, you probably have to contact a gitlab administrator.

.test_template: &distro_test
  before_script:
    # CCache Config
    - mkdir -p ccache
    - mkdir -p .cargo_cache
    - export CCACHE_BASEDIR=${PWD}
    - export CCACHE_DIR=${PWD}/ccache
    - export CC="ccache gcc"

    # Only stuff inside the repo directory can be cached
    # Override the CARGO_HOME variable to force its location
    - export CARGO_HOME="${PWD}/.cargo_cache"
    - echo foo

  cache:
    # JOB_NAME - Each job will have its own cache
    # COMMIT_REF_SLUG = Lowercase name of the branch
    # ^ Keep different caches for each branch
    key: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
    paths:
      - target/
      - .cargo_cache/
      - ccache/

Making sure the repository doesn't break, automatically

Gitlab has a fairly conventional Continuous Integration system: you push some commits, the CI pipelines build the code and presumably run the test suite, and later you can know if this succeeded or failed.

But by the time something fails, the broken code is already in the public repository.

The Rust community uses Bors, a bot that prevents this from happening:

  • You push some commits and submit a merge request.

  • A human looks at your merge request; they may tell you to make changes, or they may tell Bors that your request is approved for merging.

  • Bors looks for approved merge requests. It merges each into a temporary branch and waits for the CI pipeline to run there. If CI passes, Bors automatically merges to master. If CI fails, Bors annotates the merge request with the failure, and the main repository stays working.

Bors also tells you if the mainline has moved forward and there's a merge conflict. In that case you need to do a rebase yourself; the repository stays working in the meantime.

This leads to a very fair, very transparent process for contributors and for maintainers. For all the details, watch Emily Dunham's presentation on Rust's community automation (transcript).

For a description of where Bors came from, read Graydon Hoare's blog.

Bors evolved into Homu and it is what Rust and Servo use currently. However, Homu depends on Github.

I just found out that there is a port of Homu for Gitlab. Would anyone care to set it up?

Update: Two people have suggested porting Bors-ng to Gitlab instead, for scalability reasons.

Reflections on Distractions in Work, Productivity and Time Usage

For the past year or so I have mostly worked at home or remotely in my daily life. Currently I'm engaged in my master's thesis and need to manage my daily time and energy to work on it. It is no surprise to many of us that working on your internet-connected personal computer at home can make you prone to many distractions. However, managing your own time is not just about whipping and self-discipline. It is about setting yourself up in a structure which rewards you for hard work and gives your mind the breaks it needs. Based on reflections and experimentation with many scheduling systems and tools I finally feel I have arrived at a set of principles I really like, and that's what I'll be sharing with you today.

Identifying the distractions

Here's a typical scenario I used to experience: I would wake up and often the first thing I would do is turn on my computer, check my e-mail, check social media, check the news. I would then eat my breakfast and start working. After a while I would find myself returning to check mail and social media. Not that much of importance had necessarily happened. But it's fairly easy for me to press "Super", type "Gea" and press "Enter" (and Geary will show my e-mail inbox). It's also fairly easy to press "Ctrl+L" to focus the address bar in Firefox and write "f" (and the rest is autocompleted). Firefox is by default so (ironically) helpful with these suggestions. At other times, a distraction can simply be an innocent line of thought that hits you, fx "oh it would be so cool if I started sorting my pictures folder, let me just start on that quickly before I continue my work".

From speaking with friends I am fairly sure this type of behavior is not uncommon at all. The first step in trying to combat it myself was to identify the scope of it. I don't blame anyone else for my dealing with this – I see it more as an unfortunate design consequence of the way our personal computers are "universal" and not context-aware enough. After all, GNOME Shell was just trying to be helpful, and Firefox was also just trying to be helpful, although they are also in some respects making it easier for me to distract myself like that.

Weapons against distractions

Let me start with a few practical suggestions, which helped me initially break the worst patterns (using big hammers).

  • Stylish: using inspection tools and CSS hacks I remove endlessly scrolling news feeds and news content from websites that I might otherwise open and read on reflex when in a distracted state. The CSS hacks are easy to turn off again of course, but it adds an extra step and makes it purposely less appealing for me to do unless it's for something important.

  • BlockSite: I use BlockSite in “Whitelist mode” and turn it on while I work. This is a big hammer which essentially blocks all of internet except for whitelisted websites I use for work. Knowing that you can’t access anything really had a positive initial psychological effect for me.
  • Minimizing shell notifications: While I don’t have the same big hammer to “block access to my e-mail” here, I decided to change the order of my e-mail inboxes in Geary so my more relevant (and far less activity prone) student e-mail inbox appears first. I also turned off the background e-mail daemon and turned off notification banners in GNOME Shell.
  • Putting Phone in Ultra Battery Saving Mode: I restrict my phone to calls and SMS so that I don’t receive notifications from various chat apps which are irrelevant whilst working. This also saves the battery nicely.

My final weapon is The Work Schedule. This doesn't sound new or surprising and we have probably all tried it, with more or less success.

..Schedules can be terrible.

I'm actually not that big a fan of microscheduling my life. Traditional time schedules are too focused on doing things from timestamp X to timestamp Y. They require that you "judge" how fast you work, and their structure just feels super inflexible. The truth is that in real life my day never looks like how I planned it. In fact, I sometimes found myself even more demotivated (and distracted) because I was failing to live up to my own schedule and by the end of the day never really managed to complete that "ideal day". The traditional time schedule ended up completely missing what it was supposed to fix and help against.

But on the other hand, working without a schedule often results in:

  • Forgetting to take breaks from work which is unhealthy and kills my productivity later.
  • No sense of progress except from the work itself, but if the work is ongoing for a long time this will feel endless and exhausting.
  • No set work duration meant that my productivity continued to fluctuate between overwork and underwork, since it is hard to judge when it is okay to stop.

The resulting system

For the past couple of weeks I have been using a system which is a bit like a “semi-structured time schedule”. To you it might just seem like a list of checkboxes and in some sense it is! However, the simplicity in this system has some important principles behind it I have learned along the way:

  • Checking the checkboxes give a sense of progress as I work throughout my day.
  • The schedule supports adding breaks in-between work sessions and puts my day in an order.
  • The schedule makes no assumptions about “What work” I will be doing or reaching that day. Instead it specifies that I work for 1 hour and this enables me to funnel my energy. I use GNOME Clock’s Timer function and let it count down for 1 hour until there’s a nice simple “ding” to be heard when it finishes. It’s up to you whether you then take the break or continue a bit longer.
  • The schedule makes no assumptions about “When” I will do work and only approximates for how long. In reality I might wake up at 7:00, 8:00 or 9:00 AM and it doesn’t really matter. What’s important is that I do as listed and take my breaks in the order presented.
  • If there are aspects of the order I end up changing, the schedule permits it – It is possible to tick off tasks independent of the order.
  • If I get ideas for additional things I need to do (banking, sending an important e-mail, etc) I can add them to the bottom of the list.
  • The list is made the day before. This makes it easier to follow it straight after waking up.
  • I always use the breaks for something which does not involve computers. I use dancing, going for a walk or various house duties (Interestingly house duties become more exciting for me to do as work break items, than as items I do in my free time).
  • In the start you won't have much feeling for how much work you can manage, and it is easy to overestimate, get out of breath or be unable to complete everything. It works much better for me to underestimate my performance (fx 2 hours of focused work before lunch instead of 3 hours) and feel rewarded that I did everything I had planned and perhaps even more than that.
  • I insert items I want to do in my free time into my scheduling after I finish work. These items are purely there to give additional incentive and motivation to finish.
  • The system is analog on purpose because I’m interested in keeping the list visually present on my desk at all times. I also think it is an advantage that making changes to the list doesn’t interfere with the work context I maintain on the computer.

Lastly, I want to give two additional tips. If you like listening to music while working, consider whether it might affect your productivity. For example, I found music with vocals distracting when I try to immerse myself in reading difficult literature. I can really recommend Doctor Turtle's acoustic instrumental music while working though (all free). Secondly, I find that different types of tasks require different postures. For abstract, high-level or vaguely formulated tasks (fx formulating goals, reviewing something or reflecting), I find that interacting with the computer whilst standing up and walking around really helps gather my thoughts. On the other hand, with practical tasks or tasks which require immersion (fx programming tasks), I find sitting down to be much more comfortable.

Hopefully my experiences here might be useful or interesting for some of you. Let me know!

March 20, 2018

IMPORTANT: GitLab mass migration plan

I know some fellows don't read desktop-devel-list, so let me share here an email that is important for everyone to read: we have put in place the plan for the mass migration to GitLab and the steps maintainers need to take.

Read about it in the email sent to the mailing list.

PD: What a historical moment, isn’t it? 🎉

ED Update – week 11

It’s time (Well, long overdue) for a quick update on stuff I’ve been doing recently, and some things that are coming up. I’ve worked out a new way of doing these, so they should be more regular now, about every couple of weeks or so.

  • The annual report is moving ahead. I’ve moved up the timelines a bit here from previous years, so hopefully, the people who very kindly help author this can remember what we did in the 2016/17 financial year!
  • GUADEC/GNOME.Asia/LAS sponsorship – elements are coming together for the sponsorship brochure
    • Some sponsors are lined up, and these will be announced by the usual channels – thanks to everyone who supports the project and our conferences!
  • Shell Extensions – It’s been noticed that reviews of extensions have been taking quite some time recently, so I’ve stepped in to help. I still think that part of the process could be automated, but at the moment it’s quite manual. Help is very much appreciated!
  • The Code of Conduct consultation has been useful, and there’s been a couple of points raised where clarity could be added. I’m getting those drafted at the moment, and hope to get the board to approve this soon.
  • A couple of administrative bits:
    • We now have a filing system for paperwork in NextCloud
    • Reviewing accounts for the end of year accounts – it’s the end of the tax year, so our finances need to go to the IRS
    • Tracking of accounts receivable hasn’t been great in the past, probably not helped by GNUCash. I’m looking at alternatives at the moment.
  • Helping out with a couple of trademark issues that have come up
  • Regular working sessions for Flathub legal bits with our lawyers
  • I’ll be at LibrePlanet 2018 this weekend, and I’m giving a talk on Sunday. With the FSF, we’re hosting a SpinachCon on Friday. This aims to do some usability testing and finding those small things which annoy people.

GStreamer Rust bindings 0.11 / plugin writing infrastructure 0.2 release

Following the GStreamer 1.14 release and the new round of gtk-rs releases, there are also new releases for the GStreamer Rust bindings (0.11) and the plugin writing infrastructure (0.2).

Thanks also to all the contributors for making these releases happen and adding lots of valuable changes and API additions.

GStreamer Rust Bindings

The main changes in the Rust bindings were the update to GStreamer 1.14 (which brings in quite some new API, like GstPromise), a couple of API additions (GstBufferPool specifically) and the addition of the GstRtspServer and GstPbutils crates. The former allows writing a full RTSP server in a couple of lines of code (with lots of potential for customizations), the latter provides access to the GstDiscoverer helper object that allows inspecting files and streams for their container format, codecs, tags and all kinds of other metadata.

The GstPbutils crate will also get other features added in the near future, like encoding profile bindings to allow using the encodebin GStreamer element (a helper element for automatically selecting/configuring encoders and muxers) from Rust.

But the biggest change in my opinion is some refactoring that was done to the Event, Message and Query APIs. Previously you would have to use a view on a newly created query to be able to use the type-specific functions on it:

let mut q = gst::Query::new_position(gst::Format::Time);
if pipeline.query(q.get_mut().unwrap()) {
    match q.view() {
        QueryView::Position(ref p) => Some(p.get_result()),
        _ => None,
    }
} else {
    None
}

Now you can directly use the type-specific functions on a newly created query:

let mut q = gst::Query::new_position(gst::Format::Time);
if pipeline.query(&mut q) {
    Some(q.get_result())
} else {
    None
}

In addition, the views can now dereference directly to the event/message/query itself and provide access to their API, which simplifies some code even more.

Plugin Writing Infrastructure

While the plugin writing infrastructure did not see that many changes apart from a couple of bugfixes and updating to the new versions of everything else, this does not mean that development on it stalled. Quite the opposite. The existing code works very well already and there was just no need for adding anything new for the projects I and others did on top of it, most of the required API additions were in the GStreamer bindings.

So the status here is the same as last time, get started writing GStreamer plugins in Rust. It works well!

March 19, 2018

GitLab + Flatpak – GNOME’s full flow

In this post I will explain how GitLab, CI, Flatpak and GNOME apps come together into, in my opinion, a dream-come-true full flow for GNOME, a proposal to be implemented by all GNOME apps.

Needless to say I enjoy seeing a plan that involves several moving pieces from different initiatives and people being put together into something bigger, I definitely had a good time ✌.

Generated Flatpak for every work in progress

The biggest news: From now on designers, testers and people with curiosity can install any work in progress (a.k.a ‘merge request‘) in an automated way with a simple click and a few minutes. With the integrated GitLab CI now we generate a Flatpak file for every merge request in Nautilus!

In case you are not familiar with Flatpak, this technology allows anyone using different Linux distributions to install an application that will use exactly the same environment as the developers are using, providing a seamless synchronized experience.

For example, do you want to try out the recent work done by Nikita that makes Nautilus views distribute the space between icons? Simply click here or download the artifacts of any merge request pipeline. It’s also possible to browse other artifacts, like build and test logs:


Notes: Due to a recent bug in Software you might need to install the 3.28 Flatpak Platform & Sdk manually; this usually happens automatically. In the meantime, install the current master development Flatpak of Nautilus with a single click here. On Ubuntu you might need to install Flatpak first.

Parallel installation

Now, a way to quickly test the latest work in progress in Nautilus is a considerable improvement, but a user probably doesn't want to mess with the system installation of Nautilus or other GNOME projects, especially since it's a system component. So we have worked on a way to make a full parallel installation and full parallel run of Nautilus versions possible alongside the system installation. We have also provided support for this setup in the UI, to make it easily recognizable and to ensure the user is not confused about which version of Nautilus they are looking at. This is how it looks after installing any of the Flatpak files mentioned above:

Screenshot from 2018-03-19 21-35-41.png

We can see the Nautilus system installation and the developer preview running at the same time; the unstable version has a blue header bar and an icon with gears. As a side note, you can also see the work of Nikita I mentioned before: the developer version of the views now distributes the space between icons.

It's possible to install more versions and run them all at the same time; you can see here how the different installed versions show up in the GNOME Shell search, where I also have the stable Flatpak Nautilus installed:

Screenshot from 2018-03-19 21-37-24

Another positive note is that this also removes the need to close the system instance of the app when contributing to GNOME; this was one of the most frequently reported confusing steps of our newcomers guide.

Issues templates

One of the biggest difficulties we have with people reporting issues is that they either have an outdated application, the application is modified downstream, or the environment is completely different from the one the developers are using, making the experience difficult and frustrating for both the reporter and the developer. Needless to say, all of us have had to deal with 'worksforme' issues…

With Flatpak, GitLab and the work explained before we can fix this and considerably boost our success with bugs.

We have created a "bug" template where reporters are instructed to download the Flatpaked application in order to test and reproduce the issue in the exact same environment and version as the developers, testers, and everyone else involved is using. Here's part of what the issue template looks like:


When created, the issue renders as:


Which is considerably clearer.

Notes: The plan is to provide the stable app Flatpak too.

Full continuous integration

The last step to close this plan is to make sure that GNOME projects build on all the major distributions. After all, most of us work both upstream in GNOME and downstream in a Linux distribution. For that, we have set up a full array of builds that runs weekly:


This also fixes another issue we have experienced for years: distribution packagers delivering some GNOME applications differently than intended, causing subtle but sometimes also major issues. Now we can point to this graph, which contains the commands to build the application, as exact documentation on how to package GNOME projects, directly from the maintainer.

‘How to’ for GNOME maintainers

For the full CI and Flatpak file generation take a look at the Nautilus GitLab CI. For the cross-distro weekly array, additionally create a scheduled pipeline like this. It's also possible to run the cross-distro array more often; however, keep in mind that the resources are limited and that the important part is that every MR is buildable and that the tests pass. Otherwise it can be confusing to contributors if the pipeline is failing for one of the jobs and not for others. For non-app projects, you can pick a single distribution you are comfortable with; other ideas are welcome.
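As a rough sketch of such a job (this is not the actual Nautilus configuration; the base image, runtime version, manifest name and app ID are placeholders), a .gitlab-ci.yml entry that produces an installable Flatpak bundle as an artifact could look something like this:

flatpak:
  image: fedora:27            # placeholder; any image where flatpak-builder can be installed
  stage: test
  before_script:
    - dnf install -y flatpak flatpak-builder
    - flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
    - flatpak install -y flathub org.gnome.Platform//3.28 org.gnome.Sdk//3.28
  script:
    # build the app from its manifest and export an installable bundle
    - flatpak-builder --repo=repo --force-clean app org.gnome.NautilusDevel.json
    - flatpak build-bundle repo nautilus-dev.flatpak org.gnome.NautilusDevel
  artifacts:
    paths:
      - nautilus-dev.flatpak
    expire_in: 2 days

The artifacts: block is what makes the resulting .flatpak file downloadable from the merge request pipeline page.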

A more complex CI is possible, take a look at the magic work of Jordan Petridis in librsvg. I heard Jordan will do a blog post soon about more CI magic, which will be interesting to read.

For parallel installation, it's mainly this MR for master and this commit for the stable version; however, there have been a couple of commits on top of each, so follow them up to today's date (19-03-2018).

For issue templates, take a look at the templates folder. We were discussing a default template to be used for GNOME projects here; however, there was not much input, so for now I thought it better to experiment with this in Nautilus. Also, this will make more sense once we can put a default template in place, which is something GitLab will probably work on soon.

Finishing up…

Over the last 4 days Ernestas Kulik, Jordan Petridis, and I have been working to time-box this effort and come up with a complete proposal by today, each of us working on a part of the plan, and I think we can say we achieved it. Alex Larsson and other people around in #flatpak provided us with valuable help. Work by Florian Mullner and Christian Hergert was an inspiration for us too. Andrea Veri and Javier Jardon put a considerable amount of their time into setting up an AWS instance for CI so we can have fast builds. Big thanks to all of them.

As you may guess, this CI setup for an organization like GNOME, with more than 500 projects, is quite resource consuming. The good news is that we have some help from sponsors coming; many thanks to them! Stay tuned for the announcements.

Hope you like the direction GNOME is going; for me it's exciting to modernize how GNOME development happens and make it more dynamic. I can see we have come a long way since a year ago. If you have any thoughts, comments or ideas, let any of us know!

GStreamer’s playbin3 overview for application developers

Multimedia applications based on GStreamer usually handle playback with the playbin element. I recently added support for playbin3 in WebKit. This post aims to document the changes needed on application side to support this new generation flavour of playbin.

So, first off, why is it named playbin3 anyway? The GStreamer 0.10.x series had a playbin element, but a first rewrite (playbin2) made it obsolete in the GStreamer 1.x series. So playbin2 was renamed to playbin. That's why a second rewrite is nicknamed playbin3, I suppose :)

Why should you care about playbin3? Playbin3 (and the elements it’s using internally: parsebin, decodebin3, uridecodebin3 among others) is the result of a deep re-design of playbin2 (along with decodebin2 and uridecodebin) to better support:

  • gapless playback
  • audio cross-fading support (not yet implemented)
  • adaptive streaming
  • reduced CPU, memory and I/O resource usage
  • faster stream switching and full control over the stream selection process

This work was carried on mostly by Edward Hervey, he presented his work in detail at 3 GStreamer conferences. If you want to learn more about this and the internals of playbin3 make sure to watch his awesome presentations at the 2015 gst-conf, 2016 gst-conf and 2017 gst-conf.

Playbin3 was added in GStreamer 1.10. It is still considered experimental but in my experience it works already very well. Just keep in mind you should use at least the latest GStreamer 1.12 (or even the upcoming 1.14) release before reporting any issue in Bugzilla. Playbin3 is not a drop-in replacement for playbin, both elements share only a sub-set of GObject properties and signals. However, if you don’t want to modify your application source code just yet, it’s very easy to try playbin3 anyway:

$ USE_PLAYBIN3=1 my-playbin-based-app

Setting the USE_PLAYBIN3 environment variable enables a code path inside the GStreamer playback plugin which swaps the playbin element for the playbin3 element. This trick provides a glimpse of the playbin3 element for the laziest people :) The problem is that depending on your use of playbin, you might get runtime warnings, here's an example with the Totem player:

$ USE_PLAYBIN3=1 totem ~/Videos/Agent327.mp4
(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'video-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'

(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'audio-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'

(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'text-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'

(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'video-tags-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'

(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'audio-tags-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'

(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'text-tags-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'
sys:1: Warning: g_object_get_is_valid_property: object class 'GstPlayBin3' has no property named 'n-audio'
sys:1: Warning: g_object_get_is_valid_property: object class 'GstPlayBin3' has no property named 'n-text'
sys:1: Warning: ../../../../gobject/gsignal.c:3492: signal name 'get-video-pad' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'

As mentioned previously, playbin and playbin3 don’t share the same set of GObject properties and signals, so some changes in your application are required in order to use playbin3.

If your application is based on the GstPlayer library then you should set the GST_PLAYER_USE_PLAYBIN3 environment variable. GstPlayer already handles both playbin and playbin3, so no changes needed in your application if you use GstPlayer!

Ok, so what if your application relies directly on playbin? Some changes are needed! If you previously used playbin's stream selection properties and signals, you will now need to handle the GstStream and GstStreamCollection APIs. Playbin3 will emit a stream collection message on the bus; this is very nice because the collection includes information (metadata!) about the streams (or tracks) the media asset contains. In playbin this was handled with a bunch of signals (audio-tags-changed, audio-changed, etc), properties (n-audio, n-video, etc) and action signals (get-audio-tags, get-audio-pad, etc). The new GstStream API provides a centralized and non-playbin-specific access point for all this information. To select streams with playbin3 you now need to send a select-streams event so that the demuxer knows exactly which streams should be exposed to downstream elements. That means potentially improved performance! Once playbin3 has completed the stream selection it will emit a streams-selected message; the application should handle this message and potentially update its internal state about the selected streams. This is also the best moment to update your UI regarding the selected streams (like audio track language, video track dimensions, etc).
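For illustration, here is a rough sketch (my own, not code from the post; error handling and the rest of the bus watch are omitted) of reacting to the stream collection message and selecting every audio and video stream:

#include <gst/gst.h>

/* Called from the bus watch for GST_MESSAGE_STREAM_COLLECTION messages. */
static void
handle_stream_collection (GstElement *playbin, GstMessage *msg)
{
  GstStreamCollection *collection = NULL;
  GList *selected = NULL;
  guint i;

  gst_message_parse_stream_collection (msg, &collection);

  for (i = 0; i < gst_stream_collection_get_size (collection); i++) {
    GstStream *stream = gst_stream_collection_get_stream (collection, i);
    GstStreamType type = gst_stream_get_stream_type (stream);

    /* collect the ids of every audio and video stream; a real player
     * would pick one per type based on user preferences */
    if (type & (GST_STREAM_TYPE_AUDIO | GST_STREAM_TYPE_VIDEO))
      selected = g_list_append (selected,
                                (gchar *) gst_stream_get_stream_id (stream));
  }

  /* ask playbin3 (and the demuxers) to expose only these streams */
  gst_element_send_event (playbin, gst_event_new_select_streams (selected));

  g_list_free (selected);
  gst_object_unref (collection);
}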

Another small difference between playbin and playbin3 concerns the source element setup. In playbin there is a read-only source GObject property and a source-setup GObject signal. In playbin3 only the latter is available, so your application should rely on source-setup instead of the notify::source GObject signal.
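A minimal sketch of hooking into source-setup (the user-agent property and value are only an illustration of something you might configure on an HTTP source, not something playbin3 requires):

#include <gst/gst.h>

/* Called by playbin3 every time it creates a new source element. */
static void
source_setup_cb (GstElement *playbin, GstElement *source, gpointer user_data)
{
  /* only set the property if this source actually has it (e.g. souphttpsrc) */
  if (g_object_class_find_property (G_OBJECT_GET_CLASS (source), "user-agent"))
    g_object_set (source, "user-agent", "my-player/1.0", NULL);
}

/* ... somewhere in your setup code: */
/* g_signal_connect (playbin, "source-setup", G_CALLBACK (source_setup_cb), NULL); */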

The gst-play-1.0 playback utility program already supports playbin3 so it provides a good source of inspiration if you consider porting your application to playbin3. As mentioned at the beginning of this post, WebKit also now supports playbin3, however it needs to be enabled at build time using the CMake -DUSE_GSTREAMER_PLAYBIN3=ON option. This feature is not part of the WebKitGTK+ 2.20 series but should be shipped in 2.22. As a final note I wanted to acknowledge my favorite worker-owned coop Igalia for allowing me to work on this WebKit feature and also our friends over at Centricular for all the quality work on playbin3.

March 18, 2018

textures and paintables

With GTK4, we've been trying to find a better solution for image data. In GTK3 the objects we used for this were pixbufs and Cairo surfaces. But they don't fit the bill anymore, so now we have GdkTexture and GdkPaintable.


GdkTexture is the replacement for GdkPixbuf. Why is it better?
For a start, it is a lot simpler. The API looks like this:

int gdk_texture_get_width (GdkTexture *texture);
int gdk_texture_get_height (GdkTexture *texture);

void gdk_texture_download (GdkTexture *texture,
                           guchar     *data,
                           gsize       stride);

So it is a 2D pixel array and if you want to, you can download the pixels. It is also guaranteed immutable, so the pixels will never change. Lots of constructors exist to create textures from files, resources, data or pixbufs.

But the biggest difference between textures and pixbufs is that they don’t expose the memory that they use to store the pixels. In fact, before gdk_texture_download() is called, that data doesn’t even need to exist.
And this is used in the GL texture. The GtkGLArea widget for example uses this method to pass data around. GStreamer is expected to pass video in the form of GL textures, too.
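As a small illustration (my own sketch, not from the post, assuming the from-file constructor mentioned above), loading a texture and downloading its pixels could look like this:

#include <gtk/gtk.h>

static guchar *
load_pixels (GFile *file, int *width, int *height)
{
  g_autoptr(GError) error = NULL;
  GdkTexture *texture = gdk_texture_new_from_file (file, &error);
  guchar *data;

  if (texture == NULL)
    {
      g_warning ("Failed to load texture: %s", error->message);
      return NULL;
    }

  *width = gdk_texture_get_width (texture);
  *height = gdk_texture_get_height (texture);

  /* download into a tightly packed buffer, 4 bytes per pixel */
  data = g_malloc ((gsize) *width * *height * 4);
  gdk_texture_download (texture, data, *width * 4);

  g_object_unref (texture);
  return data;
}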


But sometimes, you have something more complex than an immutable bunch of pixels. For example you could have an animated GIF or a scalable SVG. That’s where GdkPaintable comes in.
In abstract terms, GdkPaintable is an interface for objects that know how to render themselves at any size. Inspired by CSS images, they can optionally provide intrinsic sizing information that GTK widgets can use to place them.
So the core of the GdkPaintable interface is the function that makes the paintable render itself and the 3 functions that provide sizing information:

void gdk_paintable_snapshot (GdkPaintable *paintable,
                             GdkSnapshot  *snapshot,
                             double        width,
                             double        height);

int gdk_paintable_get_intrinsic_width (GdkPaintable *paintable);
int gdk_paintable_get_intrinsic_height (GdkPaintable *paintable);
int gdk_paintable_get_intrinsic_aspect_ratio (GdkPaintable *paintable);

On top of that, the paintable can emit the “invalidate-contents” and “invalidate-size” signals when its contents or size changes.

To make this more concrete, let's take a scalable SVG as an example: the paintable implementation would return no intrinsic size (a return value of 0 from those sizing functions achieves that) and whenever it is drawn, it would draw itself pixel-exact at the given size.
Or take the example of the animated GIF: it would provide its pixel size as its intrinsic size and draw the current frame of the animation scaled to the given size. And whenever the next frame of the animation should be displayed, it would emit the "invalidate-contents" signal.
And last but not least, GdkTexture implements this interface.

We’re currently in the process of changing all the code that in GTK3 accepted GdkPixbuf to now accept GdkPaintable. The GtkImage widget of course has been changed already, as have the drag’n’drop icons or GtkAboutDialog. Experimental patches exist to let applications provide paintables to the GTK CSS engine.

And if you now put together all this information about GStreamer potentially providing textures backed by GL images and creating paintables that do animations that can then be uploaded to CSS, you can maybe see where this is going.

On the way to 0.28

Shotwell 0.28 “Braunschweig” is out.

Shotwell 0.28 about box

Half a year later than I was expecting it to be, sorry. This release fixes 60 bugs! Get it at GNOME's download server, from git or in the Shotwell PPA really soon™. A big thank you to all contributors who provided all the bits and pieces for such a release.

Notable features:

  • The viewer can now be started in full screen mode
  • We have added better support for images with transparent backgrounds
  • The image processing pipeline got much faster
  • Importing RAW images got much faster
  • Finally got rid of one of the “Cannot claim USB device” errors for camera import
  • Tumblr got promoted to be in the main export plugin set
  • It can be built using meson

Things we have lost:

  • We removed F-Spot import. I believe this is not necessary anymore.
  • Racje and Yandex plugins are no longer in the plugin set that is built by default

Web Engines Hackfest 2014

Last week I attended the Web Engines Hackfest. The event was sponsored by Igalia (also hosting the event), Adobe and Collabora.

As usual I spent most of the time working on the WebKitGTK+ GStreamer backend and Sebastian Dröge kindly joined and helped out quite a bit, make sure to read his post about the event!

We first worked on the WebAudio GStreamer backend, Sebastian cleaned up various parts of the code, including the playback pipeline and the source element we use to bridge the WebCore AudioBus with the playback pipeline. On my side I finished the AudioSourceProvider patch that was abandoned for a few months (years) in Bugzilla. It’s an interesting feature to have so that web apps can use the WebAudio API with raw audio coming from Media elements.

I also hacked on GstGL support for video rendering. It’s quite interesting to be able to share the GL context of WebKit with GStreamer! The patch is not ready yet for landing but thanks to the reviews from Sebastian, Mathew Waters and Julien Isorce I’ll improve it and hopefully commit it soon in WebKit ToT.

Sebastian also worked on Media Source Extensions support. We had a very basic, non-working, backend that required… a rewrite, basically :) I hope we will have this reworked backend soon in trunk. Sebastian already has it working on Youtube!

The event was interesting in general, with discussions about rendering engines, rendering and JavaScript.

Flatpaking application plugins

Sometimes you simply do not want to bundle everything in a single package, such as optional plugins with large dependencies or third-party plugins that are not supported. In this post I'll show you how to handle this with Flatpak, using HexChat as an example.

Flatpak has a feature called extensions that allows a package to be mounted within another package. This is used in a variety of ways, but it can be used by any application as a way to include optional bits. So let's see how to define one (details omitted for brevity):

  "app-id": "io.github.Hexchat",
  "add-extensions": {
    "io.github.Hexchat.Plugin": {
      "version": "2",
      "directory": "extensions",
      "add-ld-path": "lib",
      "merge-dirs": "lib/hexchat/plugins",
      "subdirectories": true,
      "no-autodownload": true,
      "autodelete": true
  "modules": [
      "name": "hexchat",
      "post-install": [
       "install -d /app/extensions"

The exact details of these are best documented in the Extension section of man flatpak-metadata but I’ll go over the ones used here:

  • io.github.Hexchat.Plugin is the name of the extension point and all extensions will have the same prefix.
  • version allows you to have parallel installations of extensions if you break API or ABI; for example, 2 refers to HexChat 2.x in case it makes a 3.0 with an API break. (It is probably smart to also add the runtime version here in the future, since changing runtime also breaks ABI.)
  • directory sets a subdirectory where everything is mounted relative to your prefix so /app/extensions is where they will go.
  • subdirectories allows you to have multiple extensions and each one will get their own subdirectory. So io.github.Hexchat.Plugin.Perl is mounted at /app/extensions/Perl.
  • merge-dirs will merge the contents of subdirectories that match these paths (relative to their prefix). So for this case the contents of /app/extensions/Perl/lib/hexchat/plugins and /app/extensions/Python/lib/hexchat/plugins will both be in /app/extensions/lib/hexchat/plugins. This allows limiting the complexity of your loader to only need to look in one directory (applications will need to be configured/patched to look there).
  • add-ld-path adds a path, relative to the extension’s prefix, to the library path, so that for example /app/extensions/Python/lib/ can be loaded.
  • no-autodownload prevents all extensions from being installed automatically (which is otherwise the default).
  • autodelete will remove all extensions when the application is removed.

So now that we have defined an extension point, let’s make an extension:

  "id": "io.github.Hexchat.Plugin.Perl",
  "branch": "2",
  "runtime": "io.github.Hexchat",
  "runtime-version": "stable",
  "sdk": "org.gnome.Sdk//3.26",
  "build-extension": true,
  "separate-locales": false,
  "appstream-compose": false,
  "build-options": {
    "prefix": "/app/extensions/Perl",
    "env": {
      "PATH": "/app/extensions/Perl/bin:/app/bin:/usr/bin"
  "modules": [
      "name": "perl"
      "name": "hexchat-perl",
      "post-install": [
        "install -Dm644 plugins/perl/ ${FLATPAK_DEST}/lib/hexchat/plugins/",
        "install -Dm644 --target-directory=${FLATPAK_DEST}/share/metainfo data/misc/io.github.Hexchat.Plugin.Perl.metainfo.xml",
        "appstream-compose --basename=io.github.Hexchat.Plugin.Perl --prefix=${FLATPAK_DEST} --origin=flatpak io.github.Hexchat.Plugin.Perl"

So again going over some key points quickly: id has the correct prefix, branch refers to the extension version, build-extension should be obvious, and runtime is what defines the extension point. A less obvious thing to note is that your extension’s prefix will not be in $PATH or $PKG_CONFIG_PATH by default, so you may need to set them (see build-options in man flatpak-manifest). $FLATPAK_DEST is also defined as your extension’s prefix, though not everything expands variables.
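Once both the application and the extension are published, installing the plugin is just another flatpak install; the remote name below is only an example:

flatpak install flathub io.github.Hexchat.Plugin.Perl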

While not required, you should also install AppStream metainfo for easy discoverability. For example:

<?xml version="1.0" encoding="UTF-8"?>
<component type="addon">
  <name>Perl Plugin</name>
  <summary>Provides a scripting interface in Perl</summary>
  <url type="homepage"></url>
</component>

This will then be shown in GNOME Software:


March 16, 2018

Fedora Atomic Workstation: Ruling the commandline

In my recent posts, I’ve mostly focused on finding my way around with GNOME Builder and using it to do development in Flatpak sandboxes. But I am not really the easiest target audience for an IDE like GNOME Builder, having spent most of my life on the commandline with tools like vim and make.

So, what about the commandline in an Atomic Workstation environment? There are many container tools, like buildah, atomic, oc, podman, and so on. I am not going to talk about these, since I don’t know them very well, and they are covered, e.g. on

But there are a few commands that are essential to life on the Atomic Workstation: rpm-ostree and flatpak.


First of all, there’s rpm-ostree, which is the commandline frontend to the rpm-ostreed daemon that manages the OS image(s) on the Atomic Workstation.

You can run

rpm-ostree status

to get some information about your OS image (and the other images that may be present on your system). And you can run

rpm-ostree upgrade

to get the latest update for your OS image (the terminology clash here is a bit unfortunate; rpm-ostree calls an upgrade what most Linux distros and packaging tools call an update).

You can run this command as normal user in a terminal, and rpm-ostreed will present you with a polkit dialog to do its privileged operations. Recently, rpm-ostreed has also gained the ability to check for and deploy upgrades automatically.

An important thing to keep in mind is that rpm-ostree never changes your running system. You have to reboot into the new image to see the changes, so

systemctl reboot

should be in your repertoire of commands as well. Alternatively, you can use the --reboot option to tell rpm-ostree to reboot when the upgrade command completes.
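For example, to fetch the latest image and reboot into it in one go (this just combines the two commands above):

rpm-ostree upgrade --reboot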


The other essential command is flatpak. Where rpm-ostree controls your OS image, flatpak rules the applications. flatpak has many commands that are worth exploring, I’ll only mention the most important ones here.

It is quite common to have more than one source for flatpaks enabled.

flatpak remotes

lists them all. If you want to find applications, then

flatpak search

will do that for you, and

flatpak install

will let you install what you found. An important detail to point out here is that applications can be installed system-wide (in /var) or per-user (in ~/.local/share). You can choose the location with the --user and --system options. If you choose to install system-wide, you will get a polkit prompt, since this is a privileged operation.
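For instance, a per-user installation of gitg would look something like this (assuming the Flathub remote is configured):

flatpak install --user flathub org.gnome.gitg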

After installing applications, you should keep them up-to-date by installing updates. The most straightforward way to do so is to just run

flatpak update

which will install available updates for all applications. To just check if updates are available, you can use

flatpak remote-ls --updates

Launching applications

Probably the most important thing you will want to do with flatpak is to run applications. Unsurprisingly, the command to do so is called run, and it expects you to specify the unique application ID:

flatpak run org.gnome.gitg

This is certainly a departure from the traditional commandline, and could be considered cumbersome (even though it has bash completion for the application ID).

Thankfully, flatpak has recently gained a way to recover the familiar interface. It now installs shell wrappers for the flatpak run command in ~/.local/share/flatpak/bin. After adding that directory to your PATH, you can run gitg like this:
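For example (the PATH line below is just one way to do it; adjust it to your shell setup):

export PATH="$HOME/.local/share/flatpak/bin:$PATH"
org.gnome.gitg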


If (like me), you are still not satisfied with this, you can add a shell alias to get the traditional command name back:

alias gitg=org.gnome.gitg

Now gitg works again, as it used to. Nice!


March 15, 2018

pkg-config and paths

This is something of a frequently asked question, as it comes up every once in a while. The pkg-config documentation is fairly terse, and even pkgconf hasn’t improved on that.

The problem

Let’s assume you maintain a project that has a dependency using pkg-config.

Let’s also assume that the project you are depending on loads some files from a system path, and your project plans to install some files under that path.

The questions are:

  • how can the project you are depending on provide an appropriate way for you to discover where that path is
  • how can the project you maintain use that information

The answer to both questions is: by using variables in the pkg-config file. Sadly, there’s still some confusion as to how those variables work, so this is my attempt at clarifying the issue.

Defining variables in pkg-config files

The typical preamble stanza of a pkg-config file is something like this:

prefix=/usr
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Each variable can reference other variables; for instance, in the example above, all the other directories are relative to the prefix variable.

Those variables can be extracted via pkg-config itself:

$ pkg-config --variable=includedir project-a

As you can see, the --variable command line argument will automatically expand the ${prefix} token with the content of the prefix variable.

Of course, you can define any and all variables inside your own pkg-config file; for instance, this is the definition of the giomoduledir variable inside the gio-2.0 pkg-config file:

giomoduledir=${libdir}/gio/modules

This way, the giomoduledir variable will be expanded to /usr/lib/gio/modules when asking for it.

If you are defining a path inside your project’s pkg-config file, always make sure you’re using a relative path!
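For instance, in a made-up project-a.pc (the variable name is purely illustrative), prefer the first form over the second:

# good: relative to another variable
projectmoduledir=${libdir}/project-a/modules

# bad: a hard-coded absolute path
projectmoduledir=/usr/lib/project-a/modules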

We’re going to see why this is important in the next section.

Using variables from pkg-config files

Now, this is where things get complicated.

As I said above, pkg-config will expand the variables using the definitions coming from the pkg-config file; so, in the example above, getting the giomoduledir will use the prefix provided by the gio-2.0 pkg-config file, which is the prefix into which GIO was installed. This is all well and good if you just want to know where GIO installed its own modules, in the same way you want to know where its headers are installed, or where the library is located.

What happens, though, if your own project needs to install GIO modules in a shared location? More importantly, what happens if you’re building your project in a separate prefix?

If you’re thinking: “I should install it into the same location as specified by the GIO pkg-config file”, think again. What happens if you are building against the system’s GIO library? The prefix into which it has been installed is only going to be accessible by the administrator user; or it could be on a read-only volume, managed by libostree, so sudo won’t save you.

Since you’re using a separate prefix, you really want to install the files provided by your project under the prefix used to configure your project. That does require knowing all the possible paths used by your dependencies, hard coding them into your own project, and ensuring that they never change.

This is clearly not great, and it places additional burdens on your role as a maintainer.

The correct solution is to tell pkg-config to expand variables using your own values:

$ pkg-config \
> --define-variable=prefix=/your/prefix \
> --variable=giomoduledir \
> gio-2.0

This lets you rely on the paths as defined by your dependencies, and does not attempt to install files in locations you don’t have access to.

Build systems

How does this work, in practice, when building your own software?

If you’re using Meson, you can use the get_pkgconfig_variable() method of the dependency object, making sure to replace variables:

gio_dep = dependency('gio-2.0')
giomoduledir = gio_dep.get_pkgconfig_variable(
  'giomoduledir',
  define_variable: [ 'libdir', get_option('libdir') ],
)

This is the equivalent of the --define-variable/--variable command line arguments.

If you are using Autotools, sadly, the PKG_CHECK_VAR m4 macro won’t be able to help you, because it does not allow you to expand variables. This means you’ll have to deal with it in the old fashioned way:

giomoduledir=`$PKG_CONFIG --define-variable=libdir=$libdir --variable=giomoduledir gio-2.0`

Which is annoying, and yet another reason why you should move off from Autotools and to Meson. 😃


All of this, of course, works only if paths are expressed as locations relative to other variables. If that does not happen, you’re going to have a bad time. You’ll still get the variable as requested, but you won’t be able to make it relative to your prefix.

If you maintain a project with paths expressed as variables in your pkg-config file, check them now, and make them relative to existing variables, like prefix, libdir, or datadir.

If you’re using Meson to generate your pkg-config file, make sure that the paths are relative to other variables, and file bugs if they aren’t.

March 14, 2018

Reflections on the GNOME 3.28 Release Video

I just flipped the switch for the 3.28 Release Video. I’m really excited about all the new awesome features the community has landed, but I am a bit sad that I don’t have time to put more effort into the video this time around. A busy schedule collided with technical difficulties in recording some of the apps. When I was staring at my weekly schedule on Monday, there didn’t seem to be much chance of a release video being published at all.

However, in the midst of all that I decided to take this up as a challenge and see what I could come up with given the 2-3 days time. In the end, I identified some time/energy demanding issues I need to find solutions to:

  1. Building GNOME Apps before release and recording them is painful and prone to errors and frustration. I hit errors when upgrading Fedora Rawhide, and even after updating, many apps were not on the latest version. Flatpak applications are fortunately super easy for me to deal with, but not all applications are available as flatpaks. And ideally I will need to set up a completely clean environment, since many apps draw on content in the home folder. Also, currently I need to post-process all the raw material to get the transparent window films.
  2. I ran out of memory (8GB) several times, and it’s almost faster to hold the power button down and boot again than to wait for Linux memory handling to deal with it. I will definitely need to find a solution to this – it builds up a lot of frustration for me.

I am already working on a strategy for the first problem. A few awesome developers have helped me record some of the apps in the past and this has been really helpful to deal with this. I’m trying to make a list of contacts I need to get in touch with to get these recordings done, and I need to send out emails in time with the freezes in the release cycle. It makes my work and the musician’s work much easier if we know exactly what will go in the video and for how long. I also had a chat with Felipe about maybe making a gnome shell extension tool which could take care of setting wallpaper, recording in the right resolution and uploading to a repository somewhere. As for the second problem, I think I’m going to need a new laptop or upgrade my current one. I definitely have motivation to look into that based on this experience now, hehe..

“Do you have time for the next release video?” You might ask and that is a valid question. I don’t see the problem to be time, but more a problem of spending my contribution energy effectively. I really like making these videos – but mainly the animation and video editing parts of it. Building apps, working around errors and bugs, post-processing and all that just to get the recording assets I need, that’s the part that I currently feel takes up the most of my contribution energy. If I can minimize that, I think I will have much more creative energy to spend on the video itself. Honestly, all the awesome contributions in our GNOME Apps and components really deserve that much extra polish.

Thanks everyone for helping with the video this time around!

Builder Nightly

One of the great aspects of the Flatpak model, apart from separating apps from the OS, is that you can have multiple versions of the same app installed concurrently. You can rely on the stable release while trying things out in the development or nightly version. This creates a need to easily tell the two versions apart when launching them from the shell.

I think Mozilla has set a great precedent on how to manage multiple version identities.

Thus came the desire to spend a couple of nights working on the Builder nightly app icon. While we’ve generally tried to simplify app icons to match what’s happening on the mobile platforms and trickling down to the older desktop OSes, I’ve decided to retain the 3D workflow for the Builder icon. Mainly because I want to get better at it, but also because it’s a perfect platform for kit bashing.

For Builder specifically I’ve identified some properties I think should describe the ‘nightly’ icon:

  • Dark (nightly)
  • Modern (new stuff)
  • Not as polished – dangling cables, open panels, dirty
  • Unstable / indicating it can move (wheels, legs …)

Next up is taking a stab at a few more apps, and then it’s time to develop some guidelines for these nightly app icons and emphasize them with some Shell styling. Overlaid emblems haven’t particularly worked in the past, but perhaps some tag style for the label could do.

Latency in Digital Audio

We've come a long way since Alexander Graham Bell, and everything's turned digital.

Compared to analog audio, digital audio processing is extremely versatile, is much easier to design and implement than analog processing, and also adds effectively zero noise along the way. With rising computing power and dropping costs, every operating system has had drivers, engines, and libraries to record, process, playback, transmit, and store audio for over 20 years.

Today we'll talk about some of the differences between analog and digital audio, and how the widespread use of digital audio adds a new challenge: latency.

Analog vs Digital

Analog data flows like water through an empty pipe. You open the tap, and the time it takes for the first drop of water to reach you is the latency. When analog audio is transmitted through, say, an RCA cable, the transmission happens at the speed of electricity and your latency is:

wire length/speed of electricity

This number is ridiculously small, especially when compared to the speed of sound. An electrical signal takes 0.001 milliseconds to travel 300 metres (984 feet). Sound takes 874 milliseconds (almost a second).

All analog effects and filters obey similar equations. If you're using, say, an analog pedal with an electric guitar, the signal is transformed continuously by an electrical circuit, so the latency is a function of the wire length (plus capacitors/transistors/etc), and is almost always negligible.

Digital audio is transmitted in "packets" (buffers) of a particular size, like a bucket brigade, but at the speed of electricity. Since the real world is analog, this means to record audio, you must use an Analog-Digital Converter. The ADC quantizes the signal into digital measurements (samples), packs multiple samples into a buffer, and sends it forward. This means your latency is now:

(wire length/speed of electricity) + buffer size

We saw above that the first part is insignificant, what about the second part?

Latency is measured in time, but buffer size is measured in bytes. For 16-bit integer audio, each measurement (sample) is stored as a 16-bit integer, which is 2 bytes. That's the theoretical lower limit on the buffer size. The sample rate defines how often measurements are made, and these days, is usually 48KHz. This means each sample contains 0.021ms of audio. To go lower, you need to increase the sample rate to 96KHz or 192KHz.

However, when general-purpose computers are involved, the buffer size is almost never lower than 32 samples, and is usually 128 samples or larger. At 48KHz, a 32-sample buffer is 0.67ms of audio and a 128-sample buffer is 2.67ms. This is our buffer size and hence the base latency while recording (or playing) digital audio.
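To make the arithmetic above concrete, here is the back-of-the-envelope calculation (awk is used purely for convenience):

# latency in ms = samples per buffer / sample rate * 1000
awk 'BEGIN { printf "%.2f ms\n", 32 / 48000 * 1000 }'    # prints 0.67 ms
awk 'BEGIN { printf "%.2f ms\n", 128 / 48000 * 1000 }'   # prints 2.67 ms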

Digital effects operate on individual buffers, and add latency depending on how much CPU processing time the effect requires for each buffer. Such effects may also add latency if the algorithm used requires it, but that's the same with analog effects.

The Digital Age

So everyone's using digital. But isn't 2.67ms a lot of additional latency?

It might seem that way till you think about it in real-world terms. Sound travels less than a meter (3 feet) in that time, and that sort of delay is completely unnoticeable by humans; otherwise we'd notice people's lips moving before we heard their words.

In fact, 2.67ms is too small for the majority of audio applications!

To process such small buffer sizes, you'd have to wake the CPU up 375 times a second, just for audio. This is highly inefficient and wastes a lot of power. You really don't want that on your phone or your laptop, and it is completely unnecessary in most cases anyway.

For instance, your music player will usually use a buffer size of ~200ms, which is just 5 CPU wakeups per second. Note that this doesn't mean that you will hear sound 200ms after hitting "play". The audio player will just send 200ms of audio to the sound card at once, and playback will begin immediately.

Of course, you can't do that with live playback such as video calls—you can't "read-ahead" data you don't have. You'd have to invent a time machine first. As a result, apps that use real-time communication have to use smaller buffer sizes because that directly affects the latency of live playback.

That brings us back to efficiency. These apps also need to conserve power, and 2.67ms buffers are really wasteful. Most consumer apps that require low latency use 10-15ms buffers, and that's good enough for things like voice/video calling, video games, notification sounds, and so on.

Ultra Low Latency

There's one category left: musicians, sound engineers, and other folk that work in the pro-audio business. For them, 10ms of latency is much too high!

You usually can't notice a 10ms delay between an event and its sound, but when making music, you can hear it when two instruments are out of sync by 10ms or when the sound for an instrument you're playing is delayed. Instruments such as the snare drum are more susceptible to this problem than others, which is why the stage monitors used in live concerts must not add any latency.

The standard in the music business is to use buffers that are 5ms or lower, down to the 0.67ms number that we talked about above.

Power consumption is absolutely no concern, and the real problems are the accumulation of small amounts of latencies everywhere in your stack, and ensuring that you're able to read buffers from the hardware or write buffers to the hardware fast enough.

Let's say you're using an app on your computer to apply digital effects to a guitar that you're playing. This involves capturing audio from the line-in port, sending it to the application for processing, and playing it from the sound card to your amp.

The latency while capturing and outputting audio are both multiples of the buffer size, so it adds up very quickly. The effects app itself will also add a variable amount of latency, and at 2.67ms buffer sizes you will find yourself quickly approaching a 10ms latency from line-in to amp-out. The only way to lower this is to use a smaller buffer size, which is precisely what pro-audio hardware and software enables.

The second problem is that of CPU scheduling. You need to ensure that the threads that are fetching/sending audio data to the hardware and processing the audio have the highest priority, so that nothing else will steal CPU-time away from them and cause glitching due to buffers arriving late.

This gets harder as you lower the buffer size because the audio stack has to do more work for each bit of audio. The fact that we're doing this on a general-purpose operating system makes it even harder, and requires implementing real-time scheduling features across several layers. But that's a story for another time!

I hope you found this dive into digital audio interesting! My next post will be about my journey in implementing ultra low latency capture and render on Windows in the WASAPI plugin for GStreamer. This was already possible on Linux with the JACK GStreamer plugin and on macOS with the CoreAudio GStreamer plugin, so it will be interesting to see how the same problems are solved on Windows. Tune in!

March 13, 2018

Network Stats Makes Its Way to Libgtop

Hey there! If you are reading this then network stats are probably of some interest to you; and even if they aren't, just recall that while requesting this page you had your share of packets being transferred over the vast network and delivered to your system. I guess now you'd like to check out the work which has been going on in Libgtop and put the network stats details to your personal use.

This post is going to be a brief update about what’s new in Libgtop

Crux of the NetStats Implementation

The implementation which I’ve used requires intensive use of pcap handles to start a capture on your system and segregate the packets into the process they belong to. The following part is a detailed explanation of this , so in case you feel to just skip the details jump to the next part.

Flow of the setup is as follows:

  • Initialize pcap handles for the different interfaces on the system.
  • Start packet dispatching on each pcap handle.

    Note: These handles have been set to capture packets in non-blocking mode.

  • As we get a packet, start processing it.
  • The capture is repeated at a time interval set in the DBus interface.

Assigning Packets to their Respective Processes

In my opinion this was the coolest part of the entire project, which gave me the liberty to filter the packets until I'd finally atomized them. This felt like a recursive butchering of the packets without having a formal medical science qualification. So bear in mind you can flaunt that you too can operate like doctors, but only on inanimate objects 😜 . Well, jokes aside, coming to the technical aspect:

What the packet says is: keep discarding the headers prepended to it until you reach the desired header, in our case the TCP header. The flow of parsing was somewhat simple:

It required checking the following headers in the sequence below:

  • Linktype: Ethernet
  • EtherType:
    • IP
    • IPv6
  • TCP

A bit of Network Stats design dosage

Just so that you are in sync with how my implementation seeks to do what it should do, the design is as follows:

We know that every process creates various connections. A socket's details generally look like:

src ip (fixed) : src port (variable) - dest ip (fixed) : dest port (variable)

These sockets are listed in /proc/net/tcp (a more detailed explanation of that file is worth reading). The packet headers give us the connection details; we just have to assign them to the correct process. This means each process has a list of connections, and each connection has a list of packets which in turn holds all the relevant packets. So getting network stats for a process is as simple as summing up the packet length details in the packet list for each connection in the connection list for that process.

Note: Stale packets aren't considered while summing up.

This is what the design looks like:


TCP parsing

The parameters passed to this callback are:

  • The TCP header has information like the source port and the destination port.
  • The pcap packet header has the length of the packet.
  • The source and destination IPs are stored as members of the packet_handle struct.

Given that we have all the necessary details, the packet is now initialized and all its fields are assigned values using the above mentioned parameters passed to the TCP parsing callback. Next we check which connection to put this new packet into; for that we make use of a reference packet bound to each connection. After adding the packet to the connection, if we had to create a new connection we also need to add it to the relevant process. For doing this we use the inode-to-PID mapping explained in the earlier post.

Getting stats

In every refresh cycle getting stats is as simple as just summing up packet lengths for a given process.

Choosing interface to expose the libgtop API

The next thing of concern was how to make this functionality available to other applications.

  • Daemon already in libgtop for other OSs

    Libgtop already had a daemon for other OSes to fetch details that require escalated privileges, although it wasn't configured to run on Linux. Starting the packet capture needs root permissions, so we needed something like that to enable us to do so. After discussions with Felipe and Robert it was decided that we start the daemon and access the stats over DBus.

  • Using DBus

    But we encountered some issues launching the capture as a root process, so we decided to switch to running a systemd service and exposing a DBus interface through which we not only get the stats but can also invoke the capture from the application. Later we'll move to using the daemon provided by libgtop on Linux so that we are able to get historical stats too.

API exposed through DBus

dbus interface

  • GetStats: returns a dictionary of PIDs with the corresponding bytes sent and received.
  • InitCapture:
    • Sets up the pcap handles before starting the capture and then invokes a refresh of the capture every second (this is subject to change).

    • During every refresh cycle the stats are logged into a singleton GPtrArray which is used by GetStats to send the stats over DBus.

  • Set and Reset capture: these just start or stop the capture initiated by InitCapture. On the Usage end this is invoked based on whether the network subview is active or not. While the network subview is active the capture is kept ongoing, otherwise we break out of the capture.

Setting up the system bus

To set up the system bus, these two files had to be added at the following paths:

  • org.gnome.GTopNetStats.service in /usr/share/dbus-1/system-services


  • org.gnome.GTop.NetStats.conf in /etc/dbus-1/system.d


Inspecting using D-Feet

Here’s what we see on inspecting the interface using d-feet


GetStats output : {pid:(bytes sent, bytes recv)}
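If you prefer the command line to D-Feet, introspection works too; the bus name below is taken from the service file above, and the object path is just a starting point for the recursive lookup:

gdbus introspect --system --dest org.gnome.GTopNetStats --object-path / --recurse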


You might be done with reading all these implementational details but the most important thing I haven’t mentioned until now is everyone whose mind has been behind helping me do all this.

(Keeping it to be in lexical order)

I’m extremely grateful to Felipe Borges and Robert Roth for their constant support and reviews.

Felipe’s design-related corrections, like switching the implementation to singletons, and on the Libgtop end Robert Roth helping me with those quick hacks with the daemon and DBus even after his working hours, working on weekends to finally get things done, is what makes me indebted to them.

I have a tinge of guilt for pinging you all on weekends too.

And how could I forget to mention the entire community in general, because members like Aberto and Carlos were also among those I sought help from.

If you did reach this fag end without getting bored, let me tell you that I'm yet to post the details about the Usage integration.

Feel free to check the work.

Stay tuned !🙂

March 12, 2018

2018-03-12 Monday

  • Practice with E. scales decaying over the weekend; hmm. Mail chew, sync with Miklos. Lunch with J. Sync with kendy, call with Jeroen. TDF call. Stories in the evening - Asimov for E. Max Lucado for H.

Karton 1.0

After more than a year using Karton regularly, I released version 1.0 with the last few features I was missing for my use case.

Karton is a tool which can transparently run Linux programs on a different Linux distribution, on macOS, or on a different architecture.
By using Docker, Karton manages semi-persistent containers with easy to use automatic folder sharing and lots of small details which make the experience smooth. You shouldn’t notice you are using command line programs from a different OS or distro.

Karton logo

If you are interested, check the Karton website.

March 11, 2018

2018-03-11 Sunday

  • Up, cooked breakfast in bed for J. with M. - presents, cards, a banner, etc. for the sweetheart. Out to All Saints, played bass; enjoyed Max's sermon. Home for roast lunch.
  • Slugged; digested; played guitar much of the afternoon while babes played lego (a break from minetest), and Smash-Up. Julie over for tea.

March 10, 2018

webkitgtk in Debian Stretch: Report Card

webkitgtk is the GTK+ port of WebKit. webkitgtk provides web functionality for many things including GNOME Online Accounts’ login panels; Evolution’s HTML email editor and viewer; and the engine for the Epiphany web browser (also known as GNOME Web).

Last year, I announced here that Debian 9 “Stretch” included the latest version of webkitgtk (Debian’s package is named webkit2gtk). At the time, I hoped that Debian 9 would get periodic security and bugfix updates. Nine months later, let’s see how we’ve been doing.

Release History

Debian 9.0, released June 17, 2017, included webkit2gtk 2.16.3 (up to date).

Debian 9.1 was released July 22, 2017 with no webkit2gtk update (2.16.5 was the current release at the time).

Debian 9.2, released October 8, 2017, included 2.16.6 (There was a 2.18.0 release available then but for the first stable update, we kept it simple by not taking the brand new series.)

Debian 9.3 was released December 9, 2017 with no webkit2gtk update (2.18.3 was the current release at the time).

Debian 9.4 released March 10, 2018 (today!), includes 2.18.6 (up to date).

Release Schedule

webkitgtk development follows the GNOME release schedule and produces new major updates every March and September. Only the current stable series is supported (although sometimes there can be a short overlap; 2.14.6 was released at the same time as 2.16.1). Distros need to adopt the new series every six months.

Like GNOME, webkitgtk uses even numbers for stable releases (2.16 is a stable series, 2.16.3 is a point release in that series, but 2.17.3 is a development release leading up to 2.18, the next stable series).

There are webkitgtk bugfix releases, approximately monthly. Debian stable point releases happen approximately every two or three months (the first point release was quicker).

In a few days, webkitgtk 2.20 will be released. Debian 9.5 will need to include 2.20.1 (or 2.20.2) to keep users on a supported release.

Report Card

From five Debian 9 releases, we have been up to date in 2 or 3 of them (depending on how you count the 9.2 release).

Using a letter grade scale, I think I’d give Debian a B or B- so far. But this is significantly better than Debian 8, which offered no webkitgtk updates at all except through backports. In my grading, Debian could get an A- if we consistently updated webkitgtk in these point releases.

To get a full A, I think Debian would need to push the new webkitgtk updates (after a brief delay for regression testing) directly as security updates without waiting for point releases. Although that proposal has been rejected for Debian 9, I think it is reasonable for Debian 10 to use this model.

If you are a Debian Developer or Maintainer and would like to help with webkitgtk updates, please get in touch with Berto or me. I, um, actually don’t even run Debian (except briefly in virtual machines for testing), so I’d really like to turn over this responsibility to someone else in Debian.


I find the Repology webkitgtk tracker to be fascinating. For one thing, I find it humorous how the same package can have so many different names in different distros.

March 09, 2018

Flathub Experience: Adding an App

Flathub is a new distribution channel for Linux desktop apps: truly distro-agnostic, unifying across the abundance of Linux distributions. I had been planning for a long time to add an application to Flathub and see what the experience is like, especially compared to traditional distro packaging (I’m a Fedora packager). And I finally got to it this last week.


In Fedora I maintain PhotoQt, a very fast image viewer with a very unorthodox UI. Its developer is very responsive and open to feedback. Already back in 2016 I suggested he provide PhotoQt as a flatpak. He did so and found making a flatpak really easy. However, that was before Flathub, so he had to host his own repo.

Last week I was notified about a new release of PhotoQt, so I prepared updates for Fedora and noticed that the Flatpak support became “Coming soon” again. So I was like “hey, let’s get it back and to Flathub”. I picked up the two-year-old flatpak manifest, and started rewriting it to successfully build with the latest Flatpak and meet Flathub requirements.

First I updated the dependencies. You add dependencies to the manifest in a pretty elegant way, but what’s really time consuming is getting checksums of the official archives. Most projects don’t offer them at all, so you have to download the archive and generate the checksum yourself. And you have to do it with every update of that dependency. I’d love to see some repository of modules. Many apps share the same dependencies, so why do the same work again and again with every manifest?

Need to bundle the latest LibRaw? Go to the repository and pick the module info for your manifest:

"name": "libraw",
"cmake": false,
"builddir": true,
"sources": [ { "type": "archive", "url": "", "sha256":"56aca4fd97038923d57d2d17d90aa11d827f1f3d3f1d97e9f5a0d52ff87420e2" } ]

And on the top of such a repo you can actually build a really nice tooling. You can let the authors of apps add dependencies simply by picking them from the list and you can generate the starting manifest for them. And you could also check for dependency updates for them. LibRaw has a new version, wanna bundle it, and see how your app builds with it? And the LibRaw module section of your manifest would be replaced by the new one and a build triggered.

Of course such a repo of modules would have to be curated because one could easily sneak in a malicious module. But it would make writing manifests even easier.

Besides updating dependencies I also had to change the required runtime. Back in 2016 KDE only had a testing runtime without any versioning. Flathub now includes KDE runtime 5.10, so I used it. PhotoQt also uses “photoqt” in all file names and Flatpak/Flathub now requires it in the reverse-DNS format: org.qt.photoqt. Fortunately flatpak-builder can rename it for you, you just need to state it in the manifest:

"rename-desktop-file": "photoqt.desktop",
"rename-appdata-file": "photoqt.appdata.xml",
"rename-icon": "photoqt",

Once I was done with the manifest, I looked at the appdata file. PhotoQt has it in pretty good shape. It was submitted by me when I packaged it for Fedora. But there were still a couple of things missing which are required by Flathub: OARS and release info. So I added them.

I proposed all the changes upstream and at this point PhotoQt was pretty much ready for submitting to Flathub. I never intended to maintain PhotoQt in Flathub myself. There should be a direct line between the app author and users, so apps should be maintained by app authors if possible. I knew that upstream was interested in adding PhotoQt to Flathub, so I contacted the upstream maintainer and asked him whether he wanted to pick it up and go through the Flathub review process himself or whether I should do it and then hand over the maintainership to him. He preferred the former.

The review was pretty quick and it only took 2 days between submitting the app and accepting it to Flathub. There were three minor issues: 1. the reviewer asked if it’s really necessary to give the app access to the whole host, 2. app-id didn’t match the app name in the manifest (case sensitivity), 3. by copy-pasting I added some spaces which broke the appdata file and of course I was too lazy to run validation before submitting it.
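Validating the appdata file before submitting is a one-liner, by the way; the file name below is an assumption based on the renamed app id above:

appstream-util validate-relax org.qt.photoqt.appdata.xml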

And that was it. Now PhotoQt is available in Flathub. I don’t remember how much time exactly it took me to get PhotoQt to Fedora, but I think it was definitely more and also the spec file is more complex than the flatpak manifest although I prefer the format of spec files to json.

Is not your favorite app available in Flathub? Just go ahead, flatpak it, and then talk to upstream, and try to hand the maintainership over to them.

Device Integration

I’ve been working on some groundwork features to sneak into Builder 3.28 so that we can build upon them for 3.30. In particular, I’ve started to land device abstractions. The goal around this is to make it easier to do cross-architecture development as well as support devices like phones, tablets, and IoT.

For every class and kind of device we want to support, some level of integration work will be required for a great experience. But a lot of it shares some common abstractions. Builder 3.28 will include some of that plumbing.

For example, we can detect various Qemu configurations and provide cross-architecture building for Flatpak. This isn’t using a cross-compiler, but rather a GCC compiled for aarch64 as part of the Flatpak SDK. This is much slower than a proper cross-compiler toolchain, but it does help prove some of our abstractions are working correctly (and that was the goal for this early stage).

Some of the big things we need to address as we head towards 3.30 include running the app on a remote device as well as hooking up gdb. I’d like to see Flatpak gain the support for providing a cross-compiler via an SDK extension too.

I’d like to gain support for simulators in 3.30 as well. My personal effort is going to be based around GNU-based systems. However, if others join in and want to provide support for other sorts of devices, I’d be happy to get the contributions.

Anyway, Builder 3.28 can do some interesting things if you know where to find them. So for 3.30 we’ll polish them up and make them useful to vastly more people.

March 08, 2018

Fedora Atomic Workstation: Trying things out the easy way

If you’ve followed my posts about my first steps with Fedora Atomic Workstation, you may have seen that I’ve had to figure out how to make codecs work in the firefox that is included in the OS image. And then I had to work around some issues with the layered packages that I used for that when I rebased to a newer OS.

But maybe there is a better way to go about this?

Flatpak is the preferred way to install applications on the Atomic Workstation, so maybe that is the way to go. So far, all the flatpaks I’ve installed have come from Flathub, which is a convenient place to find all sorts of desktop apps. But firefox is not there (yet).

Thankfully, flatpak is fundamentally decentralized by design. There is nothing special about Flathub, flatpak can easily install apps from many different sources. A quick search yielded this unofficial firefox flatpak repository, with easy-to-follow instructions. Just clicking on this link will also work. If you open it with a sandboxed firefox, you will see the URI portal in action:

One nice aspect of this is that the nightly firefox is not just a one-off package that I’ve installed, it will receive updates just like any other app on my system. An important detail when it comes to web browsers!

Another nice aspect is that I did not have to remove the firefox that is already installed (I couldn’t anyway, since it is part of the immutable OS image). The two can peacefully coexist.

This is true not just for firefox, but in general: by isolating apps and their dependencies from each other, flatpak makes it very easy to try out different versions of apps without pain.

Want to try some bleeding-edge GNOME apps instead of the stable versions that are available on Flathub? No problem,  just enable the GNOME nightly repository like this:

flatpak remote-add --user --from gnome-apps-nightly

And then the nightly apps will be available for installation next to their stable counterparts. Note that GNOME software will indicate the source for each app in the search results.

Trying out new software has never been safer and easier!

Sorry for the wrong-colored screenshots in this post – there is still a price to pay for living on the bleeding edge of rawhide, even if Atomic Workstation makes it much safer.

What 3 Words?

I dig online maps like everyone else, but it is somewhat clumsy sharing a location. The W3W service addresses the issue by chunking up the whole world into 3x3m squares and assigning each a name (supposedly around 57 trillion of them). Sometimes it’s a bit of a tongue twister, but most of the time it’s fun to say to meet at a “massive message chuckle” for some FPV flying. I’m really surprised this didn’t take off.

March 07, 2018

Stop trying to guess display language based on keyboard layout

A very common setup among non-English speaking computer power users is to have display language in English but have a country specific keyboard layout. For example I'm using the Finnish keyboard because I need to be able to easily type our special letters ö and ä. If your computer comes from someone else (such as your employer) there is not even the possibility to have a non-Finnish keyboard layout.

All of this has worked for tens of years flawlessly. However recently I have noticed that many programs on multiple platforms seem to alter their display language based on keyboard layout, which is just plain wrong. Display language should be chosen based on the display language choice and nothing else.

I first noticed this in the output of ls, which one would imagine to have reached stability ages ago.

Here we see that ls has chosen to print months in Finnish. Why? I have no idea. This was weird on its own, but then it spread to other operating systems as well. For no reason at all the existing Gimp install switched its display language to Finnish.

Let me reiterate: no setting was changed and the version of Gimp was exactly the same. One day it just decided to change its language to Finnish.

Then the issue spread to Windows.

VLC on Windows has chosen on my behalf to show its menus in Finnish in a completely English Windows 7 install. The only things it could use for language detection are geolocation and keyboard settings and both of these are terrible ideas. The OS has a language. It is very clearly specified. All other applications obey it, VLC should too.

The real kicker here is that Gimp on Windows displays English text correctly, as does VLC on macOS.

The newest case is the new GNOME-ified Ubuntu, whose lock screen stubbornly displays dates in the wrong language. It also does not conjugate the words correctly and uses that weird American month/day date ordering, which is wrong for Finnish.

What is causing this?

I don't know. But whoever is behind this: please stop doing that.

March Madness brackets in PHP

It's March, and that means I've written yet another article about a program that helps you fill in your NCAA "March Madness" game brackets. This article is a bit different. Where my previous articles described how to write a Bash script to "predict" March Madness brackets, this year's article walks through a PHP script that generates a full bracket.

For those of you who aren't in the U.S., we have a game here called Basketball. And every year in March, the university level (called the National Collegiate Athletic Association, or NCAA) holds a single-elimination championship of the different university teams. If you follow basketball, you can fill out a bracket to "predict" the championship outcomes at each round.

Some people really get into creating their brackets, and they closely follow the teams over the season to see how they are doing. I used to work in an office that cared a lot about March Madness (although in honesty, the office I'm in now doesn't really follow the games that much—but forgive me that I didn't update my Linux Journal article to say so.) I didn't want to miss out on the fun in following March Madness.

I don't follow basketball, but one year I decided to write a little random-number generator to predict the games for me. I used this program to fill out my brackets. And a few years ago, I started writing about it.

Read the article, and use the PHP program to fill out your own March Madness bracket! Let me know how your bracket fared in this year's games.
image: Wikimedia (public domain)

March 06, 2018

Rewriting Git Commit Messages

So this week I started submitting a seventy-odd-commit-long branch where every commit was machine generated (but hand reviewed) with the amazing commit message of "component: refresh patches". Whilst this was easy to automate, the message isn't acceptable to merge, and I was facing the prospect of copy/pasting the same commit message over and over during an interactive rebase. That did not sound like fun. I ended up writing a tiny tool to do this and thought I'd do my annual blog post about it, mainly so I can find it again when I need to do it again next year...

Wise readers will know that Git can rewrite all sorts of things in commits programmatically using git-filter-branch, and this has a --msg-filter argument which sounds like just what I need. But first a note: git-filter-branch can destroy your branches if you're not careful!

git filter-branch --msg-filter has a simple behaviour: give it a command to be executed by the shell, the old commit message is piped in via standard input, and whatever appears on standard output is the new commit message. Sounds simple but in a way it's too simple, as even the example in the documentation has a glaring problem.

Anyway, this should work. I have a commit message in a predictable format (recipe: refresh patches) and a text editor containing a longer message suitable for submission. I could write a bundle of shell/sed/awk to munge from one to the other, but I decided to simply glue a few pieces of Python together instead:

import sys, re

input_re = re.compile(open(sys.argv[1]).read())
template = open(sys.argv[2]).read()

original_message = sys.stdin.read()
match = input_re.match(original_message)
if match:
    sys.stdout.write(template.format(**match.groupdict()))
else:
    sys.stdout.write(original_message)

Invoke this with two filenames: a regular expression to match on the input, and a template for the new commit message. If the regular expression matches then any named groups are extracted and passed to the template which is output using the new-style format() operation. If it doesn't match then the input is simply output to preserve commit messages.

This is my input regular expression:

^(?P<recipe>.+): refresh patches

And this is my output template:

{recipe}: refresh patches

The patch tool will apply patches by default with "fuzz", which is where if the
hunk context isn't present but what is there is close enough, it will force the
patch in.

Whilst this is useful when there's just whitespace changes, when applied to
source it is possible for a patch applied with fuzz to produce broken code which
still compiles (see #10450).  This is obviously bad.

We'd like to eventually have do_patch() rejecting any fuzz on these grounds. For
that to be realistic the existing patches with fuzz need to be rebased and

Signed-off-by: Ross Burton <>
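Before unleashing filter-branch, it is easy to sanity-check the script by piping a sample message through it; the script name here is hypothetical (use whatever you saved it as), and the two arguments are the regular expression and template files described above:

echo "glibc: refresh patches" | python3 rewrite-message.py input output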

A quick run through filter-branch and I'm ready to send:

git filter-branch --msg-filter ' input output' origin/master...HEAD

Input methods in GTK+ 4

GTK’s support for loadable modules dates back to the beginning of time, which is why GTK has a lot of code to deal with GTypeModules and with search paths, etc. Much later on, Alex revisited this topic for GVfs, and came up with the concept of extension points and GIO modules, which implement them.  This is a much nicer framework, and GTK 4 is the perfect opportunity for us to switch to using it.

Changes in GTK+ 4

Therefore, I’ve recently spent some time on the module support in GTK. The major changes here are the following:

  • We no longer support general-purpose loadable modules. One of the few remaining users of this facility is libcanberra, and we will look at implementing ‘event sound’ functionality directly in GTK+ instead of relying on a module for it. If you rely on loading GTK+ modules, please come and talk to us about other ways to achieve what you are doing.
  • Print backends are now defined using an extension point named “gtk-print-backend”, which requires the type GtkPrintBackend.  The existing print backends have been converted to GIO modules implementing this extension point. Since we have never supported out-of-tree print backends, this should not affect anybody else.
  • Input methods are also defined using an extension point, named “gtk-im-module”, which requires the GtkIMContext type.  We have dropped all the non-platform IM modules, and moved the platform IM modules into GTK+ proper, while also implementing the extension point.

Adapting existing input methods

Since we still support out-of-tree IM modules, I want to use the rest of this post to give a quick sketch of how an out-of-tree IM module for GTK+ 4 has to look.

There are a few steps to convert a traditional GTypeModule-based IM module to the new extension point. The example code below is taken from the Broadway input method.


We are going to load a type from a module, and G_DEFINE_DYNAMIC_TYPE is the proper way to define such types:

G_DEFINE_DYNAMIC_TYPE (GtkIMContextBroadway, gtk_im_context_broadway,
                       GTK_TYPE_IM_CONTEXT_SIMPLE)

Note that this macro defines a gtk_im_context_broadway_register_type() function, which we will use in the next step.

Note that dynamic types are expected to have a class_finalize function in addition to the more common class_init, which can be trivial:

static void
gtk_im_context_broadway_class_finalize (GtkIMContextBroadwayClass *class)
{
}

Implement the GIO module API

In order to be usable as a GIOModule, a module must implement three functions: g_io_module_load(), g_io_module_unload() and g_io_module_query() (strictly speaking, the last one is optional, but we’ll implement it here anyway).

void
g_io_module_load (GIOModule *module)
{
  g_type_module_use (G_TYPE_MODULE (module));
  gtk_im_context_broadway_register_type (G_TYPE_MODULE (module));
  /* the extension name and priority here are illustrative */
  g_io_extension_point_implement ("gtk-im-module", GTK_TYPE_IM_CONTEXT_BROADWAY, "broadway", 10);
}

void
g_io_module_unload (GIOModule *module)
{
}

char **
g_io_module_query (void)
{
  char *eps[] = { "gtk-im-module", NULL };
  return g_strdupv (eps);
}

Install your module properly

GTK+ will still look in $LIBDIR/gtk-4.0/immodules/ for input methods to load, but GIO only looks at shared objects whose name starts with “lib”, so make sure you follow that convention.


And that’s it!

Now GTK+ 4 should load your input method, and if you run a GTK+ 4 application with GTK_DEBUG=modules, you should see your module show up in the debug output.


March 05, 2018

Emojis in VTE

It’s been one of those weeks when gnome-terminal and vte keep stumbling on some really weird edge cases, so it was a happy moment when I saw this on Fedora 27 Workstation.

Emoji rendered in gnome-terminal

March 04, 2018

Compiling Cargo crates natively with Meson

Recently we have been having discussions about how Rust and Meson should work together, especially for mixed language projects. One thing which multiple people have told me (over a time span of several years, actually) is that Rust is Special in that everyone uses crates for everything. Thus there is no point in having any sort of Rust support, the only true way is to blindly call Cargo and have it do everything exactly the way it wants to.

This seems like a reasonable recommendation so I did what every reasonable person would do and accepted this as is.

David Addison wearing cardboard x-ray goggles with caption "This is me being completely reasonable".

But then curiosity takes hold of you and you start to wonder. Is that really the case?

Converting Cargo manifests to Meson projects

The basic setup of a Cargo project is fairly straightforward. The file format is mostly declarative and most crates are simple libraries that have a few source files and a bunch of other crates they link against. Meson has the same primitives, so could they be automatically converted?

It turns out that for simple examples you can. Here is a sample repo that downloads the Itoa crate from github, converts the Cargo build definition into a Meson project and builds it as a subproject (with Meson, not Cargo) of the main project that uses it. This prototype turned out to require 71 lines of Python.

What about dependencies other than itoa?

The script currently only works for itoa, because crates.io does not seem to provide a web API for queries and the entire site is created with JavaScript, so you can't even do web scraping easily. To get this working properly the only thing you'd need is a function to get from (crate name, version) to the git repo.

What about dependencies of dependencies?

They can be easily grabbed from the Cargo.toml file. Combined with the above, they could be downloaded and converted in the same fashion.

What doesn't work?

A lot. Unit tests are not built nor run, but they could be added fairly easily. This would require adding compile options so the actual source could be built with the unittest flags. This is some amount of work but Meson already supports a similar feature for D so adding it should not be a huge amount of work. Similarly docs are not generated.

What about build.rs?

Cargo provides a fairly simple project model and everything more complex should be handled by writing a program that does everything else necessary. This suffers from the same disadvantages as every Turing complete build system ever has, and these scripts are not in general possible to convert automatically.

However based on documentation the common case seems to be to call into external build tools to build dependency libraries in other languages. In a build system that builds both parts at the same time it would be possible to create a better UX for this (but again would obviously not be something you can convert automatically).

Could this actually work in practice with real world projects?

It might. It might not. Ignoring the previous segment, no immediate showstopper has presented itself thus far. One might appear in the future. But you never know until you try.

One Widget to Adapt Them All and to The Librem 5 Port Them

In my previous article I shared my plans to help porting existing GTK+ applications to Purism's upcoming Librem 5 phone without having to fork them. This article will present the GTK+ widget I developed for Purism to make this happen.

For more information on what Purism is working on for the Librem 5, please check Nicole Faerber's latest article.

C'est pas sorcier

The underlying idea is to allow applications to dynamically switch between the two main GNOME application layouts: a row of panels (each panel being the view of an element from the previous one) and a stack of panels. The goal isn't to change applications using the stack paradigm but the ones using the row one, allowing them to reach smaller sizes and to remain usable on constrained screens while keeping their initial paradigm and design when the screen space is sufficient. The development cost of porting applications to this adaptive design should be as low as possible.

To achieve that, I wrote a GTK+ widget which acts similarly to a GtkBox when there is plenty of space for its children, but which adapts automatically and acts similarly to a GtkStack when there isn't, displaying only one of its children. I called this widget HdyStackableBox and it is being developed for the libhandy library.
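
To give an idea of how it is meant to be used, here is a hypothetical sketch. The hdy_stackable_box_new() constructor, the header name and the panel widgets are assumptions made for illustration, following plain GTK+ container conventions rather than the actual in-development API.

#include <gtk/gtk.h>
#include <handy.h>

static GtkWidget *
build_adaptive_row (GtkWidget *sidebar_panel,
                    GtkWidget *content_panel)
{
  /* Hypothetical constructor name, assumed for this example. */
  GtkWidget *stackable = hdy_stackable_box_new ();

  /* Pack the panels as you would into a GtkBox; when the widget is
   * allocated less than the sum of their natural widths, it shows
   * only one of them, as a GtkStack would. */
  gtk_container_add (GTK_CONTAINER (stackable), sidebar_panel);
  gtk_container_add (GTK_CONTAINER (stackable), content_panel);

  return stackable;
}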

The widget's minimum size is the minimum size of its visible child, matching the minimum size of a GtkStack. The widget's natural size is the sum of the natural sizes of its children, matching the natural size of a GtkBox.

The threshold below which the widget will turn into a stack is the sum of the natural sizes of its children; hence, when allocated less than its natural size, the widget will turn into a stack. In box mode, the minimum size allocated to the children is their natural size. Combined with the previously defined threshold, this allows a collapsing priority to naturally emerge: when nesting HdyStackableBoxes, the wrapping one will collapse into a stack before the nested one does. This is what I consider the biggest feature of this widget: it offers adaptive behavior with no magic number involved.
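
To illustrate that, here is a rough sketch of how the width requests described above could be computed. This is not the actual libhandy code; the function and its parameters are assumptions made for the example.

#include <gtk/gtk.h>

static void
stackable_box_measure_width (GList     *children,      /* all of the widget's children */
                             GtkWidget *visible_child, /* the child shown in stack mode */
                             gint      *minimum,
                             gint      *natural)
{
  GList *l;

  *minimum = 0;
  *natural = 0;

  for (l = children; l != NULL; l = l->next)
    {
      gint child_min, child_nat;

      gtk_widget_get_preferred_width (l->data, &child_min, &child_nat);

      /* GtkStack-like minimum: only the visible child's minimum counts. */
      if (l->data == visible_child)
        *minimum = child_min;

      /* GtkBox-like natural: the sum of the children's naturals, which is
       * also the threshold below which the widget collapses into a stack. */
      *natural += child_nat;
    }
}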

Features of HdyStackableBox

HdyStackableBox is still in development; here is a list of the features already implemented.

  • Mode: box or stack.
  • Adaptive design.
  • Box homogeneity.
  • Stack homogeneity.
  • Mode transition:
    • None and slide animations.
    • Its duration is settable (250ms by default, like GtkRevealer).
  • Child transition:
    • None animation.
    • Its duration is settable (200ms by default, like GtkStack).
  • Named children.
  • Unlimited children.

Here is a list of features that may or may not be implemented later.

  • Draw on its own windows: the animations currently cause the widget to draw out of its allocated space.
  • Mode transition: slide over animation, similarly to hiding the left panel of DzlBin.
  • Child transition: slide animation.
  • Orientation: this is partially implemented.
  • Text direction: right-to-left languages should be supported.

Gimme! Gimme! Gimme!

The code is currently hosted at my fork of libhandy. You can test the widget by cloning the repo in GNOME Builder, moving to the wip/aplazas/stackablebox branch, building and running the project.

To test the widget in real world applications, I quickly tested it on GNOME Contacts. It is mostly ported and should work just fine; if not, don't hesitate to report any strange behavior to me, but only regarding the adaptive port.

GNOME Contacts with adaptive design and full navigation in stack mode

Adaptive GNOME Contacts running on an i.MX 6 development board and a phone screen.

I also tested it on Geary; there is no navigation implemented yet, but it's a good test case of a complex UI with one HdyStackableBox nested inside another.

Geary and its nested stackable boxes.

Adaptive design is useful even on the desktop for Geary.

Sorry for the graphical glitches; I suspect they are caused by a mix of out-of-bounds drawing and improper or missing resize and redraw requests, and they will be fixed. You can test these yourself by cloning, building and running these branches in GNOME Builder.

Keep in mind that my libhandy, GNOME Contacts and Geary forks are expected to disappear at some point in the future, and that I don't guarantee their quality as this is all in development.

/* TODO */

As you can see in the list of unimplemented features and in the videos, there is still a lot to do: polishing the widget to make it featureful and bug-free will take time, probably a few more weeks of focused development. In the meantime, feedback from designers on what this widget could be used for and, perhaps more importantly, what it shouldn't be used for is more than welcome!

Once ready, what about turning it into GtkStackableBox? That would help its adoption by important applications like Settings, which probably don't want an extra dependency just for a widget but which could then work on the Librem 5 without yet another major redesign.