August 16, 2022

Status update, 16/08/2022

Building Wheels

For the first time this year I got to spend a little paid time on open source work, in this case putting some icing on the delicious and nourishing cake that we call BuildStream 2.

If you’ve tried the 1.9x pre-releases you’ll have seen that they depend on a set of C++ tools under the name BuildBox. Some hot codepaths that were part of the Python core are now outsourced to helper tools, specifically data storage (buildbox-casd, buildbox-fuse) and container creation (buildbox-run-bubblewrap). These tools implement the remote-apis standards and are useful to other build tools; the only catch is that they are not yet widely available in distros, and neither are the BuildStream 2 prereleases.

Separately, BuildStream 2 has some other hot codepaths written with Cython. If you’ve ever tried to install a Python package from PyPI and wondered why a package manager is running GCC, the answer is usually that it’s installing a source package and has to build some Cython code with your system’s C compiler.

The way to avoid requiring GCC in this case is to ship prebuilt binary packages known as wheels. So that’s what we implemented for BuildStream 2 – and as a bonus, we can bundle prebuilt BuildBox binaries into these packages. The wheels have a platform compatibility tag of “manylinux_2_28_x86_64”, so they should work on any x86_64 host with GLIBC 2.28 or later.
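For the curious, pip picks a wheel by matching the tags encoded in its filename. Here's a minimal sketch of how such a filename decomposes; the filename below is hypothetical, for illustration only, not an actual BuildStream artifact name:

```python
# Split a wheel filename into its PEP 427 components:
# {name}-{version}(-{build})?-{python}-{abi}-{platform}.whl
def parse_wheel_name(filename):
    stem = filename[: -len(".whl")]
    parts = stem.split("-")
    # The build tag is optional, so take the tag triple from the end.
    name, version = parts[0], parts[1]
    python_tag, abi_tag, platform_tag = parts[-3:]
    return {
        "name": name,
        "version": version,
        "python": python_tag,
        "abi": abi_tag,
        "platform": platform_tag,
    }

# Hypothetical wheel name, for illustration only
tags = parse_wheel_name(
    "BuildStream-1.95.0-cp310-cp310-manylinux_2_28_x86_64.whl"
)
```

An installer accepts the wheel only if all three tags (Python, ABI, platform) are in its list of supported tags for the host.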

Connecting Gnomes

I didn’t participate in GUADEC this year for various reasons, but I’m very glad to see it was a success. I was surprised to see six writeups of the Berlin satellite event and only two of the main event in Mexico (plus one talk transcript and some excellent coverage in LWN) – are Europeans better at blogging? 🙂

I again saw folk mention that connections *between* the local, online and satellite events are lacking – I felt this at LAS this year – and I still think we should take inspiration from 2007 Twitter and put up a few TVs in the venue with chat and microblog windows open.

The story is that Twitter launched in 2006 to almost nobody, and it became a hit only after the company put up screens at a US music festival in 2007 displaying their site, where folk could live-blog their activities.

Hey old people, remember this?!

I’d love to see something similar at conferences next year where online participants can “write on the walls” of the venue (avoiding the ethically dubious website that Twitter has become).

On that note, I just released a new version of my Twitter Without Infinite Scroll extension for Firefox.

Riding Trains

I made it from Galicia to the UK overland, actually for the first time (I did the reverse journey by boat and van in 2018). It was about 27 hours of travel spread across 5 days, including a slow train from Barcelona into the Pyrenees and a night train onwards to Paris, and cost around 350€ in total. The trip went fine, unlike the plane I had booked for the return, which the airline cancelled without notification; the replacement flight+bus ended up costing a similar amount and taking nearly 15 hours. No further comment on that, but I can recommend the train ride!

High speed main line through the French Pyrenees

Adventure game graphics with DALL-E 2

I recently got access to OpenAI's DALL-E 2 instance. It's a lot of fun, but beyond its obvious application as a cornucopia of funny cat avatars, I think it's now fit to use in certain kinds of creative work.

There are already plenty of good articles out there on the model's strengths and weaknesses, so I won't go over that here other than to note that it's not a threat to high art. It's got an idea of what things look like and how they can visually fit together, but it's very vague on how they work (e.g. anatomy, architecture, the finer points of Victorian-era dinner etiquette, art critics), and object inpainting aside, it doesn't rise to the level of realism where I'd worry too much about the fake news potential either.

However, with human guidance and a carefully chosen domain, it can still do some very impressive things. I've suspected that adventure game graphics in the point-and-click vein could be one of those domains, and since I'm helping someone dear to me realize such a game, I had the excuse I needed to explore it a little and write this case study.


Point-and-click adventures make up a fairly broad genre with many different art styles. I've focused my attention on a sub-genre that hews close to the style of early 1990s Sierra and LucasArts adventure games. These would typically run at a screen resolution of 320×200 and appear pixelized, especially so on a modern display:

Space Quest IV (Sierra On-Line, 1991)
Indiana Jones and the Fate of Atlantis (LucasArts, 1992)

Contemporary game developers sometimes work at low resolutions, producing a similar effect:

The Last Door (The Game Kitchen, 2013-2017)
Kathy Rain (Clifftop Games, 2016)
Milkmaid of the Milky Way (Machineboy, 2017)

At first glance this seems restrictive (just ask H.R. Giger), but from a certain point of view, it's actually quite forgiving and confers lots of artistic license:

  • The perspective doesn't need to be realistic or even consistent, and is often tweaked for practical reasons, such as eliminating visual clutter, providing more space for the action or aligning better with the pixel grid.
  • Speaking of pixels, pixelization helps work around the fact that DALL-E can produce odd smudges and sometimes struggles with details. It also helps with manual retouching, since there aren't very fine details or textures to be precisely propagated.
  • Does your art look weird? Uncanny valley anxiety? Take a free tour courtesy of the entire genre. Feel your troubles float away as it throws an arm around you. And another arm. And another.

Ahem. What I'm trying to say is, this is a wonderful, fun genre with many degrees of freedom. We'll need them!

How to into the pixels

While you can tell DALL-E to generate pixel art directly, it's not even remotely up to the task; it just doesn't know how a pixel grid works. The result will tend to have some typical pixel art properties (flattened perspective, right angles, restricted palette with colors that "pop") wrapped in a mess of smudged rectangles of all sizes:

"hotel entrance in mexico, in the style of pixel art"

It's impressive in a "holy guacamole, it kind of understood what I meant" way, but even if you clean up the grid you don't stand a chance of getting a consistent style, and you have no control over the grid size.

Fortunately, pixelization can be easily split off from the creative task and turned over to a specialist tool. I used magick in my scripts:

$ magick in.png -adaptive-resize 25% -scale 400% out.png

It's worth trying different resampling filters. ImageMagick's -adaptive-resize operator produces nice and crisp output, but when downsampling by this much there may be even better options.
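If you want to compare filters systematically, a small script can emit one command per filter. A sketch (the filter names are standard ImageMagick resize filters; the output filenames are made up):

```python
# Generate ImageMagick commands that downscale with different
# resampling filters, then re-upscale for side-by-side comparison.
FILTERS = ["Point", "Box", "Triangle", "Lanczos"]

def compare_commands(infile, scale="25%"):
    cmds = []
    for f in FILTERS:
        out = f"out-{f.lower()}.png"
        cmds.append(
            f"magick {infile} -filter {f} -resize {scale} "
            f"-scale 400% {out}"
        )
    return cmds

for cmd in compare_commands("in.png"):
    print(cmd)
```

Piping the printed commands through a shell gives you one candidate image per filter to eyeball.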

You could also experiment with color reduction and dithering. The images I generated for this article have been postprocessed like this…

$ magick in.png -adaptive-resize 25% -ordered-dither checks,32,32,32 \
    -scale 800% out.png

…which pixelizes to a 1:4 ratio, restricts the output to a color cube with 32 levels per channel (i.e. 15-bit color) and applies subtle — but not too subtle — checker-pattern dithering. It also upscales to twice the original size for easy viewing in a web browser.

Style prompts and selection

After some trial and error, I settled on a range of prompts involving techniques, styles and authors of fine art: oil on canvas, high renaissance, modernism, precisionism. This gave me a good chance of output in a handful of repeatable styles with sufficient but not overwhelming detail:

"mexican hacienda facade on a sunny day, 2.5d modernist painting"
"mexican hacienda on a sunny day, surrounded by plains, color painting by charles sheeler"

Beyond important details ("sunny day"), vague modifiers like "atmospheric", "dramatic" and "high quality" can have huge effects on lighting, camera angles and embellishment. They're also very unreliable, and I have the feeling they can crowd out more important parts of the prompt from the model's tiny mind and cause them to be overlooked. It's better to use compact, specific prompts until you're close, and then, well, watch it fall apart as you add a single modifier.

Which brings us to the second human-intensive part of this task: selection. Since the OpenAI UI produces four variants for each prompt, some selection is built in, and it's very necessary, as most of the output falls far short of the mark. With the right prompt, you might get a distribution where roughly 1/20 images is good (with minor defects) and 5/20 are potentially salvageable. The remainder will be obviously unusable for various reasons (major defects, stylistic and framing issues, photobombed by an anthropomorphic utility pole).

I think it's the same way with the impressive DALL-E mashups being shared. By the time you're seeing them, they've been curated at least twice; once at the source, and one or more times by the chain of media that brought them to you. You won't see the hundreds of images that came out funny but not ha-ha funny.

Since each image takes only a second to generate and a few seconds to evaluate, this wild inconsistency isn't disqualifying. It just means DALL-E isn't magical or even very intelligent.

Setting the stage

An adventure game location is a bit like a theatre stage; it's a good idea to have an ample area close to the camera for the player to walk around in. It's also a good idea to avoid scenes where the player can get far away from the camera, as you'd be forced to choose between a comical perspective mismatch and a really tiny player character that'd be hard to follow and control. Obviously a real game won't want to follow these rules strictly, but it's important to be able to implement them when needed.

Fortunately it can be done, and it's not too hard:

"building entrance in mexican town and street outside, in the style of an atmospheric high renaissance oil on canvas painting"
"hotel entrance in mexican town and street outside, in the style of a high quality, atmospheric high renaissance oil on canvas painting"

To control the perspective and make it more flat, adding "facade" seemed to be effective. Ditto "diorama" and "miniature", although they tended to produce a more clinical look. Specifying a typical ground-level detail to focus on, e.g. "entrance", was also helpful. I'm not sure "2d" and "2.5d" actually made any difference. Bringing it all together:

  • Specify the era, time of day and lighting conditions (e.g. "on a sunny day in the 2000s").
  • Be specific about the overall location ("town", "city", or a named geographical location), the focus ("facade", "hotel entrance") and the immediate surroundings ("houses", "streets", "plains").
  • You can explicitly ask for open space, e.g. "…and street in front" or "plaza surrounded by…".
  • Sometimes it's necessary to ask for the space to be empty, otherwise DALL-E can paint in objects and people that you'd rather add as overlays later on.
  • You can also specify camera placement, e.g. "seen from second-floor balcony", but you risk ground-level details becoming too small.
  • Some combinations will have the model drawing blanks, resulting in ignoring much of your prompt or horking up non sequitur macro shots of blades of grass and the like. Be prepared to rephrase or compromise. Think about what might be well represented in the training set.
  • Do not under any circumstance mention "video game", unless you want blue neon lights on everything.

Retouching and editing

This is easy to do using the in-browser UI. Just erase part of the image, optionally edit the prompt and off you go. Very useful for those times you've got something great, except there's a pine growing out of a church tower or an impromptu invasion of sea swine. Adding objects works too. Here's a villain's preferred mode of transportation, quite believable (if out of place) on the first try:

"rustic mexican mansion with a grassy area sports car parked in front surrounded by small houses on a sunny day, high quality atmospheric high renaissance oil on canvas"

You can also upload PNG images with an alpha channel, although I had to click somewhere with the eraser before it would accept that there were indeed transparent areas. I suspect you could use this to seed your images with spots of color in order to get a more consistent palette.

Extending the images

DALL-E generates 1024×1024-pixel postage stamps. To fill a modern display you want something closer to a 19:10 ratio. Transparency edits come in handy here. The idea is to split the original image into left-right halves and use those to seed two new images with transparency to fill in:

This is easily scriptable. Note that you have to erase the DALL-E signature from the right half to prevent it bleeding into the result. Something like this can work:

$ magick in.png -background none -extent 512x0 -splice 512x0 left.png
$ magick in.png \( +clone -fill white -colorize 100 -size 80x16 xc:black \
    -gravity southeast -composite \) -alpha off -compose copy_opacity \
    -composite -compose copy -background none -gravity east -extent 512x0 \
    -splice 512x0 right.png

Upload left.png and right.png, reenter the prompt and generate a couple of variants for each. Since there's lots of context, the results turn out pretty good for the most part. Then stitch the halves together like this:

$ magick left.png right.png +append out.png

With a little more scripting, you can generate all possible permutations and apply your big brain judgement to them, e.g.:

…and so on. You can also tweak the side prompts. Put in a pool or whatever:
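The permutation step lends itself to scripting too. A sketch that just emits one stitch command per left/right pairing (the numbered filenames are assumptions, matching however you save the variants from the UI):

```python
from itertools import product

# Build a "magick +append" command for every left/right pairing,
# so each combination can be stitched and reviewed.
def stitch_commands(n_left, n_right):
    cmds = []
    for i, j in product(range(1, n_left + 1), range(1, n_right + 1)):
        cmds.append(
            f"magick left-{i}.png right-{j}.png +append out-{i}x{j}.png"
        )
    return cmds
```

With four variants per side you get sixteen stitched candidates to judge.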

I wouldn't be surprised if this kind of image extension made it into the standard toolbox at some point.

Other things it can do, and some that it can't

I had some success with interiors too. "Cutaway" was a handy cue to knock out walls and avoid claustrophobic camera placement, and it handled decorations and furniture fairly well (e.g. "opulent living room with a table and two chairs"). It could also generate icons for inventory items after a fashion ("mail envelope on black background"). I didn't delve very deeply into that, though.

You've probably noticed that the generated images all contain defects. Some can be fixed by erasing them and having DALL-E fill in the blanks, but others are too numerous, stubborn or minute for that to be practical. This means you'll have to go over each image manually before pixelization (for rough edits) and after (for the final touch). You'll also need to adjust colors and levels for consistency.

DALL-E can't write. In fact, it will rarely manage to arrange more than three letters in a correct sequence, so if you want words and signage, you'll have to draw them yourself. Maps and other items that convey specific information through their geometry can probably also be ruled out, although you may get lucky using a mostly transparent cue sketch.

You won't get much help with animations, especially complex multi-frame ones like walk cycles.

If you want to convert an existing daylight scene into a night scene, that's probably best done manually or with the help of a style transfer model.

I realize I've barely scratched the surface here, and there's bound to be a lot more that I haven't thought of.

The economics of AI jackpot

OpenAI controls usage through a credit system. Currently, one credit can be used to generate four images from a single prompt, or three edits/variants from a single image and prompt. I got some free welcome credits (50 or so), and they're promising another 15 each month. When you spend a credit, it takes 4-5 seconds to get results, which means about a second per image. You can buy 115 credits for $15 + tax, which in my case works out to a total of $18.75. That's $0.163 per credit, or at most $0.0543 per image (batch of three).

Let's say you use this to generate locations for a point-and-click game. How many will you need? Well, one very successful such game, The Blackwell Epiphany (made entirely by the fine humans at Wadjet Eye Games), has about 70 locations. If you're considering AI-generated images for your game, you're probably not trying to compete with one of the industry's most accomplished developers, so let's lower that to 50.

50 locations is still a lot, and as I mentioned before, only 1/20 images come out adequate. For each location, you can probably get by with 10 adequate candidates to choose from. That means you'll generate 200 images per location, or 10,000 images total. Let's double that to account for some additional curation, edits, horizontal extensions, late changes to the script and plain old mistakes. Then, 20,000 * $0.0543 = $1,087. Since most of the images will be generated in batches of four, not three, it's fair to round that down to an even $1,000. It's probably not your biggest expense, anyway.

How about time investment? I mean, evaluating that many images seems kind of crazy, but let's do the math and see. If an image takes about 1s to generate and you spend about 5s deciding whether to keep it (recalling that 95% is quickly recognizable as dross and you'll be looking at batches of four), that's 20,000 * 6s = 120,000s or about 33 hours. Even if you can only stand to do it for two hours a day, you should be done in three to four weeks.
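For the skeptical, the arithmetic above can be sketched in a few lines (the 25% tax rate matches my own bill; adjust to taste):

```python
# Back-of-the-envelope cost and time estimate for AI-generated
# adventure game locations, mirroring the figures in the text.
credits_price = 15 * 1.25            # $15 + 25% tax = $18.75
per_credit = credits_price / 115     # ~ $0.163 per credit
per_image = per_credit / 3           # worst case: batches of three edits

locations = 50
adequate_rate = 1 / 20               # ~1 in 20 images is adequate
per_location = 10 / adequate_rate    # 200 generated images per location
images = locations * per_location * 2  # doubled for edits and rework

cost = images * per_image            # ~ $1,087
hours = images * 6 / 3600            # 1s to generate + 5s to judge each
```

At two hours of reviewing per day, those ~33 hours spread out over a few weeks, as estimated above.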

Within that budget and timeframe, you should be able to generate 10 candidates and 10 edits for each location. Further manual editing will likely take much longer than three weeks, but that's not something I'm experienced with, so I've really no idea. All of this also presupposes that you're starting out with a detailed list of locations.

Legal considerations

In addition to their API policies, OpenAI have public content policy and terms of use documents that appear to be specific to DALL-E. I'm not trained in law, but the gist of the content policy appears to be "don't be mean, sneaky or disgusting", which is easy for us to abide by with only landscapes and architecture. Some of the restrictions seem unfortunate from the perspective of entertainment fiction: Could I generate a bloodied handkerchief, a car wreck or something even worse? Probably not. Anything containing a gun? Certainly not. However, they're also understandable given the stakes (see below).

The most concerning thing, and likely a showstopper for some creative enterprises, is point 6 of the terms of use: Ownership of Generations. My interpretation of this is that generated images are the property of OpenAI, but that they promise not to assert the copyright if you observe their other policies (which may, presumably, change). If you're making a long-lived creative work, especially something like a game that may include adult topics alongside the generations, this seems like a risky proposition. I wouldn't embark on it without seeking clarification or some kind of written release.

Ow, my ethics!

So, yeah, ethics. An obvious line of questioning concerns misuse, but OpenAI is erring on the side of caution (or realistically, trying to keep the lid on just a little longer), and anyway, our use case isn't nefarious.

What's more relevant to us is the closed training dataset and how it might contain tons of formerly "open" but copyrighted material, or simply pictures whose author didn't want them used this way. We're talking half a billion images, and the relevant research and blog posts either allude to web scraping or mention it outright. A search for reassurance didn't turn up much, but I did discover an interesting open issue. So, could this be disrespectful or even abusive?

A common defense claims that the model learns from the training set the same way a human student would, implying human rules (presumably with human exceptions) should apply to its output. This can seem like a reasonable argument in passing, but besides being plain wrong, it's too facile since DALL-E is not human-like. It can't own the output (or, as the case would be, sign its ownership over to OpenAI) any more than a relational database could.

A better argument is that the training process munges the input so thoroughly that there's no way to reconstruct an original image. You don't have to understand the process deeply to see that this makes sense: there's terabytes of training data and only gigabytes of model. Then the implication becomes that this is transformative remixing and potentially fair/ethical use.

Thinking about this kind of hurts my head, particularly as it's also playing out in my own field. I haven't reached a definite conclusion, but in general I think it's important to focus on the net good that technology and sharing can bring, and on how the benefits (and obligations) can be distributed equitably.

So is this going to upend everything?

Well, not everything. But some things, for sure. Neural networks have evolved very quickly over the past couple of years, and it looks like there's plenty of low-hanging fruit left. Current research leaves the impression that DALL-E 2 is already old news. There are also open efforts that seem to be completely caught up, at least for those with some elbow grease and compute time to spare.

A dear friend of mine joked that we've been privileged to live in a truly global era with minimal blank spots on the map and a constant flow of reasonably accurate information, the implication being that the not too distant past had mostly blank spots and the not too distant future will be saturated with extremely plausible-looking gibberish. We had a laugh about that, but you have to wonder.

"spaceship with 'copyleft' painted on the side, piloted by cats, realistic photo"

August 15, 2022

scikit-survival 0.18.0 released

I’m pleased to announce the release of scikit-survival 0.18.0, which adds support for scikit-learn 1.1.

In addition, this release adds the return_array argument to all models providing predict_survival_function and predict_cumulative_hazard_function. That means you can now choose whether you want the survival function (or cumulative hazard function) to be evaluated automatically at the unique event times. This is particularly useful for plotting. Previously, you would have to evaluate each survival function before plotting:

estimator = CoxPHSurvivalAnalysis().fit(X_train, y_train)
pred_surv = estimator.predict_survival_function(X_test)
times = pred_surv[0].x
for surv_func in pred_surv:
    plt.step(times, surv_func(times), where="post")

Now, you can pass return_array=True and directly get probabilities of the survival function:

estimator = CoxPHSurvivalAnalysis().fit(X_train, y_train)
pred_surv_probs = estimator.predict_survival_function(
    X_test, return_array=True
)
times = estimator.event_times_
for probs in pred_surv_probs:
    plt.step(times, probs, where="post")

Finally, support for Python 3.7 has been dropped and the minimum required versions of the following dependencies have been raised:

  • numpy 1.17.3
  • pandas 1.0.5
  • scikit-learn 1.1.0
  • scipy 1.3.2

For a full list of changes in scikit-survival 0.18.0, please see the release notes.


Pre-built packages are available for Linux, macOS (Intel), and Windows, either via pip:

pip install scikit-survival

or via conda:

conda install -c sebp scikit-survival

August 14, 2022

New Alert Sounds

In the classic analog synthesizer, a sound is created by a simple oscillator and then carved/shaped with filters. That’s why it’s sometimes called subtractive synthesis.

The sound effects in the Settings panel have been recreated with a method called frequency modulation (FM), where some oscillators aren’t used to generate the sound itself (the carrier) but to modulate it further. Even here, filtering retains a significant role. In addition, more complex sounds are achieved with so-called layering, which is exactly what the name hints at: using a bunch of sounds together in a mix.
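For the curious, two-operator FM boils down to one oscillator perturbing the phase of another. A minimal sketch (the frequencies and modulation index are arbitrary illustration values, not the actual GNOME 43 patch settings):

```python
import math

# Minimal two-operator FM: a modulator oscillator perturbs the phase
# of the carrier oscillator, producing sidebands around the carrier.
def fm_samples(carrier_hz=440.0, modulator_hz=220.0, index=2.0,
               seconds=0.01, rate=48000):
    samples = []
    for n in range(int(seconds * rate)):
        t = n / rate
        mod = math.sin(2 * math.pi * modulator_hz * t)
        samples.append(math.sin(2 * math.pi * carrier_hz * t + index * mod))
    return samples

wave = fm_samples()
```

A 4-operator engine like the one described below is the same idea with four such oscillators wired into one of several routing algorithms.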

Sounds created for GNOME 43 were generated on a mini-computer called Teensy (currently unavailable due to the global chip shortage), running software called Dirtywave Headless written by Timothy Lamb. The software includes other synthesizer engines, but the majority of the sounds were made using the 4-operator FM engine. To further complicate things, my favorite algorithm is No. 16, where all 4 oscillators are carriers, effectively the equivalent of a 4-oscillator analog synth.

FM Synth Engine on the Dirtywave Headless. Image taken from the Dirtywave M8 Tracker manual.

Finally everything was cleaned up in Audacity.

To come full circle, and to my genuine surprise, my old friend Noggin from the Jeskola Buzz days has composed a great track using only samples from the GitLab issue (my involvement with music trackers predates GNOME and Free Software in general; an old friend indeed).

Take a listen.

I wish I had published the project bundle to allow for easier adjustments, but better late than never.

August 13, 2022

Making decisions without all the information is tricky, a case study

In a recent blog post, Michael Catanzaro wrote about choosing proper configurations for your build, especially the buildtype attribute. As noted in the text, Meson's build type setup is not the greatest in the world, so I figured I'd write about why that is, what a better design would look like, and why we don't use that (and probably won't for the foreseeable future).

The concept of build types was copied almost directly from CMake. The main thing that they do is to set compiler flags like -g and -O2. Quite early in the development process of Meson I planned on adding top level options for debug info and optimization but the actual implementation for those was done much later. I copied the build types and flags almost directly except for build types RelWithDebInfo and Release. Having these two as separate build types did not make sense to me, because you always need debug info for releases. If you don't have it, you can't debug crash dumps coming from users. Thus I renamed them to debugoptimized and release.

So far so good, except there was one major piece of information I was missing. The word "debug" has two different meanings. On most platforms it means "debug info", but on Windows (or, specifically, with the MSVC toolchain) "debug" means a special build type that uses the "debug runtime", which has additional runtime checks that are useful during development. More info can be found e.g. here. This made the word "debug" doubly problematic. Not only do people on Windows want it to refer to the debug runtime, but some (though not all) people on Linux think that "debugoptimized" means it should only be used during development. Originally that was not the case; it was supposed to mean "build a binary with the default optimizations and debug info". What I originally wanted was for distros to build packages with buildtype set to debugoptimized, as opposed to living in the roaring 90s, passing a random collection of flags via CFLAGS and hoping for the best.
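For reference, the current buildtypes are essentially presets over the two underlying toggles. A sketch of the mapping as I understand the Meson documentation (not Meson's actual implementation):

```python
# Each Meson buildtype corresponds to a (debug info, optimization
# level) pair; the same result can be had by setting the two
# options directly.
BUILDTYPES = {
    "plain":          {"debug": False, "optimization": "plain"},
    "debug":          {"debug": True,  "optimization": "0"},
    "debugoptimized": {"debug": True,  "optimization": "2"},
    "release":        {"debug": False, "optimization": "3"},
    "minsize":        {"debug": True,  "optimization": "s"},
}

def flags_for(buildtype):
    opts = BUILDTYPES[buildtype]
    # Equivalent to: meson setup builddir -Ddebug=... -Doptimization=...
    return ["-Ddebug=" + str(opts["debug"]).lower(),
            "-Doptimization=" + opts["optimization"]]
```

So "debugoptimized" is shorthand for debug info on plus -O2, which is exactly the combination distros should want.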

How should it have been done?

With the benefit of hindsight a better design is fairly easy to see. Given that Meson already has toggles for all the individual bits, buildtype should describe the "intent", that is, what the end result will be used for. Its possible values should include the following:

  • development
  • releaseperf (maximize performance)
  • releasesize (minimize size)

It might also contain the following:

  • distro (when building distro packages)
  • fuzzing

Note that the word "debug" does not appear. This is intentional; all words are chosen to be unambiguous. If they are not, they would need to be changed. The value of this option would be orthogonal to the other flags. For example you might want a build type that minimizes build size but still uses -O2, because sometimes it produces smaller code than -Os or -Oz. Suppose you have two implementations of some algorithm: one with maximal performance and another that yields less code. With this setup you could select between the two based on what the end result will be used for, rather than trying to guess it from optimization flags. (Some of this you can already do, but due to the issues listed in Michael's blog it is not as simple.)

Can we switch to this model?

This is very difficult due to backwards compatibility. There are a ton of existing projects out there that depend on the way things are currently set up and breaking them would lead to a ton of unhappy users. If Meson had only a few dozen users I would have done this change already rather than writing this blog post.

August 12, 2022

On a road to Prizren with a Free Software Phone

Since people are sometimes slightly surprised that you can go on a multi-week trip with a smartphone running free software, I wanted to share some impressions from my recent trip to Prizren/Kosovo to attend Debconf 22 using a Librem 5. It's a mix of things that happened and bits that got improved to hopefully make things more fun to use. And, yes, there won't be any big surprises in this read, like being stranded without the ability to make phone calls, because there weren't and there shouldn't be.

After two online versions, Debconf 22 (the annual Debian conference) took place in Prizren / Kosovo this year, and I sure wanted to go. Looking at the options I settled on a train trip to Vienna to meet friends there, continuing via bus to Zagreb, then switching to a final 11h direct bus to Prizren.

When preparing for the trip and making sure my Librem 5 phone had all the needed documents, I noticed that there would be quite a few PDFs to show until I arrived in Kosovo: train ticket, bus ticket, hotel reservation, and so on. While that works by unlocking the phone, opening the file browser, navigating to the folder with the PDFs and showing one via evince, it looked like a lot of steps to repeat. Can't we have that information on the Phone Shell's lock screen?

This was a good opportunity to see if the upcoming plugin infrastructure for the lock screen (initially meant to allow a plugin to show upcoming events) was flexible enough, so I used some leisure time on the train to poke at this, and just before I reached Vienna I was able to use it for the first time. It was for the very last check of that ticket, and it also was a bit of cheating, since I didn't present the ticket on the phone itself but from phosh (the phone's graphical shell) running on my laptop. But still.

PDF barcode on phosh's lockscreen List of tickets on phosh's lockscreen

This was possible since phosh is written in GTK and so I could just leverage evince's EvView. Unfortunately the hotel check in didn't want to see any documents ☹.

For the next day I moved the code over to the Librem 5 and (being a bit nervous as the queue to get on the bus was quite long) could happily check into the Flixbus by presenting the barcode to the barcode reader via the Librem 5's lockscreen.

When switching to the bus to Prizren I didn't get to use that feature again, as we bought the tickets at a counter, but we got a nice krem banana after entering the bus (they're not filled with jelly, but krem – a real Kosovo must-eat!).

Although it was a rather long trip we had frequent breaks and I'd certainly take the same route again. Here's a photo of Prizren taken on the Librem 5 without any additional postprocessing:


What about seeing the conference schedule on the phone? Confy (a conference schedule viewer using GTK and libhandy) to the rescue:

Confy with Debconf's schedule

Since Debian's Confy maintainer was around too, Confy saw a bunch of improvements during the conference.

For getting around, Puremaps (an application to display maps and show routing instructions) was very helpful; here it is geolocating me in Prizren via GPS:


Puremaps currently isn't packaged in Debian, but there's work ongoing to fix that (I used the Flatpak for the moment).

We got ourselves sim cards for the local phone network. For some reason mine wouldn't work (other sim cards from the same operator worked in my phone but this one just wouldn't). So we went to the sim card shop and the guy there was perfectly able to operate the Librem 5 without further explanation (including making calls, sending USSD codes to query balance, …). The sim card problem turned out to be a problem on the operator side and after a couple of days they got it working.

We had nice, sunny weather just about all the time. That made me switch between high contrast mode (to read things in bright sunlight) and normal mode (e.g. in conference rooms) on the phone quite often. Thankfully we have an ambient light sensor in the phone, so we can make that automatic.

Phosh in HighContrast

See here for a video.

Jathan kicked off a DebianOnMobile sprint during the conference, where we were able to improve several aspects of mobile support in Debian, and on Friday I had the chance to give a talk about the state of Debian on smartphones. pdf-presenter-console is a great tool for this as it can display the current slide together with additional notes. I needed some hacks to make it fit the phone screen, but hopefully we'll figure out a way to have this by default.

Debconf talk Pdf presenter console on a phone

I had two great weeks in Prizren. Many thanks to the organizers of Debconf 22 - I really enjoyed the conference.

RTKit, portals, and Pipewire

Peeling an onion1, or how a bug report in a flatpaked2 application led to fixes in three different parts of the stack, but no change in the application itself.

How it started

It started with a bug report for the Flatpak of Ardour.

Pipewire needs to request realtime priorities for threads. Inside Flatpak, pipewire is provided by the freedesktop-sdk, and what that bug report shows is that pipewire can't find the module it uses to handle realtime priorities. This was a bug in the SDK, which was too eager in removing unused files: pipewire-module-rt is the new name of the module. I submitted a fix for the 22.08 release, learning how to build the SDK in the process. freedesktop-sdk 22.08, to be released later this month, should become the base of GNOME 44 (and I think the current nightly), as well as a future KDE SDK release.

Realtime is not real

To request the RT priority, pipewire calls into RTKit or, if it is not available, the POSIX thread API.

RTKit allows requesting realtime priorities for threads using D-Bus. It works by having the requester call a D-Bus method, passing the process ID and thread ID.

Problem: inside the Flatpak sandbox, the processes are in a namespace, so the process ID and thread ID are different from those on the host. Passing them to RTKit doesn't work, so we need to use the portal API. One of the components is xdg-desktop-portal, which sits on the host side (outside the sandbox) but gets called via D-Bus from inside. It provides an interface to proxy the RTKit calls, remapping the PID before forwarding it to RTKit. But this is not enough: the D-Bus call returns a "not found" error. Notice how I said it remapped the process ID? What about the thread ID? Indeed, that's the source of the not-found error.

Side note: Bustle was a great help in diagnosing and testing this.

I then submitted a patch for the newly released "unstable" 1.15.0, and backported it to 1.12.6 and 1.14.6.

Through the portal

As I said previously, Pipewire calls into RTKit. This still doesn't work for the reason given above. So I also submitted a patch for Pipewire to call the portal and, if it is not found, RTKit directly. Note that the portal is also available to non-Flatpak applications3, and in that case it will directly forward the calls to RTKit. I hope this will be in 0.3.57 (it is on master as I write this).

One more thing...

Meanwhile, while testing how the build was going with the freedesktop-sdk 22.08beta, I was disappointed that pipewire-jack4 still didn't ship the JACK headers, forcing Flatpak packages to build JACK first. So I submitted a fix to the freedesktop-sdk.

And then I found out that the .pc file generated by Pipewire was incorrect, so I submitted a fix for that too, fix that is in 0.3.56.

Bottom line

With all of this we have better support for requesting realtime priorities, and Pipewire becomes a true drop-in replacement in 22.08. Still pending is a release of pipewire 0.3.57 for the realtime portal support.

And there are probably a few other applications to fix.


1. The onion-peeling metaphor is adequate as there were a lot of tears.

2. Flatpak is a verb.

3. Unless you stripped down your system and there is no portal, which is handled anyway.

4. An important note here: JACK cannot work from inside a Flatpak sandbox. Its API/ABI is guaranteed by using the system-provided shared library that will communicate with the sound daemon. That also means using a version inside the sandbox wouldn't work, as there is no guarantee it matches the JACK installed on the host. On the other hand, pipewire-jack is designed to offer a drop-in replacement for the JACK API, and that's what we are using.

August 11, 2022

Common GLib Programming Errors, Part Two: Weak Pointers

This post is a sequel to Common GLib Programming Errors, where I covered four common errors: failure to disconnect a signal handler, misuse of a GSource handle ID, failure to cancel an asynchronous function, and misuse of main contexts in library or threaded code. Although there are many ways to mess up when writing programs that use GLib, I believe the first post covered the most likely and most pernicious… except I missed weak pointers. Sébastien pointed out that these should be covered too, so here we are.

Mistake #5: Failure to Disconnect Weak Pointer

In object-oriented languages, weak pointers are a safety improvement. The idea is to hold a non-owning pointer to an object that gets automatically set to NULL when that object is destroyed to prevent use-after-free vulnerabilities. However, this only works well because object-oriented languages have destructors. Without destructors, we have to deregister the weak pointer manually, and failure to do so is a disaster that will result in memory corruption that’s extremely difficult to track down. For example:

static void
a_start_watching_b (A *self,
                    B *b)
{
  // Keep a weak reference to b. When b is destroyed,
  // self->b will automatically be set to NULL.
  self->b = b;
  g_object_add_weak_pointer (G_OBJECT (b), (gpointer *)&self->b);
}

static void
a_do_something_with_b (A *self)
{
  if (self->b) {
    // Do something safely here, knowing that b
    // is assuredly still alive. This avoids a
    // use-after-free vulnerability if b is destroyed,
    // i.e. self->b cannot be dangling.
  }
}
Let’s say that the B in this example outlives the A, but A failed to call g_object_remove_weak_pointer(). Then when B is destroyed later, the memory that used to be occupied by self->b will get clobbered with NULL. Hopefully that will result in an immediate crash. If not, good luck trying to debug what’s going wrong when some innocent variable elsewhere in your program gets randomly clobbered. This often results in a frustrating wild goose chase when trying to track down what is going wrong.

The solution is to always disconnect your weak pointer. In most cases, your dispose function is the best place to do this:

static void
a_dispose (GObject *object)
{
  A *a = (A *)object;

  g_clear_weak_pointer (&a->b);

  G_OBJECT_CLASS (a_parent_class)->dispose (object);
}

Note that g_clear_weak_pointer() is equivalent to:

if (a->b) {
  g_object_remove_weak_pointer (G_OBJECT (a->b), (gpointer *)&a->b);
  a->b = NULL;
}
but you probably guessed that, because it follows the same pattern as the other clear functions that we’ve used so far.

The new XWAYLAND extension is available

As of xorgproto 2022.2, we have a new X11 protocol extension. First, you may rightly say "whaaaat? why add new extensions to the X protocol?" in a rather unnecessarily accusing way, followed up by "that's like adding lipstick to a dodo!". And that's not completely wrong, but nevertheless, we have a new protocol extension to the ... [checks calendar] almost 40-year-old X protocol. And that extension is, ever creatively, named "XWAYLAND".

If you recall, Xwayland is a different X server than Xorg. It doesn't try to render directly to the hardware, instead it's a translation layer between the X protocol and the Wayland protocol so that X clients can continue to function on a Wayland compositor. The X application is generally unaware that it isn't running on Xorg and Xwayland (and the compositor) will do their best to accommodate for all the quirks that the application expects because it only speaks X. In a way, it's like calling a restaurant and ordering a burger because the person answering speaks American English. Without realising that you just called the local fancy French joint and now the chefs will have to make a burger for you, totally without avec.

Anyway, sometimes it is necessary for a client (or a user) to know whether the X server is indeed Xwayland. Previously, this was done through heuristics: the xisxwayland tool checks for XRandR properties, the xinput tool checks for input device names, and so on. These heuristics are just that, though, so they can become unreliable as Xwayland gets closer to emulating Xorg or things just change. And properties in general are problematic since they could be set by other clients. To solve this, we now have a new extension.

The XWAYLAND extension doesn't actually do anything; it's the bare minimum required for an extension. It just needs to exist, and clients only need to XQueryExtension or check for it in XListExtensions (the equivalent to xdpyinfo | grep XWAYLAND). Hence, no support for Xlib or libxcb is planned. So of all the nightmares you've had in the last 2 years, the one of misidentifying Xwayland will soon be in the past.

August 10, 2022


I spent a week at GUADEC 2022 in Guadalajara, Mexico. It was an excellent conference, with some good talks, good people, and a delightful hallway track. I think everyone was excited to see each other in person after so long, and for many attendees, this was closer to home than GUADEC has ever been.

For this event, I was sponsored by the GNOME Foundation, so many thanks to them as well as my employer the Endless OS Foundation for both encouraging me to submit a talk and for giving me the opportunity to take off and drink tequila for the week.

For me, the big themes this GUADEC were information resilience, scaling our community, and how these topics fit together.


Stepping into the Guadalajara Connectory for the first time, I couldn’t help but feel a little out of place. Everyone was incredibly welcoming, but this was still my first GUADEC, and my first real in-person event with the desktop Linux community in ages.

So, I was happy to come across Jona Azizaj and Justin Flory’s series of thoughtful and inviting workshops on Wednesday morning. These were Icebreakers & Community Social, followed by Unconscious bias & imposter syndrome workshop. They eased my anxiety enough that I wandered off and missed the follow-up (Exploring privilege dynamics workshop), but it looked like a cool session. It was a brilliant idea to put these kinds of sessions right at the start.

The workshop about unconscious bias inspired me to consciously mix up who I was going out for lunch with throughout the week, as I realized how easy it is to create bubbles without thinking about it.

Beyond that, I attended quite a few interesting sessions. It is always fun hearing about bits of the software stack I’m unfamiliar with, so some standouts were Matthias Clasen’s Font rendering in GNOME (YouTube), and David King’s Cheese strings: Webcams, PipeWire and portals (YouTube). Both highly recommended if you are interested in those components, or in learning about some clever things!

But for the most part, this wasn’t a very code-oriented conference for me.

Accessibility, diversity, remote attendance

This was the first hybrid GUADEC after two years of running a virtual-only conference, and I think the format worked very well. The remote-related stuff was smoothly handled in the background. The volunteers in each room did a great job relaying questions from chat so remote attendees were represented during Q&As.

I did wish that those remote attendees — especially the Berlin Mini-GUADEC — were more visible in other contexts. If this format sticks, it would be nice to have a device or two set up so people in different venues can see and interact with each other during the event. After all, it is unlikely that in-person attendees will spend much time looking at chat rooms on their own.

But I definitely like how this looks. I think having good representation for remote attendees is important for accessibility. Pandemic or otherwise. So with that in mind, Robin Tafel’s Keynote: Peeling Vegetables and the Craft of (Software) Inclusivity (YouTube), struck a chord for me. She elegantly explains how making anything more accessible — from vegetable peelers to sidewalks to software — comes back to help all of us in a variety of ways: increased diversity, better designs in general, and — let’s face it — a huge number of people will need accessibility tools at some point in their lives.

“We are temporarily abled.”

Community, ecosystems, and offline content

I especially enjoyed Sri Ramkrishna’s thoughtful talk, GNOME and Sustainability – Ecosystem Management (YouTube). I came away from his session thinking how we don’t just need to recruit GNOME contributors; we need to connect free software ecosystems horizontally. Find those like-minded people in other projects and find places where we can collaborate, even if we aren’t all using GNOME as a desktop environment. For instance, I think we’re doing a great job of this across the freedesktop world, but it’s something we could think about more widely, too.

Who else benefits, or could benefit, from Meson, BuildStream, Flatpak, GJS, and the many other technologies GNOME champions? How can we advocate for these technologies in other communities and use those as bridges for each other’s benefit? How do we get their voices at events like GUADEC, and what stops us from lending our voices to theirs?

“We need to grow and feed our ecosystem, and build relations with other ecosystems.”

So I was pretty excited (mostly anxious, since I needed to use printed notes and there were no podiums, but also excited) to be doing a session with Manuel Quiñones a few hours later: Offline learning with GNOME and Kolibri (YouTube). I’ll write a more detailed blog post about it later on, but I didn’t anticipate quite how neatly our session would fit in with what other people were talking about.

At Endless, we have been working with offline content for a long time. We build custom Endless OS images designed for different contexts, with massive libraries of pre-installed educational resources. Resources like Wikipedia, books, educational games, and more: all selected to empower people with limited connectivity. The trick with offline content is it involves a whole lot of very large files, it needs to be possible to update it, and it needs to be easy to rapidly customize it for different deployments.

That becomes expensive to maintain, which is why we have started working with Kolibri.

Kolibri is an open source platform for offline-first teaching and learning, with a powerful local application and a huge library of freely licensed educational content. Like Endless OS, it is designed for difficult use cases. For example, a community with sporadic internet access can use Kolibri to share Khan Academy videos and exercises, as well as assignments for individual learners, between devices.

Using Kolibri instead of our older in-house solution means we can collaborate with an existing free software project that is dedicated to offline content. In turn, we are learning many interesting lessons as we build the Kolibri desktop app for GNOME. We hope those lessons will feed back into the Kolibri project to improve how it works on other platforms, too.

Giving our talk at GUADEC made me think about how there is a lot to gain when we bring these types of projects together.

The hallway track

Like I wrote earlier, this wasn’t a particularly code-oriented conference for me. I did sit down and poke at Break Timer for a while — in particular, reviving a branch with a GTK 4 port — and I had some nice chats about various other projects people are doing. (GNOME Crosswords was the silent star of the show). But I didn’t find many opportunities to actively collaborate on things. Something to aim for with my next GUADEC.

I wonder if the early 3pm stop each day was a bit of a contributor there, but it did make for some excellent outings, so I’m not complaining. The pictures say a lot!

Everyone here is amazing, humble and kind. I really cannot recommend enough, if you are interested in GNOME, check out GUADEC, or LAS, or another such event. It was tremendously valuable to be here and meet such a wide range of GNOME users and contributors. I came away with a better understanding of what I can do to contribute, and a renewed appreciation for this community.

GSoC 2022: Third Update!

Hello everyone! 😄 

In my previous blog post, I explained why we use a dialog box and its advantages over the GtkPopoverMenu for the templates submenu.

Since the last update, I'm glad to announce that the dialog box implementation is finally complete :D
The visuals are improved, we can now search for templates and new templates can be created with ease!

Visual changes:

As shown in the above comparison, the font and icon size are increased, making it even easier to read and recognize the type of files. Also, the size of the row is increased to make the overall GtkListBox look cleaner. The padding between the search bar and the list box, and the padding between the list box and the import button, are made equal as well!
This, of course, isn't the final design, and suggestions from the designers and other members of the community are more than welcome, which would help in making this dialog box look even better. 😄

Searching for templates is as easy as ever! Just type in the template name in the search bar, and it will find it for you :D

The cpp_file.cpp is in the coding directory and is not visible without expanding the directory, but with the help of a search, it's extremely straightforward to find it. 

Template creation:
The most important feature of all, obviously, is the creation of templates. The create button is used to create new templates through the dialog box. 

A text_file template is selected, the create button is clicked, and the template is created 🎉

Future plans:
Although the dialog box is pretty much complete, there might be a few minor bugs that need sorting out. Also, the visuals could be worked on to make it look even better.
And finally, it has to be tested for its usability.
I've created a draft merge request for the same, and I'll be working on improving it!

I would like to thank my mentor @antoniof for helping me throughout this project! It has been a really fun experience working with him. 😄

Thanks for reading,
See you in a few weeks! 😉

August 08, 2022


GNOME Birthday Cake

It was really lovely to get back to GUADEC. I loved being around old friends and meeting the new faces within the project. The venue was stellar and I thoroughly enjoyed a lot of the talks this year.

For me, my favorite talks were the progressive webapps talk by Phaedrus Leeds, Federico’s meme-filled talk on accessibility, and Rob’s talk about the Endless deployment to Oaxaca, Mexico.

[Note: I hope someone goes back to the youtube videos and adds timestamps / links to all the talks. It would be easier to find and browse them.]

On my part, I gave a talk on GNOME Crosswords and participated in a panel on how to get a job with Free Software. The crosswords talk in particular seemed pretty successful. It had a great article written about it (thanks Jake!), which led to an increase in bug reports and crossword contributions.

One observation: it felt like attendance was down this year. I don’t know if it was Covid or Mexico, but some combination led to a smaller crowd than usual. I saw that there was a mini-GUADEC in Berlin as well (which I’ll assume was a lagging indicator of the above and not a cause).

If we continue to have remote GUADECs in the future, I hope we can find a way to do a better way of connecting people across the conferences. One of the real advantages of GUADEC is it is a place for the project to unify and decide on big things collectively, and I’d hate for us to develop schisms in direction. I didn’t feel particularly connected to the folks in Berlin, which is a shame. I’d love to see them again.

Hopefully a few more of us can get together in Riga next year! But if we can’t, I hope we can find better ways to be more inclusive for remote events and individuals.

Crosswords BoF

Crosswords BoF at GUADEC

There was enough interest from the Crosswords talk that we were able to hold an impromptu BoF on Saturday. Seven people showed up to work on various aspects of the game. We tested the existing puzzles, wrote some new puzzles, and added some additional features. Great progress was made towards UTF-8 support for the WordList, and some initial Printing support was coded.

More importantly, we were able to clean up a bunch of small papercuts. We tweaked the colors and controls, and also added acrostic support. The end result is a lot of visual tweaks that improve the game’s playability and appearance.

Acrostic puzzle featuring the new colors and crossed-out clues

I took a bunch of these improvements and packaged them up into a new release. It’s now available on Flathub.

Thanks to Rosanna, Neil, Federico, Heather, Jess, Dave, and Caroline for their contributions!


Mexico was lovely as always. I’ve always wanted to visit Guadalajara, and it definitely met my (already high) expectations. It was a walkable city with lots of hidden spots and surprises in store. I’d love to go back sometime with a bit more time, and really explore it more thoroughly.

Guadalajara Cathedral and environs
Arcos Vallarta, near the venue
Agave field in Tequila, Jalisco
GNOME Birthday party at Bariachi – an amazing mariachi bar!
A cozy little bookstore
Guadalajara at night

August 07, 2022

GNOME 43 Wallpapers

GNOME 43 Wallpapers

Evolution and design can co-exist happily in the world of desktop wallpapers. It’s desirable to evolve within a set of constraints to create a theme over time and establish a visual brand that doesn’t rely on putting a logo on everything. At the same time it’s healthy to stop once in a while, reflect on what’s perhaps a little dated, and do a fresh redesign.

I took extra time this release to focus on refreshing the whole wallpaper set for 43. While the default wallpaper isn’t a big departure from the 3.38 hexagon theme, most of the supplemental wallpapers have been refreshed from the ground up. The video above shows a few glimpses of all the one-way streets it took for the default to land back in the hexagons.

GNOME 42 was the first release to ship a bunch of SVG based wallpapers that embraced the flat colors and gradients and benefited from being just a few kilobytes in file size. It was also the first release to ship dark preference variants. All of that continues into 43.

Pixels in Blender

Major change comes from addressing a wide range of display aspect ratios with one wallpaper: the 43 wallpapers should work just as well on your ultrawides as on portrait displays. We also rely on WebP as the file format, getting much better quality with a nice compression ratio (albeit lossy).

What’s still missing are photographic wallpapers captured under different lighting. Hopefully next release.

Blender’s geometry nodes is an amazing tool for generative art, yet I feel like I’ve already forgotten the small fraction of its possibilities that I learned during this cycle. Luckily there’s always the next release for some repetition. Thanks to everyone following my struggles on the Twitch streams.

The release is dedicated to Thomas Wood, a long time maintainer of all things visual in GNOME.

Unsettled by Unison’s Fadeaway from Fedora

This is in part a rallying cry for packagers, but also a story illustrating how fragile user workflows can be, and how some seemingly inconsequential decisions at the distro level can have disastrous consequences on the ability of individuals to continue running your FLOSS platform.

I’m hoping this 12-minute read will prove entertaining and insightful to you. This article is the third and (hopefully?!) last part of my Pulitzer-winning “Software upgrade treadmill” series. Go read part two if you haven’t already, I’ll wait. I guarantee you’re going to laugh much harder when reading the beginning of part three below.

The year is 2021. After the two-years war against the Machine, and the two-years in the software carbon freezer, I had finally been enjoying two years of blissful efficiency, where I could stay on top of the latest software versions with the Fedora Advanced Pragmatic Geek Workstation Operating System. Certainly I could continue like this unimpeded. As the protagonists said in la Cité de la peur, “Nothing bad can happen to us now 🎶”

But karma had other plans. As I wanted to upgrade from Fedora 33 to Fedora 34 (and newer) in the spring of 2021, I found out that Unison, another mission-critical app I’ve been depending on every day for the last 15+ years, had been orphaned from Fedora. Déjà-vu, anyone?

Pictured: Phil the groundhog was disapproving of my springtime plans.

Apparently, Unison was not meant to come back into Fedora until someone fixes it upstream, which I assumed to mean this, this and this.

The core issue for downstream packagers was that Unison was very fragile and incompatible with itself across different versions of OCaml, the language it is programmed in. Personally, for my own usecase, I don’t care about compatibility with “other distros’ versions of Unison” or even “across different releases of Fedora”—both are nice to have, but apparently hellish to manage—I am happy enough if I can already be compatible with myself, within the same version of Fedora with all my machines upgraded in lockstep. But when I started drafting this blog post at the end of 2021, I couldn’t even consider that option, because the package no longer existed in Fedora.

In practice, I was once more stranded on an old unsupported version of Fedora (in this case, F33) across all my personal computers, because of a single package that my daily life depends on.

F33 was declared dead and buried at the end of 2021, and as I would no longer be able to receive any updates for it, I spent the better part of the 2022 winter season on a system with wildly outdated packages (I was still on Firefox 95 until recently, for example). Yes, I did hear thousands of infosec voices cry out in terror as I wrote the above.

Here is what sitting in the cockpit of a Fedora battleworkstation feels like when the cord gets pulled again:

I couldn’t keep waiting in suspense for such a long time anymore with only the vague hope that maybe someday the situation might come to a resolution. I’ve been there, done that, got the t-shirt and the grey hair to prove it, as you’ve seen in part two of this trilogy. At some point, above-and-beyond loyalty becomes “acharnement thérapeutique” (therapeutic obstinacy), and you start questioning your life choices.

Back then I wrote this in my journal:

I could survive like that a little longer, but there comes a time where the unsupported, time-frozen Fedora 33 on my machines becomes untenable—too bitrotten or insecure to use—and I don’t know what I will do then. This might, very regretfully, force me to switch away from Fedora—which I had been using every day for over 11 years by now—to another Linux distribution entirely in 2022-2023, and I am really not looking forward to that distro-hopping nightmare, I would really like to avoid that outcome. None of the alternative distros/OSes excite me.

I want to keep using Fedora, but I’m backed against the wall here, and unless someone repackages Unison as a flatpak, or some new folks fixes the issues upstream so that it can be repackaged in Fedora, then I eventually will have to abandon Fedora Workstation as a platform for myself—and doing so for one remaining damned application breaks my heart.

– Me, some months ago

Throughout the winter, I spent countless hours researching and discussing this with potential collaborators (I did do some lobbying to encourage folks to review and merge the existing work too, I think). I believe I have rewritten this blog post at least four times as the situation kept evolving quite a bit during the span of two months this winter:

  • Back when I originally drafted this blog post near the end of January 2022, the situation looked quite bleak: there was an attempt at doing an OCaml-agnostic version of Unison in late 2020, but it seemed to have stalled for over a year, so I thought the situation might take years to “naturally resolve” in the then-current state of things;
  • Unison’s compatibility issues arguably were both an upstream and downstream problem: if it bites Fedora, it’s likely to bite other distros at some point in time, and the impact of that on users is quite significant. Unison is this kind of non-glorious software that has quietly been used by many people around the world for decades, and I can’t possibly be the only one out there using it regularly (even Debian’s incomplete opt-in statistics show thousands of users);
  • Hubert Figuière valiantly made a first attempt at flatpaking OCaml + Unison as a personal challenge, but the effort stalled (I presume because of technical difficulties of some kind). I’m not even sure it is possible for Unison to work in a flatpak, this remains an open theoretical question (see further below).
  • Thankfully, sometime during the late winter of 2022, in the upstream Unison project, the code was reviewed, merged and bugfixed, so those three upstream issues (linked above near the start of this article) have been resolved, leading to the highest activity level in years (if not decades), and a new release bundling all those improvements. The ball is now back into the court of the Linux distros.
  • Later on, I discovered a third-party repository that provided packages for Fedora (see further below), which would allow me to keep my systems working.

So nowadays we are missing two things: someone to package Unison back into the main Fedora repositories, and ideally someone to package Unison as a flatpak (if that is even possible) in Flathub as well.

Help needed to bring back the Unison packages in Fedora

In recent months, I discovered a third-party Unison repository for Fedora by Chris Roadfeldt, where he packages the latest version of Unison. This is the main reason why I’m not running an insecure operating system right now. I asked why this was in a COPR, rather than Fedora’s main repositories, and I was told this:

“I package unison for fedora for my own reasons, which I suspect are like yours, I use it and use it often. I would not be opposed to submitting my edits to the spec file to the fedora main repo, after all it was fork from there years ago.
There is a decent amount of work needed to clean up the package for general consumption and maintainability though, and I am not in a position to do that right now. For the time being I will continue to maintain my copr repo and let others make use of it as they want. Should it be of use in the main fedora repo, the fedora project is welcome to include it.” […]

Looks like we’re already “mostly there”. If you are an experienced Fedora/RPM packager, your help would be very welcome in mainstreaming this (or reviving the previous packages from the main repos). Will you pick up that guitar?

Could there be a Flatpak package for Unison too?

Currently, there is no such thing as a Unison flatpak package. Upstream does not want to maintain or support packages (their policy is to only ship source code), and I don’t even know if it is conceptually possible for a networked system app like unison—which must act as a client and server, commandline and GUI—to work as a flatpak.

The way Unison works is… sorcery. It somehow requires Unison to be installed on both sides (historically with the exact same versions of everything, otherwise things tend to break, but with the 2.52 release this problem may finally be gone), yet it does not run as a system service/daemon, so… how does it behave like a client & server without being a server daemon listening? 🤔 I don’t know how such arcane magic works. But it sure brings up the question: is it actually possible for this to work while inside Flatpak on both sides? I have no idea. I would love it if someone proves it is possible and publishes it on Flathub, thereby solving this interoperability madness once and for all, across all Linux distros (Fedora is not the only one struggling with this; Debian has also frequently been encountering this problem, and therefore so have all its derivatives)… though arguably, with the new OCaml-version-independent line of Unison releases (2.52.x), this may no longer be an issue (I hope).

“Why are you blogging about this instead of fixing it yourself?”

There are a couple of reasons. In no particular order:

  • I think it’s interesting to raise awareness about these types of struggles users face, and how they can influence the choice of a platform. I was this close to leaving Fedora, and yet I’m someone with inordinate amounts of patience and geekiness; that should tell you something. Furthermore, I would argue that general users shouldn’t have to learn code surgery and packaging (or containers or some other exotic OS mechanisms) to be able to continue using their everyday productivity software. To have mass appeal in the market, Linux-based OSes need to be able to solve everyday productivity problems for users who are not necessarily developers or packagers.
  • I don’t have the technical skill required at all; I don’t know OCaml or any of that fancy advanced coding they use in Unison, nor do I have the time/skill/inclination to become a packager of any kind. It is really not supposed to be my area of expertise & focus as a management & marketing professional.
  • I really don’t have the time for hands-on deep technical wrangling anymore, other than filing bug reports; even just keeping on top of the thousands of bug reports I have filed in recent years (in the GNOME & freedesktop stack) is challenging enough, as the picture below illustrates:
Pictured: my “bug reports” email notifications/comments folder. This does not include the GTG or Pitivi bug reports, which are filtered into separate folders.

After all, one of the reasons I haven’t blogged much in recent years is that I am trying to build and scale not one, not two, but three businesses. Good bug reports take time and effort, and beyond testing & feedback, I can’t be fixing every piece of software myself at the same time. The reason I have spent much time and effort writing this longform article here is because I find it interesting as a case study on the vulnerability of user workflows to modern operating systems’ fast-paced changes.

“Why don’t you just use The SyncThing?”

Syncthing is not solving the same problem as Unison—and never will—as the Syncthing developers say themselves in various discussion/support threads (for example here, here, here, etc.).

I do love and admire Syncthing; it is the best of breed in distributed continuous file synchronization (according to my own casual testing and the many recommendations I’ve heard from people around me). Back in the 2005-2010 era, I would have killed to have this kind of software available to me. Syncthing has, since 2013, been fulfilling the promise of the long-lost and buried peer-to-peer version of iFolder (from the Novell Linux desktop era) that I was never able to run except on some of the very early versions of Ubuntu (4.10 to 5.04, maybe?). I wish someone had told me about Syncthing’s existence earlier than 2021! This month, I have successfully deployed it for a friend of mine who was struggling with horrible proprietary slow-synchronizing NAS bullshit.

Syncthing is a technological marvel, and I’d want to use it, but I can’t, at least not for my most important files and application data. My needs have evolved, especially as I want to be able to selectively sync my Liferea data folder, my GTG data folder, my LibreOffice profiles, GNote, SSH configuration files, fonts, critical business documents, and all that while my home partitions/drives are always 99.9% full:

Through some miracle of stupendous efficiency in digital packratting, I am still using the same 2TB home drive from 2012 and can just barely fit everything in it. This is criminal, I know.

There is no other open-source alternative to Unison that is not only peer-to-peer and resource-efficient, but also based on the concept of manually controlled batches (you can review/approve/override each file/directory being synchronized). I am not the only one out there to have evaluated Syncthing and gone right back to Unison. It gives you the fine-grained surgical precision you cannot get with other software.

There are many cases where Unison, with its ability to let me override the direction of a sync for any file or folder, has saved my ass, letting me revert mistakes from another computer.

“Sounds like a lot of community management work. Why don’t you just switch distros?”

Pretty much no other distro does exactly what Fedora does (and listing the possible alternative distros here would be beyond the scope of this blog post).

For me, the whole point of using a Linux distribution like Fedora is for it to “Just Work™” and for me to move on with other problems in my life, while still being able to run the latest versions of software so that I am not burdening developers with outdated bugs when reporting issues (so no, unless I decide to stop contributing to GNOME, I cannot really run Debian Stable, Ubuntu LTS, or elementary, sorry!). Under normal circumstances, Fedora provides the perfect “Goldilocks” balance between freshness and stability. If I wanted to constantly break and fix my computer and to mess with packages, I would probably be running Slackware, Gentoo, Arch btw, or whatever übergeek distro du jour, but I don’t want that. I want simple things, and I want the perfect balance that Fedora had struck for me for the past ten years:

  • it provides the latest (and vanilla) versions of all software, while still being stable and tested enough to not constantly pull the rug from under my feet; I want the latest GNOME every six months (so that I don’t get yelled at for using ancient software versions when reporting issues) but I want it pre-tested and not shipped on “release day zero” (rolling distros scare the hell out of me in terms of productivity).
  • it does not force me to stay on top of 1000-1500 updates per week like you often see in rolling distros (my uptime on my laptop and desktop workstation averages 30 to 45 days), nor does it require ostree and BTRFS to have a semblance of safety (because those two bring problems of their own that I am not ready to accept yet);
  • it gives me the ability to wait 1-2 months after each Fedora release before upgrading, so that I can let Fedora early-upgraders (along with Arch and OpenSUSE Tumbleweed users) take the first wave of bugs in their face 😉

Conclusion to the Upgrade Trilogy

When I originally wrote the first few drafts of this third article in the winter, the jury was still out on whether I actually would have to switch away from Fedora, because, from a timing perspective, I assuredly would not be meddling with my computers in the winter (you do not mess with your productivity tools during income tax season). I vowed to leave the problem on ice until spring/summer at the earliest, and thought, “Perhaps in the meantime someone will have come up with a solution by then.”, but not knowing if I would still be able to use my favorite operating system six months down the road was a big concern of mine.

Other than writing this educational editorial piece, I did not know what to do while I was sitting again on a ticking timebomb. As a user, I felt trapped and cornered, and that’s speaking as someone who has been around the freedesktop software platform for 19 years and knows how to fix most common Linux desktop usage problems! Imagine what happens when users are not so unreasonably persistent. What is the path forward for users left between a rock and a hard place?

With this trilogy, you now have a poignantly detailed picture of how rough of a ride our FLOSS platforms can be when a seemingly “tiny piece” of the system breaks down. I hope you have enjoyed my writing and found it insightful in some way.

If you’ve read through the whole three parts of this series, I commend you for your dedication to reading my epics. Feel free to share this trilogy or leave a comment so that I can give you a high-five whenever we meet someday.

coarse or lazy?

sweeping, coarse and lazy

One of the things that had perplexed me about the Immix collector was how to effectively defragment the heap via evacuation while keeping just 2-3% of space as free blocks for an evacuation reserve. The original Immix paper states:

To evacuate the object, the collector uses the same allocator as the mutator, continuing allocation right where the mutator left off. Once it exhausts any unused recyclable blocks, it uses any completely free blocks. By default, immix sets aside a small number of free blocks that it never returns to the global allocator and only ever uses for evacuating. This headroom eases defragmentation and is counted against immix's overall heap budget. By default immix reserves 2.5% of the heap as compaction headroom, but [...] is fairly insensitive to values ranging between 1 and 3%.

To Immix, a "recyclable" block is partially full: it contains surviving data from a previous collection, but also some holes in which to allocate. But when would you have recyclable blocks at evacuation-time? Evacuation occurs as part of collection. Collection usually occurs when there's no more memory in which to allocate. At that point any recyclable block would have been allocated into already, and won't become recyclable again until the next trace of the heap identifies the block's surviving data. Of course after the next trace they could become "empty", if no object survives, or "full", if all lines have survivor objects.

In general, after a full allocation cycle, you don't know much about the heap. If you could easily know where the live data and the holes were, a garbage collector's job would be much easier :) Any algorithm that starts from the assumption that you know where the holes are can't be used before a heap trace. So, I was not sure what the Immix paper means here about allocating into recyclable blocks.

Thinking on it again, I realized that Immix might trigger collection early sometimes, before it has exhausted the previous cycle's set of blocks in which to allocate. As we discussed earlier, there is a case in which you might want to trigger an early compaction: when a large object allocator runs out of blocks to decommission from the immix space. And if one evacuating collection didn't yield enough free blocks, you might trigger the next one early, reserving some recyclable and empty blocks as evacuation targets.

when do you know what you know: lazy and eager

Consider a basic question, such as "how many bytes in the heap are used by live objects". In general you don't know! Indeed you often never know precisely. For example, concurrent collectors often have some amount of "floating garbage" which is unreachable data but which survives across a collection. And of course you don't know the difference between floating garbage and precious data: if you did, you would have collected the garbage.

Even the idea of "when" is tricky in systems that allow parallel mutator threads. Unless the program has a total ordering of mutations of the object graph, there's no one timeline with respect to which you can measure the heap. Still, Immix is a stop-the-world collector, and since such collectors synchronously trace the heap while mutators are stopped, these are times when you can exactly compute properties about the heap.

Let's return to the question of measuring live bytes. For an evacuating semi-space, knowing the number of live bytes after a collection is trivial: all survivors are packed into to-space. But for a mark-sweep space, you would have to compute this information. You could compute it at mark-time, while tracing the graph, but doing so takes time, which means delaying the time at which mutators can start again.

Alternately, for a mark-sweep collector, you can compute free bytes at sweep-time. This is the phase in which you go through the whole heap and return any space that wasn't marked in the last collection to the allocator, allowing it to be used for fresh allocations. This is the point in the garbage collection cycle in which you can answer questions such as "what is the set of recyclable blocks": you know what is garbage and you know what is not.

Though you could sweep during the stop-the-world pause, you don't have to; sweeping only touches dead objects, so it is correct to allow mutators to continue and then sweep as the mutators run. There are two general strategies: spawn a thread that sweeps as fast as it can (concurrent sweeping), or make mutators sweep as needed, just before they allocate (lazy sweeping). But this introduces a lag between when you know and what you know—your count of total live heap bytes describes a time in the past, not the present, because mutators have moved on since then.

For most collectors with a sweep phase, deciding between eager (during the stop-the-world phase) and deferred (concurrent or lazy) sweeping is very easy. You don't immediately need the information that sweeping allows you to compute; it's quite sufficient to wait until the next cycle. Moving work out of the stop-the-world phase is a win for mutator responsiveness (latency). Usually people implement lazy sweeping, as it is naturally incremental with the mutator, naturally parallel for parallel mutators, and any sweeping overhead due to cache misses can be mitigated by immediately using swept space for allocation. The case for concurrent sweeping is less clear to me, but if you have cores that would otherwise be idle, sure.

eager coarse sweeping

Immix is interesting in that it chooses to sweep eagerly, during the stop-the-world phase. Instead of sweeping irregularly-sized objects, however, it sweeps over its "line mark" array: one byte for each 128-byte "line" in the mark space. For 32 kB blocks, there will be 256 bytes per block, and line mark bytes in each 4 MB slab of the heap are packed contiguously. Therefore you get relatively good locality, but this just mitigates a cost that other collectors don't have to pay. So what does eager sweeping over these coarse 128-byte regions buy Immix?

Firstly, eager sweeping buys you eager identification of empty blocks. If your large object space needs to steal blocks from the mark space, but the mark space doesn't have enough empties, it can just trigger collection and then it knows if enough blocks are available. If no blocks are available, you can grow the heap or signal out-of-memory. If the lospace (large object space) runs out of blocks before the mark space has used all recyclable blocks, that's no problem: evacuation can move the survivors of fragmented blocks into these recyclable blocks, which have also already been identified by the eager coarse sweep.

Without eager empty block identification, if the lospace runs out of blocks, firstly you don't know how many empty blocks the mark space has. Sweeping is a kind of wavefront that moves through the whole heap; empty blocks behind the wavefront will be identified, but those ahead of the wavefront will not. Such a lospace allocation would then have to either wait for a concurrent sweeper to advance, or perform some lazy sweeping work. The expected latency of a lospace allocation would thus be higher, without eager identification of empty blocks.
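To make the mechanism concrete, here is a toy sketch of the eager coarse sweep (in Python, nothing like the real implementation): one pass over a block's 256 line-mark bytes classifies the block as empty, recyclable, or full, and records its holes along the way.

```python
# Toy model of an Immix-style eager sweep over one block's line marks.
# 32 kB blocks with 128-byte lines give 256 mark bytes per block.
LINE_SIZE = 128
BLOCK_SIZE = 32 * 1024
LINES_PER_BLOCK = BLOCK_SIZE // LINE_SIZE  # 256

def sweep_block(line_marks):
    """line_marks: sequence of LINES_PER_BLOCK bytes; nonzero = line marked.

    Returns the block's status and its holes as (first_line, line_count)
    runs of unmarked lines, which is exactly what the allocator needs."""
    holes = []
    start = None
    for i, mark in enumerate(line_marks):
        if not mark:
            if start is None:
                start = i  # a new hole begins
        elif start is not None:
            holes.append((start, i - start))
            start = None
    if start is not None:
        holes.append((start, LINES_PER_BLOCK - start))

    if holes == [(0, LINES_PER_BLOCK)]:
        status = "empty"       # candidate for the lospace or heap shrinking
    elif holes:
        status = "recyclable"  # candidate allocation/evacuation target
    else:
        status = "full"
    return status, holes
```

Eager identification of empties is then just this pass over the contiguous, cache-friendly mark bytes for every block, done before mutators restart.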

Secondly, eager sweeping might reduce allocation overhead for mutators. If allocation just has to identify holes and not compute information or decide on what to do with a block, maybe it go brr? Not sure.

lines, lines, lines

The original Immix paper also notes a relative insensitivity of the collector to line size: 64 or 256 bytes could have worked just as well. This was a somewhat surprising result to me but I think I didn't appreciate all the roles that lines play in Immix.

Obviously line size affects the worst-case fragmentation, though this is mitigated by evacuation (which evacuates objects, not lines). This I got from the paper. In this case, smaller lines are better.

Line size affects allocation-time overhead for mutators, though which way I don't know: scanning for holes will be easier with fewer lines in a block, but smaller lines would recover more free space and thus result in fewer collections. I can only imagine though that with smaller line sizes, average hole size would decrease and thus medium-sized allocations would be harder to service. Something of a wash, perhaps.

However, consider a thought experiment: why not just have 16-byte lines? How crazy would that be? I think the impediment to having such a precise line size would mainly be Immix's eager sweep, as a fine-grained traversal of the heap would process much more data and incur possibly-unacceptable pause time overheads. But, in such a design you would do away with some other downsides of coarse-grained lines: a side table of mark bytes would make the line mark table redundant, and you would eliminate much of the possible "dark matter" hidden by internal fragmentation in lines. You'd need to defer sweeping. But then you lose eager identification of empty blocks, and perhaps also the ability to evacuate into recyclable blocks. What would such a system look like?
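To put a rough number on the "dark matter" point, here is a back-of-the-envelope calculation with an invented object layout: bytes covered by marked lines but not occupied by live objects, at 128-byte versus 16-byte lines.

```python
# "Dark matter" = bytes in marked lines minus actual live bytes.
# The object layout is made up purely for illustration.

def dark_matter(objects, line_size):
    """objects: list of (offset, size) pairs of live objects in a block."""
    marked_lines = set()
    for offset, size in objects:
        first = offset // line_size
        last = (offset + size - 1) // line_size
        marked_lines.update(range(first, last + 1))
    live_bytes = sum(size for _, size in objects)
    return len(marked_lines) * line_size - live_bytes

objects = [(0, 24), (200, 100), (1000, 48)]  # 172 live bytes total
print(dark_matter(objects, 128))  # 468 bytes hidden with 128-byte lines
print(dark_matter(objects, 16))   # 36 bytes hidden with 16-byte lines
```

With these (arbitrary) placements, coarse lines hide more than twice the actual live data as internal fragmentation, while 16-byte lines waste almost nothing.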

Readers that have gotten this far will be pleased to hear that I have made some investigations in this area. But, this post is already long, so let's revisit this in another dispatch. Until then, happy allocations in all regions.

August 05, 2022

Fitting Endless OS images on small disks

Last week I read Jorge Castro’s article On “Wasting disk space” with interest, and not only because it cites one of my own articles 😉. Jorge encouraged me to write up the conversation we had off the back of it, so here we go!

People like to fixate on the disk space used by installing a calculator app as a Flatpak when you don’t have any other Flatpak apps installed. For example, on my system GNOME Calculator takes up 9.3 MB for itself, plus 803.1 MB for the GNOME 42 runtime it depends on. Regular readers will not be surprised when I say that that 803.1 MB figure looks rather different when you realise that Calculator is just one of 70 apps on my system that use that runtime; 11.5 MB of runtime per app feels a lot more reasonable.
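The amortization is just division, but it is worth spelling out, since the headline number and the per-app number differ by two orders of magnitude:

```python
# The 803.1 MB GNOME 42 runtime is shared by all 70 apps that use it,
# so the per-app share of the runtime is far smaller than the headline.
runtime_mb = 803.1
apps_sharing = 70
per_app = runtime_mb / apps_sharing
print(round(per_app, 1))  # 11.5 MB of runtime per app
```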

But I do have one app installed which depends on the GNOME 3.34 runtime, which has been unsupported since August 2020, and the GNOME 3.34 runtime only shares 102 MB of its files with the GNOME 42 runtime, leaving 769 MB installed solely for this one 11 MB app. Not such a big deal on my laptop with a half-terabyte drive, but this gets to a point Jorge makes near the end of his article:

Yes, they take up more room, but it’s not by much, and unless you’re on an extremely size-restrained system (like say a 64GB Chromebook you are repurposing) then for most systems it’s a wash.

Part of the insight behind Endless OS is that it can be more practical and cost-effective to fill a large hard disk with apps and educational resources than to get access to high-bandwidth connectivity in remote or disadvantaged communities, and the devices we and our deployment partners use generally have plentiful storage. But inexpensive, lower-end devices with 64 GB storage are still quite common, and having runtimes installed for one or two apps consumes valuable space that could be used for more offline content. So this can be a real problem for us, if a small handful of apps are bloating an image to the point where it doesn’t fit, or leaves little space for documents & updates.

So, our OS image builder has a mode to tabulate the apps that will be preinstalled in a given image configuration, grouped by the runtime they use, with their approximate sizes. I added this mode when I was trying to make an Endless OS ISO fit on a 4.7 GB DVD a few years back, packing in as much content as possible. It’s a bit rough and ready, and tends to overestimate the disk space needed (because it does not take into account deduplication of identical files between different apps and runtimes, or installing only a subset of translations for an app or runtime) but it is still useful to get a sense of where the space is going.
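The gist of such a tabulation mode, heavily simplified (the app IDs, runtimes and sizes below are invented for illustration; the real tool works from actual image configurations and installed sizes):

```python
from collections import defaultdict

# Hypothetical preinstalled apps: (app ID, runtime it uses, app size in MB).
apps = [
    ("org.example.Calculator", "org.gnome.Platform//42", 9),
    ("org.example.TurtleApp", "org.gnome.Platform//3.34", 11),
    ("org.example.Editor", "org.gnome.Platform//42", 25),
]
# Hypothetical installed sizes of each runtime, in MB.
runtime_sizes = {
    "org.gnome.Platform//42": 803,
    "org.gnome.Platform//3.34": 769,
}

# Group apps by the runtime they pull in, then total up the cost of
# each runtime plus the apps that depend on it.
by_runtime = defaultdict(list)
for app, runtime, size in apps:
    by_runtime[runtime].append((app, size))

for runtime, members in sorted(by_runtime.items()):
    total = runtime_sizes[runtime] + sum(size for _, size in members)
    print(f"{runtime}: {len(members)} app(s), ~{total} MB total")
```

A report like this makes the pathological case jump out immediately: a runtime that is installed for the sake of a single small app.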

I ran this for the downloadable English configuration of Endless OS earlier today. I’ll spare you the full output here but if you are curious it’s in this gist. It shows that, like on my system, TurtleBlocks is the only reason the GNOME 3.34 runtime is preinstalled, and so removing that app or updating it to a more modern runtime would probably save somewhere between 1 GB and 2 GB in the image! Normally after seeing this I would go and see if I can take a few minutes to send an update to the app, but in this case someone has beaten me to it. (If you are a Python expert interested in block-based programming tools for kids, why not lend a hand getting this over the line?) It also shows that a couple of our unmaintained and sadly closed-source first-party apps, Resumé and My Budget, are stuck on an even more prehistoric runtime.

Running it on a deployment partner’s custom image that they reported is too big for the 64 GB target hardware showed that the Othman Quran browser is also using an older runtime; once more, someone else has already noticed in the last couple of days, and if you are an expert on GTK’s font selection your input would be welcome on that pull request.

This tool is how it came to pass that I updated the runtimes of gbrainy and Genius, two apps I have never used, last month, and Klavaro back in May, among others.

There tends to be a flurry of community activity every 6-12 months to update all the apps that depend on newly end-of-lifed runtimes, and many of these turn out not to be rocket science, though it is not (yet!) something that flatpak-external-data-checker can do automatically. Then we are left with a long tail of more lightly-maintained apps where the update is more cumbersome to do, as in the case of gbrainy, where it took a bit of staring at the unfamiliar error messages produced by the Mono C# compiler to figure out where the problem might lie. If you are good at debugging build failures, this kind of thing is a great way to contribute to Flathub; and if someone can remind me of the URL of the live-updating TODO list of apps on obsolete runtimes I once saw, I’ll add the link here.

While I’ve focused on one of the problems that apps depending on obsolete runtimes can cause, it’s not all bad news. If you really love, or need, an app that is abandoned or has not been updated in a while, the Flatpak model means you can still install & use that app and its old runtime without your distribution having to keep around some obsolete version of the libraries, or you having to stay on an old version of the distro. As and when that app does get an update, the unused and end-of-lifed runtime on your system will be uninstalled automatically by modern versions of Flatpak.

Mini-GUADEC 2022 Berlin: retrospective

I’m really pleased with how the mini-GUADEC in Berlin turned out. We had a really productive conference, with various collaborations speeding up progress on Files, display colour handling, Shell, adaptive apps, and a host of other things. We watched and gave talks, and that seemed to work really well. The conference ran from 15:00 to 22:00 in Berlin time, and breaks in the schedule matched when people got hungry in Berlin, so I’d say the timings worked nicely. It helped to be in a city where things are open late.

c-base provided us with a cool inside space, a nice outdoor seating area next to the river, reliable internet, quality A/V equipment and support for using it, and a big beamer for watching talks. They also had a bar open later in the day, and there were several food options nearby.

At least from our end, GUADEC 2022 managed to be an effective hybrid conference. Hopefully people in Guadalajara had a similarly good experience?

Tobias and I spent a productive half a day working through a load of UI papercuts in GNOME Software, closing a number of open issues, including some where we’d failed to make progress for months. The benefits of in-person discussion!

Sadly despite organising the mini-GUADEC, Sonny couldn’t join us due to catching COVID. So far it looks like others avoided getting ill.


Allan wrote up how he got to Berlin, for general reference and posterity, so I should do the same.

I took the train from north-west England to London one evening and stayed the night with friends in London. This would normally have worked fine, but that was the second-hottest day of the heatwave, and the UK’s rails aren’t designed for air temperatures above 30°C. So the train was 2.5 hours delayed. Thankfully I had time in the plan to accommodate this.

The following morning, I took the 09:01 Eurostar to Brussels, and then an ICE at 14:25 to Berlin (via Köln). This worked well — rails on the continent are designed for higher temperatures than those in the UK.

The journey was the same in reverse, leaving Berlin at 08:34 in time for a 18:52 Eurostar. It should have been possible to then get the last train from London to the north-west of England on the same day, but in the end I changed plans and visited friends near London for the weekend.

I took 2 litres of water with me each way, and grabbed some food beforehand and at Köln, rather than trying to get food on the train. This worked well.

Within Berlin, I used a single €9 Monatskarte for all travel. This is an amazing policy by the German government, and subjectively it seemed like it was being widely used. It would be interesting to see how it has affected car usage vs public transport usage over several months.


Overall, I estimate the return train trip to Berlin emitted 52kgCO2e, compared to 2610kgCO2e from flying Manchester to Guadalajara (via Houston). That’s an impact 50× lower. 52kgCO2e is about the same emissions as 2 weeks of a vegetarian diet; 2610kgCO2e is about the same as an entire year of eating a meat-heavy diet.

(Train emissions calculated one-way as 14.8kgCO2e to London, 4.3kgCO2e to Brussels, 6.5kgCO2e to Berlin.)
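For anyone wanting to check the arithmetic, the quoted figures line up (modulo rounding):

```python
# Per-leg one-way train emissions from the post, doubled for the return
# trip, compared against the quoted flight figure.
legs_kg = [14.8, 4.3, 6.5]  # to London, to Brussels, to Berlin
train_round_trip = 2 * sum(legs_kg)      # 51.2, quoted as ~52 kgCO2e
flight = 2610                            # Manchester-Guadalajara via Houston
print(round(flight / train_round_trip))  # ~51, quoted as ~50x lower
```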

Tobias gave an impactful talk on climate action, and one of his key points was that significant change can now only happen as a result of government policy changes. Individual consumer choices can’t easily bring about the systemic change needed to prevent new oil and coal extraction, trigger modal shift in transport use, or rethink land allocation to provide sufficient food while allowing rewilding.

That’s very true. One of the exceptions, though, is flying: the choices each of the ~20 people at mini-GUADEC made resulted in not emitting up to 50 tonnes of CO2e in flights. That’s because flights each have a significant emissions cost, and are largely avoidable. (Doing emissions calculations for counterfactuals is a slippery business, but hopefully the 50 tonne figure is illustrative even if it can’t be precise.)

So it’s pretty excellent that the GNOME community supports satellite conferences, and I strongly hope this is something which we can continue to do for our big conferences in future.


After the conference, I had a few days in Berlin. On the recommendation of Zeeshan, I spent a day in the Berlin technical museum, and another day exploring several of the palaces at Potsdam.

It’s easy to spend an entire day at the technical museum. One of the train sheds was closed while I was there, which is a shame, but at least that freed up a few hours which I could spend looking at the printing and the jewellery making exhibits.

One of the nice things about the technical museum is that their displays of old machinery are largely functional: they regularly run demonstrations of entire paper making processes or linotype printing using the original machinery. In most other technical museums I’ve been to, the functioning equipment is limited to a steam engine or two and everything else is a static display.

The palaces in Potsdam were impressive, and look like a maintenance nightmare. In particular, the Grotto Hall in the Neues Palais was one of the most fantastical rooms I’ve ever seen. It’s quite a ridiculous display of wealth from the 18th century. The whole of Sanssouci Park made another nice day out, though taking a picnic would have been a good idea.


Thanks again to everyone who organised GUADEC in Guadalajara, Sonny and Tobias for organising the mini-GUADEC, the people at c-base for hosting us and providing A/V support, and the GNOME Foundation for sponsoring several of us to go to mini-GUADEC.

Sponsored by GNOME Foundation

August 04, 2022

Aarch64 for GNOME Nightly apps

We have had aarch64 builds of the runtime since the very early days of Flatpak (long before Flathub), and you could manually build your applications for aarch64 natively or by using qemu. Now you will also be able to download aarch64 builds of GNOME applications straight from the Nightly repository, so all 3 of you out there with such machines can finally rejoice.

The person mostly responsible for this is my good friend Julian Sparber, who got around to sorting through all the infrastructure needed and baited me into fixing the edge cases. Special thanks also to Bart for taking care of the GitLab Runners as usual.

We’ve also updated the CI guide to include the aarch64 builds, here is an example Merge Request for gnome-weather. In short this is what you need to have in your .gitlab-ci.yml to test and push the builds into the repository.

include: ''

.vars-devel:
  variables:
    MANIFEST_PATH: "build-aux/flatpak/org.gnome.NautilusDevel.yml"
    FLATPAK_MODULE: "nautilus"
    APP_ID: "org.gnome.NautilusDevel"
    BUNDLE: "nautilus-dev.flatpak"

flatpak@x86_64:
  extends: ['.flatpak@x86_64', '.vars-devel']

flatpak@aarch64:
  extends: ['.flatpak@aarch64', '.vars-devel']

nightly@x86_64:
  extends: '.publish_nightly'
  needs: ['flatpak@x86_64']

nightly@aarch64:
  extends: '.publish_nightly'
  needs: ['flatpak@aarch64']

The main difference from the existing x86_64 build is the template job you extend, as well as the needs: of the added nightly job.

And that’s it. Enjoy!

Toolbx @ Community Central

At 15:00 UTC today, I will be talking about Toolbx on a new episode of Community Central. It will be broadcast live on BlueJeans Events (formerly Primetime) and the recording will be available on YouTube. I am looking forward to seeing some friendly faces in the audience.

August 03, 2022

Paying technical debt in our accessibility infrastructure - Transcript from my GUADEC talk

At GUADEC 2022 in Guadalajara I gave a talk, Paying technical debt in our accessibility infrastructure. This is a transcript for that talk.

Talk video on YouTube

The video for the talk starts at 2:25:06 and ends at 3:07:18; you can click on the image above and it will take you to the correct timestamp.

Title page of the presentation, with my email and Mastodon accounts

Hi there! I'm Federico Mena Quintero, pronouns he/him. I have been working on GNOME since its beginning. Over the years, our accessibility infrastructure has acquired a lot of technical debt, and I would like to help with that.

Photo of sidewalk with a strip of tactile paving; some tiles are missing and it is generally dirty and unmaintained

For people who come to GUADEC from richer countries, you may have noticed that the sidewalks here are pretty rubbish. This is a photo from one of the sidewalks in my town. The city government decided to install a bit of tactile paving, for use by blind people with canes. But as you can see, some of the tiles are already missing. The whole thing feels lacking in maintenance and unloved. This is a metaphor for the state of accessibility in many places, including GNOME.

Diagram of the accessibility infrastructure in GNOME.  Description in text.

This is a diagram of GNOME's accessibility infrastructure, which is also the one used on Linux at large, regardless of desktop. Even KDE and other desktop environments use "atspi", the Assistive Technology Service Provider Interface.

The diagram shows the user-visible stuff at the top, and the infrastructure at the bottom. In subsequent slides I'll explain what each component does. In the diagram I have grouped things in vertical bands like this:

  • gnome-shell, GTK3, Firefox, and LibreOffice ("old toolkits") all use atk and atk-adaptor, to talk via DBus, to at-spi-registryd and assistive technologies like screen readers.

  • More modern toolkits like GTK4, Qt5, and WebKit talk DBus directly instead of going through atk's intermediary layer.

  • Orca and Accerciser (and Dogtail, which is not in the diagram) are the counterpart to the applications; they are the assistive tech that is used to perceive applications. They use libatspi and pyatspi2 to talk DBus, and to keep a representation of the accessible objects in apps.

  • Odilia is a newcomer; it is a screen reader written in Rust, that talks DBus directly.

The diagram has red bands to show where context switches happen when applications and screen readers communicate. For example, whenever something happens in gnome-shell, there is a context switch to dbus-daemon, and another context switch to Orca. The accessibility protocol is very chatty, with a lot of going back and forth, so these context switches probably add up — but we don't have profiling information just yet.

There are many layers of glue in the accessibility stack: atk, atk-adaptor, libatspi, pyatspi2, and dbus-daemon are things that we could probably remove. We'll explore that soon.

Now, let's look at each component separately.

Diagram of the communications path between gnome-shell and Orca.  Description in the text.

For simplicity, let's look just at the path of communication between gnome-shell and Orca. We'll have these components involved: gnome-shell, atk, atk-adaptor, dbus-daemon, libatspi, pyatspi2, and finally Orca.

Diagram highlighting just gnome-shell and atk.

Gnome-shell implements its own toolkit, St, which stands for "shell toolkit". It is made accessible by implementing the GObject interfaces in atk. To make a toolkit accessible means adding a way to extract information from it in a standard way; you don't want screen readers to have separate implementations for GTK, Qt, St, Firefox, etc. For every window, regardless of toolkit, you want to have a "list children" method. For every widget you want "get accessible name", so for a button it may tell you "OK button", and for an image it may tell you "thumbnail of file.jpg". For widgets that you can interact with, you want "list actions" and "run action X", so a button may present an "activate" action, and a check button may present a "toggle" action.
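As an illustration of the shape of those methods, here is a hypothetical Python sketch (the real interfaces are GObject interfaces in atk, such as AtkObject and AtkAction; none of these names are the actual API):

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of what a toolkit must implement to be accessible:
# a way to enumerate children, name widgets, and list/run actions.
class Accessible(ABC):
    @abstractmethod
    def get_accessible_name(self) -> str: ...

    @abstractmethod
    def list_children(self) -> list["Accessible"]: ...

class Actionable(ABC):
    @abstractmethod
    def list_actions(self) -> list[str]: ...

    @abstractmethod
    def run_action(self, name: str) -> None: ...

class OkButton(Accessible, Actionable):
    def get_accessible_name(self):
        return "OK button"          # what a screen reader would speak

    def list_children(self):
        return []                   # a button is a leaf widget

    def list_actions(self):
        return ["activate"]

    def run_action(self, name):
        if name == "activate":
            print("button activated")

btn = OkButton()
print(btn.get_accessible_name())
print(btn.list_actions())
```

The point is that a screen reader only needs to know these generic methods, never the internals of GTK, Qt, St, or Firefox.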

Diagram highlighting just atk and atk-adaptor.

However, ATK is just a set of abstract interfaces for the benefit of toolkits. We need a way to ship the information extracted from toolkits to assistive tech like screen readers. The atspi protocol is a set of DBus interfaces that an application must implement; atk-adaptor is an implementation of those DBus interfaces that works by calling atk's slightly different interfaces, which are in turn implemented by toolkits. Atk-adaptor also caches some of the things it has already asked the toolkit, so it doesn't have to ask again unless the toolkit notifies it about a change.

Does this seem like too much translation going on? It is! We will see the reasons behind that when we talk about how accessibility was implemented many years ago in GNOME.

Diagram highlighting just atk-adaptor, dbus-daemon, and libatspi.

So, atk-adaptor ships the information via the DBus daemon. What's on the other side? In the case of Orca it is libatspi, a hand-written binding to the DBus interfaces for accessibility. It also keeps an internal representation of the information that it got shipped from the toolkit. When Orca asks, "what's the name of this widget?", libatspi may already have that information cached. Of course, the first time it does that, it actually goes and asks the toolkit via DBus for that information.

Diagram highlighting just libatspi and pyatspi2.

But Orca is written in Python, and libatspi is a C library. Pyatspi2 is a Python binding for libatspi. Many years ago we didn't have an automatic way to create language bindings, so there is a hand-written "old API" implemented in terms of the "new API" that is auto-generated via GObject Introspection from libatspi.

Pyatspi2 also has a bit of logic which should probably not be there, but rather in Orca itself or in libatspi.

Diagram highlighting just pyatspi2 and finally Orca.

Finally we get to Orca. It is a screen reader written in 120,000 lines of Python; I was surprised to see how big it is! It uses the "old API" in pyatspi2.

Orca uses speech synthesis to read out loud the names of widgets, their available actions, and generally any information that widgets want to present to the user. It also implements hotkeys to navigate between elements in the user interface, or a "where am I" function that tells you where the current focus is in the widget hierarchy.

Screenshot of tweet by Sarah Mei; description in the text.

Sarah Mei tweeted "We think awful code is written by awful devs. But in reality, it's written by reasonable devs in awful circumstances."

What were those awful circumstances?

Timeline with some events relevant to GNOME's history, described in the text.

Here I want to show you some important events surrounding the infrastructure for development of GNOME.

We got a CVS server for revision control in 1997, and a Bugzilla bug tracker in 1998 when Netscape freed its source code.

Also around 1998, Tara Hernandez basically invented Continuous Integration while at Mozilla/Netscape, in the form of Tinderbox. It was a build server for Netscape Navigator in all its variations and platforms; they needed a way to ensure that the build was working on Windows, Mac, and about 7 flavors of Unix that still existed back then.

In 2001-2002, Sun Microsystems contributed the accessibility code for GNOME 2.0. See Emmanuele Bassi's talk from GUADEC 2020, "Archaeology of Accessibility" for a much more detailed description of that history (LWN article, talk video).

Sun Microsystems sold their products to big government customers, who often have requirements about accessibility in software. Sun's operating system for workstations used GNOME, so it needed to be accessible. They modeled the architecture of GNOME's accessibility code on what they already had working for Java's Swing toolkit. This is why GNOME's accessibility code is full of acronyms like atspi and atk, and vocabulary like adapters, interfaces, and factories.

Then in 2006, we moved from CVS to Subversion (svn).

Then in 2007, we get gtestutils, the unit testing framework in Glib. GNOME started in 1996; this means that for a full 11 years we did not have a standard infrastructure for writing tests!

Also, we did not have continuous integration nor continuous builds, nor reproducible environments in which to run those builds. Every developer was responsible for massaging their favorite distro into having the correct dependencies for compiling their programs, and running whatever manual tests they could on their code.

2008 comes and GNOME switches from svn to git.

In 2010-2011, Oracle acquires Sun Microsystems and fires all the people who were working on accessibility. GNOME ends up with approximately no one working on accessibility full-time, when it had about 10 people doing so before.

GNOME 3 happens, and the accessibility code has to be ported in emergency mode from CORBA to DBus.

GitHub appears in 2008, and Travis CI, probably the first generally-available CI infrastructure for free software, appears in 2011. GNOME of course is not developed there, but in its own self-hosted infrastructure (git and cgit back then, with no CI).

Jessie Frazelle invents usable containers in 2013-2015 (Docker). Finally there is a non-onerous way of getting a reproducible environment set up. Before that, who had the expertise to use Yocto to set up a chroot? In my mind, that seemed like a thing people used only if they were working on embedded systems.

But it is not until 2016 that rootless containers become available.

And it is only in 2018 that we get GitLab - a Git-based forge that makes it easy to contribute and review code, and have a continuous integration infrastructure. That's 21 years after GNOME started, and 16 years after accessibility first got implemented.

Before that, tooling is very primitive.

Typical state of legacy code: few tests which don't really work, no CI, no reproducible environment

In 2015 I took over the maintainership of librsvg, and in 2016 I started porting it to Rust. A couple of years later, we got GitLab, and Jordan Petridis and I added the initial CI. Years later, Dunja Lalic would make it awesome.

When I took over librsvg's maintainership, it had few tests which didn't really work, no CI, and no reproducible environment for compilation. The book by Michael Feathers, "Working Effectively with Legacy Code", defines legacy code as "code without tests".

When I started working on accessibility at the beginning of this year, it had few tests which didn't really work, no CI, and no reproducible environment.

Right now, Yelp, our help system, has few tests which don't really work, no CI, and no reproducible environment.

Gnome-session right now has few tests which don't really work, no CI, and no reproducible environment.

I think you can start to see a pattern here...

Git-of-theseus chart of gtk's lines of code over time

This is a chart generated by the git-of-theseus tool. It shows how many lines of code got added each year, and how much of that code remained or got displaced over time.

For a project with constant maintenance, like GTK, you get a pattern like in the chart above: the number of lines of code increases steadily, and older code gradually diminishes as it is replaced by newer code.

Git-of-theseus chart of librsvg's lines of code over time

For librsvg the picture is different. It was mostly unmaintained for a few years, so the code didn't change very much. But when it got gradually ported to Rust over the course of three or four years, the chart shows all the old code shrinking to zero while new code replaces it completely. That new code has constant maintenance, and it follows the same pattern as GTK's.

Git-of-theseus chart of Orca's lines of code over time

Orca is more or less the same as GTK, although with much slower replacement of old code. More accretion, less replacement. That big jump before 2012 is when it got ported from the old CORBA interfaces for accessibility to the new DBus ones.

Git-of-theseus chart of at-spi2-core's lines of code over time

This is an interesting chart for at-spi2-core. During the GNOME2 era, when accessibility was under constant maintenance, you can see the same "constant growth" pattern. Then there is a lot of removal and turmoil in 2009-2010 as DBus replaces CORBA, followed by quick growth early in the GNOME 3 era, and then just stagnation as the accessibility team disappeared.

How do we start fixing this?

First, add continuous integration

The first thing is to add continuous integration infrastructure (CI). Basically, tell a robot to compile the code and run the test suite every time you "git push".

I copied the initial CI pipeline from libgweather, because Emmanuele Bassi had recently updated it there, and it was full of good toys for keeping C code under control: static analysis, address sanitizer, code coverage reports, documentation generation. It was also a CI pipeline for a Meson-based project; Emmanuele had also ported most of the accessibility modules to Meson while he was working for the GNOME Foundation. Having libgweather's CI scripts as a reference was really valuable.

Later, I replaced that hand-written setup for a base Fedora container image with Freedesktop CI templates, which are AWESOME. I copied that setup from librsvg, where Jordan Petridis had introduced it.

Diagram of at-spi2-core's CI pipeline, explained in the text

The CI pipeline for at-spi2-core has five stages:

  • Build container images so that we can have a reproducible environment for compiling and running the tests.

  • Build the code and run the test suite.

  • Run static analysis, dynamic analysis, and get a test coverage report.

  • Generate the documentation.

  • Publish the documentation, and publish other things that end up as web pages.

Let's go through each stage in detail.

Build a reproducible environment, first stage

First we build a reproducible environment in which to compile the code and run the tests.

Using Freedesktop CI templates, we start with two base images for "empty distros": one for openSUSE (because that's what I use), and one for Fedora (because it provides a different build configuration).

CI templates are nice because they build the container, install the build dependencies, finalize the container image, and upload it to gitlab's container registry all in a single, automated step. The maintainer does not have to generate container images by hand in their own computer, nor upload them. The templates infrastructure is smart enough not to regenerate the images if they haven't changed between runs.

CI templates are very flexible. They can deal with containerized builds, or builds in virtual machines. They were developed by the libinput people, who need to test all sorts of varied configurations. Give them a try!
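A minimal sketch of that setup looks roughly like this (the distribution version, tag, package list, and job names here are invented for illustration; the `.fdo.*` template names and `FDO_*` variables come from the freedesktop/ci-templates project):

```yaml
include:
  - project: 'freedesktop/ci-templates'
    file: '/templates/fedora.yml'

variables:
  FDO_DISTRIBUTION_VERSION: '36'
  FDO_DISTRIBUTION_TAG: '2022-08-16.0'   # bump this to force an image rebuild

# Stage 1: build (and cache) the container image in GitLab's registry.
build-container:
  extends: '.fdo.container-build@fedora'
  stage: container
  variables:
    FDO_DISTRIBUTION_PACKAGES: 'gcc meson ninja-build dbus-daemon'

# Stage 2: compile and test inside the image built above.
build:
  extends: '.fdo.distribution-image@fedora'
  stage: build
  script:
    - meson setup _build
    - meson compile -C _build
    - meson test -C _build
```

If the package list and tag have not changed since the last run, the container stage reuses the cached image instead of rebuilding it.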

Compile and run tests, second stage

Basically, "meson setup", "meson compile", "meson install", "meson test", but with extra detail to account for the particular testing setup for the accessibility code.

One interesting thing is that, for example, openSUSE uses dbus-daemon for the accessibility bus, while Fedora uses dbus-broker instead.

The launcher for the accessibility bus thus has different code paths and configuration options for dbus-daemon vs. dbus-broker. We can test both configurations in the CI pipeline.

HELP WANTED: Unfortunately, the Fedora test job doesn't run the tests yet! This is because I haven't learned how to run that job in a VM instead of a container — dbus-broker for the session really wants to be launched by systemd, and it may just be easier to have a full systemd setup inside a VM rather than trying to run it inside a "normal" containerized job.

If you know how to work with VM jobs in Gitlab CI, we'd love a hand!

Modern compilers are awesome, third stage

The third stage is thanks to the awesomeness of modern compilers. The low-level accessibility infrastructure is written in C, so we need all the help we can get from our tools!

We run static analysis to catch many bugs at compilation time. Uninitialized variables, trivial memory leaks, that sort of thing.

Also, address-sanitizer. C is a memory unsafe language, so catching pointer mishaps early is really important. Address-sanitizer doesn't catch everything, but it is better than nothing.

Finally, a test coverage job, to see which lines of code managed to get executed while running the test suite. We'll talk a lot more about code coverage in the following slides.

Generate HTML and publish it

At least two jobs generate HTML and have to publish it: the documentation job, and the code coverage reports. So we do that, and publish the result with GitLab Pages. This gives us static web hosting inside GitLab, which makes things very easy.

Gru presentation meme: Add a CI pipeline

Adding a CI pipeline is really powerful. You can automate all the things you want in there. This means that your whole arsenal of tools to keep code under control can run all the time, instead of only when you remember to run each tool individually, and without requiring each project member to bother with setting up the tools themselves.

Gru presentation meme: Add lots of little tests

The original accessibility code was written before we had a culture of ubiquitous unit tests. Refactoring the code to make it testable makes it a lot better!

Gru presentation meme: Become the CI person, happy

In a way, it is rewarding to become the CI person for a project and learn how to make the robots do the boring stuff. It is very rewarding to see other project members start using the tools that you took care to set up for them, because then they don't have to do the same kind of setup.

Gru presentation meme: Become the CI person, sad

It is also kind of a pain in the ass to keep the CI updated. But it's the same as keeping any other basic infrastructure running: you cannot think of going back to live without it.

Screenshot of the coverage report for at-spi2-core, with 58% coverage

Now let's talk about code coverage.

A code coverage report tells you which lines of code have been executed, and which ones haven't, after running your code. When you get a code coverage report while running the test suite, you see which code is actually exercised by the tests.

Getting to 100% test coverage is very hard, and that's not a useful goal - full coverage does not indicate the absence of bugs. However, knowing which code is not tested yet is very useful!

Code that didn't use to have a good test suite often has many code paths that are untested. You can see this in at-spi2-core. Each row in that toplevel report is for a directory in the source tree, and tells you the percentage of lines within the directory that are executed as a result of running the test suite. If you click on one row, you get taken to a list of files, from which you can then select an individual file to examine.

Screenshot of the coverage report for librsvg, with 88% coverage

As you can see here, librsvg has more extensive test coverage. This is because over the last few years we have made sure that every code path gets exercised by the test suite. It's not at 100% yet (and there are bugs in the way we obtain coverage for the c_api, for example, which is why it shows up as almost uncovered), but it's getting there.

My goal is to make at-spi2-core's tests equally comprehensive.

Both at-spi2-core and librsvg use Mozilla's grcov tool to generate the coverage reports. Grcov can consume coverage data from LLVM, GCC, and others, and combine them into a single report.

Screenshot of the coverage report for glib, at 78%

Glib is a much more complex library, and it uses lcov instead of grcov. Lcov is an older tool, not as pretty, but still quite functional (in particular, it is very good at displaying branch coverage).

Portion of a coverage report for a C source file

This is what the coverage report looks like for a single C file. Lines that were executed are in green; lines that were not executed are in red. Lines in white are not instrumented, because they produce no executable code.

The first column is the line number; the second column is the number of times each line got executed. The third column is of course the code itself.

In this extract, you can see that all the lines in the impl_GetChildren() function got executed, but none of the lines in impl_GetIndexInParent() got executed. We may need to write a test that will cause the second function to get executed.

Python code with a basic test for a chunk of C code, and the coverage report

The accessibility code needs to process a bunch of properties in DBus objects. For example, the Python code at the top of the slide queries a set of properties, and compares them against their expected values.

At the bottom, there is the coverage report. The C code that handles each property is indeed executed, but the code for the error path, that handles an invalid property name, is not covered yet; it is color-coded red. Let's add a test for that!

Python code adds a test for the error path; coverage report is all green now

So, we add another test, this time for the error path in the C code. Ask for the value of an unknown property, and assert that we get the correct DBus exception back.

With that test in place, the C code that handles that error case is covered, and we are all green.

What I am doing here is to characterize the behavior of the DBus API, that is, to mirror its current behavior in the tests because that is the "known good" behavior. Then I can start refactoring the code with confidence that I won't break it, because the tests will catch changes in behavior.
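The idea of characterization tests can be shown with a tiny hypothetical sketch (the real code answers DBus property requests; these names are made up for illustration):

```python
# Hypothetical stand-in for a DBus property handler being characterized.
class UnknownPropertyError(Exception):
    pass

def get_property(name):
    props = {"Name": "registry", "Version": "2.0"}
    if name not in props:
        # The error path we want the tests to cover.
        raise UnknownPropertyError(name)
    return props[name]

# Characterize the happy path...
assert get_property("Name") == "registry"

# ...and the error path, so a refactor can't silently change either behavior.
try:
    get_property("NoSuchProperty")
    assert False, "expected an error"
except UnknownPropertyError as e:
    assert str(e) == "NoSuchProperty"

print("current behavior is pinned down")
```

Whatever the code does today - right or wrong - becomes the baseline the tests defend while the internals get rewritten.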

Coverage being shown along with Gitlab's diff view

By now you may be familiar with how Gitlab displays diffs in merge requests.

One somewhat hidden nugget is that you can also ask it to display the code coverage for each line as part of the diff view. Gitlab can display the coverage color-coding as a narrow gutter within the diff view.

This lets you answer the question, "this code changed, does it get executed by a test?". It also lets you catch code that changed but that is not yet exercised by the test suite. Maybe you can ask the submitter to add a test for it, or it can give you a clue on how to improve your testing strategy.

The incantation for coverage in Gitlab diffs

The trick to enable that is to use the artifacts:reports:coverage_report key in .gitlab-ci.yml. You have your tools create a coverage report in Cobertura XML format, and you give it to Gitlab as an artifact.

See the gitlab documentation on coverage reports for test coverage visualization.
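For example (job name, stage, and paths invented; the key that matters is `artifacts:reports:coverage_report`):

```yaml
coverage:
  stage: analyze
  script:
    - meson test -C _build
    - grcov _build --source-dir . --output-type cobertura --output-path coverage.xml
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml
```

Once the job uploads that report, GitLab overlays the per-line coverage on the diffs of merge requests against that branch.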

Making the coverage report's HTML accessible

When grcov outputs an HTML report, it creates something that looks and feels like a <table>, but which is not an HTML table. It is just a bunch of nested <div> elements with styles that make them look like a table.

I was worried about how to make it possible for people who use screen readers to quickly navigate a coverage report. As a sighted person, I can just look at the color-coding, but a blind person has to navigate each source line until they find one that was executed zero times.

Eitan Isaacson kindly explained the basics of ARIA tags to me, and suggested how to fix the bunch of <div> elements. First, give them roles like table, row, cell. This tells the browser that the elements are to be navigated and presented to accessibility tools as if they were in fact a <table>.

Then, generate an aria-label for each cell where the report shows the number of times a line of code was executed. For lines not executed, sighted people can see that this cell is just blank, but has color coding; for blind people the aria-label can be "no coverage" or "zero" instead, so that they can perceive that information.
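Sketched as markup, the fix looks something like this (an illustrative fragment, not grcov's actual output):

```html
<!-- The divs keep their table-like styling, but ARIA roles make them
     navigable as a real table, and blank color-coded cells get a label. -->
<div role="table" aria-label="Coverage report">
  <div role="row">
    <div role="cell">42</div>                          <!-- line number -->
    <div role="cell" aria-label="no coverage"></div>   <!-- executed 0 times -->
    <div role="cell">some_function (arg);</div>        <!-- the source line -->
  </div>
</div>
```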

We need to make our development tools accessible, too!

You can see the pull request to make grcov's HTML more accessible.

Bug in Gitlab's badges: they have no meaningful alt text

Speaking of making development tools accessible, Mike Gorse found a bug in how Gitlab shows its project badges. All of them have an alt text of "Project badge", so for someone who uses a screen reader, it is impossible to tell whether the badge is for a build pipeline, or a coverage report, etc. This is as bad as an unlabelled image.

You can see the bug about this in

Important detail to get code coverage information, described in the text

One important detail: if you want code coverage information, your processes must exit cleanly! If they die with a signal (SIGTERM, SIGSEGV, etc.), then no coverage information will be written for them, and it will look as if your code got executed zero times.

This is because gcc's and clang's runtime libraries write out the coverage info during program termination. If your program dies before main() exits, the runtime library won't have a chance to write the coverage report.
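The failure mode can be demonstrated with Python's atexit hook standing in for gcc's coverage runtime (an illustrative sketch, not the real gcov machinery):

```python
import os
import subprocess
import sys
import tempfile

# The child registers an exit hook that writes a file, just like the
# coverage runtime writes its data when main() returns.
CHILD = r'''
import atexit, os, signal, sys
path, mode = sys.argv[1], sys.argv[2]
atexit.register(lambda: open(path, "w").write("coverage data"))
if mode == "signal":
    os.kill(os.getpid(), signal.SIGTERM)  # dies before atexit can run
sys.exit(0)
'''

def coverage_written(mode):
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "coverage.out")
        subprocess.run([sys.executable, "-c", CHILD, path, mode])
        return os.path.exists(path)

print(coverage_written("clean"))   # True: clean exit, the exit hook ran
print(coverage_written("signal"))  # False: killed by SIGTERM, nothing written
```

The process that dies from a signal leaves no trace, exactly like a daemon that gets killed before it can dump its .gcda files.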

Fixing coverage information for the accessibility daemons

During a normal user session, the lifetime of the accessibility daemons (at-spi-bus-launcher and at-spi-registryd) is controlled by the session manager.

However, while running those daemons inside the test suite, there is no user session! The daemons would get killed when the tests terminate, so they wouldn't write out their coverage information.

I learned to use Martin Pitt's python-dbusmock to write a minimal mock of gnome-session's DBus interfaces. With this, the daemons think that they are in fact connected to the session manager, and can be told by the mock to exit appropriately. Boom, code coverage.

I want to stress how awesome python-dbusmock is. It took me 80 lines of Python to mock the necessary interfaces from gnome-session, which is pretty great, and can be reused by other projects that need to test session-related stuff.

Slide explaining the features of pytest

I am using pytest to write tests for the accessibility interfaces via DBus. Using DBus from Python is really pleasant.

For those tests, a test fixture is "an accessibility registry daemon tied to the session manager". This uses a "session manager fixture". I made the session manager fixture tear itself down by informing all session clients of a session Logout. This causes the daemons to exit cleanly, so we get coverage information.

Python code that shows a session manager fixture's setup and teardown

The setup for the session manager fixture is very simple; it just connects to the session bus and acquires the org.gnome.SessionManager name there.

Then we yield mock_session. This makes the fixture present itself to whatever needs to call it.

When the yield comes back, we do the teardown stage. Here we just tell all session clients to terminate, by invoking the Logout method on the org.gnome.SessionManager interface. The mock session manager sends the appropriate signals to connected clients, and the clients (the daemons) terminate cleanly.
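The same setup/yield/teardown shape can be sketched with the DBus machinery stubbed out so it runs standalone (all the names here are illustrative, not the real mock or the real fixture):

```python
from contextlib import contextmanager

# Illustrative stand-in for the python-dbusmock object; the real fixture
# owns org.gnome.SessionManager on an actual session bus.
class MockSessionManager:
    def __init__(self):
        self.clients = []
        self.logged_out = False

    def register_client(self, name):
        self.clients.append(name)

    def logout(self):
        # Tell every registered client to terminate cleanly, so that
        # its coverage data gets written on exit.
        self.logged_out = True
        self.clients.clear()

@contextmanager
def mock_session():
    session = MockSessionManager()  # setup: acquire the bus name
    yield session                   # hand the fixture to the test
    session.logout()                # teardown: Logout -> daemons exit cleanly

with mock_session() as session:
    session.register_client("at-spi-registryd")
    assert "at-spi-registryd" in session.clients
print(session.logged_out)  # the teardown ran after the with block
```

A pytest yield-fixture follows exactly this pattern: everything before the yield is setup, everything after it is teardown.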

I'm amazed at how smoothly this works in pytest.

A bunch of verbose C code to handle a DBus method

The C code for accessibility was written by hand, before the time when we had code generators to implement DBus interfaces easily. It is extremely verbose and error-prone; it uses the old libdbus directly and has to pick apart every argument to a DBus call by hand.

Sad face on the previous code

This code is really hard to maintain. How do we fix it?

The same code, split out in three sections, explained in the text

What I am doing is to split out the DBus implementations:

  • First get all the arguments from DBus - marshaling goo.
  • Then, the actual logic that uses those arguments' values.
  • Last, construct the DBus result - marshaling goo.

Code with extracted function for the actual logic

If you know refactoring terminology, I "extract a function" with the actual logic and leave the marshaling code in place. The idea is to do that for the whole code base, and then replace the DBus gunk with auto-generated code as much as possible.
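The three-part split might look like this hypothetical sketch (in Python for brevity; the real code is C using libdbus, and these function names are illustrative):

```python
def get_index_in_parent(children, child):
    """The extracted, pure logic - easy to unit-test on its own."""
    try:
        return children.index(child)
    except ValueError:
        return -1

def impl_get_index_in_parent(message):
    # 1. Pull the arguments out of the "wire" message (marshaling goo).
    children = message["children"]
    child = message["child"]
    # 2. Run the actual logic.
    index = get_index_in_parent(children, child)
    # 3. Build the reply (marshaling goo).
    return {"index": index}

print(impl_get_index_in_parent({"children": ["a", "b"], "child": "b"}))
# prints {'index': 1}
```

Once every handler has this shape, steps 1 and 3 are mechanical enough to hand over to a code generator, while step 2 stays covered by tests.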

Along the way, I am writing a test for every DBus method and property that the code handles. This will give me safety when the time comes to replace the marshaling code with auto-generated stuff.

Things we have learned over the years, described in the text

We need reproducible environments to build and test our code. It is not acceptable to say "works on my machine" anymore; you need to be able to reproduce things as much as possible.

Code coverage for tests is really useful! You can do many tricks with it. I am using it to improve the comprehensiveness of the test suite, to learn which code gets executed with various actions on the DBus interfaces, and as an exploratory tool in general while I learn how the accessibility code really works.

Automated builds on every push, with tests, help us keep the code from breaking.

Continuous integration is generally available if we choose to use it. Ask for help if your project needs CI! It can be overwhelming to add it the first time.

Let the robots do the boring work. Constructing environments reproducibly, building the code and running the tests, analyzing the code and extracting statistics from it — doing that is grunt work, and a computer should do it, not you.

"The Not Rocket Science Rule of Software Engineering" is to automatically maintain a repository of code that always passes all the tests. That, with monotonically increasing test coverage, lets you change things with confidence. The rule is described eloquently by Graydon Hoare, the original author of Rust and Monotone.

There is tooling to enforce this rule. For GitHub there is Homu; for Gitlab we use Marge-bot. You can ask the GNOME sysadmins if you would like to turn it on for your project. Librsvg and GStreamer use it very productively. I hope we can start using Marge-bot for the accessibility repositories soon.

Nyan cat zooming through space

The moral of the story is that we can make things better. We have much better tooling than we had in the early 2000s or 2010s. We can fix things and improve the basic infrastructure for our personal computing.

You may have noticed that I didn't talk much about accessibility. I talked mostly about preparing things to be able to work productively on learning the accessibility code and then improving it. That's the stage I'm at right now! I learn code by refactoring it, and all the CI stuff is to help me refactor with confidence. I hope you find some of these tools useful, too.

"Thank you" slide

(That is a photo of me and my dog, Mozzarello.)

I want to thank the people that have kept the accessibility code functioning over the years, even after the rest of their team disappeared: Joanmarie Diggs, Mike Gorse, Samuel Thibault, Emmanuele Bassi.

BuildStream 2 news

After a long time without any BuildStream updates, I’m proud to finally announce that the unstable BuildStream 2 development phase is coming to a close.

As of the 1.95.0 beta release, we have made the promise to remain stable and refrain from introducing any more API-breaking changes.

At this time, we encourage people to try migrating to the BuildStream 2 API and to report any issues they find via our issue tracker.

Installing BuildStream 2

At this time we recommend installing BuildStream 2 into a Python venv; this allows it to be installed in parallel with BuildStream 1, in case you need both on the same system.

First you need to install BuildBox using these instructions, and then you can install BuildStream 2 from source following these commands:

# Install BuildStream 2 in a virtual environment
# (clone the repository first; the URL below is the upstream GitLab project)
git clone https://gitlab.com/BuildStream/buildstream.git
cd buildstream
git checkout 1.95.0
python3 -m venv ~/buildstream2
~/buildstream2/bin/pip install .

# Activate the virtual environment to use `bst`
. ~/buildstream2/bin/activate

Porting projects to BuildStream 2

In order to assist in the effort of porting your projects, we have compiled a guide which should be helpful in updating your BuildStream YAML files.

The guide does not cover the Python API. If you have custom plugins which need to be ported to the new API, you can consult the API reference here, and you are encouraged to reach out on our project mailing list, where we will be eager to answer questions and help out with the porting effort.

What’s new in BuildStream 2?

An exhaustive list would be very long, so I’ll try to summarize the main changes as succinctly as possible.

Remote Execution

BuildStream now supports building remotely using the Remote Execution API (REAPI), which is a standard protocol used by various applications and services such as Bazel, recc, BuildGrid, BuildBarn and Buildfarm.

As the specification allows for variations in the setup and implementation of remote execution clusters, there are some limitations on what can be used by BuildStream, documented here.

Load time performance

In order to support large projects (in the 100,000 elements range), the loading codepaths have undergone a massive overhaul. Various recursive algorithms have been eliminated in favor of iterative ones, and Cython is used for the hot code paths.

This makes large projects manageable, and noticeably improves performance and responsiveness on the command line.

Artifact and Source caching

BuildStream now exclusively uses implementations of the Content Addressable Storage (CAS) and Remote Asset services, which are part of the Remote Execution API, to store artifacts and sources in remote caches. As such, we no longer ship our own custom implementation of an artifact server, and we currently recommend using BuildBarn.

Supported implementations of cache servers are documented here.

In addition to caching and sharing built artifacts on artifact servers, BuildStream now also caches source code in CAS servers. This implies a performance penalty on the initial build while BuildStream populates the caches, but it should improve performance under regular operation, assuming that you have persistent local caches or decent bandwidth between your build machines and CAS storage services.

BuildBox Sandboxing

Instead of using bubblewrap directly, BuildStream now uses the BuildBox abstraction for its sandboxing implementation, which will use bubblewrap on Linux platforms.

On Linux build hosts, BuildBox uses a FUSE filesystem to expose files in CAS directly to the containerized build environment. This optimizes the build environment and also avoids the hard limit on hardlinks which we sometimes encountered with BuildStream 1.

BuildBox is also used for the worker implementation when building remotely using the BuildGrid remote execution cluster implementation.

Cache Key Stability

BuildStream 2 comes with a new promise to never inadvertently change artifact cache keys in future releases.

This helps reduce unnecessary rebuilds, and the potential invalidation of artifacts that can result, when upgrading BuildStream to new versions.

It is always possible that a future feature might require a change to the artifact content and format. Should such a scenario arise, our policy is to support the old format and to make any new features which require artifact changes opt-in.

Caching failed builds

With BuildStream 2, it is now possible to preserve and share the state of a failed build artifact.

This should be useful for local debugging of builds which have failed in CI or on another user’s machine.

YAML Format Enhancements

A variety of enhancements have been made to the YAML format:

  • Variable expansion is now performed unconditionally on all YAML. This means it is now possible to use variable substitution when declaring sources as well as elements.
  • The toplevel-root and project-root automatic variables have been added, allowing some additional flexibility for specifying sources which must be obtained in project relative locations on the build host.
  • Element names are more clearly specified, and it is now possible to refer to elements across multiple junction/project boundaries, both on the command line and also as dependencies.
  • It is now possible to override an element in a subproject, including the ability to override junction definitions in subprojects. This can be useful to resolve conflicts where multiple projects depend on the same junction (diamond shaped project dependency graphs). This can also be useful to override how a single element is built in a subproject, or cause subproject elements to depend on local dependencies.
  • The core link element has been introduced, which simply acts as a symbolic link to another element in the same project or in a subproject.
  • New errors are produced when loading multiple junctions to the same project; these errors can be explicitly avoided using the duplicates or internal keywords in your project.conf.
  • ScriptElement and BuildElement implementations now support extended configuration in dependencies, allowing one to stage dependencies in alternative locations at build time.
  • Loading pip plugins now supports versioning constraints; this offers a more reliable method for specifying which plugins your project depends on when loading plugins from pip packages installed in the host Python environment.
  • Plugins can now be loaded across project boundaries using the new junction plugin origin. This is now the recommended way to load plugins, and plugin packages such as buildstream-plugins are accessible using this method. An example of creating a junction to a plugin package and using plugins from that package is included in the porting guide.
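As a sketch of the first bullet point, a hypothetical element using variable substitution inside a source declaration might look like the following (the element kind, URL alias and variable name are all invented for illustration):

```yaml
# Hypothetical BuildStream element; names and values are invented.
kind: import

variables:
  branch: stable

sources:
- kind: git
  url: upstream:repo.git
  track: "%{branch}"   # variable expansion now also works in source declarations
```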

New Plugin Locations

Plugins which used to be a part of the BuildStream core, along with some additional plugins, have been migrated to the buildstream-plugins repository. A list of migrated core plugins and their new homes can be found in the porting guide.

Porting Progress

Abderrahim has been maintaining the BuildStream 2 ports of freedesktop-sdk and gnome-build-meta, and the ports are looking close to complete, as can be observed in the epic issue.

Special Thanks

We’ve come a long way in recent years, and I’d like to thank:

  • All of our contributors individually, who are listed in our beta release announcement.
  • Our downstream projects and users who have continued to have faith in the project.
  • Our stakeholder Bloomberg, who funded a good portion of the 2.0 development, took us into the Remote Execution space and also organized the first Build Meetup events.
  • Our stakeholder Codethink, who continues to support and fund the project since founding the BuildStream project in 2016, organized several hackfests over the years, organized the 2021 chapter of the Build Meetup during lockdowns, and has granted us the opportunity to tie up the loose ends and bring BuildStream 2.0 to fruition.

Chromecast Protocol


After almost a month of reconnaissance through studying Chromium’s code, VLC’s code and other people’s attempts, we have finally figured out the Chromecast protocol, and it works flawlessly and reliably!

Big Buck Bunny clip playing on my TV (Chromecast built-in) and the GNOME Network Displays app

Ignoring the previous blog posts, here’s a quick rundown of the Chromecast protocol:

  • There are two parties involved: the Sender and the Receiver
  • Typical sender apps are the Chrome browser, and the Android and iOS platforms
  • Chromecast devices consist of two different receivers (so to speak):
    • The in-built receiver app that accepts connection requests and handles all other communications with the Sender
    • The Web Receiver app is optionally loaded and opened upon request by the Sender, again through messages handled by the former receiver
  • The first job is to make a Sender app
  • Now, we discover the available Chromecast devices in the local network through mDNS with the identifier _googlecast._tcp
  • Next step is to open a TCP and then a TLS connection over that to the Chromecast device on port 8009
  • After a successful TLS connection:
    • The first thing we do is check the authenticity of the Chromecast (we fake it in our app)
    • Next, a “virtual connection” is opened to the Chromecast
    • At this point, we are eligible to query the status of the Chromecast device (which includes information such as the opened app, volume, and supported namespaces, among other details)
    • We can send messages to open an app: Android apps such as YouTube, or other “media player apps”, either custom-made or readily available from Google
    • Whenever any app is opened or closed, that is, whenever the state of the Chromecast changes, a broadcast message with the present status is sent out to all the connected senders (you may have already seen this on your Android device, when a media controller notification pops up if something is playing on the Chromecast)
    • The apps accept requests to play, pause, change to the next item in the queue, and other similar commands
    • In addition to those, we can send requests to play supported contents on the apps (or custom data for custom web receivers – the second type of receivers. These should be hosted on HTTPS domains and registered on Google Cast SDK Developer Console)
  • Lastly, we close the app, the virtual connection (or as I have recently taken to calling it, VC), and the TCP/TLS connection.
  • An important point to note here is that an app can keep running even if the VC and the TCP/TLS connections are closed. We need to be explicit about closing the app with a message.
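The rundown above maps to very little code. As an illustration (not the GNOME Network Displays implementation, which is written in C), here is a minimal Python sketch of the wire format: each message is a protobuf CastMessage, whose field numbers below come from Chromium’s cast_channel.proto, prefixed with a 4-byte big-endian length. The helper names are my own.

```python
import json
import struct

def _varint(n):
    """Protobuf base-128 varint encoding."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def _string_field(number, text):
    """Length-delimited field (wire type 2)."""
    data = text.encode("utf-8")
    return _varint((number << 3) | 2) + _varint(len(data)) + data

def _varint_field(number, value):
    """Varint field (wire type 0)."""
    return _varint(number << 3) + _varint(value)

def encode_cast_message(source_id, destination_id, namespace, payload):
    """Serialize one CastMessage and prefix it with its length.

    Field numbers per Chromium's cast_channel.proto:
    1 protocol_version, 2 source_id, 3 destination_id,
    4 namespace, 5 payload_type (0 = STRING), 6 payload_utf8.
    """
    body = (
        _varint_field(1, 0)                      # CASTV2_1_0
        + _string_field(2, source_id)
        + _string_field(3, destination_id)
        + _string_field(4, namespace)
        + _varint_field(5, 0)                    # STRING payload
        + _string_field(6, json.dumps(payload))
    )
    # Every message on the TLS stream is prefixed with a
    # 4-byte big-endian length.
    return struct.pack(">I", len(body)) + body
```

Feeding such frames into an ssl-wrapped socket connected to port 8009 is then all the transport there is.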

Default Media Receiver App (developed and hosted by Google)

Haah, that was a lot to take in! Fear not, there’s more.


This is best explained using logs.
Here are cleaned bits of logs from the GNOME Network Displays app interacting with Chromecast:
(Only the part inside “payload_utf8” is structured in JSON, all parameters outside that like “source_id” are fields in the protobuf schema and only spread out nicely here for presentation purposes – JSON key-pair highlighting)

Sent message:
 "source_id": "sender-gnd",
 "destination_id": "receiver-0",
 "namespace_": "",
 "payload_type": 1,
 "payload_utf8": (null)

Now, if you look closely, the first message sent has a source_id of sender-gnd and a destination_id of receiver-0. The source_id can be anything prefixed with sender-, though I doubt it allows more than six characters after the hyphen. It doesn’t really matter.
The receiver has the default id receiver-0, as found in Chromium’s code, and again it doesn’t matter, since that is all the purpose it serves.

The first message we ever send is the “authentication challenge” (look for the further reading links below), which should ideally contain some binary information and have has_binary_payload set to 1.
We are not strict about the authenticity of the Chromecast device we are connecting to, so we send an empty request, but on the correct namespace. The logs don’t print the payload_binary field, since it is only used once (here) and is empty, with length 0, anyway.

Sent message:
 "source_id": "sender-gnd",
 "destination_id": "receiver-0",
 "namespace_": "",
 "payload_type": 0,
 "payload_utf8": {
   "type": "CONNECT",
   "userAgent": "GND/0.90.5 (X11; Linux x86_64)",
   "connType": 0,
   "origin": {},
   "senderInfo": {
     "sdkType": 2,
     "version": "X11; Linux x86_64",
     "browserVersion": "X11; Linux x86_64",
     "platform": 6,
     "connectionType": 1
   }
 }

The second message is the virtual connection message, where we include some random metadata and, most importantly, the type and connType keys. We indicate the connection type as 0, or strong. This comes from Chromium’s nomenclature for connection types: strong, weak and invisible, of which weak is not used (don’t ask me why). The connection type must be strong if the destination_id receiver-0 is used, as a comment there says. It also seems the default connection type is strong, but we choose not to take any chances.

Sent message:
 "source_id": "sender-gnd",
 "destination_id": "receiver-0",
 "namespace_": "",
 "payload_type": 0,
 "payload_utf8": {
   "type": "GET_STATUS",
   "requestId": 2
 }

Onto the next request. We manually request a status report with type GET_STATUS on the receiver namespace. With sharp eyes and a keen mind, you would have spotted a key called requestId in the payload. This is a unique identifier for every message we send, excluding those on special namespaces, such as the virtual connection or the ping messages.
The easiest way to keep it unique is to initialize it to 1 and increment it for every outgoing message. Guess how Chromium and VLC do this.
The requestId field is also present in the received messages and is 0 for all the broadcast messages.
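If you want to mirror that bookkeeping, a counter like the following is enough (a sketch of the scheme described above; whether Chromium and VLC literally use a counter object is their business):

```python
import itertools

# requestId 0 is reserved for broadcast messages, so the outgoing
# counter starts at 1 and increments for every request we send.
_request_ids = itertools.count(1)

def next_request_id():
    return next(_request_ids)
```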

Received message:
 "source_id": "receiver-0",
 "destination_id": "sender-gnd",
 "namespace_": "",
 "payload_type": 0,
 "payload_utf8": {
   "requestId": 2,
   "status": {
     "applications": [
       {
         "appId": "2C6A6E3D",
         "appType": "ANDROID_TV",
         "displayName": "YouTube",
         "iconUrl": "",
         "isIdleScreen": false,
         "launchedFromCloud": false,
         "namespaces": [
           { "name": "" },
           { "name": "" },
           { "name": "" },
           { "name": "" }
         ],
         "sessionId": "0167f70a-6430-45ab-9714-fd2f09d70b2b",
         "statusText": "Youtube",
         "transportId": "0167f70a-6430-45ab-9714-fd2f09d70b2b",
         "universalAppId": "233637DE"
       }
     ],
     "isActiveInput": true,
     "isStandBy": false,
     "userEq": {},
     "volume": {
       "controlType": "master",
       "level": 0.10000000149011612,
       "muted": false,
       "stepInterval": 0.009999999776482582
     }
   }
 }

Hey, we received a message. It has the type RECEIVER_STATUS and is not a broadcast one. We specifically requested this of the Chromecast.

Let’s break it down byte by byte.
The usual fields source_id, destination_id etc. are nothing special, but the payload sure packs some exciting data. We have received the state of the Chromecast, where it reports the volume details, the stand-by state, userEq (?), whether the Chromecast is the active input (more info), and the running applications.
Only one app is currently running: YouTube (without YouTube Premium). A bunch of namespaces are supported for this particular app, but we won’t dive into that in this post.

Sent message:
 "source_id": "sender-gnd",
 "destination_id": "receiver-0",
 "namespace_": "",
 "payload_type": 0,
 "payload_utf8": {
   "type": "LAUNCH",
   "appId": "CC1AD845",
   "requestId": 3
 }

Too bad we don’t need to watch YouTube videos now. We need to watch our Big Buck Bunny hosted on Google’s servers (not Google’s CDNs, but Google-hosted storage, makes sense?).
We launch the Default Media Receiver app (the same one VLC and the CacTool use) by sending a LAUNCH message with its appId.

Received message:
 "source_id": "receiver-0",
 "destination_id": "*",
 "namespace_": "",
 "payload_type": 0,
 "payload_utf8": {
   "requestId": 0,
   "status": {
     "isActiveInput": true,
     "isStandBy": false,
     "userEq": {},
     "volume": {
       "controlType": "master",
       "level": 0.10000000149011612,
       "muted": false,
       "stepInterval": 0.009999999776482582
     }
   }
 }

Cool, we received a broadcast message that YouTube has been closed.
(I have configured my Android TV to limit background processes to at most two at a time (makes the system very snappy). This might have caused the YouTube app to close even when it wasn’t needed/intended in this case.)

Received message:
 "source_id": "receiver-0",
 "destination_id": "*",
 "namespace_": "",
 "payload_type": 0,
 "payload_utf8": {
   "requestId": 3,
   "status": {
     "applications": [
       {
         "appId": "CC1AD845",
         "appType": "WEB",
         "displayName": "Default Media Receiver",
         "iconUrl": "",
         "isIdleScreen": false,
         "launchedFromCloud": false,
         "namespaces": [
           { "name": "" },
           { "name": "" },
           { "name": "" },
           { "name": "" }
         ],
         "sessionId": "014f4bf2-2c29-481f-ab50-ef7d6ff210ac",
         "statusText": "Default Media Receiver",
         "transportId": "014f4bf2-2c29-481f-ab50-ef7d6ff210ac",
         "universalAppId": "CC1AD845"
       }
     ],
     "isActiveInput": true,
     "isStandBy": false,
     "userEq": {},
     "volume": {
       "controlType": "master",
       "level": 0.10000000149011612,
       "muted": false,
       "stepInterval": 0.009999999776482582
     }
   }
 }

Ah, our media app has taken the stage now. This is a WEB application and runs on Chromium (maybe bare-bones, but it is Chromium).
It accepts messages on the shown namespaces, where you’d recognize the namespace ending in cac. The one with debugoverlay probably relates to this.
I suspect the broadcast one is responsible for passing on the events to be broadcast to the internal receiver.
Finally, we are here for the media namespace. We need to send the URL of the media to play on this namespace. But wait, if multiple apps were open, how would the Chromecast, or we, even figure out which app a particular message was meant for? The transportId serves as the destination_id for those messages.
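To make the addressing switch concrete, here is a small Python sketch (the transportId is the one from the log above; the envelope helper and the sender id are my own shorthand for the protobuf fields shown throughout):

```python
import json

# transportId reported for the Default Media Receiver in the
# RECEIVER_STATUS broadcast above.
TRANSPORT_ID = "014f4bf2-2c29-481f-ab50-ef7d6ff210ac"

def envelope(payload):
    """Messages for the app use its transportId as destination_id,
    instead of the platform receiver's receiver-0."""
    return {
        "source_id": "sender-gnd",
        "destination_id": TRANSPORT_ID,
        "payload_utf8": json.dumps(payload),
    }

# A second virtual connection, this time to the app itself ...
connect = envelope({"type": "CONNECT", "connType": 0})
# ... followed by media commands addressed the same way.
load = envelope({"type": "LOAD", "requestId": 4})
```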

Sent message:
 "source_id": "sender-gnd",
 "destination_id": "014f4bf2-2c29-481f-ab50-ef7d6ff210ac",
 "namespace_": "",
 "payload_type": 0,
 "payload_utf8": {
   "type": "CONNECT",
   "userAgent": "GND/0.90.5 (X11; Linux x86_64)",
   "connType": 0,
   "origin": {},
   "senderInfo": {
     "sdkType": 2,
     "version": "X11; Linux x86_64",
     "browserVersion": "X11; Linux x86_64",
     "platform": 6,
     "connectionType": 1
   }
 }

We need a second connection to the CC1AD845 app to talk to it (it is what it is).

Received message:
 "source_id": "014f4bf2-2c29-481f-ab50-ef7d6ff210ac",
 "destination_id": "*",
 "namespace_": "",
 "payload_type": 0,
 "payload_utf8": {
   "type": "MEDIA_STATUS",
   "status": [],
   "requestId": 0
 }

Now we start receiving broadcast messages from the Default Media Receiver. For now, it’s doing nothing. See the dark image above.
Note that the type of the message is MEDIA_STATUS and that it arrives on the media namespace.

Sent message:
 "source_id": "sender-gnd",
 "destination_id": "014f4bf2-2c29-481f-ab50-ef7d6ff210ac",
 "namespace_": "",
 "payload_type": 0,
 "payload_utf8": {
   "type": "LOAD",
   "media": {
     "contentId": "",
     "streamType": "BUFFERED",
     "contentType": "video/mp4"
   },
   "requestId": 4
 }

Let’s send our Google-hosted media file’s URL to the media app. Specifying the MIME type of the file and the stream type is mandatory. Another option for the stream type is LIVE, where seek functionality can be enabled or disabled as desired, and there would be a visual cue on the screen to show that (it may not be present on custom-made web receivers).
contentId and contentUrl have some differences, but they don’t matter in our case. Docs for the media key.

Received message:
 "source_id": "014f4bf2-2c29-481f-ab50-ef7d6ff210ac",
 "destination_id": "*",
 "namespace_": "",
 "payload_type": 0,
 "payload_utf8": {
   "type": "MEDIA_STATUS",
   "status": [
     {
       "mediaSessionId": 1,
       "playbackRate": 1,
       "playerState": "IDLE",
       "currentTime": 0,
       "supportedMediaCommands": 12303,
       "volume": { "level": 1, "muted": false },
       "media": {
         "contentId": "",
         "streamType": "BUFFERED",
         "contentType": "video/mp4",
         "mediaCategory": "VIDEO"
       },
       "currentItemId": 1,
       "extendedStatus": {
         "playerState": "LOADING",
         "media": {
           "contentId": "",
           "streamType": "BUFFERED",
           "contentType": "video/mp4",
           "mediaCategory": "VIDEO"
         },
         "mediaSessionId": 1
       },
       "repeatMode": "REPEAT_OFF"
     }
   ],
   "requestId": 0
 }

Yay!! Now would you look at that – it is loading our Google-hosted media file encoded in mp4.

Most of the keys should be self-explanatory :)
For supportedMediaCommands, I just hope that it is not a count.

If you want, you can put a bunch of videos in a queue and let them play, or control them using NEXT or PREVIOUS messages, your hand-held remote, your IR-equipped Android device, or your Android TV remote (I don’t have a clue whether this works with other Chromecast devices).
Queuing the videos is left as an exercise for the reader and should be trivial to implement.

Received message:
 "source_id": "014f4bf2-2c29-481f-ab50-ef7d6ff210ac",
 "destination_id": "*",
 "namespace_": "",
 "payload_type": 0,
 "payload_utf8": {
   "type": "MEDIA_STATUS",
   "status": [
     {
       "mediaSessionId": 1,
       "playbackRate": 1,
       "playerState": "BUFFERING",
       "currentTime": 0,
       "supportedMediaCommands": 12303,
       "volume": { "level": 1, "muted": false },
       "currentItemId": 1,
       "repeatMode": "REPEAT_OFF"
     }
   ],
   "requestId": 0
 }

No idea why it would not send the complete status message here but whatever, it is buffering the video.

Received message:
 "source_id": "014f4bf2-2c29-481f-ab50-ef7d6ff210ac",
 "destination_id": "*",
 "namespace_": "",
 "payload_type": 0,
 "payload_utf8": {
   "type": "MEDIA_STATUS",
   "status": [
     {
       "mediaSessionId": 1,
       "playbackRate": 1,
       "playerState": "PLAYING",
       "currentTime": 0,
       "supportedMediaCommands": 12303,
       "volume": { "level": 1, "muted": false },
       "activeTrackIds": [],
       "media": {
         "contentId": "",
         "streamType": "BUFFERED",
         "contentType": "video/mp4",
         "mediaCategory": "VIDEO",
         "duration": 596.474195,
         "tracks": [],
         "breakClips": [],
         "breaks": []
       },
       "currentItemId": 1,
       "items": [
         {
           "itemId": 1,
           "media": {
             "contentId": "",
             "streamType": "BUFFERED",
             "contentType": "video/mp4",
             "mediaCategory": "VIDEO",
             "duration": 596.474195
           },
           "orderId": 0
         }
       ],
       "repeatMode": "REPEAT_OFF"
     }
   ],
   "requestId": 4
 }

Spot the difference challenge. Look at the message two messages above, the one that reports that our Google-hosted media file is being loaded, and then this one we got here.

With sharp eyes and a keen mind, you would have spotted that there are some additional keys here:

  • activeTrackIds: No idea what this does. See if you can make sense of this.
  • duration: It inferred this itself. We may also provide it in the LOAD request, though.
  • tracks: List of track objects (track as in an audio track) – docs
  • orderId: This is related to the queue.

breakClips and breaks refer to ad breaks in between the video, not even going to pick that up.

One category of messages you can’t see being sent and received are PING and PONG. We send PING messages every 5 seconds to keep the connection alive. A sub-bullet point in our TODO list is to drop the connection if a PONG is not received for 6 seconds in response.
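A liveness tracker for that keep-alive logic could look like this in Python (a sketch of the timing rules just described; the class and its method names are my own invention, and the intervals are the 5-second ping and 6-second timeout from the paragraph above):

```python
import json
import time

class Heartbeat:
    """Track connection liveness: PING every 5 s, drop the
    connection if no PONG has arrived for 6 s."""

    PING_INTERVAL = 5.0
    PONG_TIMEOUT = 6.0

    def __init__(self, now=time.monotonic):
        self._now = now
        self._last_ping = self._last_pong = self._now()

    def due(self):
        """True when it is time to send the next PING."""
        return self._now() - self._last_ping >= self.PING_INTERVAL

    def ping_payload(self):
        """Payload for the outgoing PING; records the send time."""
        self._last_ping = self._now()
        return json.dumps({"type": "PING"})

    def on_message(self, payload_utf8):
        """Feed every message from the heartbeat namespace here."""
        if json.loads(payload_utf8).get("type") == "PONG":
            self._last_pong = self._now()

    def dead(self):
        """True once the connection should be dropped."""
        return self._now() - self._last_pong > self.PONG_TIMEOUT
```

Injecting the clock makes the timing rules testable without real sleeps.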


I presented my then-latest findings and disclosed all of my plans for this month to GNOME users and developers from all over the world.
Would you believe this? Someone recorded that and called it an Intern Lightning Talk?!
Since it is already sitting on the open internet, it wouldn’t hurt to show it to my dear readers as well (even if there may be only one of you).

Timestamp to the Revelation
Timestamp to all the Intern Presentations

Follow up on the previous blog post

  • The bug that stood in the way of successful communication with the Chromecast turned out to have nothing to do with the Chromecast itself; it was a problem with C pointers. I would like to leave it at that, don’t wanna look stupid on the internet, right?
  • Maybe we don’t get to discuss Custom Web Receivers after all. In the very verbose logs of Chromium, recorded to inspect its communication with the Chromecast, where all we did was connect to the Chromecast and cast a tab, we found out that it uses a receiver app named Chrome Mirroring with id 0F5096E8 and its own namespace. This was a rather interesting find. We hopefully won’t need to bother making, registering and hosting a Custom Web Receiver app.

Plan going forward

  • Clearing out some minor bugs
  • Handling of Chromecast states
  • Figuring out the WebRTC messages

Track the project’s progress here

My dear mentors

My mentors, Benjamin Berg and Claudio Wunder, were ever present for me, day in and day out.
Whenever I was stuck somewhere or needed advice on C, GObject, Chromecast, anything at all, they’d jump in to my rescue.
This project couldn’t possibly have been done without their consistent support.

* All the images used above were stripped of all metadata using the Metadata Cleaner app
* Further reading on authentication of Chromecast devices: Blog post by Romain Picard
* A Custom Web Receiver in Action
* Chromium’s VirtualConnectionType enum
* Chromecast Web Receiver Docs
* Chromecast Web Sender Docs

July 31, 2022

Pitivi GSoC: 3rd Update

Namaste everyone :)

This is now the third blog post, and this time I would like to keep it a bit different, owing to the suggestions I got :D


So, as usual, we will start with updates. We are still in the breaking phase, but most of the big errors are gone. There are some things that need to be sorted out, but we can do that once we can at least open the application.


In this blog post, I want to show you all what my workflow for the project looks like.

To start with, I have a small notepad, in which I write various things, like

  • Delayed fixes -> Used during the GTK 3.x backport phase to list things that can be done now, but whose optimal replacement is in GTK4

  • Errors -> Errors I'm getting while trying to run the application. I list them and then comment out the responsible lines or do a temp fix to move on (better discussed in my previous blog post).

  • Check -> Things I have fixed but which have a tendency to break or look bad; they have to be checked when a workable version is ready.


These are just some example pages :)

In these, the checkbox means check.

After that, I also sometimes write down code in the notepad, when I'm really stuck or can't wrap my head around some piece of code. In these cases I do a sort of dry run, writing down what the code will do line by line, but in plain language. To be honest, I don't read it again, because what counts is writing it down; that makes it easier to comprehend.

Apart from that, I also seek help from the community and my mentors from time to time. If you ask at the right time, you won't waste your time, and you also won't spam the help channels just because you missed a line in the documentation.

Documentation nightmare

At any moment in time, I usually have 150+ tabs of documentation open in my browser, including the port guide page, other projects' GTK4 port MRs (these really help), and the normal GTK4 and GTK3 guides.


GSoC involves tons and tons of reading. Documentation is like holy writ for us, as it tells us how to do something without breaking our peace.

How I choose what error to work on

Choosing what to work on takes quite a bit of consideration, and tbh it is very random. So here's how I do it ->

Most of the time, I like to go with the flow of the port guide, but I always end up going off track. I try to run the app, get errors, comment them out or do a silly temp fix, write down the cause in my notepad, and move on.

ooftrack meme

Then I get another error. If this error is something that can't be bypassed, or is small enough, or is small but spans multiple files, then I usually work on fixing it. So it is just very random.


After going through multiple errors, I take a look at the notepad and check the root causes behind them; many times, multiple errors are caused by changes in the derivation hierarchy. For example, if C was derived from B, and B was derived from A, then if C is now derived directly from A, or B is simply gone, all classes derived from B lose its functions and properties.

Thus, many times errors from seemingly completely different entities are caused by the same node in the hierarchy.
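A toy Python sketch of the situation described above (the class names are invented; think of B as something like GtkContainer, which is gone in GTK4):

```python
class A:                      # the surviving base class
    def common(self):
        return "fine"

class B(A):                   # the intermediate class removed upstream
    def add(self, child):
        return "added " + child

class C(B):                   # our class, written against the old hierarchy
    pass

# While B exists, everything works:
assert C().add("label") == "added label"

# After the upgrade, C derives from A directly, and every call site
# that relied on B's API breaks with the same AttributeError:
class C4(A):
    pass

assert not hasattr(C4(), "add")
```

This is why seemingly unrelated errors often trace back to one missing node in the hierarchy.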


Once I know the root causes, I work on fixing them. Some are simple name changes, some have easy enough replacements, while some are very problematic, needing quite a lot of refactoring.


This year's GUADEC was amazing; I got to learn many things and also got to present my work so far. I was quite nervous at first, but it all went smoothly. I hope to someday attend it in person.

My presentation is at - GUADEC YouTube


In the end, GSoC makes you -


I hope you enjoyed this blog, see you in the next one :)

Adopt a gedit plugin!

First, some good news: gedit is back on the road. After taking a long break, I feel energized again to develop gedit and related projects.

The next version of gedit will be released when ready, like Debian.

And here comes where you can help! Especially by adopting the development and maintenance of a gedit plugin. Be it part of the main gedit repository, or gedit-plugins, or ... a removed plugin (the small bad news).

First and foremost, nothing is set in stone, a long-term project like gedit grows organically. A removed plugin doesn't mean the end of the world, it can be re-added later, or someone can give it a new life by developing it in its own repository. And, remember, the next version will be released when ready.

The list of removed plugins (all from the gedit-plugins repository):

  • Commander
  • Find in files
  • Translate

They all have a reason for having been removed:

  • Commander was broken; it was no longer possible to load that plugin anyway. It also targets advanced users.
  • Find in files: I hesitated to remove it, it could be re-added. The main reason is more general (for all plugins), and will be explained below.
  • Translate depends on an external internet service; if that service is shut down or changes its API, the plugin will break. So it needs regular maintenance. There are also other desktop applications for this, like Dialect.

There is a more general reason. Here is how it goes when a plugin is included in gedit:

  1. A contributor writes the plugin and proposes it for inclusion.
  2. A gedit maintainer looks at the purpose of the plugin, to see if it makes sense to include it. If so, a code review is done.
  3. Then, ideally, the original author maintains the code. I said ideally; in practice it regularly happens that the author no longer has the time or budget to keep working on it.

As a result, it is the main developers who have more and more work to do. At some point it becomes basically unfeasible for a few people, or a single person, to deal with such a large amount of code. It causes stress for those core contributors, so it's a vicious circle (some developers prefer not to contribute anymore because of that stress).

So my hope is that releasing-when-ready will give some time to at least revive the Find in files plugin (that plugin is written in Vala).

With plugins, it's easier for code maintenance to be distributed and to scale. After all, the plugin engine is called libpeas. If you are interested in taking care of a gedit peas, don't hesitate to get in touch ;) !

That's all for today, I intend to relate more news around gedit when there are interesting things to communicate.

Even though it's not possible to write comments on this blog, don't hesitate to drop me an email. I do read them, and I like to have feedback.

New version of The GLib/GTK Development Platform - A Getting Started Guide

I've finally released a new small version of glib-gtk-book.

What triggered my motivation for releasing a new version was a contributor showing up. It stirred up my will to dust off the project a bit.

An appendix will probably be written, in which case another new version will be released once ready. So ... be ready!

If you read this and feel like contributing, don't hesitate ;-) !

C/GObject and Rust

While the "book"/document targets the C language, Rust can now be recommended for new projects.

So I've added a new sub-section in the introduction, explaining why and when it makes sense to still use and learn C/GObject (basically for existing projects where a full rewrite is realistically not feasible).

Even though it's not possible to write comments on this blog, don't hesitate to drop me an email. I do read them, and I like to have feedback.

Implementing a "mini-LaTeX" in ~2000 lines of code

A preliminary note

The previous blog post on this subject got posted to Hacker News. The comments were roughly like the following (contains exaggeration, but only a little):

The author claims to obtain superior quality compared to LaTeX but after reading the first three sentences I can say that there is nothing like that in the text so I just stopped there. Worst! Blog! Post! Ever!

So to be absolutely clear I'm not claiming to "beat" LaTeX in any sort of way (nor did I make such a claim in the original post). The quality is probably not better and the code has only a fraction of the functionality of LaTeX. Missing features include things like images, tables, cross references and even fairly basic functionality like widow and orphan control, italic text or any sort of customizability. This is just an experiment I'm doing for my own amusement.

What does it do then?

It can take a simple plain text version of The War of the Worlds, parse it into chapters, lay the text out in justified paragraphs and write the whole thing out into a PDF file. The output looks like this:

Even though the implementation is quite simple, it does have some typographical niceties. All new chapters begin on a right-hand page, the text is hyphenated, there are page numbers (but not on an empty page immediately preceding a new chapter) and the first paragraph of each chapter is not indented. The curious among you can examine the actual PDF yourselves. Just be prepared that there are known bugs in it.
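The recto-start rule described above is easy to state in code. Here is a minimal sketch in plain C (a hypothetical helper, not the post's actual implementation), assuming the usual convention that right-hand (recto) pages carry odd page numbers:

```c
/* Sketch: chapters must begin on a right-hand (odd-numbered) page.
 * Given the last page of the previous chapter, compute where the next
 * chapter starts and whether an empty, unnumbered page is inserted. */
static int
next_chapter_page (int last_page, int *inserts_blank_page)
{
    int next = last_page + 1;

    if (next % 2 == 0) {
        /* We would land on a left-hand (verso) page: leave it empty,
         * give it no page number, and start on the following recto. */
        *inserts_blank_page = 1;
        next++;
    } else {
        *inserts_blank_page = 0;
    }
    return next;
}
```

For example, a chapter ending on page 5 forces an empty page 6, and the next chapter begins on page 7.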

Thus we can reasonably say that the code does contain an implementation of a very limited and basic form of LaTeX. The code repository has approximately 3000 total lines of C++ code but if you remove the GUI application and other helper code the core implementation only has around 2000 lines. Most of the auxiliary "heavy lifting" code is handled by Pango and Cairo.
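The post doesn't show the layout code itself, but the heart of justified paragraph layout is a line-breaking pass. Here is a minimal greedy sketch in plain C, under the simplifying assumption that word widths are known fixed numbers (the real code would ask Pango for exact extents):

```c
/* Greedy first-fit line breaking: pack words onto a line until the
 * next word (plus one unit of inter-word space) would overflow, then
 * start a new line. A justification pass would afterwards stretch the
 * inter-word spaces of every line but the last to exactly fill the
 * column. Widths are abstract units standing in for Pango extents. */
static int
greedy_line_count (const int *word_widths, int n_words, int max_width)
{
    int lines = 0;
    int used = 0;   /* width consumed on the current line */

    for (int i = 0; i < n_words; i++) {
        if (used == 0) {
            used = word_widths[i];          /* first word on the line */
        } else if (used + 1 + word_widths[i] <= max_width) {
            used += 1 + word_widths[i];     /* word fits after a space */
        } else {
            lines++;                        /* flush, start a new line */
            used = word_widths[i];
        }
    }
    if (used > 0)
        lines++;
    return lines;
}
```

Real TeX-style layout uses a global optimizer rather than this greedy pass, but the greedy version is the simplest thing that produces justified paragraphs.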


The input text file for War of the Worlds is about 332 kB in size and the final PDF contains 221 pages. The program generates the output in 7 seconds on a Ryzen 7 3700 using only one core. This problem is fairly easily parallelizable so if one were to use all 16 cores at the same time the whole operation would take less than a second. I did not do exact measurements but the processing speed seems to be within the same order of magnitude as plain LaTeX.

The really surprising thing was that according to Massif the peak memory consumption was 5 MB. I had not tried to save memory when coding and just made copies of strings and other objects without a care in the world and still managed to almost fit the entire workload in the 4 MB L2 cache of the processor. Goes to show that premature optimization really is the root of all evil (or wasted effort at least).

Most CPU cycles are spent inside Pango. This is not due to any performance problems in Pango, but because this algorithm has an atypical workload: it keeps asking Pango to shape and measure short text segments that are almost, but not entirely, identical. For each line that does get rendered, Pango has to process ~10 similar blocks of text. The code caches the results so it should only ask for the size of any individual string once, but this is still the bottleneck. On the other hand, since you can process a fairly hefty book in 10 seconds or so, it is arguable whether further optimizations are even necessary.
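The measurement cache mentioned above can be sketched in plain C. Everything below is hypothetical stand-in code: measure_uncached() plays the role of the expensive Pango shaping call, and a linear-scan array stands in for a real hash table:

```c
#include <stdlib.h>
#include <string.h>

/* Memoize string measurements so the expensive shaping call runs at
 * most once per distinct text. */
typedef struct {
    char  *text;
    double width;
} MeasureEntry;

static MeasureEntry cache[1024];
static int cache_len = 0;
static int shape_calls = 0;   /* counts expensive measurements */

static double
measure_uncached (const char *text)
{
    shape_calls++;
    return 7.2 * (double) strlen (text);   /* fake width, not Pango */
}

static double
measure (const char *text)
{
    /* Cache hit: return the previously computed width. */
    for (int i = 0; i < cache_len; i++)
        if (strcmp (cache[i].text, text) == 0)
            return cache[i].width;

    /* Cache miss: measure once and remember the result. */
    double width = measure_uncached (text);
    if (cache_len < 1024) {
        char *copy = malloc (strlen (text) + 1);
        strcpy (copy, text);
        cache[cache_len].text = copy;
        cache[cache_len].width = width;
        cache_len++;
    }
    return width;
}
```

With this in place, repeated layout attempts over the same text segments hit the cache, and the expensive shaping call runs once per distinct string.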

The future

I don't have any idea what I'm going to do with this code, if anything. One blue sky idea that came to mind was that it would be kind of cool to have a modern, fully color managed version of LaTeX that goes from markdown to a final PDF and ebook. This is not really feasible with the current approach since Cairo can only produce RGB files. There has been talk of adding full color space support to Cairo but nothing has happened on that front in 10 years or so.

Cairo is not really on its own in this predicament. Creating PDF files that are suitable for "commercial grade" printing using only open source is surprisingly difficult. For example LibreOffice does output text in the proper grayscale colorspace but silently converts all grayscale images (even 1-bit ones) to RGB. The only software that seems to get everything right is Scribus.

July 30, 2022

Berlin Mini GUADEC 2022

Given the location of this year’s GUADEC many of us couldn’t make it to the real event (or didn’t want to because of the huge emissions), but since there’s a relatively large local community in Berlin and nearby central Europe, we decided to have a new edition of our satellite event, to watch the talks together remotely.

This year we were quite a few more people than last year (a bit more than 20 overall), so it almost had a real conference character, though the organization was a lot more barebones than a real event of course.

Thanks to Sonny Piers we had c-base as our venue this year, which was very cool. In addition to the space ship interior we also had a nice outside area near the river we could use for COVID-friendly hacking.

The main hacking area inside at c-base

We also had a number of local live talks streamed from Berlin. Thanks to the people from c-base for their professional help with the streaming setup!

On Thursday I gave my talk about post-collapse computing, i.e. why we need radical climate action now to prevent a total catastrophe, and failing that, what we could do to make our software more resilient in the face of an ever-worsening crisis.

Unfortunately I ran out of time towards the end so there wasn’t any room for questions/discussion, which is what I’d been looking forward to the most. I’ll write it up in blog post form soon though, so hopefully that can still happen asynchronously.

Hacking outside c-base on the river side

Since Allan, Jakub, and I were there we wanted to use the opportunity to work on some difficult design questions in person, particularly around tiling and window management. We made good progress in some of these areas, and I’m personally very excited about the shape this work is taking.

Because we had a number of app maintainers attending we ended up doing a lot of hallway design reviews and discussions, including about Files, Contacts, Software, Fractal, and Health. Of course, inevitably there were also a lot of the kinds of cross-discipline conversations that can only happen in these in-person settings, and which are often what sets the direction for big things to come.

One area I’m particularly interested in is local-first and better offline support across the stack, both from a resilience and UX point of view. We never quite found our footing in the cloud era (i.e. the past decade) because we’re not really set up to manage server infrastructure, but looking ahead at a local-first future, we’re actually in a much better position.

The Purism gang posing with the Librem 5: Julian, Adrien, and myself

For some more impressions, check out Jakub’s video from the event.

Thanks to everyone for joining, c-base for hosting, the GNOME Foundation for financial support for the event, and hopefully see you all next year!

July 29, 2022

Berlin mini-GUADEC

Photo courtesy of Jakub Steiner

As I write this post, I’m speeding through the German countryside, on a high speed train heading for Amsterdam, as I make my way home from the GUADEC satellite event that just took place in Berlin.

The event itself was notable for me, given that it was the first face-to-face GNOME event that I’ve participated in since the Covid pandemic set in. Given how long its been since I physically met with other contributors, I felt that it was important to me to do a GNOME event this summer, but I wasn’t prepared to travel to Mexico for various reasons (the environment, being away from family), so the Berlin event that sprang up was a great opportunity.

I’d like to thank the local Berlin organisers for making the event happen, C-Base for hosting us, and the GNOME Foundation for providing sponsorship so I could attend.

Pairing a main conference with regional satellite events seems like an effective approach, which can both widen access while managing our carbon footprint, and I think that this could be a good approach for other GUADECs in the future. It would be good to document the lessons that can be learned from this year’s GUADEC before we forget.

In order to reduce my own environmental impact, I traveled to and from the event over land and sea, using the Hull↔Rotterdam overnight ferry, followed by a train between Amsterdam and Berlin. This was a bit of an adventure, particularly due to the scary heatwave that was happening during my outward journey (see travel tips below).

The event itself had good attendance and had a relaxed hackfest feel to it. With two other members of the design team present, plus Florian Muellner and António Fernandes, it was a great opportunity to do some intensive work on new features that are coming in the GNOME 43 release.

I go home re-energised by the time spent with fellow collaborators – something that I’ve missed over the past couple of years – and satisfied by the rapid progress we’ve been able to make by working together in person.

Notes on Travel

I learnt some things with the travel on this trip, so I'm recording them here for future reference. Some of this might be useful for those wanting to avoid air travel themselves.

Travel in the time of Covid

One obvious tip: check the local covid requirements before you travel, including vaccination and mask requirements. (Something I failed to fully do this trip.)

There was one point on this trip when I felt unwell and wasn’t entirely prepared to deal with it. Make sure you can handle this scenario:

  • Have travel insurance that covers Covid.
  • Note down any support numbers you might need.
  • Check local requirements for what to do if you contract Covid.
  • Take Covid tests with you. If you start to feel unwell, you need to be able to check if you’re positive or not.

Long-distance overland travel

This wasn’t the first time I’ve done long-distance overland travel in Europe, but the journey did present some challenges that I hadn’t encountered before. Some things I learned as a result:

  • Research each leg of your journey yourself, in order to see what options are available and to pick comfortable interchange times. (Background: I used to research my train tickets. This site promises to work out your full journey for you, but it turns out that it doesn’t do a great job. In particular, it assumes that you want the shortest interchange time possible between connecting services, but then it warns about the interchanges being too short. The result is that it appears that some journeys aren’t viable, when they are if you pick a different combination of services.)
  • Wherever possible, make sure that your travel schedule has ample contingency time. I had a couple of delays on my journey which could have easily caused me to miss a connection.
  • I typically book the cheapest ticket I can, which usually means buying non-flexible tickets. For this trip, this felt like a mistake, due to the aforementioned delays. Having flexible tickets would have made this much less stressful and would have avoided costly replacement tickets if I’d missed a connection.
  • Make sure you carry lots of water with you, particularly if it’s warm. I carried 2 litres, which was about right for me.
  • The boat

    The Hull↔Rotterdam ferry service is a potentially interesting option for those traveling between northern England and mainland Europe. It’s an overnight service, and you get a cabin included with the price of your ticket. This can potentially save you the cost of a night’s accommodation.

    A coach service provides a connection between the ferry terminal and Amsterdam and Rotterdam train stations, and there’s an equivalent taxi service on the UK side.

    I quite like the ferry, but it is also somewhat infuriating:

    • The timing of the coach to Amsterdam is a bit variable, and it's hard to get an exact arrival time. If you have a particular train you need to catch in Amsterdam or Rotterdam, make sure to tell the coach driver which train it is. When I did this, the driver dropped me off close to the station to help me catch my train.
    • It can be hard to find the coach stop in Amsterdam. If you’ve been issued with a ticket for the coach, check the address that’s printed on it. Give yourself plenty of time.
    • The food on board the ferry is expensive and bad. My recommendation would be to not book an evening meal or breakfast, and take your own food with you.

Emulated host profiles in fwupd

As some of you may know, there might be firmware security support in the next versions of Plymouth, GNOME Control Center and GDM. This is a great thing, as most people are running terribly insecure hardware and have no idea. The great majority of these systems can be improved with a few settings changes, and the first step in my plan is showing people what's wrong, giving some quick information, and perhaps how to change it. The next step will be a “fix the problem” button, but that's still being worked on and will need some pretty involved testing for each OEM. For the bigger picture there's the HSI documentation, which is a heavy and technical read, but the introduction might be interesting. For the other 99.99% of the population, here are some pretty screenshots:

To facilitate development of various UIs, fwupd now supports emulating different systems. This would allow someone to show dozens of supported devices in GNOME Firmware or to showcase the firmware security panel in the GNOME release video. Hint hint. :)

To do this, ensure you have fwupd 1.8.3 installed (or enable the COPR), and then you can do:

sudo FWUPD_HOST_EMULATE=thinkpad-p1-iommu.json.gz /usr/libexec/fwupd/fwupd

Emulation data files can be created with ./contrib/ file.json in the fwupd source tree and can then be manually modified if required. Hint: it's mostly the same output as fwupdmgr get-devices --json and fwupdmgr security --json, and you can run it on any existing JSON output to minimize it.

To load a custom profile, you can do something like:

sudo FWUPD_HOST_EMULATE=/tmp/my-system.json /usr/libexec/fwupd/fwupd

As a precaution, the org.fwupd.hsi.HostEmulation attribute is added so we do not ask the user to upload the HSI report. The emulated devices are also not updatable for obvious reasons. Comments welcome!

#54 More Portings

Update on what happened across the GNOME project in the week from July 22 to July 29.

Core Apps and Libraries

Georges Stavracas (feaneron) announces

GNOME Initial Setup has been ported to GTK4 and libadwaita

GNOME Console

A simple user-friendly terminal emulator.

Alexander Mikhaylenko reports

Console has been ported to GTK4


WebKitGTK

GTK port of the WebKit rendering engine.

adrian reports

A new stable release of WebKitGTK is now available. Version 2.36.5 not only includes fixes for security issues, but also makes video playback work again in Yelp and fixes the WebKitWebView::context-menu signal in GTK4 builds. The first development release in the next series, 2.37.1, has been available for a couple of weeks, featuring the new WebRTC implementation based on GstWebRTC among many other improvements.

GNOME Builder

IDE for writing GNOME-based software.

Georges Stavracas (feaneron) reports

GNOME Builder has received more than a hundred commits since last update, and is quickly progressing towards feature parity after landing the massive port to GTK4. The highlights of the past weeks are:

  • In-file and project-wise searches are back
  • Large refactoring of how project and global settings are layered
  • Auto-hiding minimap
  • XML and C indenters are back
  • Introduction of a new action muxer and an alternative way to activate actions
  • A variety of internal rearchitecturing in preparation for future changes

Circle Apps and Libraries


Podcasts

Podcast app for GNOME.

Jordan Petridis says

After more than a year of development, the GTK 4 port of Podcasts has been merged. Huge thanks to Christopher Davis and Maximiliano for the joined effort of porting the codebase, but as well to Bilal Elmoussaoui and Julian Hofer for the exhaustive reviews.

We are still putting the finishing touches on it, but you will be able to enjoy a new release of Podcasts in the month ahead.

Third Party Projects

Aaron Erhardt announces

The first beta of Relm4 0.5, an idiomatic GUI library based on gtk4-rs, was released with many improvements.

With this release, large parts of Relm4 were redesigned to be much more flexible and easier to use. Notable changes include better interoperability with gtk4-rs and many improvements to the view macro syntax which allows you to even use if-else expressions in the UI declaration! You can find more information about the release in the official blog post.


Rnote

Sketch and take handwritten notes.

flxzt says

Rnote, the freehand note-taking and sketching app, received a new update: v0.5.4! The app now has a new icon and symbolic (with the help of @bertob, thanks!), finally adds text input (with typewriter sounds), and sports a new PDF import dialog with options for different PDF spacing preferences. Screenshots can now be pasted directly from the clipboard (thanks @RickStanley), and shapes can now be created with input constraints enabled (thanks @sei0o).

More new features: two new selector modes (selecting individually, and selecting by intersecting with a drawn path). The workspace browser got a much-needed redesign and now has customizable workspaces (inspired by the Paper app). Finally, the Marker pen style now draws underneath other strokes, making it possible to mark text without obstructing it.


Cawbird

A native Twitter client for your Linux desktop.

CodedOre says

After a longer pause, I continue to work on the libadwaita rewrite of Cawbird. This week the following was added:

  • Support for Video and GIF playback
  • Using a redirect to automatically get the authentication code after authorization on the server's website.

We also now have automated Flatpak builds of the current development status, which you can get at


Bottles

Easily run Windows software on Linux with Bottles!

Hari Rana (TheEvilSkeleton) announces

Bottles 2022.7.28 was released!

We are introducing a new versioning system that reliably lets you downgrade to previous states in case a recent change goes wrong.

Additionally, we have implemented covers in Library Mode to automatically add cover images in games. We can’t thank SteamGridDB enough for their amazing service and collaboration.

For more information about the new update, check out our release page!

GNOME Shell Extensions

lupantano announces

ReadingStrip is an extension for GNOME Shell with a function equivalent to a reading guide on the computer, which is really useful for people with dyslexia.

Major update: Vertical strip added. This is really useful for graphic designers who want to check if their margins and indentations line up properly when displayed on the screen.

That’s all for this week!

See you next week, and be sure to stop by with updates on your own projects!

July 28, 2022

UEFI rootkits and UEFI secure boot

Kaspersky describes a UEFI-implant used to attack Windows systems. Based on it appearing to require patching of the system firmware image, they hypothesise that it's propagated by manually dumping the contents of the system flash, modifying it, and then reflashing it back to the board. This probably requires physical access to the board, so it's not especially terrifying - if you're in a situation where someone's sufficiently enthusiastic about targeting you that they're reflashing your computer by hand, it's likely that you're going to have a bad time regardless.

But let's think about why this is in the firmware at all. Sophos previously discussed an implant that's sufficiently similar in some technical details that Kaspersky suggest they may be related to some degree. One notable difference is that the MyKings implant described by Sophos installs itself into the boot block of legacy MBR partitioned disks. This code will only be executed on old-style BIOS systems (or UEFI systems booting in BIOS compatibility mode), and they have no support for code signatures, so there's no need to be especially clever. Run malicious code in the boot block, patch the next stage loader, follow that chain all the way up to the kernel. Simple.

One notable distinction here is that the MBR boot block approach won't be persistent - if you reinstall the OS, the MBR will be rewritten[1] and the infection is gone. UEFI doesn't really change much here - if you reinstall Windows a new copy of the bootloader will be written out and the UEFI boot variables (that tell the firmware which bootloader to execute) will be updated to point at that. The implant may still be on disk somewhere, but it won't be run.

But there's a way to avoid this. UEFI supports loading firmware-level drivers from disk. If, rather than providing a backdoored bootloader, the implant takes the form of a UEFI driver, the attacker can set a different set of variables that tell the firmware to load that driver at boot time, before running the bootloader. OS reinstalls won't modify these variables, which means the implant will survive and can reinfect the new OS install. The only way to get rid of the implant is to either reformat the drive entirely (which most OS installers won't do by default) or replace the drive before installation.

This is much easier than patching the system firmware, and achieves similar outcomes - the number of infected users who are going to wipe their drives to reinstall is fairly low, and the kernel could be patched to hide the presence of the implant on the filesystem[2]. It's possible that the goal was to make identification as hard as possible, but there's a simpler argument here - if the firmware has UEFI Secure Boot enabled, the firmware will refuse to load such a driver, and the implant won't work. You could certainly just patch the firmware to disable secure boot and lie about it, but if you're at the point of patching the firmware anyway you may as well just do the extra work of installing your implant there.

I think there's a reasonable argument that the existence of firmware-level rootkits suggests that UEFI Secure Boot is doing its job and is pushing attackers into lower levels of the stack in order to obtain the same outcomes. Technologies like Intel's Boot Guard may (in their current form) tend to block user choice, but in theory should be effective in blocking attacks of this form and making things even harder for attackers. It should already be impossible to perform attacks like the one Kaspersky describes on more modern hardware (the system should identify that the firmware has been tampered with and fail to boot), which pushes things even further - attackers will have to take advantage of vulnerabilities in the specific firmware they're targeting. This obviously means there's an incentive to find more firmware vulnerabilities, which means the ability to apply security updates for system firmware as easily as security updates for OS components is vital (hint hint if your system firmware updates aren't available via LVFS you're probably doing it wrong).

We've known that UEFI rootkits have existed for a while (Hacking Team had one in 2015), but it's interesting to see a fairly widespread one out in the wild. Protecting against this kind of attack involves securing the entire boot chain, including the firmware itself. The industry has clearly been making progress in this respect, and it'll be interesting to see whether such attacks become more common (because Secure Boot works but firmware security is bad) or not.

[1] As we all remember from Windows installs overwriting Linux bootloaders
[2] Although this does run the risk of an infected user booting another OS instead, and being able to see the implant

comment count unavailable comments

2022-07-28 Thursday

  • Poked at some hackery, added some debugging bits to the forkit to chase a problem. Submitted a LibOCon paper; really looking forward to meeting up with people in person. Also continued working on our very awesome (but no longer co-located) COOL days, which will be in Berlin immediately afterwards, with Eloy.

July 27, 2022

2022-07-27 Wednesday

  • Mail chew, catch-up calls left & right; calls with interns & Pedro.
  • Band practice with H. in the evening; chat with Thorsten.

Common GLib Programming Errors

Let’s examine four mistakes to avoid when writing programs that use GLib, or, alternatively, four mistakes to look for when reviewing code that uses GLib. Experienced GNOME developers will find the first three mistakes pretty simple and basic, but nevertheless they still cause too many crashes. The fourth mistake is more complicated.

These examples will use C, but the mistakes can happen in any language. In unsafe languages like C, C++, and Vala, these mistakes usually result in security issues, specifically use-after-free vulnerabilities.

Mistake #1: Failure to Disconnect Signal Handler

Every time you connect to a signal handler, you must think about when it should be disconnected to prevent the handler from running at an incorrect time. Let’s look at a contrived but very common example. Say you have an object A and wish to connect to a signal of object B. Your code might look like this:

static void
some_signal_cb (gpointer user_data)
{
  A *self = user_data;

  a_do_something (self);
}

static void
some_method_of_a (A *self)
{
  B *b = get_b_from_somewhere ();

  g_signal_connect (b, "some-signal", (GCallback)some_signal_cb, self);
}

Very simple. Now, consider what happens if object B outlives object A, and object B emits some-signal after object A has been destroyed. Then the line a_do_something (self) is a use-after-free, a serious security vulnerability. Drat!

If you think about when the signal should be disconnected, you won’t make this mistake. In many cases, you are implementing an object and just want to disconnect the signal when your object is disposed. If so, you can use g_signal_connect_object() instead of the vanilla g_signal_connect(). For example, this code is not vulnerable:

static void
some_method_of_a (A *self)
{
  B *b = get_b_from_somewhere ();

  g_signal_connect_object (b, "some-signal", (GCallback)some_signal_cb, self, 0);
}

g_signal_connect_object() will disconnect the signal handler whenever object A is destroyed, so there’s no longer any problem if object B outlives object A. This simple change is usually all it takes to avoid disaster. Use g_signal_connect_object() whenever the user data you wish to pass to the signal handler is a GObject. This will usually be true in object implementation code.

Sometimes you need to pass a data struct as your user data instead. If so, g_signal_connect_object() is not an option, and you will need to disconnect manually. If you’re implementing an object, this is normally done in the dispose function:

// Object A instance struct (or priv struct)
struct _A {
  B *b;
  gulong some_signal_id;
};

static void
some_method_of_a (A *self)
{
  self->b = get_b_from_somewhere ();

  g_assert (self->some_signal_id == 0);
  self->some_signal_id = g_signal_connect (self->b, "some-signal", (GCallback)some_signal_cb, self);
}

static void
a_dispose (GObject *object)
{
  A *a = (A *)object;

  g_clear_signal_handler (&a->some_signal_id, a->b);

  G_OBJECT_CLASS (a_parent_class)->dispose (object);
}

Here, g_clear_signal_handler() first checks whether a->some_signal_id is 0. If not, it disconnects the handler and sets a->some_signal_id to 0. Setting your stored signal ID to 0 and checking whether it is 0 before disconnecting is important because dispose may run multiple times to break reference cycles. Attempting to disconnect the signal multiple times is another common programmer error!

Instead of calling g_clear_signal_handler(), you could equivalently write:

if (a->some_signal_id != 0) {
  g_signal_handler_disconnect (a->b, a->some_signal_id);
  a->some_signal_id = 0;
}
But writing that manually is no fun.

Yet another way to mess up would be to use the wrong integer type to store the signal ID, like guint instead of gulong.

There are other disconnect functions you can use to avoid the need to store the signal handler ID, like g_signal_handlers_disconnect_by_data(), but I’ve shown the most general case.

Sometimes, object implementation code will intentionally not disconnect signals if the programmer believes that the object that emits the signal will never outlive the object that is connecting to it. This assumption may usually be correct, but since GObjects are refcounted, they may be reffed in unexpected places, leading to use-after-free vulnerabilities if this assumption is ever incorrect. Your code will be safer and more robust if you disconnect always.

Mistake #2: Misuse of GSource Handler ID

Mistake #2 is basically the same as Mistake #1, but using GSource rather than signal handlers. For simplicity, my examples here will use the default main context, so I don’t have to show code to manually create, attach, and destroy the GSource. The default main context is what you’ll want to use if (a) you are writing application code, not library code, and (b) you want your callbacks to execute on the main thread. (If either (a) or (b) does not apply, then you need to carefully study GMainContext to ensure you do not mess up; see Mistake #4.)

Let’s use the example of a timeout source, although the same style of bug can happen with an idle source or any other type of source that you create:

static gboolean
my_timeout_cb (gpointer user_data)
{
  A *self = user_data;

  a_do_something (self);
  return G_SOURCE_REMOVE;
}

static void
some_method_of_a (A *self)
{
  g_timeout_add (42, (GSourceFunc)my_timeout_cb, self);
}

You’ve probably guessed the flaw already: if object A is destroyed before the timeout fires, then the call to a_do_something() is a use-after-free, just like when we were working with signals. The fix is very similar: store the source ID and remove it in dispose:

// Object A instance struct (or priv struct)
struct _A {
  guint my_timeout_id;
};

static gboolean
my_timeout_cb (gpointer user_data)
{
  A *self = user_data;

  a_do_something (self);
  self->my_timeout_id = 0;
  return G_SOURCE_REMOVE;
}

static void
some_method_of_a (A *self)
{
  g_assert (self->my_timeout_id == 0);
  self->my_timeout_id = g_timeout_add (42, (GSourceFunc)my_timeout_cb, self);
}

static void
a_dispose (GObject *object)
{
  A *a = (A *)object;

  g_clear_handle_id (&a->my_timeout_id, g_source_remove);

  G_OBJECT_CLASS (a_parent_class)->dispose (object);
}

Much better: now we’re not vulnerable to the use-after-free issue.

As before, we must be careful to ensure the source is removed exactly once. If we remove the source multiple times by mistake, GLib will usually emit a critical warning, but if you're sufficiently unlucky you could remove an innocent unrelated source by mistake, leading to unpredictable misbehavior. This is why we need to write self->my_timeout_id = 0; before returning from the timeout function, and why we need to use g_clear_handle_id() instead of g_source_remove() on its own. Do not forget that dispose may run multiple times!

We also have to be careful to return G_SOURCE_REMOVE unless we want the callback to execute again, in which case we would return G_SOURCE_CONTINUE. Do not return TRUE or FALSE, as that is harder to read and will obscure your intent.

Mistake #3: Failure to Cancel Asynchronous Function

When working with asynchronous functions, you must think about when the operation should be canceled to prevent the callback from executing too late. Because passing a GCancellable to asynchronous function calls is optional, it's common to see code omit the cancellable. Be suspicious when you see this. The cancellable is optional because sometimes it is really not needed, and when this is true, it would be annoying to require it. But omitting it will usually lead to use-after-free vulnerabilities. Here is an example of what not to do:

static void
something_finished_cb (GObject      *source_object,
                       GAsyncResult *result,
                       gpointer      user_data)
{
  A *self = user_data;
  B *b = (B *)source_object;
  g_autoptr (GError) error = NULL;

  if (!b_do_something_finish (b, result, &error)) {
    g_warning ("Failed to do something: %s", error->message);
    return;
  }

  a_do_something_else (self);
}

static void
some_method_of_a (A *self)
{
  B *b = get_b_from_somewhere ();

  b_do_something_async (b, NULL /* cancellable */, something_finished_cb, self);
}

This should feel familiar by now. If we did not use A inside the callback, then we would have been able to safely omit the cancellable here without harmful effects. But instead, this example calls a_do_something_else(). If object A is destroyed before the asynchronous function completes, then the call to a_do_something_else() will be a use-after-free.

We can fix this by storing a cancellable in our instance struct, and canceling it in dispose:

// Object A instance struct (or priv struct)
struct _A {
  GCancellable *cancellable;
};

static void
something_finished_cb (GObject      *source_object,
                       GAsyncResult *result,
                       gpointer      user_data)
{
  B *b = (B *)source_object;
  A *self = user_data;
  g_autoptr (GError) error = NULL;

  if (!b_do_something_finish (b, result, &error)) {
    if (!g_error_matches (error, G_IO_ERROR, G_IO_ERROR_CANCELLED))
      g_warning ("Failed to do something: %s", error->message);
    return;
  }

  a_do_something_else (self);
}

static void
some_method_of_a (A *self)
{
  B *b = get_b_from_somewhere ();
  b_do_something_async (b, self->cancellable, something_finished_cb, self);
}

static void
a_init (A *self)
{
  self->cancellable = g_cancellable_new ();
}

static void
a_dispose (GObject *object)
{
  A *a = (A *)object;

  g_cancellable_cancel (a->cancellable);
  g_clear_object (&a->cancellable);

  G_OBJECT_CLASS (a_parent_class)->dispose (object);
}

Now the code is not vulnerable. Note that, since you usually do not want to print a warning message when the operation is canceled, there’s a new check for G_IO_ERROR_CANCELLED in the callback.

Update #1: I managed to mess up this example in the first version of my blog post. The example above is now correct, but what I wrote originally was:

if (!b_do_something_finish (b, result, &error) &&
    !g_error_matches (error, G_IO_ERROR, G_IO_ERROR_CANCELLED)) {
  g_warning ("Failed to do something: %s", error->message);
  return;
}

a_do_something_else (self);

Do you see the bug in this version? Cancellation causes the asynchronous function call to complete the next time the application returns control to the main context. It does not complete immediately. So when the function is canceled, A is already destroyed, the error will be G_IO_ERROR_CANCELLED, and we’ll skip the return and execute a_do_something_else() anyway, triggering the use-after-free that the example was intended to avoid. Yes, my attempt to show you how to avoid a use-after-free itself contained a use-after-free. You might decide this means I’m incompetent, or you might decide that it means it’s too hard to safely use unsafe languages. Or perhaps both!

Update #2:  My original example had an unnecessary explicit check for NULL in the dispose function. Since g_cancellable_cancel() is NULL-safe, the dispose function will cancel only once even if dispose runs multiple times, because g_clear_object() will set a->cancellable = NULL. Thanks to Guido for suggesting this improvement in the comments.

Mistake #4: Incorrect Use of GMainContext in Library or Threaded Code

My fourth common mistake is really a catch-all mistake for the various other ways you can mess up with GMainContext. These errors can be very subtle and will cause functions to execute at unexpected times. Read this main context tutorial several times. Always think about which main context you want callbacks to be invoked on.

Library developers should pay special attention to the section “Using GMainContext in a Library.” It documents several security-relevant rules:

  • Never iterate a context created outside the library.
  • Always remove sources from a main context before dropping the library’s last reference to the context.
  • Always document which context each callback will be dispatched in.
  • Always store and explicitly use a specific GMainContext, even if it often points to some default context.
  • Always match pushes and pops of the thread-default main context.

If you fail to follow all of these rules, functions will be invoked at the wrong time, or on the wrong thread, or won’t be called at all. The tutorial covers GMainContext in much more detail than I possibly can here. Study it carefully. I like to review it every few years to refresh my knowledge. (Thanks Philip Withnall for writing it!)

Properly-designed libraries follow one of two conventions for which main context to invoke callbacks on: they may use the main context that was thread-default at the time the asynchronous operation started, or, for method calls on an object, they may use the main context that was thread-default at the time the object was created. Hopefully the library explicitly documents which convention it follows; if not, you must look at the source code to figure out how it works, which is not fun. If the library documentation does not indicate that it follows either convention, it is probably unsafe to use in threaded code.


All four mistakes are variants on the same pattern: failure to prevent a function from being unexpectedly called at the wrong time. The first three mistakes commonly lead to use-after-free vulnerabilities, which attackers abuse to hack users. The fourth mistake can cause more unpredictable effects. Sadly, today’s static analyzers are probably not smart enough to catch these mistakes. You could catch them if you write tests that trigger them and run them with an address sanitizer build, but that’s rarely realistic. In short, you need to be especially careful whenever you see signals, asynchronous function calls, or main context sources.


The adventure continues in Common GLib Programming Errors, Part Two: Weak Pointers.

July 26, 2022

dLeyna has moved

Happy to announce that dLeyna has moved to its new home, GNOME World:

The four previously separate repositories core, connector-dbus, renderer and server have been combined into this single repository. I was amongst those who pushed for the split, but conditions have changed since then and an all-in-one repository is now far easier to maintain. Most files should have kept their history.

The upcoming dLeyna 0.8 will be the first unified release.

July 25, 2022

GSoC mid term report for Health

It's been a while since I last updated my progress, and I've made significant progress since then.
In my last update I had started creating the User model. By now, I have migrated the whole User model from GSettings to the database and refactored the codebase accordingly.

Here is my major progress so far:

  • Creation of a new User model: The new user model comprises all the user details that were initially saved in the GSettings such as user_birthday, user_height, etc. This new model will help in associating each user with a single data structure.
pub struct User {
    pub user_id: i64,
    pub user_name: String,
    pub user_birthday: glib::DateTime,
    pub user_height: Length,
    pub user_weightgoal: Mass,
    pub user_stepgoal: i64,
    pub enabled_plugins: Vec<PluginName>,
    pub recent_activity_types: Vec<ActivityType>,
    pub did_initial_setup: bool,
}

  • Migration to Database
    The current user data is being stored in the Database instead of GSettings for additional flexibility and to help in the support for multiple different users. Each user is assigned a user ID and the active user ID is saved in the GSettings for quick access to a particular user.

  • Associating Weights and Activities to a User
    The next part of my project dealt with associating weights and activities to a particular user, such that each data can be saved and extracted based on the user ID of a particular user.

  • Handling Database Migration
    Earlier, the migration function that migrates Date to DateTime ran on every application start, which slowed startup. This has been fixed by adding a version to the database: if the database version is equal to the current version, we skip the migration; otherwise, we run the migration and update the version. Furthermore, three additional migration functions were added that create an initial user from the data in GSettings and associate each existing activity and weight with that initial user (user ID 1).
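As a sketch of that versioning trick, SQLite's built-in user_version pragma is one common way to store a schema version (the version numbers here are illustrative, not Health's actual schema):

```sql
-- Read the stored schema version (0 on a fresh database):
PRAGMA user_version;

-- If it is older than the version the application expects,
-- run the pending migrations, then record the new version:
PRAGMA user_version = 2;
```

With this in place, startup only pays the migration cost once; on every later launch the version check short-circuits immediately.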

  • Adding a method to switch users
    The final part up to my mid-term evaluation dealt with adding a UI to switch multiple users.

That's all for my project until now. Here's a link to my MR:

Next Steps:
Next, I will be working on:

  • Adding a new sync model that would help support multiple sync providers for different health categories for each user.
  • Pulling actual activities from Google Fit.
  • Two-way sync support.
  • Adding a UI to handle switching between multiple sync providers.

See you all after 3 weeks. Thanks for reading!

Book Notes: Summer 2022 (burnout and the good life)

I promised in my post on water to blog more this summer. So far, so fail, but in part it’s because I’ve been reading a lot. Some miscellaneous notes on those books follow.

“An interesting bookshelf photorealistic”, as rendered by Midjourney’s image-creation AI, another summer hobby.

Those of you who have emailed my work address lately will have noticed I’m also on sabbatical this summer, because after five years of focus on Tidelift I’m feeling pretty burnt out. This is not a criticism of Tidelift: it’s a great team; I’m very proud of what we are doing; and I will be going back shortly. But a big theme of the summer has been to think about what I want to do, and how that intersects with Tidelift—so that when I come back I’ll be both a strong contributor, and a happy and healthy contributor.

Work—burnout and better futures

The End of Burnout, by Jonathan Malesic: Malesic puts the blame for burnout squarely on our culture rather than on us as individuals, which means the book has very few prescriptions for how we as individuals can deal with burnout. But it has interesting meditations on how we can create a culture that mitigates burnout.

I hope to do a fuller review soon, because I find it difficult to summarize quickly, and much of it applies to open collaborative communities, where the line between self-affirming creation and self-destructive labor can be very fluid. In the meantime, I’ve put some of my favorite quotes up on Goodreads and annotated many of them.

Imaginable, by Jane McGonigal: I found this equal parts fascinating and frustrating. 

Good: it helped me ask “what the hell am I doing” in much better ways. Two key tricks to this: asking it in a ten year timeframe, and using a bunch of neat futurist-y brainstorming techniques to help think genuinely outside of the box. For this reason I think it might end up being, in ten years, the most influential “self-help” book I ever read.

Bad: it’s a classic “this book should have been an article”, and it is the first time I’ve thought “this book should have been an app”—the structured brainstorming exercises could have been much more impactful if guided with even minimal software. There actually is a companion(?) pay-to-enter community, which so far I’ve really enjoyed—if I stick with it, and find value, I suspect in the future I’ll recommend joining that community rather than reading the book.

Other big failure(?): it focuses a lot on What Is Going On In The World and How You Can Change It, when one of my takeaways from Malesic’s burnout book was to focus less on The World and more on the concrete people and places around me. The book’s techniques are still helpful for this, which is why I think it’ll be impactful for me, but I think it’d be a better book if its examples and analysis also drilled down on the personal.


I’ve had the luxury of spending the summer in Bozeman, visiting my sister and nieces/nephew. So a few books on Montana:

History of Montana in 101 Objects: Terrific. Great selection of objects; thoughtful but concise essays. I wish someone would write the same about SF. Highly recommended for anyone who spends time in the state.

Ties, Rails, and Telegraph Wires, by Dale Martin: A thing that is hard to wrap one’s head around when it comes to Montana is the vastness of the place; fourth biggest state, and 7.5 people per square mile. (CA: 254/mi2; SF: 6,200/mi2, The Mission: 30K/mi2.) This book does a lovely job capturing the vast spaces of Montana at the beginning and end of two massive technological changes: the coming of the train and the coming of cars. Bonus: lavishly photographed (largely via the work of Ron Nixon). 

Water, Climate, and Climate Action

A disconnect I’ve been struggling with is between my digitally-focused work and my increasing concerns for/interest in the Real World. Related reads:

Introduction to Water in California, by David Carle: Recommend strongly if you’re a Californian wanting to geek out, but for most the Wikipedia article is probably sufficient.

How To Blow Up A Pipeline, by Andreas Malm: I recommend every citizen of the developed, carbon-dependent world read this. It might not motivate you to commit violence against carbon-generating property, but it will at least put you in the right place to react appropriately when you see reports of such violence against property. There’s a lot to unpack, and again, I recommend reading it, but at the end of the day much boils down to an image from the end of the book: when the author and other allies took down a fence around a brown-coal power plant, even Green party politicians condemned that as “violence”. The emissions of the power plant themselves? Not condemned; not considered violence in our discourse or politics.

Asceticism I didn’t read

In the past, I’ve on occasion turned to a certain sort of philosophical asceticism when in a frustrated place. So I packed these:

Zen and the Art of Motorcycle Maintenance: I liked this book a lot in my teens and 20s, and much of the focus on Quality still resonates with me. I thought it’d be fun to re-read it in Bozeman (where much of the book takes place). But ultimately I haven’t even cracked the cover, because right now I don’t want to retreat to craft, no matter how well done. Instead, an outgoing, community-centric approach to life feels more appropriate.

Meditations of Marcus Aurelius, translated and annotated by Robin Waterfield: Unlike Zen and…, I have started this one, and would highly recommend it—the translation is very accessible and the annotations are terrific. But again, the detached life feels like the wrong route right now—even if it is one that in the past I’ve fallen into very easily.


Read a fair bit of fiction over the summer, much of it light, trite, and not worth recommending or even thinking much about. If you want every detail, it’s in my Goodreads feed; the best of it will get added at some point to my mega-thread of diverse science-fiction/fantasy recs over on Twitter.

July 24, 2022

Berlin Mini GUADEC 2022

Wow it’s been ages since I last attended a conference in-person and since I last blogged.

I’m on my way back from Berlin Mini GUADEC 2022, and it was delightful! I’ve been able to meet pals, colleagues and comrades, old and new! The location was great: a really cool hackerspace called c-base, decorated like a sci-fi spaceship, with a gigantic 40×16px screen made out of old bottles of Club Mate.

The Talks

While I’ve not been able to watch all the talks I wanted to and while I didn’t always watch attentively, many talks caught my interest.

🌐 — I have been particularly interested by the talks about internet autonomy by Robert McQueen, as this is something I have really been looking forward to for a long time. After all, why should I give Google — or any cloud provider — my calendar and the many sensitive details it contains, when what I really want is to have the same calendar on my laptop and phone, all while keeping this private info to myself‽

⚡ — I also enjoyed the power measurement talk by Aditya Manglik. I’d really like to give Usage a new life as it’s a really neat little app, and having some power measurement info there totally makes sense.

✏️ — Allan Day’s talk about best practices for app design gave great insight into what design languages are and how GNOME grew itself one.

🥵 — I didn’t learn much from Tobias Bernard’s talk about post-collapse computing, but it was nonetheless interesting and moving.

🤝 — I was particularly interested by the AGM this year too… am I… finally becoming an adult? 😳

I can’t wait to be back home to watch the talks I missed, especially the ones about community, accessibility and inclusivity!

The Work

I didn’t do much regular work this week — conferences are quite demanding and you really want to make the most of the rare occasions to socialize — yet I still did a few things, like:

Julian and I also looked a bit at making Metronome vendor its crates in a way that doesn’t require internet at build time, so we can finally update it on Flathub to the GNOME 42 runtime… without success sadly.

I also spent time trying recent versions of GNOME apps on the Librem 5 — especially the upcoming version of Nautilus — and compared the upcoming version of Phosh with the exciting mobile prototype of Shell on Jonas’ tablet and Pinephone Pro.

But more importantly, this conference has been a great opportunity to bond again with members of GNOME, including coworkers and friends! It’s easy to underestimate how important in-person social interactions are for the health of the whole project.

The Format

The Berlin Mini GUADEC is a satellite experiment of the main event in Guadalajara. I really enjoyed the mixed local and remote experience, and I think distributed conferences are something that should become the norm for GNOME.

I don’t have numbers, but this distributed experience felt more inclusive as it allowed more people to attend, and I assume it was cheaper too as the GNOME Foundation didn’t have to subsidize as many transatlantic flights. But more importantly it allowed me to attend without contributing much to the destruction of life on Earth — including ours. I’m seriously considering never attending any conference by plane again, and I would simply not have attended GUADEC this year because of the transatlantic flight, even though I would have loved to visit Guadalajara.

And The Rest

On a (somewhat) lighter note, I recently got into street stickers — you know, the kind that cover lamp posts in your city — and I have to say: Berlin didn’t disappoint me! Its streets are riddled with really cool looking ones.

BTW. Thibault. My name isn’t Adrieng Plazza.

“Sponsored by GNOME Foundation” badge

GNOME Radio 16 on GNOME 42 Presentation at GUADEC 2022

GNOME Radio 16 is the Public Network Radio Software for Accessing Free World Broadcasts on Internet running on GNOME 42.

GNOME Radio 16 is available with Hawaii Public Radio (NPR) and 62 British Broadcasting Corporation (BBC) live audio broadcasts for GNOME 42.

The latest GNOME Radio 16 release during GUADEC 2022 (between July 20-25, 2022) features 200 international radio stations and 110 city map markers around the world, including National Public Radio, 62 BBC radio stations broadcasting live from United Kingdom and 4 SomaFM radio stations broadcasting live from San Francisco, California. GNOME Radio 16 for GNOME 42 is developed on the GNOME 42 desktop platform with GNOME Maps, GeoClue, libchamplain and geocode-lib and it requires at least GTK+ 3.0 and GStreamer 1.0 for audio playback.

Join the Bird of a Feather meeting about GNOME Radio 16 on GNOME 42 during the GUADEC 2022 at 24 Jul 2022 13-15 in GUADEC 2022-Track 2 Samsung at

Eight years before GNOME 43 occurred, I began writing GNOME Internet Radio Locator: for GNOME 2 between 2014 and 2017, and for 5 more years on GNOME 3, after the Norwegian Broadcasting Corporation (NRK) shut down its FM broadcasts. In 2022 we are going to build GNOME 43 support for further international as well as Norwegian radio stations with help from the GStreamer and GNOME communities.

Here is some of the newly written code for GNOME 43 in the new GNOME Radio 42 application org.gnome.Radio:

#include <gst/player/player.h>
#include <gtk/gtk.h>

static void
activate (GtkApplication *app, gpointer user_data)
{
  GtkWidget *window;
  GstPlayer *player;

  window = gtk_application_window_new (app);
  gtk_window_set_title (GTK_WINDOW (window), "Radio");
  gtk_window_set_default_size (GTK_WINDOW (window), 800, 600);
  gtk_widget_show (window);

  player = gst_player_new (NULL, gst_player_g_main_context_signal_dispatcher_new (NULL));
  gst_player_set_uri (GST_PLAYER (player), "");
  gst_player_play (GST_PLAYER (player));
}

int
main (int argc, char **argv)
{
  GtkApplication *app;
  int status;

  gst_init (&argc, &argv);

  app = gtk_application_new ("org.gnome.Radio", G_APPLICATION_FLAGS_NONE);
  g_signal_connect (app, "activate", G_CALLBACK (activate), NULL);
  status = g_application_run (G_APPLICATION (app), argc, argv);
  g_object_unref (app);

  return status;
}

GNOME Internet Radio Locator 3 (Washington)

In 2018 I began writing my Bachelor of Science thesis in Electrical Engineering about GNOME Radio and GNOME Internet Radio Locator and on June 24, 2020 I published my Bachelor thesis on GNOME Radio; gnome-radio-16.0.43 and gnome-internet-radio-locator-12.6.0, at Oslo Metropolitan University and University of Oslo in Norway.

See my GUADEC 2022 talk on GNOME Radio 16 scheduled for the BoF Workshop GUADEC 2022 BoF Rm 2 session July 24, 2022 between 13:00-15:00.

Visit and for full details on GNOME Radio 42.

July 23, 2022

GSOC 2022: Second Update

Hello everyone! 😄

It's been a while since my last update, and I have made significant progress :D

In my previous blog post, I mentioned using the GtkListView for the templates submenu. But, after a few discussions with my mentor @antoniof, we decided to use the GtkListBox to create the custom widget for the new documents creation feature, as it would be easier to implement and there is no need to create a factory for it.

The GtkListBox takes in the GtkExpander and GtkBox widgets as its children, making the whole list of templates visible in a single view!
The GtkBox has GtkLabel and GtkImage as its children, displaying the file name and icon respectively. 
This is what it looks like:

But this implementation has its flaws :/
  • The subfolders were not showing the expander widget on the first try.
  • When clicking on a subfolder, instead of expanding, it gave an error message saying it's not a file.
The templates submenu also needs a feature which gives users an option to search for templates. I am using a vertical GtkBox for this, which takes in a GtkSearchEntry and the GtkListBox I created earlier as its child widgets. But it's unable to search for templates within subfolders if they have not been expanded earlier.

Another issue I faced was implementing this same custom widget in the path bar menu. You see, the new documents creation feature is accessible from the context menu as well as the path bar menu. But it proved difficult to use the same widget in the path bar menu. 

So, the final solution to all these problems is to use a dialog box for the templates submenu :)
The custom widget is the same, the only difference is that instead of it being a child of the GtkPopoverMenu, it is now implemented in a dialog box!
This is what the final implementation looks like:

All the problems I was facing earlier are resolved by this implementation 🎉

Future plans:
Search is still not fully functional in the dialog box, but should be completed pretty soon. 
After that, naming a template will become part of the process of creating a new template.

I would like to thank my mentor for helping me throughout this project! It has been a really fun experience working with him. 😄

Thanks for reading,
See you in a few weeks! 😉