July 05, 2022

Running the Steam Deck’s OS in a virtual machine using QEMU

SteamOS desktop


The Steam Deck is a handheld gaming computer that runs a Linux-based operating system called SteamOS. The machine comes with SteamOS 3 (code name “holo”), which is in turn based on Arch Linux.

Although there is no SteamOS 3 installer for a generic PC (yet), it is very easy to install on a virtual machine using QEMU. This post explains how to do it.

The goal of this VM is not to play games (you can already install Steam on your computer after all) but to use SteamOS in desktop mode. The Gamescope mode (the console-like interface you normally see when you use the machine) requires additional development to make it work with QEMU and will not work with these instructions.

A SteamOS VM can be useful for debugging, development, and generally playing and tinkering with the OS without risking breaking the Steam Deck.

Running the SteamOS desktop in a virtual machine only requires QEMU and the OVMF UEFI firmware and should work in any relatively recent distribution. In this post I’m using QEMU directly, but you can also use virt-manager or some other tool if you prefer; after all, we’re emulating a standard x86_64 machine here.

General concepts

SteamOS is a single-user operating system and it uses an A/B partition scheme, which means that there are two sets of partitions and two copies of the operating system. The root filesystem is read-only and system updates happen on the partition set that is not active. This allows for safer updates, among other things.

There is one single /home partition, shared by both partition sets. It contains the games, user files, and anything that the user wants to install there.

Although the user can trivially become root, make the root filesystem read-write and install or change anything (the pacman package manager is available), this is not recommended because

  • it increases the chances of breaking the OS, and
  • any changes will disappear with the next OS update.

A simple way for the user to install additional software that survives OS updates and doesn’t touch the root filesystem is Flatpak. It comes preinstalled with the OS and is integrated with the KDE Discover app.
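For instance, installing an app from Flathub from a terminal looks like this (the application ID is just an example; KDE Discover offers the same thing graphically):

```shell
# Install an application from Flathub for the current user. Flatpaks live
# under /home, so they survive OS updates and never touch the read-only
# root filesystem. The app ID below is only an example.
flatpak install --user -y flathub org.mozilla.firefox

# Launch the installed app
flatpak run org.mozilla.firefox
```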

Preparing all necessary files

The first thing that we need is the installer. For that we have to download the Steam Deck recovery image from here: https://store.steampowered.com/steamos/download/?ver=steamdeck&snr=

Once the file has been downloaded, we can uncompress it and we’ll get a raw disk image called steamdeck-recovery-4.img (the number may vary).
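The decompression can be done from a terminal; I’m assuming the current download format, a bzip2-compressed file (the exact filename and number may differ):

```shell
# Decompress the recovery image; -k keeps the original .bz2 around in
# case you want to start over later. The filename is an example.
bzip2 -dk steamdeck-recovery-4.img.bz2
```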

Note that the recovery image is already SteamOS (just not the most up-to-date version). If you simply want to have a quick look, you can play a bit with it and skip the installation step. In this case I recommend that you extend the image before using it, for example with ‘truncate -s 64G steamdeck-recovery-4.img‘ or, better, create a qcow2 overlay file and leave the original raw image unmodified: ‘qemu-img create -f qcow2 -F raw -b steamdeck-recovery-4.img steamdeck-recovery-extended.qcow2 64G‘.

But here we want to perform the actual installation, so we need a destination image. Let’s create one:

$ qemu-img create -f qcow2 steamos.qcow2 64G

Installing SteamOS

Now that we have all files we can start the virtual machine:

$ qemu-system-x86_64 -enable-kvm -smp cores=4 -m 8G \
    -device usb-ehci -device usb-tablet \
    -device intel-hda -device hda-duplex \
    -device VGA,xres=1280,yres=800 \
    -drive if=pflash,format=raw,readonly=on,file=/usr/share/ovmf/OVMF.fd \
    -drive if=virtio,file=steamdeck-recovery-4.img,format=raw \
    -device nvme,drive=drive0,serial=badbeef \
    -drive if=none,id=drive0,file=steamos.qcow2

Note that we’re emulating an NVMe drive for steamos.qcow2 because that’s what the installer script expects. This is not strictly necessary but it makes things a bit easier. If you don’t want to do that you’ll have to edit ~/tools/repair_device.sh and change DISK and DISK_SUFFIX.

SteamOS installer shortcuts

Once the system has booted we’ll see a KDE Plasma session with a few tools on the desktop. If we select “Reimage Steam Deck” and click “Proceed” on the confirmation dialog then SteamOS will be installed on the destination drive. This process should not take a long time.

Now, once the operation finishes a new confirmation dialog will ask if we want to reboot the Steam Deck, but here we have to choose “Cancel”. We cannot use the new image yet because it would try to boot into the Gamescope session, which won’t work, so we need to change the default desktop session.

SteamOS comes with a helper script that allows us to enter a chroot after automatically mounting all SteamOS partitions, so let’s open a Konsole and make the Plasma session the default one in both partition sets:

$ sudo steamos-chroot --disk /dev/nvme0n1 --partset A
# steamos-readonly disable
# echo '[Autologin]' > /etc/sddm.conf.d/zz-steamos-autologin.conf
# echo 'Session=plasma.desktop' >> /etc/sddm.conf.d/zz-steamos-autologin.conf
# steamos-readonly enable
# exit

$ sudo steamos-chroot --disk /dev/nvme0n1 --partset B
# steamos-readonly disable
# echo '[Autologin]' > /etc/sddm.conf.d/zz-steamos-autologin.conf
# echo 'Session=plasma.desktop' >> /etc/sddm.conf.d/zz-steamos-autologin.conf
# steamos-readonly enable
# exit

After this we can shut down the virtual machine. Our new SteamOS drive is ready to be used. We can discard the recovery image now if we want.

Booting SteamOS and first steps

To boot SteamOS we can use a QEMU line similar to the one used during the installation. This time we’re not emulating an NVMe drive because it’s no longer necessary.

$ cp /usr/share/OVMF/OVMF_VARS.fd .
$ qemu-system-x86_64 -enable-kvm -smp cores=4 -m 8G \
   -device usb-ehci -device usb-tablet \
   -device intel-hda -device hda-duplex \
   -device VGA,xres=1280,yres=800 \
   -drive if=pflash,format=raw,readonly=on,file=/usr/share/ovmf/OVMF.fd \
   -drive if=pflash,format=raw,file=OVMF_VARS.fd \
   -drive if=virtio,file=steamos.qcow2 \
   -device virtio-net-pci,netdev=net0 \
   -netdev user,id=net0,hostfwd=tcp::2222-:22

(The last two lines redirect TCP port 2222 to port 22 of the guest so you can SSH into the VM. If you don’t want that, you can omit them.)

If everything went fine, you should see KDE Plasma again, this time with a desktop icon to launch Steam and another one to “Return to Gaming Mode” (which we should not use because it won’t work). See the screenshot that opens this post.

Congratulations, you’re running SteamOS now. Here are some things that you probably want to do:

  • (optional) Change the keyboard layout in the system settings (the default one is US English)
  • Set the password for the deck user: run ‘passwd‘ on a terminal
  • Enable / start the SSH server: ‘sudo systemctl enable sshd‘ and/or ‘sudo systemctl start sshd‘.
  • SSH into the machine: ‘ssh -p 2222 deck@localhost‘

Updating the OS to the latest version

The Steam Deck recovery image doesn’t install the most recent version of SteamOS, so now we should probably do a software update.

  • First of all, ensure that you’re giving enough RAM to the VM (in my examples I run QEMU with -m 8G). The OS update might fail if you use less.
  • (optional) Change the OS branch if you want to try the beta release: ‘sudo steamos-select-branch beta‘ (or main, if you want the bleeding edge)
  • Check the currently installed version in /etc/os-release (see the BUILD_ID variable)
  • Check the available version: ‘steamos-update check‘
  • Download and install the software update: ‘steamos-update‘

Note: if the last step fails after reaching 100% with a post-install handler error, go to Connections in the system settings, rename Wired Connection 1 to something else (anything, the name doesn’t matter), click Apply, and run steamos-update again. This works around a bug in the update process. Recent images fix this, so the workaround is not necessary with them.
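If you prefer a terminal over the Settings UI for this workaround, something along these lines should be equivalent (I’m assuming the connection really is named “Wired Connection 1” on your system; the new name is arbitrary):

```shell
# Rename the connection (any new name works), then retry the update
nmcli connection modify "Wired Connection 1" connection.id "wired-1"
steamos-update
```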

As we did with the recovery image, before rebooting we should ensure that the new update boots into the Plasma session, otherwise it won’t work:

$ sudo steamos-chroot --partset other
# steamos-readonly disable
# echo '[Autologin]' > /etc/sddm.conf.d/zz-steamos-autologin.conf
# echo 'Session=plasma.desktop' >> /etc/sddm.conf.d/zz-steamos-autologin.conf
# steamos-readonly enable
# exit

After this we can restart the system.

If everything went fine we should be running the latest SteamOS release. Enjoy!

Reporting bugs

SteamOS is under active development. If you find problems or want to request improvements please go to the SteamOS community tracker.

Edit 06 Jul 2022: Small fixes, mention how to install the OS without using NVMe.

July 04, 2022

2022-07-04 Monday

  • Mail chew, planning call, patch review. Lunch with H. and M.
  • If you know someone interested in promoting FLOSS and helping Collabora to build a business that can sustain and grow its contribution, who has skills in marketing and is capable of working with hacker types, please do direct them to our new marketing role.

July 03, 2022

2022-07-03 Sunday

  • All Saints in the morning (school leavers service), H. on piano; home for a chicken salad in the garden; missed Laura. Took M. and N. to StAG / punting, watched Snatch with H. picked up babes, bed.

July 02, 2022

Pitivi GSoC Update

Hello everyone! I hope you all are doing well.

This is the 4th week since the GSoC coding period officially began. This summer I'm hacking on the Pitivi project, porting it to GTK4, a much-requested feature for the editor.

I Love Pitivi

I have had some experience with GTK before, as I developed an extension for GNOME called Logomenu, which helped me in jumpstarting the work.

We have made considerable progress till now, and we are on track (specifically on the porting part), but not everything went as planned: there were some hardships causing delays to the timelines I anticipated. Well, you can't anticipate everything beforehand. These failures make us learn new things and make us better coders, provided we learn from them :D

compile errors meme

We are currently in the backporting stage, implementing changes that were made available in GTK 3.x for a smoother transition, and it is getting over soon. My next goal is to implement gesture and event handling changes, and some miscellaneous changes after that, which will mark the end of this phase. I believe it won't be a smooth ride, but I'm ready for the challenge :)

Once the backporting phase is over, we will go to the "Breaking Stage" (starting the port), during which I think all hell will break loose with compile errors. This stage will also include some blind archery and will be as fun as it is scary. I can't wait to enter it and finish it successfully.

compile errors meme

What this work has taught me till now is that no port or development would be possible without proper documentation and community support. I knew the importance of good documentation, community, and developer support, but working on something this big has given me a new level of appreciation for them.

I'm not a good writer; even writing this blog was not easy for me. Thus I have great respect for all the documenters clanking their keyboards to make our lives easier.

documentator smashing keys

I'm also glad to have amazing mentors; both of them are super cool and helpful. I like to refer to one of my mentors, Aleb, as the Git Ninja (haven't told him that yet :p). I didn't know git was so powerful; I mostly used only the git clone, add, fetch, pull, commit, and push commands, but now I use so many more. Thanks, Mr. Torvalds.

I'm also looking forward to GUADEC'22. I wanted to attend it in person, but that is not cheap, to say the least, so I will be joining remotely (if someone from India goes there, please bring me back some stickers ;) ). Which reminds me that I also have to work on my lightning talk presentation; please help my hands.


Well, that would be it for this blog, have a great time :)

Selected for GSoC'22

Welcome, readers!

I'm pleased to share that I've been accepted to Google Summer of Code (GSoC) 2022 under the GNOME Foundation umbrella, on the Pitivi project. This summer I will be updating the project from GTK3 to the latest GTK4 toolkit.

To anyone who wants to be a part of GSoC, I have only one piece of advice: just go for it. Don't wonder whether you can do it or not, don't assume failure before attempting, and don't overthink. I always felt that it was for the best of the best and that I wouldn't be able to clear it; all the big organizations on the GSoC page overwhelmed me. But instead of making it a dream, I made it a life goal. And well, now I'm enjoying it.

I will be posting more blogs here about my progress on the project, and some casual ones too.

So be sure to check this page :)

Thank you for reading :D

July 01, 2022

Creating Windows installation media on Linux

Every so often I need to install Windows, most recently for my GNOME on WSL experiments, and to do this I need to write the Windows installer ISO to a USB stick. Unlike most Linux distro ISOs, these are true, pure ISO 9660 images—not hybrid images that can also be treated as a DOS/MBR disk image—so they can’t just be written directly to the disk. Microsoft’s own tool is only available for Windows, of course.

I’m sure there are other ways, but this is what I do. I’m writing it down so I can easily find the instructions next time! Edit: check the comments for an approach which involves 2 partitions and a little more careful copying, but no special tools.

The basic process is quite simple:

  • Download an ISO 9660 disk image from Microsoft
  • Partition the USB drive with a single basic data partition, formatted as FAT32
  • Mount the ISO image – on GNOME, you should just be able to double-click it to mount it with Disk Image Mounter
  • Copy all the files from the mounted ISO image to the USB drive
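As a sketch of the partitioning and mounting steps from a terminal, with the device node and ISO filename as placeholders you must adjust (double-check the device with lsblk before writing anything):

```shell
# DANGER: /dev/sdX is a placeholder -- verify the device with lsblk first!
USB=/dev/sdX
sudo parted --script "$USB" mklabel gpt mkpart WININSTALL fat32 1MiB 100%
sudo mkfs.vfat -F 32 -n WININSTALL "${USB}1"

# Mount the fresh partition and the downloaded ISO (filename is an example)
sudo mkdir -p /mnt/usb /mnt/iso
sudo mount "${USB}1" /mnt/usb
sudo mount -o loop,ro Win11_English_x64.iso /mnt/iso
```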

But there is a big catch with that last step: at least one of the .wim files in the ISO is too large for a FAT32 partition.
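To make “too large” concrete: FAT32 caps a single file at 4 GiB minus one byte. Here is a self-contained demo of the same find size test used in the split loop further down, with sparse files so no real disk space is consumed (filenames are made up):

```shell
# Demonstrate the size check: FAT32 cannot store a file of ~4 GiB or more
cd "$(mktemp -d)"          # scratch directory for the demo
truncate -s 5G big.wim     # sparse file well over the FAT32 limit
truncate -s 1M small.wim   # comfortably under it
find . -size +4294967000c -iname '*.wim'
# prints ./big.wim
```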

The trick is to first copy all the files to a writable directory on internal storage, then use a tool called wimlib-imagex split from wimlib to split the large .wim file into a number of smaller .swm files before copying them to the FAT32 partition. I think I compiled it from source, in a toolbox container, but you could also use this OCI container image, whose README helpfully provides these instructions:

find . -size +4294967000c -iname '*.wim' -print | while read -r wimpath; do
  wimbase="$(basename "$wimpath" '.wim')"
  wimdir="$(dirname "$wimpath")"
  echo "splitting ${wimpath}"
  docker run \
    --rm \
    --interactive \
    --tty \
    --volume "$(pwd):/work" \
    "backplane/wimlib-imagex" \
      split "$wimpath" "${wimdir}/${wimbase}.swm" 4000
done

Now you can copy all those files, minus the too-large .wim, onto the FAT32 drive, and then boot from it.
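One way to do that final copy is rsync with an exclude pattern; the source directory and destination path are examples, and Windows ISOs usually keep the big image at sources/install.wim:

```shell
# Copy everything except the original oversized .wim; its split .swm
# pieces sit alongside it. Paths are examples -- adjust to your setup.
rsync -r --exclude='sources/install.wim' winfiles/ /run/media/$USER/WININSTALL/
```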

This all assumes that you only care about a modern system with EFI firmware. I have no idea about creating a BIOS-bootable Windows installer on Linux, and fortunately I have never needed to do this: to test stuff on a BIOS Windows installation, I have used the time-limited virtual machines that Microsoft publishes for testing stuff in old versions of Internet Explorer.

I was inspired to resurrect this old draft post by a tweet by Ross Burton.

Fri 2022/Jul/01

I wrote a technical overview of the WebKit WPE project for the WPE WebKit blog, for those interested in WPE as a potential solution to the problem of browsers in embedded devices.

This article begins a series of technical write-ups on the architecture of WPE. During the rest of the year we hope to publish further articles breaking down different components of WebKit, including graphics and other subsystems, which will surely be of great help to those interested in getting more familiar with WebKit and its internals.

#50 Extend the Web

Update on what happened across the GNOME project in the week from June 24 to July 01.

Core Apps and Libraries


Web browser for the GNOME desktop.

patrick reports

Epiphany has received numerous improvements to WebExtension support.


Building blocks for modern GNOME apps using GTK4.

Alexander Mikhaylenko announces

Libadwaita now has AdwMessageDialog as an adaptive replacement for GtkMessageDialog.


Lets you install and update applications and system extensions.

Philip Withnall says

Milan Crha has improved support for displaying flatpak permissions in gnome-software.


Configure various aspects of your GNOME desktop.

Georges Stavracas (feaneron) reports

Thanks to the fantastic work of Kate Hsuan and Richard Hughes, device security information is now available in Settings. The security information is provided by the Fwupd project.


The low-level core library that forms the basis for projects such as GTK and GNOME.

Emmanuele Bassi announces

GLib 2.74 will require an additional part of the C99 specification: variadic arguments in macros using __VA_ARGS__. All supported toolchains (GCC, Clang, MSVC) already comply with the standard, so if you use a different compiler make sure it supports C99: https://gitlab.gnome.org/GNOME/glib/-/merge_requests/2791

Emmanuele Bassi says

Two new macros to define enumeration and flags types directly in your C code without using glib-mkenums are going to be available in the next stable release of GLib: https://gitlab.gnome.org/GNOME/glib/-/merge_requests/2788

Third Party Projects

Chris 🌱️ reports

Loupe has gained a brand new gallery view, with smooth image loading, swipe navigation support, and more.

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

June 30, 2022

GSoC 2022: Second update - Research


Two weeks have passed since I started the first phase of my “Make New Documents feature discoverable” Nautilus GSoC project. It’s called “Researching the underlying problem and use cases”, and according to the timeline that was set up in my last planning post, I’m here to share our findings and results.

Why do the research

Before we start revamping or fixing issues with our implementation of the “New Document” feature, it’s essential that we look into other operating systems, file managers, and web apps to see how they approach letting the user create files. Do not be mistaken: the intention is not to blindly copy them, but to take inspiration from them, identify potential problems with specific solutions, and find out what users may expect from us. If there’s a clear trend in some approach, there may be a valid reason to implement it. We don’t exist in a vacuum, and we are definitely able to learn from the accomplishments or mistakes of others. Everybody, in some way or another, builds upon their predecessors’ work and continues the everlasting chain of inspiration and creation. While doing the research, one needs to take into account the different types of problems they are trying to solve and the specific kinds of users they need to focus on.

List of software to test

And so I put on my spying disguise and began my journey through the multiverse of different Operating system’s file managers and web apps:

  • Other Operating Systems
    • Windows 11 File Explorer
    • macOS 12 Finder
    • ChromeOS Files
  • Other Linux distributions
    • KDE's Dolphin
    • Deepin File Manager
    • elementaryOS Files
  • Web apps
    • Google Drive
    • Dropbox
    • Microsoft Onedrive
    • Apple iCloud
    • Nextcloud
    • GitLab file tree web UI - templates are offered by the text editor after creating a new file

Some of them, like macOS 12 Finder, Apple iCloud and ChromeOS Files, don't have the “New Document” feature at all - they rely on an application-centric model, and are therefore ignored in the rest of the post.

Enter the matrix, (KDE) Neo(n)!

Putting a stop to the movie references here - we’ve quickly identified several common features across different implementations, and to avoid repetition, we’ve summarized them in the following matrix:

Pretty pictures

If all you wanted from this post are fancy screenshots, here are the goodies. I’ve divided the screenshots not by the implementation itself, but rather by the features I’ll be referring to.

“New” menu containing a “New Folder” submenu

All of the tested implementations except for Deepin Files moved the “New Folder” entry into a “New” submenu, which is something we might consider doing as well. If we move the “New Folder” item into a “New” menu, we should also add a [ + ] menu button to the headerbar, since all of the web apps have one.

Windows 11 File Explorer groups the “New Folder” feature together with creating other files in one single “New” menu. Among others, it also allows creating empty zip archives no questions asked, perhaps because they are treated as normal folders, i.e. the user can double-click and “enter” them in the file explorer.

elementaryOS Files “New” menu which allows creating folders

KDE Dolphin “Create New” feature which offers “New Folder” option, among others

GitLab file tree web UI also groups “New File” with “New directory” in one specific “+” menu, as all of the other web solutions do.


KDE Neon 5.25 Dolphin, Windows 11 File Explorer and Dropbox add a “New Link” entry to the “New” menu which allows you to create a link to a file. Nautilus doesn’t even show the option to create links by default; perhaps this way of creating links is more intuitive than copy->paste link. It could be interesting to consider adding it to the “New” menu, since there may be a discoverability problem with links as well. It would require a whole new dialog, though, so unfortunately it’s out of scope of this project.

KDE Neon 5.25 Dolphin allows creating links from the “Create New” menu.

Windows 11 File Explorer opens a link wizard after selecting the “Create Shortcut” option.

Default entries

One of the conclusions is that the new text files and new office documents are both common features users can fairly expect us to have.

Deepin Files offering default templates for office documents and plain text files

Google Drive also creates a shortcut to each editor application’s “choose template” functionality, and there are even more default templates in the “more” submenu

OneDrive showing default templates in its specific format

Dropbox allowing to choose a specific format when selecting a general default file type

Nextcloud showing many default templates

Templates directory

Deepin 20 File Manager preserves the Templates directory functionality, but hides the directory as .Templates, while KDE Neon 5.25 Dolphin ignores the Templates directory altogether and requires creating .desktop files in the .local/share/templates directory pointing to the templates, wherever they are. Although there are implementations using it, the XDG Templates directory seems not to be that much of a standard after all - XDG_TEMPLATES_DIR is a standard location, but it doesn’t look like it mandates any special behaviour at all. We are free to change how the Templates integration works!

elementaryOS Files uses the XDG Templates directory to allow the user to add more “New File” entries, just like GNOME Files

Deepin Files hides the Templates (.Templates) directory, but the functionality remains.

KDE Dolphin ignores the XDG Templates directory altogether, requiring the user to create .desktop files in .local/share/templates directory pointing to the templates.

Adding more app templates

Google Drive allows the user to add more items than the default ones by guiding users through the installation of apps which, once installed, add their templates to the “New” menu. This is something we could consider doing as well.

Renaming on creation

All of the implementations immediately allow the user to rename files after creation, except for Nautilus (and for Google Drive and Dropbox, which open the document handling app instead). It’s outside of the scope of this project, but important to note nonetheless.

elementaryOS Files and Deepin Files allowing the user to immediately rename the created file

KDE Dolphin opening a dialog after creating a file in which user can change the filename

Immediately opening new files in the app

All of the web apps immediately open new files in their appropriate editor applications, while all of the non-web apps don’t.

Google Docs allowing the user to choose from different templates while creating a new file

Nextcloud allowing the user to choose from different templates or a blank file before opening it in a web app


The research highlighted many interesting things: how unique GNOME Files is in not grouping the “New File” and “New Directory” features, how popular it is to include default templates for office and text files in other implementations, how neglected the XDG Templates “standard” is, how often the other implementations allow renaming files on creation, and how consistent web solutions are in providing a “+” button. There’s also the matter of discoverability of the “New Link” feature, but it’s out of the scope of my project. The next phase is “Designing a mockup based on aforementioned research”, which includes asking the design team for pointers - that’s what I’ll be doing now, so I can prepare a design later. I also want to tremendously thank my mentor Antonio Fernandes, who’s credited for the awesome feature matrix and most of the conclusions from the research. See you in the next update :)

June 29, 2022

Summer Maps

So, as tradition has it, here's the (Northern Hemisphere) summer blog post about GNOME Maps.

One large under-the-hood change I've made in Maps since last time is migrating the JS code to use ES6 modules.

So, using “import” instead of referring to modules as objects from the old-school global “imports” object in GJS.

When using GI modules to access introspectable GObject-based libraries it means something like

const GLib = imports.gi.GLib;

let obj = new GLib.SomeClassFromGLib();

now becomes

import GLib from 'gi://GLib';

let obj = new GLib.SomeClassFromGLib();


Here the URI scheme gi:// refers to the GObject-introspectable libraries as seen at runtime in the managed code (in this case JS).

When using our own classes locally defined in JS, we can refer to resources found inside the program's GResource, so something that was before

const MapMarker = imports.mapMarker;

let marker = new MapMarker.Marker(...);

now becomes

import {MapMarker} from './mapMarker.js';

let marker = new MapMarker(...);

Here we import the class definition directly into the local namespace (this could have been done before as well with the old “imports“ object, using

const MapMarker = imports.mapMarker.MapMarker;

but this was something we never did). Now I've decided to go this path, taking some style guidance from Weather, which had already done this switch before.

These classes are now in turn defined in their sources as exportable

export class SomeClass extends GObject.Object {...

rather than just being declared as “var” on the top level scope in the modules as before

var SomeClass = class SomeClass extends GObject.Object {...

This also makes it clearer what is visible outside a module, so it works much like “public“ in Java, C#, and Vala, for example.

For our utility modules containing non-class functions, typically wildcard imports are now used

import * as Utils from './utils.js';

Functions in these modules that should be used from outside are defined with “export“, just like the classes in the example above.

Now a utility function can be accessed just like it would have been before the change, when assigning the object from the “imports” global object to a local const variable such as “const Utils = imports.utils;”.
There have also been some changes on the surface. Sten Smoller has contributed the missing “keep left” and “keep right” instruction icons for turn-by-turn navigation, so there are no longer missing icons in route instructions.


Also, over in libshumate, James Westman has made some good improvements to the in-progress vector tile renderer, which he has covered in an excellent blog post.


And speaking of libshumate and GTK 4: recently the GWeather library has switched to being built by default against the libsoup 3 API/ABI (libsoup being the GObject HTTP library), while still keeping the same library ABI version.

As our old libchamplain renderer makes use of the libsoup version 2 ABI, we have kind of hit a wall here.

For our Flatpak builds we could still build the necessary dependencies with libsoup 2 build flags and bundle them. But as this is not feasible for traditional packages (and it won’t work for the GNOME OS builds), we pretty much have to make a go at migrating to GTK 4 and libshumate for GNOME 43 in September.

It will probably be a bit of a tight fit to finish it, but wish us luck!


Oh, and I couldn't resist adding a little special “summer feature”.

Quite a long time ago Andreas Nilsson designed a cute little steam locomotive icon for use in public transit itineraries for tourist railways.

I decided to include this icon now. But as none of the providers we currently support makes this distinction, I decided to put a little hard-coding in the Swedish Resrobot plugin to use this icon (or rather, to override the route type so that the icon gets used) for a couple of selected agencies.

So that you, for example, get this icon for journeys hauled by Lennakatten's locomotive Thor

Until next time, happy summer! 😎

WebExtension Support in Epiphany

I’m excited to help bring WebExtensions to Epiphany (GNOME Web) thanks to investment from my employer Igalia. In this post, I’ll go over a summary of how extensions work and give details on what Epiphany supports.

Web browsers have supported extensions in some form for decades. They allow the creation of features that would otherwise be part of a browser but can be authored and experimented with more easily. They’ve helped develop and popularize ideas like ad blocking, password management, and reader modes. Sometimes, as in very popular cases like these, browsers themselves then begin trying to apply lessons upstream.

Toward universal support

For most of this history, web extensions have used incompatible browser-specific APIs. This began to change in 2015 with Firefox adopting an API similar to Chrome’s. In 2020, Safari also followed suit. We now have the foundations of an ecosystem-wide solution.

“The foundations of” is an important thing to understand: There are still plenty of existing extensions built with browser-specific APIs and this doesn’t magically make them all portable. It does, however, provide a way towards making portable extensions. In some cases, existing extensions might just need some porting. In other cases, they may utilize features that aren’t entirely universal yet (or, may never be).

Bringing Extensions to Epiphany

With version 43.alpha, Epiphany users can begin to take advantage of some of the same powerful and portable extensions described above. Note that quite a few APIs power this; with this release we’ve covered a meaningful segment of them, but not all (details below). Over time our API coverage and interoperability will continue to grow.

What WebExtensions can do: Technical Details

At a high level, WebExtensions allow a private privileged web page to run in the browser. This is an invisible Background Page that has access to a browser JavaScript API. This API, given permission, can interact with browser tabs, cookies, downloads, bookmarks, and more.

Along with the invisible background page, it gives a few options to show a UI to the user. One such method is a Browser Action which is shown as a button in the browser’s toolbar that can popup an HTML view for the user to interact with. Another is an Options Page dedicated to configuring the extension.

Lastly, an extension can inject JavaScript directly into any website it has permissions for via Content Scripts. These scripts are given full access to the DOM of any web page they run in. Content scripts don’t have access to the majority of the browser API, but, along with the above pages, they can send and receive custom JSON messages to all pages within an extension.

Example usage

For a real-world example, take Bitwarden, the password manager I use; I’ll simplify how it roughly functions. Firstly, there is a Background Page that does account management for your user. It has a Popup that the user can trigger to interface with your account, passwords, and options. Finally, it also injects Content Scripts into every website you open.

The Content Script can detect all input fields and then wait for a message to autofill information into them. The Popup can request the details of the active tab and, upon you selecting an account, send a message to the Content Script to fill this information. This flow does function in Epiphany now but there are still some issues to iron out for Bitwarden.

Epiphany’s current support

Epiphany 43.alpha supports the basic structure described above. We are currently modeling our behavior after Firefox’s ManifestV2 API which includes compatibility with Chrome extensions where possible. Supporting ManifestV3 is planned alongside V2 in the future.

As of today, we support the majority of:

  • alarms - Scheduling of events to trigger at specific dates or times.
  • cookies - Management and querying of browser cookies.
  • downloads - Ability to start and manage downloads.
  • menus - Creation of context menu items.
  • notifications - Ability to show desktop notifications.
  • storage - Storage of extension private settings.
  • tabs - Control and monitoring of browser tabs, including creating, closing, etc.
  • windows - Control and monitoring of browser windows.

A notable missing API is webRequest, which is commonly used by blocking extensions such as uBlock Origin or Privacy Badger. I would like to implement this API at some point; however, it requires WebKitGTK improvements.

For specific API details please see Epiphany’s documentation.

What this means today is that users of Epiphany can write powerful extensions using a well-documented and commonly used format and API. What this does not mean is that most extensions for other browsers will just work out of the box, at least not yet. Cross-browser extensions are possible but they will have to only require the subset of APIs and behaviors Epiphany currently supports.

How to install extensions

This support is still considered experimental, so do understand it may lead to crashes or other unwanted behavior. Also, please report issues you find to Epiphany rather than to the extensions.

You can install the development release and test it like so:

flatpak remote-add --if-not-exists gnome-nightly https://nightly.gnome.org/gnome-nightly.flatpakrepo
flatpak install gnome-nightly org.gnome.Epiphany.Devel
flatpak run --command=gsettings org.gnome.Epiphany.Devel set org.gnome.Epiphany.web:/org/gnome/epiphany/web/ enable-webextensions true

# Due to a temporary bug you need to run this:
mkdir -p ~/.var/app/org.gnome.Epiphany.Devel/data/epiphany/web_extensions/

You will now see Extensions in Epiphany’s menu, and if you run it from the terminal it will print out any messages logged by extensions for debugging. The easiest place to download extensions is Mozilla’s website.

June 28, 2022

Fixing test coverage reports in at-spi2-core

Over the past weeks I have been fixing the test coverage report for at-spi2-core. It has been a bit of an adventure where I had to do these:

  • Replacing one code coverage tool with another...
  • ... which was easier to modify to produce more accessible HTML reports.
  • Figuring out why some of at-spi2-core's modules got 0% coverage.
  • Learning to mock DBus services.

What is a code coverage report?

In short — you run your program, or its test suite. You generate a coverage report, and it tells you which lines of code were executed in your program, and which lines weren't.

A coverage report is very useful! It lets one answer some large-scale questions:

  • Which code in my project does not get exercised by the tests?

  • If there is code that is conditionally-compiled depending on build-time options, am I forgetting to test a particular build configuration?

And small-scale questions:

  • Did the test I just added actually cause the code I am interested in to be run?

  • Are there tests for all the error paths?

You can also use a coverage report as an exploration tool:

  • Run the program by hand and do some things with it. Which code was run through your actions?

I want to be able to do all those things for the accessibility infrastructure: use the report as an exploration tool while I learn how the code works, and use it as a tool to ensure that the tests I add actually test what I want to test.

A snippet of a coverage report

This is a screenshot of the report for at-spi2-core/atspi/atspi-accessible.c:

Coverage report for the atspi_accessible_get_child_count() function

The leftmost column is the line number in the source file. The second column has the execution count and color-coding for each line: green lines were executed one or more times; red lines were not executed; white lines are not executable.

By looking at that bit of the report, we can start asking questions:

  • There is a return -1 for an error condition, which is not executed. Would the calling code actually handle this correctly, since we have no tests for it?

  • The last few lines in the function are not executed, since the check before them works as a cache. How can we test those lines, and cause them to be executed? Are they necessary, or is everything handled by the cache above them? How can we test different cache behavior?

First pass: lcov

When I initially added continuous integration infrastructure to at-spi2-core, I copied most of it from libgweather, as Emmanuele Bassi had put in some nice things in it like static code analysis, address-sanitizer, and a code coverage report via lcov.

The initial runs of lcov painted a rather grim picture: test coverage was only at about 30% of the code. However, some modules which are definitely run by the test suite showed up with 0% coverage. This is wrong; those modules definitely have code that gets executed; why isn't it showing up?

Zero coverage

At-spi2-core has some peculiar modules. It does not provide a single program or library that one can just run by hand. Instead, it provides a couple of libraries and a couple of daemons that get used through those libraries, or through raw DBus calls.

In particular, at-spi2-registryd is the registry daemon for accessibility, which multiplexes requests from assistive technologies (ATs) like screen readers into applications. It doesn't even use the session DBus; it registers itself in a separate DBus daemon specific to accessibility, to avoid too much traffic on the main session bus.

at-spi2-registryd gets started up as soon as something requires the accessibility APIs, and remains running until the user's session ends.

However, in the test runner, there is no session. The daemon runs, and gets a SIGTERM from its parent dbus-daemon when the latter terminates. So, while at-spi2-registryd has no persistent state that it may care about saving, it doesn't exit "cleanly".

And it turns out that gcc's coverage data gets written out only if the program exits cleanly. When you compile with the --coverage option, gcc emits code that turns on the flag in libgcc to write out coverage information when the program ends (libgcc is the compiler-specific runtime helper that gets linked into normal programs compiled with gcc).

It's as if main() had a wrapper:

void main_wrapper (void)
{
    int r = main (argc, argv);
    write_out_coverage_data ();   /* the gcov counters get flushed here */
    exit (r);
}

int main (int argc, char **argv)
{
    /* your program goes here */
}

Of course, if your program terminates prematurely through SIGTERM, the wrapper will not finish running and it will not write out the coverage info.

So, how do we simulate a session in the test runner?

Mocking gnome-session

I recently learned of a fantastic tool, python-dbusmock, which makes it really easy to create mock implementations of DBus interfaces.

There are a couple of places in at-spi2-core that depend on watching the user session's lifetime, and fortunately they only need two things from the gnome-session interfaces.

I wrote a mock of these DBus interfaces so that the daemons can register against the fake session manager. Then I made the test runner ask the mock session to tell the daemons to exit when the tests are done.

With that, at-spi2-registryd gets coverage information written out properly.

Obtaining coverage for atk-adaptor

atk-adaptor is a bunch of glue code between atk, the GObject-based library that GTK3 uses to expose accessible interfaces, and libatspi, the hand-written DBus binding to the accessibility interfaces.

The tests for this are very interesting. We want to simulate an application that uses atk to make itself accessible, and to test that e.g. the screen reader can actually interface with them. Instead of creating ATK implementations by hand, there is a helper program that reads XML descriptions of accessible objects, and exposes them via ATK. Each individual test uses a different XML file, and each test spawns the helper program with the XML it needs.

Again, it turns out that the test runner just sent a SIGTERM to the helper program when each test was done. This is fine for running the tests normally, but it prevents code coverage from being written out when the helper program terminates.

So, I installed a gmain signal handler in the helper program, to make it exit cleanly when it gets that SIGTERM. Problem solved!

Missing coverage info for GTK2

The only part of at-spi2-core that doesn't have coverage information yet is the glue code for GTK2. I think this would require running a test program under xvfb so that its libgtk2 can load the module that provides the glue code. I am not sure if this should be tested by at-spi2-core itself, or if that should be the responsibility of GTK2.

Are the coverage reports accessible?

For a sighted person, it is easy to look at a coverage report like the example above and just look for red lines — those that were not executed.

For people who use screen readers, it is not so convenient. I asked around a bit, and Eitan Isaacson gave me some excellent tips on improving the accessibility of lcov and grcov's HTML output.

Lcov is an old tool, and I started using it for at-spi2-core because it is what libgweather already used for its CI. Grcov is a newer tool, mostly by Mozilla people, which they use for Firefox's coverage reports. Grcov is also the tool that librsvg already uses. Since I'd rather baby-sit one tool instead of two, I decided to switch at-spi2-core to use grcov as well and to improve the accessibility of its reports.

The extract from the screenshot above looks like a table with three columns (line number, execution count, source code), but it is not a real HTML <table>; it is done with div elements and styling. Something like this:

  <div class="columns">
    <div class="column">
      line number
    </div>
    <div class="column color-coding-executed">
      execution count
    </div>
    <div class="column color-coding-executed">
      <pre>source code</pre>
    </div>
  </div>
  <!-- repeat the above for each source line -->

Eitan showed me how to use ARIA tags to actually expose those divs as something that can be navigated as a table:

  • Add role="table" aria-label="Coverage report" to the main <div>. This tells web browsers to go into whatever interaction model they use for navigating tables via accessible interfaces. It also gives a label to the table, so that it is easy to find by assistive tools; for example, a screen reader may let you easily navigate to the next table in a document, and you'd like to know what the table is about.

  • Add role="row" to each row's div.

  • Add role="cell" to an individual cell's div.

  • Add an aria-label to cells with the execution count: while the sighted version shows nothing (non-executable lines), or just a red background (lines not executed), or a number with a green background, the screen reader version cannot depend on color coding alone. That aria-label will say "no coverage", or "0", or the actual execution count, respectively.
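Putting those tips together, the div markup sketched earlier ends up looking something like this (the line number, count, and label values are illustrative):

```html
<div class="columns" role="table" aria-label="Coverage report">
  <div role="row">
    <div class="column" role="cell">
      42                               <!-- line number -->
    </div>
    <div class="column color-coding-executed" role="cell" aria-label="3">
      3                                <!-- execution count -->
    </div>
    <div class="column" role="cell">
      <pre>source code</pre>
    </div>
  </div>
  <!-- repeat the row for each source line -->
</div>
```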

Time will tell whether this makes reports easier to peruse. I was mainly worried about being able to scan down the source quickly to find lines that were not executed. By using a screen reader's commands for tabular navigation, one can move down the second column until reaching a "zero". Maybe there is a faster way? Advice is appreciated!

Grcov now includes that patch, yay!

Next steps

I am starting to sanitize the XML interfaces in at-spi2-core, at least in terms of how they are used in the build. Expect an update soon!

June 27, 2022

See you in GUADEC!

Hey there!

After two virtual conferences, GUADEC is finally getting back to its physical form. And there couldn’t be a better place for us to meet again than Mexico! If you haven’t registered yet, hurry up!

Looking forward to seeing you all!

June 25, 2022

Builder GTK 4 Porting, Part VII

It’s been another couple weeks of porting, along with various distractions.

The big work this time around has been deep surgery to Builder’s “Foundry”. This is the sub-system that is responsible for build systems, pipelines, external devices, SDKs, toolchains, deployment strategies, and more. The sub-system was starting to show its age, as it was one of the first bits of Builder to organically emerge.

One of the things that has become so difficult over the years is dealing with all the container layers we have to poke holes through. Running a command is never just running a command. We have to set up PTYs (and make sure the TTY setup ioctl()s happen in the right place), pass environment variables (but only to the right descendant process), and generally deal with a lot more headaches.

What kicked off this work was my desire to remove a bunch of poorly abstracted bits and we’re almost there. What has helped considerably is creating a couple new objects to help manage the process.

The first is an IdeRunContext. It is sort of like a GSubprocessLauncher but allows you to create layers. At the end you can convert those layers into a subprocess launcher but only after each layer is allowed to rewrite the state as you pop back to the root. In practice this has been working quite well. I finally have control without crazy amounts of argument rewriting and guesswork.

To make that possible, I’ve introduced an IdeUnixFDMap, which allows managing source↔dest FD translations for FDs that will end up in the subprocess. It has a lot of helpers around it to make it fit well into the IdeRunContext world.

All of this has allowed the new IdeRunCommand to really shine. We have various run command providers (e.g. plugins) all of which can seamlessly be used across the sub-systems supporting IdeRunContext. Plugins such as meson can even export unit tests as run commands.

The shellcmd plugin has also been rewritten upon these foundations. You can create custom commands and map them to keyboard shortcuts. The commands, like in previous versions of Builder, can run in various localities: a subprocess, from the build pipeline, as an app runner, or on the host. What has improved, however, is that they can also be used in place of your project’s run command. These two features combined mean you can make Builder work for a lot of scenarios it never did before by configuring a few commands.

There aren’t a lot of screenshots for things like this, because ideally it doesn’t look too different. But under the hood it’s faster, more reliable, and far more extensible than it was previously. Hopefully that helps us cover a number of highly requested use-cases.

a screenshot of the debugger

a screenshot of the build menu with debug selected

a screenshot of the run command selection dialog

a screenshot showing the location of the select run command menu item

a screenshot editing a command

Release day

It's release day, sorta. Both libopenraw and Exempi got a new release within two days. Here is what's up.


Yesterday I released libopenraw 0.3.2. It's a bug-fix release that adds a few more cameras to the list. Some of these bugs were found during the still-in-progress Rust rewrite of the library. I also updated the MP4 parser, rebasing the patches onto the latest code from Mozilla (it's for Canon CR3).

About that rewrite, I plan for it to be version 0.4.0. It will have the same API for C/C++, more or less, but will also be available as a Rust crate. The objective is to have at least the same feature set as libopenraw 0.3.2, which is why, when I find bugs during the rewrite, I fix them so that the test suite can be shared.

Why rewrite in Rust? Simply because libopenraw is a file parser and I want safety first. Rust allows writing fast code that is inherently more secure than C++. I find myself being much more productive, and Rust is already required for the MP4 parser mentioned above.

Currently missing from the Rust rewrite: CIFF support (old Canon CRW), MRW (Minolta), and, more importantly, the C API.

The first place where I will use it is Niepce.


Today I released Exempi 2.6.2. Just some bug fixes, nothing groundbreaking.

June 24, 2022

First Update!

Hello everyone! 😄

It's been almost 2 weeks into the GSoC coding period, and the project has picked up the pace!

But first things first,
Community Bonding Period:
Ignacy Kuchciński and I were selected for the project "Revamp New Documents Sub-menu", which led to an unexpected situation, but Ignacy has explained how we solved it in their blog.
So the main aim of my project now is to design the UI for the New Documents creation feature when the user is using at least one template, i.e. when the templates folder is not empty. We had a few more discussions during this time, mainly on chat, further planning for the coding period.

Week 1:
So finally it was the 13th of June, and the coding period began. 🎉
For me, the first week was full of confusion as to how I should get started and where I should write the code. But after a few discussions with my mentor and going through a lot of blogs and resources, we finally had a plan.
The current implementation of the New Documents submenu is a GtkPopoverMenu. It uses a GMenuModel, which is added as a submenu in the right-click popover menu.
What we plan to do is to expose all the templates in a single-view, tree-like structure!
So how are we supposed to do this? Well, the answer is a GtkListView as a custom child widget in the popover menu model. And for a GtkListView, we need a GListModel, or more specifically a GtkTreeListModel, as well as a factory to create list items. This will result in a design similar to this:

Week 2:
So now, with the planning done, the actual coding started. As of writing this, the GListModel is almost complete, and now I need to work on the GtkTreeListModel and a factory for creating list items. I will also be adding icons that will help visually recognize what type of file it is! I'll keep you all updated on my progress. 💪

I would like to thank my mentor @antoniof for guiding me through every step of this process, as all these concepts were completely new to me 😊

Thanks for reading,
See you in a week or two! 😉

#49 New Views

Update on what happened across the GNOME project in the week from June 17 to June 24.

Core Apps and Libraries


Files

Providing a simple and integrated way of managing your files and browsing your file system.

antoniof reports

The fresh new Files list view mode has arrived in Nightly, with many bugfixes and new features.

Ignacy Kuchciński announces

Hello there! My name is Ignacy Kuchciński and I’ve been selected for the GSoC'22 program to improve the discoverability of the “New Document” feature in Nautilus, the file manager for GNOME, under the guidance of my mentor Antonio Fernandes.

You can follow my progress with the project by checking out my blog here. You’ll find there a short initial post introducing me, as well as first update post that talks about our plans and milestones for the project.


Calendar

A simple calendar application.

Adrien Plazas says

Calendar now has a pinch gesture for its week view. It works on touchscreens and touchpads, as well as by scrolling while holding the Ctrl key. It will help the application work nicely on small touchscreens like GNOME mobile devices.

Adrien Plazas says

Calendar’s events received a fresh coat of paint. The new style better matches the latest designs and supports the dark style.

Circle Apps and Libraries

Pika Backup

Simple backups based on borg.

Sophie says

Pika Backup 0.4.1 has been released today. It mostly addresses one external problem with scheduled backups and updates several translations.

The new version contains a workaround that fixes scheduled backups not working on some systems when using Flatpak. The systems affected are or were missing a fix for Flatpak’s auto start functionality.

The new version also comes with BorgBackup 1.2.1 on Flathub which brings several minor fixes.

Third Party Projects

Mazhar Hussain reports

Login Manager Settings v0.6 has been released with some very important bug fixes.

Bug fixes

  • A lot of Fedora (and other SELinux enabled distros) users were experiencing their Login Manager breaking after using this app. This has been fixed.
  • On Ubuntu, some settings (Shell Theme, Background, Top Bar Tweaks) were not getting applied. This has also been fixed.
  • Some other minor bugs were fixed.

New features

  • Command-line options for verbosity and printing application version have been added.
  • The application got a new GitHub Pages website with a nice Show in App Store link.


Translations

  • 4 new languages
  • 5 languages updated
  • Switched to Weblate for translations.

Click here for full changelog.

GNOME Shell Extensions

Advendra Deswanta announces

Lock Screen Message has been released with GNOME 42 & libadwaita support and a longer text feature (max 480 chars width), so you can add your message with a longer explanation!

Advendra Deswanta announces

Shell Configurator v5 has been released, more than a year after the latest version (v4) was released and development was suspended.

This extension comes with BIG changes, including:

  • Added GNOME 41 & 42 (with libadwaita) Support
  • Rewritten and redesigned preferences look
  • New extension preset and configuration search feature
  • Added suggested extension section
  • Added more configurations!
  • Support for more than 10 languages
  • Configuration module system
  • Bug fixes

and more changes on CHANGELOG.md.

You can install this extension on extensions.gnome.org

You can also contribute to this extension by following the rules in CONTRIBUTING.md. All contributors are welcome.

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

June 23, 2022

Outreachy Week-3: Everybody struggles

This blog post covers my progress up to week 3, and some vocabulary terms which I got to know while working on this project.

Journey up till now

When I initially replaced the gtk3 version with the gtk4 version there were so many errors, about 300 or so, and twice as many warnings 😵. Then I went through each error one by one, and figured out that I have to change some parent classes, because in the gtk4 version some classes that were not final in gtk3 have been made final, and we can no longer derive from those classes. This was a big task, as there were many components deriving from those final classes, for example the DL-teams page, the projects page, the poeditor page, and some other components. The one which looked toughest to me was the poeditor page. It was a component deriving from GtkNotebook; we have to remove that component and modify the GtrTab component as its replacement.

Some vocabulary terms which I got to know

When I started working there were many terms related to translation which I kept coming across, like PO files, internationalisation, localisation, etc. I read some sections from the gettext manual.

June 22, 2022

Pango 1.90

I’ve finally convinced myself that I need to make a Pango 2.0 release to clean up the API, and introduce some new APIs without breaking users that expect Pango to be very stable.

So, here it is… well, not quite. What I am presenting today is not Pango 2.0 yet, but 1.90 – an unstable preview of the coming changes, to gather feedback and give some heads-up about what’s coming.

What’s changed?

Pango is now shipped as a single shared object, libpango-2.so, which contains the high-level cross-platform code as well as platform-specific fontmap implementations and the cairo support (if it is enabled). All of the APIs have been cleaned up and modernized.

PangoFontMap has seen some significant changes. It is now possible to instantiate a PangoFontMap, and populate it manually with PangoFontFamily and PangoFontFace objects.

There are still platform-specific subclasses

  •  PangoFcFontMap
  • PangoCoreTextFontMap
  • PangoDirectWriteFontMap

which will use platform APIs to enumerate fonts and populate the fontmap.

What’s new?

PangoLineBreaker is the core of Pango’s line-breaking algorithm, broken out from PangoLayout. Having this available independently from PangoLayout will facilitate uses such as multi-column layout, text flow between frames, and shaping paragraphs around images.
Here is an example that shows changing the column width mid-paragraph:

PangoLines is the ‘formatted output’ part of a PangoLayout, and can be used to collect the output of a PangoLineBreaker.

PangoHbFont is a font implementation that is a thin wrapper around HarfBuzz font and face objects. This is the way in which Pango handles fonts on all platforms now.

PangoUserFont is a callback-based font implementation that allows for entirely application-defined font handling, including glyph drawing. This is similar to cairo user fonts, which is where the idea was borrowed from.

There are many smaller changes, such as better control over line height with line-height attributes, control over the trimming of leading, and guaranteed font ↔ description roundtrips with face-ids.

How can I try this?

The Pango code lives on the pango2 branch, and there is a corresponding pango2 branch of GTK, which contains a port of GTK to the new APIs.

The tarballs are here.


If you have an interest in text rendering, please try this out and tell us what you think. Your feedback will make Pango 2 better.

For more details about the changes in this release, see the NEWS, and have a look at the migration guide.

If you want to learn more about the history of Pango and the background for some of these changes, come to my Guadec talk in Guadalajara!

Giving up on GNOME To Do

Seven years ago, back when I was a university student living with my parents and with lots of free time on my hands, I created GNOME To Do to help me organize my Google Summer of Code tasks. It was a fantastic time of my life, and I had the privilege of having time to procrastinate on writing productivity tools without earning a single coin from it.

Over the years, however, things changed. I married, moved to a new home with my partner, adopted a lovely dogga. Had to deal with the sucky parts of life like paying bills, planning meals, doing groceries, therapy, relearning how to live and operate under ADHD, taking care of myself and people around me.

Parallel to that, I’ve been collecting maintainership duties like pokémons, both due to a chronic inability to say ‘no’ and also due to genuinely being interested in most of the things I get involved with.

As it turns out, having massively less free time AND an increasing number of things to do is not going well.

Which is why, after too much postponing and avoidance, I’m giving up maintainership of GNOME To Do. I do not have the personal resources to maintain it anymore. If anyone is interested in stepping up to maintain it, reach out and I’ll be happy to get you up to speed on the app.

June 21, 2022

an optimistic evacuation of my wordhoard

Good morning, mallocators. Last time we talked about how to split available memory between a block-structured main space and a large object space. Given a fixed heap size, making a new large object allocation will steal available pages from the block-structured space by finding empty blocks and temporarily returning them to the operating system.

Today I'd like to talk more about nothing, or rather, why might you want nothing rather than something. Given an Immix heap, why would you want it organized in such a way that live data is packed into some blocks, leaving other blocks completely free? How bad would it be if instead the live data were spread all over the heap? When might it be a good idea to try to compact the heap? Ideally we'd like to be able to translate the answers to these questions into heuristics that can inform the GC when compaction/evacuation would be a good idea.

lospace and the void

Let's start with one of the more obvious points: large object allocation. With a fixed-size heap, you can't allocate new large objects if you don't have empty blocks in your paged space (the Immix space, for example) that you can return to the OS. To obtain these free blocks, you have four options.

  1. You can continue lazy sweeping of recycled blocks, to see if you find an empty block. This is a bit time-consuming, though.

  2. Otherwise, you can trigger a regular non-moving GC, which might free up blocks in the Immix space but which is also likely to free up large objects, which would result in fresh empty blocks.

  3. You can trigger a compacting or evacuating collection. Immix can't actually compact the heap all in one go, so you would preferentially select evacuation-candidate blocks by choosing the blocks with the least live data (as measured at the last GC), hoping that little data will need to be evacuated.

  4. Finally, for environments in which the heap is growable, you could just grow the heap instead. In this case you would configure the system to target a heap size multiplier rather than a heap size, which would scale the heap to be e.g. twice the size of the live data, as measured at the last collection.

If you have a growable heap, I think you will rarely choose to compact rather than grow the heap: you will either collect or grow. Under constant allocation rate, the rate of empty blocks being reclaimed from freed lospace objects will be equal to the rate at which they are needed, so if collection doesn't produce any, then that means your live data set is increasing and so growing is a good option. Anyway let's put growable heaps aside, as heap-growth heuristics are a separate gnarly problem.

The question becomes, when should large object allocation force a compaction? Absent growable heaps, the answer is clear: when allocating a large object fails because there are no empty pages, but the statistics show that there is actually ample free memory. Good! We have one heuristic, and one with an optimum: you could compact in other situations but from the point of view of lospace, waiting until allocation failure is the most efficient.


Moving on, another use of empty blocks is when shrinking the heap. The collector might decide that it's a good idea to return some memory to the operating system. For example, I enjoyed this recent paper on heuristics for optimum heap size, that advocates that you size the heap in proportion to the square root of the allocation rate, and that as a consequence, when/if the application reaches a dormant state, it should promptly return memory to the OS.

Here, we have a similar heuristic for when to evacuate: when we would like to release memory to the OS but we have no empty blocks, we should compact. We use the same evacuation candidate selection approach as before, also, aiming for maximum empty block yield.


What if you go to allocate a medium object, say 4kB, but there is no hole that's 4kB or larger? In that case, your heap is fragmented. The smaller your heap size, the more likely this is to happen. We should compact the heap to make the maximum hole size larger.

side note: compaction via partial evacuation

The evacuation strategy of Immix is... optimistic. A mark-compact collector will compact the whole heap, but Immix will only be able to evacuate a fraction of it.

It's worth dwelling on this a bit. As described in the paper, Immix reserves around 2-3% of overall space for evacuation overhead. Let's say you decide to evacuate: you start with 2-3% of blocks being empty (the target blocks), and choose a corresponding set of candidate blocks for evacuation (the source blocks). Since Immix is a one-pass collector, it doesn't know how much data is live when it starts collecting. It may not know that the blocks that it is evacuating will fit into the target space. As specified in the original paper, if the target space fills up, Immix will mark in place instead of evacuating; an evacuation candidate block with marked-in-place objects would then be non-empty at the end of collection.

In fact if you choose a set of evacuation candidates hoping to maximize your empty block yield, based on an estimate of live data instead of limiting to only the number of target blocks, I think it's possible to actually fill the targets before the source blocks empty, leaving you with no empty blocks at the end! (This can happen due to inaccurate live data estimations, or via internal fragmentation with the block size.) The only way to avoid this is to never select more evacuation candidate blocks than you have in target blocks. If you are lucky, you won't have to use all of the target blocks, and so at the end you will end up with more free blocks than not, so a subsequent evacuation will be more effective. The defragmentation result in that case would still be pretty good, but the yield in free blocks is not great.

In a production garbage collector I would still be tempted to be optimistic and select more evacuation candidate blocks than available empty target blocks, because it will require fewer rounds to compact the whole heap, if that's what you wanted to do. It would be a relatively rare occurrence to start an evacuation cycle. If you ran out of space while evacuating, in a production GC I would just temporarily commission some overhead blocks for evacuation and release them promptly after evacuation is complete. If you have a small heap multiplier in your Immix space, occasional partial evacuation in a long-running process would probably reach a steady state with blocks being either full or empty. Fragmented blocks would represent newer objects and evacuation would periodically sediment these into longer-lived dense blocks.

mutator throughput

Finally, the shape of the heap has its inverse in the shape of the holes into which the mutator can allocate. It's most efficient for the mutator if the heap has as few holes as possible: ideally just one large hole per block, which is the limit case of an empty block.

The opposite extreme would be having every other "line" (in Immix terms) be used, so that free space is spread across the heap in a vast spray of one-line holes. Even if fragmentation is not a problem, perhaps because the application only allocates objects that pack neatly into lines, having to stutter all the time to look for holes is overhead for the mutator. Also, the result is that contemporaneous allocations are more likely to be placed farther apart in memory, leading to more cache misses when accessing data. Together, allocator overhead and access overhead lead to lower mutator throughput.

When would this situation get so bad as to trigger compaction? Here I have no idea. There is no clear maximum. If compaction were free, we would compact all the time. But it's not; there's a tradeoff between the cost of compaction and mutator throughput.

I think here I would punt. If the heap is being actively resized based on allocation rate, we'll hit the other heuristics first, and so we won't need to trigger evacuation/compaction based on mutator overhead. You could measure this, though, in terms of average or median hole size, or average or maximum number of holes per block. Since evacuation is partial, all you need to do is to identify some "bad" blocks and then perhaps evacuation becomes attractive.

gc pause

Welp, that's some thoughts on when to trigger evacuation in Immix. Next time, we'll talk about some engineering aspects of evacuation. Until then, happy consing!

Thread safety support in libsoup3

In libsoup2 there’s some thread safety support that allows sending messages from a thread different from the one where the session was created. Some other APIs can be used concurrently too, like accessing some of the session properties, while others aren’t thread safe at all. It’s not clear what’s thread safe, and even sending a message is not fully thread safe either, depending on the session features involved. Nevertheless, several applications rely on this thread safety support and have always worked surprisingly well.

In libsoup3 we decided to remove the (broken) thread safety support and only allow the API to be used from the thread where the session was created. This simplified the code and made it easier to add the HTTP/2 implementation. Note that HTTP/2 supports multiple requests over the same TCP connection, which is a lot more efficient than starting multiple requests from several threads in parallel.

When apps started to be ported to libsoup3, those that relied on the thread safety support turned out to be a pain to port. Major refactorings were required to either stop using the sync API from secondary threads, or move all the soup usage to the same secondary thread. We managed to make it work in several modules like gstreamer and gvfs, but others like evolution required a lot more work. The extra work was definitely worth it and resulted in much better and more efficient code. But we also understand that porting an application to a new version of a dependency is not a top priority for maintainers.

So, to help with the migration to libsoup3, we decided to add thread safety support to libsoup3 again, but this time trying to cover all the APIs involved in sending a message and documenting what’s expected to be thread safe. Also, since we didn’t remove the sync APIs, it’s expected that we support sending messages synchronously from secondary threads. We still encourage using only the async APIs from a single thread, because that’s the most efficient way, especially for HTTP/2 requests, but apps currently using threads can be easily ported first and refactored later.

The thread safety support in libsoup3 is expected to cover only one use case: sending messages. All other APIs, including accessing session properties, are not thread safe and can only be used from the thread where the session is created.

There are a few important things to consider when using multiple threads in libsoup3:

  • In the case of HTTP/2, two messages for the same host sent from different threads will not use the same connection, so the advantage of HTTP/2 multiplexing is lost.
  • Only the API to send messages can be called concurrently from multiple threads. So, when using multiple threads, you must configure the session (setting network properties, features, etc.) from the thread where it was created, and before any request is made.
  • All signals associated with a message (SoupSession::request-queued, SoupSession::request-unqueued, and all SoupMessage signals) are emitted from the thread that started the request, and all the IO will happen there too.
  • The session can be created in any thread, but all session APIs except the methods to send messages must be called from the thread where the session was created.
  • To use the async API from a thread different from the one where the session was created, the thread must have a thread-default main context where the async callbacks are dispatched.
  • The sync API doesn’t need any main context at all.
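As a minimal sketch of the supported pattern, the following (untested) snippet creates and configures the session on the main thread, then sends a message synchronously from a worker thread using the sync API, which needs no main context in that thread. The URL is illustrative and error handling is abbreviated; build against libsoup 3 via `pkg-config --cflags --libs libsoup-3.0`.

```c
#include <libsoup/soup.h>

static gpointer
worker (gpointer user_data)
{
  SoupSession *session = user_data;  /* created on the main thread */
  SoupMessage *msg = soup_message_new ("GET", "https://www.gnome.org/");
  GError *error = NULL;

  /* The sync API needs no main context in this thread. */
  GBytes *body = soup_session_send_and_read (session, msg, NULL, &error);
  if (body)
    g_bytes_unref (body);
  else
    g_clear_error (&error);
  g_object_unref (msg);
  return NULL;
}

int
main (void)
{
  /* Create and configure the session here, before any request. */
  SoupSession *session = soup_session_new ();

  GThread *thread = g_thread_new ("fetch", worker, session);
  g_thread_join (thread);

  g_object_unref (session);
  return 0;
}
```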

June 20, 2022

blocks and pages and large objects

Good day! In a recent dispatch we talked about the fundamental garbage collection algorithms, also introducing the Immix mark-region collector. Immix mostly leaves objects in place but can move objects if it thinks it would be profitable. But when would it decide that this is a good idea? Are there cases in which it is necessary?

I promised to answer those questions in a followup article, but I didn't say which followup :) Before I get there, I want to talk about paged spaces.

enter the multispace

We mentioned that Immix divides the heap into blocks (32kB or so), and that no object can span multiple blocks. "Large" objects -- defined by Immix to be more than 8kB -- go to a separate "large object space", or "lospace" for short.

Though the implementation of a large object space is relatively simple, I found that it has some points that are quite subtle. Probably the most important of these points relates to heap size. Consider that if you just had one space, implemented using mark-compact maybe, then the procedure to allocate a 16 kB object would go:

  1. Try to bump the allocation pointer by 16kB. Is it still within range? If so we are done.

  2. Otherwise, collect garbage and try again. If after GC there isn't enough space, the allocation fails.

In step (2), collecting garbage could decide to grow or shrink the heap. However, when evaluating collector algorithms, you generally want to avoid dynamically-sized heaps.


Here is where I need to make an embarrassing admission. In my role as co-maintainer of the Guile programming language implementation, I have long noodled around with benchmarks, comparing Guile to Chez, Chicken, and other implementations. It's good fun. However, I only realized recently that I had a magic knob that I could turn to win more benchmarks: simply make the heap bigger. Make it start bigger, make it grow faster, whatever it takes. For a program that does its work in some fixed amount of total allocation, a bigger heap will require fewer collections, and therefore generally take less time. (Some amount of collection may be good for performance as it improves locality, but this is a marginal factor.)

Of course I didn't really go wild with this knob but it now makes me doubt all benchmarks I have ever seen: are we really using benchmarks to select for fast implementations, or are we in fact selecting for implementations with cheeky heap size heuristics? Consider even any of the common allocation-heavy JavaScript benchmarks, DeltaBlue or Earley or the like; to win these benchmarks, web browsers are incentivised to have large heaps. In the real world, though, a more parsimonious policy might be more appreciated by users.

Java people have known this for quite some time, and are therefore used to fixing the heap size while running benchmarks. For example, people will measure the minimum amount of memory that can allow a benchmark to run, and then configure the heap to be a constant multiplier of this minimum size. The MMTK garbage collector toolkit can't even grow the heap at all currently: it's an important feature for production garbage collectors, but as they are just now migrating out of the research phase, heap growth (and shrinking) hasn't yet been a priority.


So now consider a garbage collector that has two spaces: an Immix space for allocations of 8kB and below, and a large object space for, well, larger objects. How do you divide the available memory between the two spaces? Could the balance between immix and lospace change at run-time? If you never had large objects, would you be wasting space at all? Conversely is there a strategy that can also work for only large objects?

Perhaps the answer is obvious to you, but it wasn't to me. After much reading of the MMTK source code and pondering, here is what I understand the state of the art to be.

  1. Arrange for your main space -- Immix, mark-sweep, whatever -- to be block-structured, and able to dynamically decommission or recommission blocks, perhaps via MADV_DONTNEED. This works if the blocks are even multiples of the underlying OS page size.

  2. Keep a counter of however many bytes the lospace currently has.

  3. When you go to allocate a large object, increment the lospace byte counter, and then round up to a number of blocks to decommission from the main paged space. If this is more than are currently decommissioned, find some empty blocks and decommission them.

  4. If no empty blocks were found, collect, and try again. If the second try doesn't work, then the allocation fails.

  5. Now that the paged space has shrunk, lospace can allocate. You can use the system malloc, but it's probably better to use mmap, so that if these objects are collected, you can just MADV_DONTNEED them and keep them around for later re-use.

  6. After GC runs, explicitly return the memory for any object in lospace that wasn't visited when the object graph was traversed. Decrement the lospace byte counter and possibly return some empty blocks to the paged space.

There are some interesting aspects about this strategy. One is, the memory that you return to the OS doesn't need to be contiguous. When allocating a 50 MB object, you don't have to find 50 MB of contiguous free space, because any set of blocks that adds up to 50 MB will do.

Another aspect is that this adaptive strategy can work for any ratio of large to non-large objects. The user doesn't have to manually set the sizes of the various spaces.

This strategy does assume that address space is larger than heap size, but only by a factor of 2 (modulo fragmentation for the large object space). Therefore our risk of running afoul of user resource limits and kernel overcommit heuristics is low.

The one underspecified part of this algorithm is... did you see it? "Find some empty blocks". If the main paged space does lazy sweeping -- only scanning a block for holes right before the block will be used for allocation -- then after a collection we don't actually know very much about the heap, and notably, we don't know what blocks are empty. (We could know it, of course, but it would take time; you could traverse the line mark arrays for all blocks while the world is stopped, but this increases pause time. The original Immix collector does this, however.) In the system I've been working on, instead I have it so that if a mutator finds an empty block, it puts it on a separate list, and then takes another block, only allocating into empty blocks once all blocks are swept. If the lospace needs blocks, it sweeps eagerly until it finds enough empty blocks, throwing away any nonempty blocks. This causes the next collection to happen sooner, but that's not a terrible thing; this only occurs when rebalancing lospace versus paged-space size, because if you have a constant allocation rate on the lospace side, you will also have a complementary rate of production of empty blocks by GC, as they are recommissioned when lospace objects are reclaimed.

What if your main paged space has ample space for allocating a large object, but there are no empty blocks, because live objects are equally peppered around all blocks? In that case, often the application would be best served by growing the heap, but maybe not. In any case in a strict-heap-size environment, we need a solution.

But for that... let's pick up another day. Until then, happy hacking!

GSoC 2022: First update - Planning


This summer I'm contributing to Nautilus as part of GSoC, focusing on improving the discoverability of the New Document feature. In this post I will describe how the project was split between me and Utkarsh, briefly go over the schedule established for my work, and briefly mention my current research in GNOME Boxes.

The split

The initial short project idea assumed that only one student was going to work on it, so when both Utkarsh Gandhi and I were accepted, we had quite an unexpected situation. Fortunately, the fact that the project had many stretch goals allowed us to split it so that both of us can work independently. This unexpected situation has taught us to share tasks in a meaningful way and to work without blocking each other's progress. Most of the initial tasks that aim to revamp the UI and the code of the New Document menu go to Utkarsh, while I'm going to focus on the discoverability side and the default experience of the user, meaning what happens when there are no templates in the Templates directory.

Make "New Documents" feature discoverable

Finally, the subject of my project turned out to be about resolving the discoverability issue of this feature: when there are no templates in the Templates directory, the New Document menu is not shown, and many users don't know it exists. They completely ignore the Templates directory, not knowing what it does and just assuming it's one of those "just there" directories and files. Another thing I'll look at is the ability to easily add templates, without the cumbersome process of creating and copying files. I'll also reconsider the pros and cons of default templates.


While it's not final, because we've lost our crystal balls, here's the current anticipated schedule I'll be following:
  1. Research the underlying problem and use cases by looking at other implementations (operating systems, file managers, web apps) (2 weeks) - 12.06-26.06 (current)
  2. Design a mockup based on above research, adhering to GNOME HIG and designers review (1 week) - 26.06-03.07
  3. Code prototype iteration in a development branch that provides a meaningful empty state, makes sure the "new documents" menu item is always shown, and the user can add more templates (2 weeks) - 03.07-17.07
  4. Test and review the prototype iteration, refine the prototype based on feedback and repeat if necessary (2 weeks) - 17.07-31.07
  5. Open a Merge Request to merge the development branch to the master branch (4 days) - 31.07-04.08

Beginning of research

As I have started on the first point of my timeline/schedule, I found myself in need of many virtual machines. Equipped with the powerful yet simple and elegant GNOME Boxes, I managed to run 2 different operating systems on it:
  • ChromeOS Flex - knowledge gained on how to work with libvirt xml files allowed me to figure out how to get this web centric system running in Boxes. I have documented the necessary steps in this guide, but it definitely deserves a separate blog post.
  • Windows 11 thanks to this excellent guide.
Next to try are file managers from other Linux distributions, and web apps; I've already tested macOS Finder. My next GSoC update definitely won't lack colorful pictures.


The project is coming along quite smoothly, with reasonable objectives and deliverables. We’ve managed to figure out how to split the project, establish a schedule for our work, and I’ve learnt how to use GNOME Boxes to test different implementations of the “New Document” feature. I found the community very helpful and welcoming, just like my mentor Antonio Fernandes who’s very understanding and patient :)

June 19, 2022

Google Summer of Code with GNOME Foundation.

Originally posted on dev.to.

Google Summer of Code — every undergrad’s dream to get selected in it one day. I found out about Google Summer of Code in my freshman year. I was so excited that a program like this exists where open source contributors collaborate over projects with various organizations!

Google announces the program in mid-February; about a month later, the accepted organizations are announced. Applicants then start applying to various organizations and make proposals for the projects they like. In May, the results are announced, and the coding period lasts for around three months.

Learn more about Google Summer of Code here.

When should I apply?

Google announces organizations around February. Look at the detailed timeline here. There’s no right time to start contributing to Open Source and getting selected in Google Summer of Code. You can start right now, contact admins and work on your issues. This will increase your chances for the next term!

Tip: Look for projects from idea lists of organizations which didn’t get selected for GSoC this year. Contact mentors and start individually contributing towards it. This will boost your chances, whenever you decide to apply :)

Below, I will share my experience of how I got started with Google Summer of Code and made it in.

Finding an organization

You should first ask yourself: what are my skills? What am I proficient with? Is the community supportive? Do I have any experience in this field? Do I know at least 50% of the skills mentioned by the organization? The rest can be learnt while contributing to the project.

I started looking for past selected organizations in February, found an organization named Metacall, which made polyglot programming easy. I made some contributions there. I looked into their past projects and tried to understand how the code base worked. The tech stack was mainly Python, C++, Rust, Nodejs, Docker. I knew very little about these.

I am intermediate in web dev, so alongside that, I also started looking for organizations which had web dev projects. GSoC allows you to submit a maximum of three proposals, of which only one gets selected. I would suggest doing your research first, and I recommend choosing only one organization and contributing steadily to it.

How did I get to know about GNOME?

Since I used the Ubuntu distribution of Linux, I had the GNOME desktop. I was impressed that even the organization that makes the UI for Ubuntu is open source. I researched them and found out that they participate in the Google Summer of Code and Outreachy internship programs.

In March, the selected organizations were announced publicly. I browsed through different organizations and their web dev related projects and landed on the GNOME Foundation’s idea list page. As I was going through the different project ideas, the idea of Faces of GNOME — Continuing the Development of the Platform caught my eye.

Selecting and working on project

The Faces of GNOME is a Foundation-led initiative with the intent of championing the contributors and recognizing their continuous and previous efforts towards the GNOME project. Faces aim to be a historical platform where you’re able to see the faces behind the GNOME project. From current contributors to past contributors. Faces intend to be a place where Contributors have their own profile, serving as a directory of the current and past Contributors of the Project.

The project used Jekyll, HTML, CSS, and JavaScript as its tech stack. I had no idea about Jekyll when I started this project, though I had worked with Hugo, which is a similar static site generator.

I started studying and experimenting with Jekyll, as I had no idea about that static site generator. It took a week to study Jekyll and the codebase, and then I jumped onto ongoing issues. My mentors, Claudio Wunder and Caroline Henriksen, were supportive and helped me clear all my doubts (even silly ones)!

After getting familiar with the codebase, I started making contributions by adding features, creating wikis, suggesting ideas, etc. Check out all of my contributions here.

Contribution and Proposal drafting period

Next, in April, we had to submit our proposal. I had proposed a few new features, which were really appreciated by my mentor. Creating a project proposal was a difficult task as I had to cover every bit of the project's features in detail. I talked with my mentor about how I approached each topic, which helped me understand what they expected of me as well. This is crucial: I was interpreting some features differently while, in reality, they were designed to accomplish something else, and a tiny misunderstanding like that could lead you to make a poor proposal.

A previous year’s GSoC mentee, Arijit Kundu, helped me with drafting my proposal. I got my proposal reviewed by different Foundation members who were overseeing the project, and received nice feedback from everyone. Finally, I created my proposal using the template provided by the organization.

One of the most significant judging criteria is the timeframe, so take care when selecting or drafting it.

Even after submitting my proposal, my contributions didn’t stop, and I started engaging with the community more. I asked questions, joined different channels, and talked about various features I wanted to implement in this project.

Result Day!

Finally, the result day came and I was happy to get selected in Google Summer of Code’22 under GNOME Foundation. I never imagined that I would be a part of this program. Open Source truly does wonders!

So, this was my experience of getting selected into Google Summer of Code. I hope it gave you some insight. If you have any questions, please connect with me on social media. I’d be happy to help you :)

Happy Summers!🌞

June 17, 2022

GSoC update #1 – Planning

The GSoC coding period started on Monday, so this is a good time to blog about what I’ve started working on and what my milestone is for finishing the project. First off, I’ve created a simple mockup using Sonny Piers’ amazing Workbench app. This is the first step in knowing how we want the UI to look, at least in the first iteration.

Media history viewer mockup

Thanks to the mockup, I’ve created a milestone with approximate time estimates for each task. This is what my milestone looks like:

First part – Implement a basic media history viewer (18 days)

  • Add MediaTimeline list model that can load media messages (6 days, in progress)
  • Add a subpage to the RoomDetails dialog for the media history with a GtkGridView that links to the MediaTimeline (2 days)
  • Add MediaHistoryImage widget that can show an image message type (3 days)
  • Add MediaHistoryVideo widget that can show a video message type (3 days)
  • Add MediaHistoryAudio widget that can show an audio message type (1 day)
  • Add MediaHistoryVoice widget that can show a voice message type (1 day)
  • Add MediaHistoryFile widget that can show a file message type (2 days)

Second part – Add click actions to the media history widgets (18 days)

  • Integrate the MediaViewer inside the media history page as a subpage (2 days)
  • Make image and video message types to be opened by the MediaViewer on click (2 days)
  • Make the file of the MediaHistoryFile widget to download on click and show the progress (6 days)
  • Make the file of the MediaHistoryFile widget open on click when it’s downloaded (2 days)
  • Add a dialog to listen to the audio of the MediaHistoryAudio and MediaHistoryVoice widgets on click (6 days)

Third part – Filters & Animations (12 days)

  • Wrap the MediaTimeline list model in a GtkFilterListModel to be able to filter the list (1 day)
  • Add options to filter the media history by media type (4 days)
  • Add animations to the MediaViewer to open and close photos and videos (3 days)
  • Add a swipe back gesture to the MediaViewer, similar to the one found in Telegram (4 days)

Sneak peek: Media viewer animations

Some days ago I started working on a media viewer for my app Telegrand. I wanted a similar feeling of the media viewer on Telegram iOS and Android, which I’ve always found really cool to use. You can see my progress in the tweets below. The animations and the swipe gestures were liked quite a bit, so I’ve decided to add them in Fractal too, so that they can also be used in the media history viewer.

Status update, 17/06/2022

I am currently in the UK – visiting folk, working, and enjoying the nice weather. So my successful travel plans continue for the moment… (corporate mismanagement has led to various transport crises in the UK so we’ll see if I can leave as successfully as I arrived).

I started the Calliope playlist toolkit back in 2016. The goal is to bring open data together and allow making DIY music recommenders, but it’s rather abstract to explain this via the medium of JSON documents. Coupled with a desire to play with GTK4, which I’ve had no opportunity to do yet, and inspired by a throwaway comment in the MusicBrainz IRC room, I prototyped a graphical app that shows what kind of open data is available for playlist generation.

This “calliope popup” app can watch MPRIS notifications, or page through an existing playlist. In the future it could also page through your Listenbrainz listen history. So far it just shows one type of data:

This screenshot shows MusicBrainz metadata for my test playlist’s first track, which happens to be the song “Under Pressure”. (This is a great test because it is credited to two artists :-). The idea is to flesh out the app with metadata from various different providers, making it easier to see what data is available and detect bad/missing info.

The majority of the time spent on this so far has been (re-)learning GTK and figuring out how to represent the data on screen. There was also some work involved in making Calliope itself return data more usefully.

Some nice discoveries since I last did anything in GTK are the Blueprint UI language, and the Workbench app. It’s also very nice having the GTK Inspector available everywhere, and being able to specify styling via a CSS file. (I’ve probably done more web sites than GTK apps in the last 10 years, so being able to use the same mental model for both is a win for me.) The separation of Libadwaita from GTK also makes sense and helps GTK4 feel more focused, (mostly) avoiding having 2 or 3 widgets for one purpose.

Apart from that, I’ve been editing and mixing new Vladimir Chicken music – I can strongly recommend that you never try to make an eight minute song. This may be the first and last 8 minute song from VC 🙂

June 15, 2022

Cambalache 0.10.0 is out!

3rd party libs release!

After almost 6 months of work I am pleased to announce a new Cambalache release!

Adwaita and Handy support

This cycle's main focus was adding support for 3rd party libraries, and what better than Adwaita and Handy to start with?

Keep in mind that workspace support for the new widgets is minimal, which means you should be able to create all widgets and set their properties, but some widgets might not show correctly in the workspace or might lack placeholder support. Please file an issue if you find something!

Inline object properties support

One of the new features in Gtk 4 is the ability to define a new object directly in a property instead of using a reference.

 <object class="GtkWindow">
   <property name="child">
     <object class="GtkLabel">
       <property name="label">Hola Mundo</property>
     </object>
   </property>
 </object>

You will be able to create such an object by clicking the + icon of the object property, and the child will appear in the hierarchy with the property name as a prefix.

Special child type support

An important missing feature was being able to define a special child type, which is needed for things like setting a titlebar widget in a window.

<object class="GtkWindow">
   <child type="titlebar">
     <object class="GtkHeaderBar"/>
   </child>
</object>

Now all you have to do is add the widget as usual and set the special type in the layout tab!

New Property Editors

From now on you will not have to remember all the icon names: just select the icon you want with the new chooser popover.

GdkColor and GdkRgba properties are also supported using a color button chooser.

Child reordering support

Sometimes the order of serialization matters a lot, especially when there is no layout/packing property to define the order of children. This is why you can now reorder the children's serialization position directly in the hierarchy!

Full Release Notes

  • Add Adwaita and Handy library support
  • Add inline object properties support (only Gtk 4)
  • Add special child type support (GtkWindow title widget)
  • Improve clipboard functionality
  • Add support for reordering children position
  • Add/Improve workspace support for GtkMenu, GtkNotebook, GtkPopover, GtkStack, GtkAssistant, GtkListBox, GtkMenuItem and GtkCenterBox
  • New property editors for icon name and color properties
  • Add support for GdkPixbuf, Pango, Gio, Gdk and Gsk flags/enums types
  • Add Ukrainian translation (Volodymyr M. Lisivka)
  • Add Italian translation (capaz)
  • Add Dutch translation (Gert)


Cambalache is still in heavy development so if you find something that does not work please file a bug here

Matrix channel

Have any question? come chat with us at #cambalache:gnome.org

Where to get it?

Download source from gitlab

git clone https://gitlab.gnome.org/jpu/cambalache.git

or the bundle from flathub

flatpak remote-add --user --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install --user flathub ar.xjuan.Cambalache

Happy coding!

The tree view is undead, long live the column view‽

As the title says, this is a spin-off of my last post; this time I'll talk about Files' list view instead of the grid view.

But before that, a brief summary of what happened in-between.

Legitimate succession

In my last post we were at the interregnum: Files grid view was temporarily managed by GtkFlowBox. Since then the switch to GTK4 has happened and with it came GtkColumnView to claim its due place.

Despite that, GNOME 42 couldn’t ship the GTK4-based Files app (but it still benefited from it, with the new pathbar and more). Can you guess whose fault it was?

A view of trees

That’s how the spin-off starts.

Files' list view has for a long time been managed by GtkTreeView, a venerable GTK widget which is still in GTK4 and hasn't had major API changes.

What looked like good news, for ease of porting, was hiding bad news: its drag-and-drop API is still a nightmare.

Drag and drag

GTK4 brings a new drag-and-drop paradigm to the table that makes it dramatically easier to implement drag-and-drop within and between apps. But GtkTreeView doesn’t employ widgets for its rows, so it can’t use the new paradigm.

So, it does its own thing, but with a different API from GTK3 too. I tried to use it to restore drag-and-drop on list view, but:

1. it was laborious and time-consuming;
2. grid view, which still lacked drag-and-drop support, couldn’t benefit from this work;
3. it might require debugging and improving GtkTreeView itself.

So I realized GtkTreeView was just dragging me down and we’d better move on.


Because treeview

Users, designers, and developers have long requested things for Files list view that are basically impossible to do correctly and maintainably with GtkTreeView:

  • rubberband selection;
  • background space around items (for folder context menu);
  • sort menu shared with the grid view;
  • CSS styling;
  • animations;
  • rich search results list (without a clunky “Location” column);
  • and more…

Much like EelCanvas, GtkTreeView doesn’t employ child widgets for the content items, which makes it lack many useful GTK features.

A view of columns

In my previous blog post I mentioned how GTK4 brings new scalable view widgets. But I didn’t mention that they are super amazing, did I?

The hero of this blog post is GtkColumnView. It is a relative of GtkGridView, but displays items in a list with columns instead.

Both take a model and use a factory to produce item widgets on-demand.

This has made it simpler to implement the new list view. All I had to do was copy the grid view code and make a few changes. That was going to be easy!

Famous last words

While the initial implementation was indeed a quick job, it was possible only by taking many shortcuts. Also known as very ugly hacks. It was good enough to share this screenshot in early February, but not good enough to release in GNOME 42.

As the 42 release was no longer the target, there was enough time to do things right. I’ve learnt more about GtkColumnView, fixed some GTK bugs, reported a few others, and engaged with GTK developers in API discussions. Thanks to their invaluable help, I was able to get rid of the hacks one by one, and the quality and design of the code have improved significantly.

Old VS New

Who needs words when I have screenshots?

Old Recents list ─ misaligned thumbnails and name, wide Location column
New Recents list ─ Centered thumbnails, location as a caption, size column present
Old search results list ─ wide Location column, truncated full-text snippet, cannot change sort order.


New search results list ─ Sort menu, full snippets, location caption only for subfolder results
New List view ─ compact mode, rubberband selection, background space between and around rows

Columns & trees?

For a long time, Files has had an optional feature for list view which allows expanding folders in the same view. I don’t use it, but I still did my best to implement it in GtkColumnView.

However, this implementation is still very unstable, so there is a chance GNOME 43 won’t have this feature. If you can code and want this feature to be included in GNOME 43, you can pick up where I’ve left off; your help is welcome!

A view of cells

Unlike the previous blog post, I’m going to share a little about the code design.

As mentioned, both GtkGridView and GtkColumnView use a model. The new Files list and grid views use a NautilusViewModel (containing NautilusViewItem objects) and share a lot of model-related code under a NautilusListBase abstract class.

src/nautilus-list-base.c: 1291 lines of code
src/nautilus-list-view.c: 1139 lines of code
src/nautilus-grid-base.c: 502 lines of code

In order to maximize the shared code, the child widgets of both views inherit from a NautilusViewCell widget class:

  • in grid view, each item creates one cell widget: NautilusGridCell;
  • in list view, each item creates one cell widget per column:
    • NautilusNameCell for the first column.
    • NautilusStarCell for the last column.
    • NautilusLabelCell for every other column.

Thanks to this cell abstraction, NautilusListBase can also hold common code for child widgets of both views, including event controllers! And this means they are also going to share drag-and-drop code!

Reviews welcome in https://gitlab.gnome.org/GNOME/nautilus/-/merge_requests/847


June 14, 2022

Attempting to create an aesthetic global line breaking algorithm

The Knuth-Plass line breaking algorithm is one of the cornerstones of TeX and why its output looks so pleasing to read (even to people who do not like the look of Computer Modern). While most text editors do line breaking with a quick & dirty algorithm that looks at each line in isolation, TeX does something fancier called minimum raggedness. The basic algorithm defines a global metric over the entire chapter and then chooses line breaks that minimize it. The basic function is the following:

For each line measure the difference between the desired width and the actual width and square the value. Then add these values together.
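In code form this metric is tiny; here is my own sketch (not taken from TeX), measuring line widths in characters:

```python
def total_penalty(line_widths, target_width):
    # Sum of squared deviations from the desired width, one term per line.
    return sum((target_width - w) ** 2 for w in line_widths)

# Evenly filled lines score far better than one badly short line:
print(total_penalty([58, 60, 59], 60))  # → 5
print(total_penalty([60, 60, 45], 60))  # → 225
```

The squaring is what makes one very ragged line cost more than several slightly ragged ones.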

As you can easily tell, line breaks made at the beginning of the chapter affect the potential line breaks you can do later. Sometimes it is worth it to make a locally non-optimal choice at the beginning to get a better line break possibility much later. Evaluating a global metric like this can be potentially slow, which is why interactive programs like LibreOffice do not use this method.

The classical way of solving this problem is to use dynamic programming. It requires that the problem satisfy the Bellman optimality condition (or, if you are into rocketry, the Pontryagin maximum principle). This is perhaps best illustrated with an example: suppose you are in Paris and want to drive to Venice. This requires picking some path to drive that is "optimal" for your requirements. Now suppose we know that Zürich is along the path of this optimal route. The requirement basically says, then, that the optimal route you take from Paris to Zürich does not in any way affect the optimal route from Zürich to Venice. That is, the two paths can be routed independently of each other. This is true for the basic form of Knuth-Plass line breaking.
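Before seeing why it breaks down in practice, here is the Bellman-friendly basic form as a dynamic program. This is a toy sketch of my own: words are never hyphenated, widths are in characters, and every line (even the last) is penalized, which real TeX does not do.

```python
def break_lines(words, target):
    """Break words into lines minimizing the sum of squared slack per line."""
    n = len(words)
    best = [0.0] * (n + 1)   # best[i]: minimal total penalty for words[i:]
    choice = [n] * (n + 1)   # choice[i]: where the line starting at i ends
    for i in range(n - 1, -1, -1):
        best[i] = float("inf")
        width = -1  # compensates for the space added before the first word
        for j in range(i + 1, n + 1):
            width += 1 + len(words[j - 1])
            if width > target and j > i + 1:
                break  # too wide (an overlong single word still gets its own line)
            cost = (target - width) ** 2 + best[j]
            if cost < best[i]:
                best[i], choice[i] = cost, j
    # Reconstruct the lines from the recorded choices.
    lines, i = [], 0
    while i < n:
        lines.append(" ".join(words[i:choice[i]]))
        i = choice[i]
    return lines

# A greedy breaker would output ["aaa bb", "cc", "ddddd"] (penalty 17);
# the global optimum spreads the slack more evenly (penalty 11).
print(break_lines("aaa bb cc ddddd".split(), 6))  # → ['aaa', 'bb cc', 'ddddd']
```

Because best[i] depends only on later entries, a locally worse first line is chosen whenever it pays off globally.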

It is not true for line breaking in practice.

As an example there is an aesthetic requirement that there should not be three or more consecutive lines that end with a hyphen. Suppose you have split the problem in two and that in the top part the last two lines end with a dash and that the first line of the bottom part also ends with a dash. Each of the two parts is optimal in isolation but when combined they'd get the additional penalty of three consecutive hyphens and thus said solution might not be globally optimal.

So then what?

Computers today are a fair bit faster than in the late 70s/early 80s when TeX was developed. The problem size is also fairly small, the average text chapter only contains a few dozen lines (unless you are James Joyce). This leads to the obvious question of "couldn't you just work harder rather than smarter and try all the options?" Sadly the deities of combinatorics say you can't. There are just too many possibilities.

If you are a bit smarter about it, though, you can get most of the way there. For any given point in the raw text there are reasonably only a few places where you could place the optimal line break since every line must be "fairly smooth". The main split point is the one "closest" to the chapter width and then you can try one or two potential split points around it. These choices can be examined recursively fairly easily. So this is what I implemented as a test.

It even worked fairly well for a small sample text and created a good looking set of line breaks in a fraction of a second. Then I tried it with a different sample text that was about twice as long. The program then froze taking 100% CPU and producing no results. Foiled by algorithmic complexity once again!

After a bunch of optimizations what eventually ended up working was to store, for each split point, the N paths with the smallest penalties up to that point. Every time we enter that point the penalty of the current path is evaluated and compared to the list. If the penalty is larger than the worst option then search is abandoned. The resulting algorithm is surprisingly fast and could possibly even be used in real time.

The GUI app

Ideally you'd want to have tests for this functionality. This is tricky, since there is no golden correct answer, only what "looks good". Thus I wrote an application that can be used to examine the behaviour of the program with different texts, fonts and other parameters.

On the left you have the raw editable text, the middle shows how it would get rendered and on the right are the various statistics and parameters to twiddle. If we run the optimization on this piece of text the result looks like this:

For comparison here's what it looks like in LibreOffice:

And in Scribus:

No sample picture of TeX provided because I have neither the patience nor the skills to find out how to make it use Gentium.

While the parameters are not exactly the same in all three cases, we can clearly see that the new implementation produces more uniform results than the existing ones. One thing to note is that in some cases the new method creates lines that are a bit wider than the target box, where the other two never do. This causes the lines to be squished when justified and it looks really bad if done even a bit too much. The optimization function would probably need to be changed to penalize wide lines more than narrow ones.

The code

Get it here. It uses Gtk 4 and a bunch of related tech so getting it to work on anything else than Linux is probably a challenge.

There are a bunch of optimizations one could do, for example optical margin alignment or stretching individual letters on lines that fall far from the target width.

Thanks to our sponsor

This blog post was brought to you in part by two weeks of sick leave due to a dislocated shoulder. Special thanks to the paramedics on call and the fentanyl they administered to me.

How many Flathub apps reuse other package formats?

Today I read Comparison of Fedora Flatpaks and Flathub remotes by Hari Rana, who is an active and valued member of the Flatpak community. The article is a well-researched and well-written overview of how these two Flatpak ecosystems differ, and contains the following remark about one major difference (emphasis mine):

Flathub is open with what source a Flatpak application (re)uses, whereas Fedora Flatpaks strictly reuses the RPM format.

As such, Flathub has tons of applications that reuse other package formats.

When this article was discussed in the Flatpak Matrix channel, several people wondered whether “tons” is a fair assessment. Let’s find out!

The specific examples given in the article are of apps which reuse a .deb (to which I will add .rpm), AppImage, Snap package, or binary .tar.gz archive. It’s not so easy to distinguish a binary tarball from a source tarball, so as a substitute I will look for apps which use extra-data to download external sources at install time rather than at build time.
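For reference, this is roughly what an extra-data source looks like in a flatpak-builder manifest (the filename, URL, checksum, and size here are placeholders, not a real app):

```json
{
  "type": "extra-data",
  "filename": "example.deb",
  "url": "https://example.com/downloads/example.deb",
  "sha256": "0000000000000000000000000000000000000000000000000000000000000000",
  "size": 12345678
}
```

The file is downloaded onto the user's machine at install time, verified against the checksum, and unpacked by the app's apply_extra script, which is why grepping for extra-data is a reasonable proxy for "repackages a binary".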

I have cloned every repo from the Flathub GitHub organisation with this script I had lying around. There are 2,220 such repositories. This is a bigger number than the 1,518 apps cited in the blog post, because it includes many things which are not apps, such as 258 GTK themes and 60 digital audio workstation plugins. I also believe that the 1,518 number does not include end-of-lifed apps, whereas my methodology does. This post will also ignore the existence of OBS Studio and Firefox, where those projects build the Flatpak from source on their own infrastructure and push the result into Flathub.

Now I’m just going to grep them all for the offending strings:

$ (for i in */; do
    if git -C $i grep --quiet -E '(\.(deb|rpm|AppImage|snap)\>)|(extra-data)'; then
        echo $i
    fi
done) | wc -l

(Splitting apart the search terms, we have 141 repos matching .deb, 10 for .rpm, 23 for .AppImage, 6 for .snap, and 110 for extra-data. These numbers don’t sum to 237 because the same repo can use multiple formats, and these binary files are often used by extra-data apps.)
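The reason the per-format counts exceed the total is plain set arithmetic; a toy Python illustration with invented repo names:

```python
# Hypothetical repos matching each pattern; one repo can match several.
deb = {"app-a", "app-b", "theme-c"}
extra_data = {"app-b", "theme-c", "app-d"}

print(len(deb) + len(extra_data))  # → 6: the naive sum counts app-b and theme-c twice
print(len(deb | extra_data))       # → 4: the union counts each repo once
```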

So by my back-of-an-envelope calculation, 237 out of 2220 repos on Flathub repackage other binary formats. This is a little under 11%. Of those 237, 51 are GTK themes, specifically variations of the Mint, Pop and Yaru themes. If we assume that all the other 186 are apps, and that none of them are EOLed, then 186 divided by 1,518 gives us a little more than 12% of apps on Flathub that are repackaged from other binary formats. (I believe this is a slight overestimate but I have run out of time this morning.)

Is that a big number? It’s roughly what I expected. Is it “ton[ne]s”? Compared to Fedora’s Flatpak repo, where everything is built from source, it certainly is: indeed, it’s more than the total number of apps in the Fedora Flatpak repo!

If it is valuable for Flathub to provide proprietary apps like Slack whose publishers do not currently wish to support Flatpak (which I believe it is) then it’s unavoidable that some apps repackage other binary formats. OK, time for one last bit of data: what if we exclude extra-data apps?

$ (for i in */; do
    if ! git -C $i grep --quiet extra-data && \
       git -C $i grep --quiet -E '\.(deb|rpm|AppImage|snap)\>'; then
        echo $i
    fi
done) | wc -l

So (ignoring non-extra-data apps which use binary tarballs, if any such apps exist) that’s something like 76 apps and 51 GTK themes which probably could be built from source by Flathub, but aren’t. It may be hard to build some of these apps from source (perhaps the upstream build system requires network access) but the rewards would include support for aarch64 and any other architectures Flathub may add, and arguably greater transparency in how the app is built.

If you want to do your own research in this vein, you may be interested in gasinvein’s Flatpak remote metadata fetcher, which would let you generate and analyse a 200 MiB JSON file rather than cloning and grep-ing 4.6 GiB of Git repositories. His analysis using this data yields 174 apps, quite close to my 186 estimate above.

./flatpak-remote-metadata.py -u https://dl.flathub.org/repo flathub | \
    jq -r '.[] | select(
        .manifest | objects | .modules[] | recurse(.modules | arrays | .[]) |
        .sources | arrays | .[] | .url | strings | test("\\.(deb|rpm|snap|AppImage)$")
    ) | .metadata.Application.name // .metadata.Runtime.name' | \
    sort -u | wc -l

June 12, 2022

Flatpak Brand Refresh


Flatpak has been at the center of the recent app renaissance, but its visual identity has remained fairly stale.

Without diverging too much from the main elements of its visual identity we’ve made it more contemporary. The logo in particular has been simplified to work in all of the size scenarios and visual complexity contexts.

Flatpak Logo

There are definitely a few spots the rebrand has yet to propagate to, so please refer to the guidelines if you spot an old coat of paint.

If you’re giving a talk on Flatpak, feel free to make use of the LibreOffice Impress Template.

June 11, 2022

Using VS Code and Podman to Develop SYCL Applications With DPC++'s CUDA Backend

I recently wanted to create a development container for VS Code to develop applications using SYCL based on the CUDA backend of the oneAPI DPC++ (Data Parallel C++) compiler. As I’m running Fedora, it seemed natural to use Podman’s rootless containers instead of Docker for this. This turned out to be more challenging than expected, so I’m going to summarize my setup in this post. I’m using Fedora Linux 36 with Podman version 4.1.0.


Since DPC++ is going to use CUDA behind the scenes, you will need an NVIDIA GPU and the corresponding kernel driver for it. I’ve been using the NVIDIA GPU driver from RPM Fusion. Note that you do not have to install CUDA; it is part of the development container alongside the DPC++ compiler.

Next, you require Podman, which on Fedora can be installed by executing

sudo dnf install -y podman

Finally, you require VS Code and the Remote - Containers extension. Just follow the instructions behind those links.

Installing and Configuring the NVIDIA Container Toolkit

The default configuration of the NVIDIA Container Toolkit does not work with Podman, so it needs to be adjusted. Most steps are based on this guide by Red Hat, which I will repeat below.

  1. Add the repository:

    curl -sL https://nvidia.github.io/nvidia-docker/rhel9.0/nvidia-docker.repo | sudo tee /etc/yum.repos.d/nvidia-docker.repo

    If you aren’t using Fedora 36, you might have to replace rhel9.0 with your distribution, see the instructions.

  2. Next, install the nvidia-container-toolkit package.

    sudo dnf install -y nvidia-container-toolkit
  3. Red Hat’s guide mentions configuring two settings in /etc/nvidia-container-runtime/config.toml. But when using Podman with --userns=keep-id to map the UID of the user running the container to the user running inside the container, you have to change a third setting. So open /etc/nvidia-container-runtime/config.toml with

    sudo -e /etc/nvidia-container-runtime/config.toml

    and change the following three lines:

    #no-cgroups = false
    no-cgroups = true
    #ldconfig = "@/sbin/ldconfig"
    ldconfig = "/sbin/ldconfig"
    #debug = "/var/log/nvidia-container-runtime.log"
    debug = "~/.local/nvidia-container-runtime.log"
  4. Next, you have to create a new SELinux policy to enable GPU access within the container:

    curl -sLO https://raw.githubusercontent.com/NVIDIA/dgx-selinux/master/bin/RHEL7/nvidia-container.pp
    sudo semodule -i nvidia-container.pp
    sudo nvidia-container-cli -k list | sudo restorecon -v -f -
    sudo restorecon -Rv /dev
  5. Finally, tell VS Code to use Podman instead of Docker by going to User Settings → Extensions → Remote - Containers, and change Remote › Containers: Docker Path to podman, as in the image below.

Docker Path setting

Replace docker with podman.

Using the Development Container

I created an example project that is based on a container that provides

You can install additional tools by editing the project’s Dockerfile.

To use the example project, clone it:

git clone https://github.com/sebp/vscode-sycl-dpcpp-cuda.git

Next, open the vscode-sycl-dpcpp-cuda directory with VS Code. At this point VS Code should recognize that the project contains a development container and suggest reopening the project in the container.

Reopen project in container

Click Reopen in Container.

Initially, this step will take some time because the container’s image is downloaded and VS Code will install additional tools inside the container. Subsequently, VS Code will reuse this container, and opening the project in the container will be quick.

Once the project has been opened within the container, you can open the example SYCL application in the file src/sycl-example.cpp. The project is configured to use the DPC++ compiler with the CUDA backend by default. Therefore, you just have to press Ctrl+Shift+B to compile the example file. Using the terminal, you can now execute the compiled program, which should print the GPU it is using and the numbers 0 to 31.

Alternatively, you can compile and directly run the program by executing the Test task by opening the Command Palette (F1, Ctrl+Shift+P) and searching for Run Test Task.


While the journey to use a rootless Podman container with access to the GPU with VS Code was rather cumbersome, I hope this guide will make it less painful for others. The example project should provide a good reference for a devcontainer.json to use rootless Podman containers with GPU access. If you aren’t interested in SYCL or DPC++, you can replace the existing Dockerfile. There are two steps that are essential for this to work:

  1. Create a vscode user inside the container.
  2. Make sure you create certain directories that VS Code (inside the container) will require access to.

Otherwise, you will encounter various permission denied errors.

June 10, 2022

How to get your application to show up in GNOME Software

Adding Applications to the GNOME Software Center

Written by Richard Hughes and Christian F.K. Schaller

This blog post is based on a white-paper-style writeup Richard and I did a few years ago. Since I noticed this week that there wasn’t any other comprehensive writeup online on how to add the required metadata to get an application to appear in GNOME Software (or any other major open source app store), I decided to turn the writeup into a blog post, hopefully useful to the wider community. I tried to clean it up a bit as I converted it from the old white paper, so hopefully all information in here is valid as of this posting.


Traditionally we have had little information about Linux applications before they are installed. With the creation of a software center we require access to a rich set of metadata about an application before it is deployed so it can be displayed to the user and easily installed. This document is meant to be a guide for developers who wish to get their software appearing in the software stores in Fedora Workstation and other distributions. Without the metadata described in this document your application is likely to go undiscovered by many or most Linux users, but by reading this document you should be able to prepare your application relatively quickly.


GNOME Software

Installing applications on Linux has traditionally involved copying binary and data files into a directory and just writing a single desktop file into a per-user or per-system directory so that it shows up in the desktop environment. In this document we refer to applications as graphical programs, rather than other system add-on components like drivers and codecs. This document will explain why the extra metadata is required and what is required for an application to be visible in the software center. We will try to document how to do this regardless of whether you choose to package your application as an rpm package or as a flatpak bundle. The current rules are a combination of various standards that have evolved over the years, and we will try to summarize and explain them here, going from bottom to top.

System Architecture

Linux File Hierarchy

Traditionally applications on Linux are expected to install binary files to /usr/bin, architecture-independent data files to /usr/share/, and configuration files to /etc. If you want to package your application as a flatpak the prefix used will be /app, so it is critical for applications to respect the prefix setting. Small temporary files can be stored in /tmp and much larger files in /var/tmp. Per-user configuration is either stored in the user’s home directory (in ~/.config) or stored in a binary settings store such as dconf. As an application developer, never hardcode these paths; set them following the XDG standard so that they relocate correctly inside a Flatpak.
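As a minimal sketch of the XDG fallback rule in Python (the helper name is mine; GLib users get the same logic from g_get_user_config_dir() in C):

```python
import os

def xdg_config_home():
    """Per the XDG Base Directory spec: honor $XDG_CONFIG_HOME,
    falling back to ~/.config when it is unset or empty."""
    return os.environ.get("XDG_CONFIG_HOME") or os.path.expanduser("~/.config")

# Inside a Flatpak sandbox the environment variable is set for you,
# so code written this way relocates correctly without hardcoded paths.
```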

Desktop files

Desktop files have been around for a long while now and are used by almost all Linux desktops to provide the basic description of a desktop application that your desktop environment will display, like a human-readable name and an icon.

So the creation of a desktop file on Linux allows a program to be visible to the graphical environment, e.g. KDE or GNOME Shell. If applications do not have a desktop file they must be manually launched using a terminal emulator. Desktop files must adhere to the Desktop File Specification and provide metadata in an ini-style format such as:

  • Binary type, typically ‘Application’
  • Program name (optionally localized)
  • Icon to use in the desktop shell
  • Program binary name to use for launching
  • Any mime types that can be opened by the applications (optional)
  • The standard categories the application should be included in (optional)
  • Keywords (optional, and optionally localized)
  • Short one-line summary (optional, and optionally localized)

The desktop file should be installed into /usr/share/applications for applications that are installed system wide. An example desktop file is provided below:

[Desktop Entry]
Type=Application
Name=OpenSCAD
Comment=The Programmers Solid 3D CAD Modeller
Icon=openscad
Exec=openscad %f
Categories=Graphics;3DGraphics;Engineering;
Keywords=3d;solid;model;

The desktop files are used when creating the software center metadata, and so you should verify that you ship a .desktop file for each built application, and that these keys exist: Name, Comment, Icon, Categories, Keywords and Exec and that desktop-file-validate correctly validates the file. There should also be only one desktop file for each application.

The application icon should be in the PNG format with a transparent background and installed in /usr/share/icons/, /usr/share/icons/hicolor/*/apps/, or /usr/share/${app_name}/icons/*. The icon should be at least 128×128 in size (as this is the minimum size required by Flathub).

The file name of the desktop file is also very important, as it becomes the assigned ‘application ID’. New applications typically use a reverse-DNS style, e.g. org.gnome.Nautilus would be the app-id, and the desktop entry file should thus be named org.gnome.Nautilus.desktop; older programs may just use a short name, e.g. gimp.desktop. It is important to note that the file extension is also included as part of the desktop ID.

You can verify your desktop file using the command ‘desktop-file-validate’. You just run it like this:

desktop-file-validate myapp.desktop

This tool is available through the desktop-file-utils package, which you can install on Fedora Workstation using this command:

dnf install desktop-file-utils

You also need what is called a metainfo file (previously known as an AppData file), a file with the suffix .metainfo.xml (some applications still use the older .appdata.xml name). It should be installed into /usr/share/metainfo with a name that matches the name of the .desktop file, e.g. gimp.desktop & gimp.metainfo.xml or org.gnome.Nautilus.desktop & org.gnome.Nautilus.metainfo.xml.

In the metainfo file you should include several 16:9 aspect screenshots along with a compelling translated description made up of multiple paragraphs.

In order to make it easier for you to do screenshots in 16:9 format we created a small GNOME Shell extension called ‘Screenshot Window Sizer’. You can install it from the GNOME Extensions site.

Once it is installed you can resize the window of your application to 16:9 format by focusing it and pressing ‘ctrl+alt+s’ (you can press the key combo multiple times to get the correct size). It should resize your application window to a perfect 16:9 aspect ratio and let you screenshot it.

Make sure you follow the style guide, which can be tested using the appstreamcli command line tool. appstreamcli is part of the ‘appstream’ package in Fedora Workstation:

appstreamcli validate foo.metainfo.xml

If you don’t already have the appstreamcli installed it can be installed using this command on Fedora Workstation:

dnf install appstream

What is allowed in a metainfo file is defined in the AppStream specification, but common items typical applications add are:

  • License of the upstream project in SPDX identifier format, or ‘Proprietary’
  • A translated name and short description to show in the software center search results
  • A translated long description, consisting of multiple paragraphs, itemized and ordered lists.
  • A number of screenshots, with localized captions, typically in 16:9 aspect ratio
  • An optional list of releases with the update details and release information.
  • An optional list of kudos which tells the software center about the integration level of the application
  • A set of URLs that allow the software center to provide links to help or bug information
  • Content ratings and hardware compatibility
  • An optional gettext or QT translation domain which allows the AppStream generator to collect statistics on shipped application translations.

A typical (albeit somewhat truncated) metainfo file is shown below:

<?xml version="1.0" encoding="UTF-8"?>
<component type="desktop-application">
  <metadata_license>GPL-3.0+ or GFDL-1.3-only</metadata_license>
  <name>Terminal</name>
  <name xml:lang="ar">الطرفية</name>
  <name xml:lang="an">Terminal</name>
  <summary>Use the command line</summary>
  <summary xml:lang="ar">استعمل سطر الأوامر</summary>
  <summary xml:lang="an">Emplega la linia de comandos</summary>
  <description>
    <p>GNOME Terminal is a terminal emulator application for accessing a UNIX shell environment which can be used to run programs available on your system.</p>
    <p xml:lang="ar">يدعم تشكيلات مختلفة، و الألسنة و العديد من اختصارات لوحة المفاتيح.</p>
    <p xml:lang="an">Suporta quantos perfils, quantas pestanyas y implementa quantos alcorces de teclau.</p>
  </description>
  <screenshots>
    <screenshot type="default">
      <image>https://help.gnome.org/users/gnome-terminal/stable/figures/gnome-terminal.png</image>
    </screenshot>
  </screenshots>
  <content_rating type="oars-1.1"/>
  <url type="homepage">https://wiki.gnome.org/Apps/Terminal</url>
</component>

Some AppStream background

The AppStream specification is a mature and evolving standard that allows upstream applications to provide metadata such as localized descriptions, screenshots, extra keywords and content ratings for parental control. This introduction only scratches the surface of what it provides, so I recommend reading the specification through once you have understood the basics. The core concept is that the upstream project ships one extra metainfo XML file which is used to build a global application catalog. Thousands of open source projects now include metainfo files, and the software center shipped in Fedora, Ubuntu and openSUSE is now an easy to use application filled with useful application metadata. Applications without metainfo files are no longer shown, which provides quite some incentive for upstream projects wanting visibility in popular desktop environments. AppStream was first introduced in 2008 and since then many people have contributed to the specification. It is being used primarily for application metadata but is now also used for drivers, firmware, input methods and fonts. There are multiple projects producing AppStream metadata and also a number of projects consuming the final XML metadata.

When applications are built as packages by a distribution, the AppStream generation is done automatically, and you do not need to do anything other than installing a .desktop file and a metainfo.xml file in the upstream tarball or zip file. If the application is being built on your own machines or cloud instance, then the distributor will need to generate the AppStream metadata manually. This would for example be the case when internal-only or closed source software is being either used or produced. This document assumes you are currently building RPM packages and exporting yum-style repository metadata for Fedora or RHEL, although the concepts are the same for rpm-on-openSUSE or deb-on-Ubuntu.

NOTE: If you are building packages, make sure that two applications are not installed by one single package. If this is currently the case, split up the package so that there are multiple subpackages, or mark one of the .desktop files as NoDisplay=true. Make sure the application subpackages depend on any -common subpackage, and deal with upgrades (perhaps using a metapackage) if you’ve shipped the application before.

Summary of Package building

The steps outlined above explain the extra metadata you need to have your application show up in GNOME Software. This tutorial does not cover how to set up your build system to build these files, but for both Meson and autotools you should be able to find a wide range of examples online. There are also major resources available explaining how to create a Fedora RPM or how to build a Flatpak. You probably also want to tie both the desktop file and the metainfo file into your i18n system so the metadata in them can be translated. It is worth noting here that while this document explains how you can do everything yourself, we generally recommend relying on existing community infrastructure for hosting source code and packages if you can (for instance if your application is open source), as it will save you work and effort over time. For instance, putting your source code into GNOME’s git will give you free access to the translator community in GNOME and thus significantly increase the chance your application is internationalized. By building your package in Fedora you can get peer review of your package and free hosting of the resulting package. Or by putting your package up on Flathub you get wide cross-distribution availability.

Setting up hosting infrastructure for your package

Here we will explain how to set up a Yum repository for RPM packages that provides the needed metadata. If you are making a Flatpak, we recommend skipping ahead to the Flatpak section a bit further down.

Yum hosting and Metadata

When GNOME Software checks for updates it downloads various metadata files from the server describing the packages available in the repository. GNOME Software can also download AppStream metadata at the same time, allowing add-on repositories to include applications that are visible in the software center. In most cases distributors are already building binary RPMs and then building the repository metadata as an additional step, by running a tool over a directory of packages to generate the repomd files. The tool for creating the repository metadata is called createrepo_c and is part of the createrepo_c package in Fedora. You can install it by running the command:

dnf install createrepo_c

Once the tool is installed you can run these commands to generate your metadata:

$ createrepo_c --no-database --simple-md-filenames SRPMS/
$ createrepo_c --no-database --simple-md-filenames x86_64/

This creates the primary and filelist metadata required for updating on the command line. Next, to build the metadata required for the software center, we need to actually generate the AppStream XML. The tool for this is called appstream-builder. It works by decompressing .rpm files, merging together the .desktop file and the .metainfo.xml file, and preprocessing the icons. Remember, only applications installing AppData files will be included in the metadata.

You can install appstream-builder in Fedora Workstation by using this command:

dnf install libappstream-glib-builder

Once it is installed you can run it by using the following syntax:

$ appstream-builder \
   --origin=yourcompanyname \
   --basename=appstream \
   --cache-dir=/tmp/asb-cache \
   --enable-hidpi \
   --max-threads=1 \
   --min-icon-size=32 \
   --output-dir=/tmp/asb-md \
   --packages-dir=x86_64/

This takes a few minutes and generates some files in the output directory. Your output should look something like this:

Scanning packages...
Processing packages...
Merging applications...
Writing /tmp/asb-md/appstream.xml.gz...
Writing /tmp/asb-md/appstream-icons.tar.gz...
Writing /tmp/asb-md/appstream-screenshots.tar...Done!

The actual build output will depend on your compose server configuration. At this point you can also verify the application is visible in the generated appstream.xml.gz file.
We then have to take the generated XML and the tarball of icons and add them to the repomd.xml master document so that GNOME Software automatically downloads the content for searching.
This is as simple as running:

modifyrepo_c \
    --no-compress \
    --simple-md-filenames \
    /tmp/asb-md/appstream.xml.gz \
    x86_64/repodata/
modifyrepo_c \
    --no-compress \
    --simple-md-filenames \
    /tmp/asb-md/appstream-icons.tar.gz \
    x86_64/repodata/


Deploying this metadata will allow GNOME Software to add the application metadata the next time the repository is refreshed, typically once per day.

Hosting your Yum repository on GitHub

GitHub isn’t really set up for hosting Yum repositories, but here is a method that currently works. Once you have created a local copy of your repository, create a new project on GitHub. Then use the following commands to import your repository into GitHub:

cd ~/src/myrepository
git init
git add -A
git commit -a -m "first commit"
git remote add origin git@github.com:yourgitaccount/myrepo.git
git push -u origin master

Once everything is imported, go into the GitHub web interface and drill down in the file tree until you find the file called ‘repomd.xml’ and click on it. You should now see a button in the GitHub interface called ‘Raw’. Once you click that you get the raw version of the XML file, and in the URL bar of your browser you should see the URL of that raw file.
Copy that URL, as you will need the information from it to create your .repo file, which is what distributions and users need in order to reach your new repository. To create your .repo file, copy this example and edit it to match your data:

[remarkable]
name=Remarkable Markdown editor software and updates
baseurl=https://raw.githubusercontent.com/yourgitaccount/myrepo/master/x86_64
enabled=1
gpgcheck=0

On top is your repo shortname inside the brackets, then a name field with a more extensive name. For the baseurl, paste the URL you copied earlier and remove the last bits until you are left with either the ‘noarch’ directory or your platform directory, for instance x86_64. Once you have that file completed, put it into /etc/yum.repos.d on your computer and load up GNOME Software. Click on the ‘Updates’ button in GNOME Software and then on the refresh button in the top left corner to ensure your database is up to date. If everything works as expected you should then be able to do a search in GNOME Software and find your new application showing up.

Example of self hosted RPM

Flatpak hosting and Metadata

The flatpak-builder binary generates AppStream metadata automatically when building applications, provided the appstream-compose tool is installed on the flatpak build machine. Flatpak remotes are exported with a separate ‘appstream’ branch which is automatically downloaded by GNOME Software, so no additional work is required when building your application or updating the remote. Adding the remote is enough to add the application to the software center, on the assumption that the AppData file is valid.


AppStream files allow us to build a modern software center experience, either using distro packages with yum-style metadata or with the new Flatpak application deployment framework. By including a desktop file and an AppData file in your Linux binary build, your application can be easily found and installed by end users, greatly expanding its userbase.

Writing a simple time tracker in Rust

Today was another Red Hat Day of Learning. Half a year ago I started learning Rust, but have not really done much with it since then. I did try to port simple-term, but that was quickly thwarted by the unmaintained and broken vte Rust binding for GTK3 – that issue is still way over my head; I didn’t make much progress after two hours of monkey patching. I have used gtimelog to track my work for my entire professional life (since 2004).

June 09, 2022

Water on the brain; joining OpenET board

I’m becoming a Westerner (in an age of aridification) because I have water permanently on the brain.

Quite related, I’ve joined the board of OpenET to help bring open data on evapotranspiration (a key part of the water cycle) to Colorado River water management, and eventually to the whole world. I’ll be advising on both basics like licensing and of course the more complex bits like economic sustainability, where (via Tidelift) my head mostly is these days.

Many thanks to John Fleck (GNOME documentation project, ret.) for dragging my head into this space years ago by writing about it so well for so long.

A quick textmode-themed update

Summer is coming and I've got a couple of posts cooking that may turn out mildly interesting, but — time constraints being what they are — in the meantime there's this.


I (judiciously, as one might opine) pulled back from posting about every single feature release, but things have kept plodding along in quiet. ImageMagick is finally going away as per a buried remark from 2020, which means no more filling up /tmp, no more spawning Inkscape to read in SVGs, and so on. There's also lots of convenience and robustness and whatnot. Go read the release notes.

Text terminals, ANSI art groups, my dumb pet projects: they just won't.

As for eye candy, I guess the new 16/8-color mode qualifies. It's the good old "eight colors, but bold attribute makes foreground bright" trick, which requires a bit of special handling since the quantization step must apply two different palettes.

With this working, the road to ANSI art scene Naraka nirvana is short: Select code points present in your favorite IBM code page, strip newlines (only if your output is 80 columns wide), and convert Chafa's Unicode output to the target code page. You'll get a file worthy of the .ANS extension and perhaps a utility like Ansilove (to those who care: There's some mildly NSFW art in their Examples section. Definitely don't look at it. You've been warned).

Taken together, it goes something like this:

$ chafa -f symbol -c 16/8 -s 80 -w 9 --font-ratio 1 --color-space din99d \
    --symbols space+solid+half+stipple+ascii they_wont.jpg | tr -d \\n | \
    iconv -c -f utf8 -t cp437 > they_wont.ans
$ ansilove -f 80x50 -r they_wont.ans -o top_notch_blog_fodder.png

It's a bit of a screenful, but should get better once I get around to implementing presets.

Finally, I added a new internal symbol range for Latin scripts. It's got about 350 new symbols to work with on top of the ASCII that was already there. Example anim below; might be a good idea to open this one in a separate tab, as browser scaling kind of ruins it.

--fg-only --symbols latin. Input from 30000fps.


Apart from the packagers, who are excellent but too numerous to list for fear of leaving anyone out, this time I'd like to thank Lionel Dricot aka Ploum for lots of good feedback. He develops a text mode offline-first browser for Gemini, Gopher, Spartan and the web called Offpunk, and you should check it out.

One more. When huntr.dev came onto my radar for the first time this spring, I admit to being a little bit skeptical. However, they've been a great help, and every interaction I've had with both staff and researchers has been professional, pleasant and highly effective. Big thumbs up. I've more thoughts on this, probably enough for a post of its own. Eventually.

A propos

I came across Aaron A. Reed's project 50 Years of Text Games a while back (via Emily Short's blog, I suspect), and have been following it with interest. He launched his kickstarter this week and is knocking it out of the park. The selection is a tad heavy on story/IF games (quoth the neckbeard, "grumble grumble, Empire, ZZT, grumble"), but it's really no complaint considering the effort that obviously went into this.

Seems low-risk too (the draft articles are already written and available to read), but I have a 75% miss rate on projects I've backed, so what do I know. Maybe next year it'll be 60%.

June 08, 2022

Apps: Attempt of a status report

This is not an official post from the GNOME Foundation, nor am I part of the GNOME Foundation’s Board, which is responsible for the policies mentioned in this post. However, I wanted to sum up the current situation as I understand it, to let you know what is currently happening around app policies.

Core and Circle

Ideas for (re)organizing GNOME apps have been around for a long time, like with this initiative from 2018. In May 2020, the Board of Directors brought forward the concept of differentiating “official GNOME software” and “GNOME Circle.” One month later the board settled on the GNOME Foundation’s software policy. GNOME Circle was officially launched in November 2020.

With this, there are two categories of software:

    1. Official GNOME software, curated by the release team. This software can use the GNOME brand, the org.gnome app id prefix, and can identify the developers as GNOME. Internally the release team refers to official software as core.

    2. GNOME Circle, curated by the Circle committee. This software is not official GNOME software and cannot use the GNOME trademarks. Projects receive hosting benefits and promotion.

Substantial contribution to the software of either of those categories makes contributors eligible for GNOME Foundation membership.

Those two categories are currently the only ones that exist for apps in GNOME.

Current Status and Outlook

Since the launch of GNOME Circle, no less than 42 apps have joined the project. With Apps for GNOME, we have an up-to-date representation of all apps in GNOME. And more projects benefitting from this structure are under development. Combined with other efforts like libadwaita, new developer docs, and a new HIG, I think we have seen an incredible boost in app quality and development productivity.

Naturally, there remain open issues after such a huge change. App criteria and workflows have to be adapted after collecting our first experiences. We need more clarification on what a “Core” app means to the project. And last but not least, I think we can do better with communicating about these changes.

Hopefully, at the upcoming GUADEC 2022 we will be able to add some cornerstones to get started with addressing the outstanding issues and continue this successful path. If you want to get engaged or have questions, please let me know. Maybe, some questions can already be answered below 🙂

Frequent Questions

Why is my favorite app missing?

I often get questions about why an app is absent from apps.gnome.org. The answer is usually that the app just never applied to Circle. So if your favorite app is missing, you may want to ask its developers to apply to GNOME Circle.

What do the “/World” and “/GNOME” GitLab namespaces mean?

I often get asked why an app is not on apps.gnome.org or part of “Core” while its repository resides in /GNOME. However, there is no specific meaning to /GNOME. It’s mostly a historical category and many of the projects in /GNOME have no specific status inside the project. By the way, many GNOME Circle projects are not even hosted on GNOME’s GitLab instance.

New “Core” apps however will be moved to /GNOME.

But can I still use org.gnome in my app id or GNOME as part of my app name?

To be very clear: no. If you are not part of “Core” (official GNOME software), you can’t. That said, as far as I can see, apps won’t be required to change their app id if they were already using it before July 2020.

What about those GNOME games?

We have a bunch of nice little games that were developed within the GNOME project (and that largely still carry legacy GNOME branding). None of them currently has an official status. At the moment, no rules exclude games from becoming part of GNOME Circle. However, most of those games would probably need an overhaul before being eligible. I hope we can take care of them soon. Let me know if you want to help.

June 07, 2022


Hello everyone!

I’m Marco Melorio, a 22-year-old Italian computer science student. I’ve been a GNOME user for about two years, and I’ve quite literally fallen in love with it since then. Last year I started developing Telegrand, a Telegram client built to be well integrated with GNOME, which is a project I’m really proud of and which is gaining quite a bit of interest. That was the moment when I started being more active in the community and also when I started contributing to various GNOME projects.

Fast-forward to today

I’m excited to announce that I’ve been selected for GSoC ’22 to implement a media history viewer in Fractal, the Matrix client for GNOME, with the help of my mentor Julian Sparber. More specifically, this is about adding a page to the room info dialog that can display all the media (e.g. images, videos, GIFs) sent in the room. This is similar to what’s found in other messaging apps, like Telegram, WhatsApp, etc.

I will be posting more in the coming days with details on the implementation and milestones of the project.

Thanks for reading.

Creating your own math-themed jigsaw puzzle from scratch

 Don't you just hate it when you get nerd sniped?

I don't either. It is usually quite fun. Case in point, some time ago I came upon this YouTube video:

It is about how a "500 piece puzzle" usually does not have exactly 500 pieces, but slightly more to make manufacturing easier (see the video for the details; they are quite interesting). As I was watching the video I came up with an idea for my own math-themed jigsaw puzzle.

You can probably guess where this is going.

The idea would not leave me alone so I had to yield to temptation and get the damn thing implemented. This is where problems started. The puzzle required special handling and tighter tolerances than the average jigsaw puzzle made from a custom photo. As a taste of things to come, the final puzzle will only have two kinds of pieces, namely these:

For those who already deciphered what the final result will look like: good job.

As you can probably tell, the printed pattern must be aligned very tightly to the cut lines. If it shifts by even a couple of millimeters, which is common in printing, then the whole thing breaks apart. Another requirement is that I must know the exact piece count beforehand so that I can generate an output image that matches the puzzle cut.

I approached several custom jigsaw puzzle manufacturers and they all told me that what I wanted was impossible and that their manufacturing processes are not capable of such precision. One went so far as to tell me that their print tolerances are corporate trade secrets and so is the cut used. Yes, the cut. Meaning the shapes of the resulting pieces. The one feature that is the same on every custom jigsaw puzzle and thus is known by anyone who has ever bought one of them. That is a trade secret. No, it makes no sense to me either.

Regardless, it seemed like the puzzle could not be created. But, as the old saying goes, all problems are solvable with a sufficient application of public libraries and lasers.

This is a 50 Watt laser cutter and engraver that is freely usable in my local library. This nicely steps around the registration issues because printing and cutting are done at the same time and the machine is actually incredibly accurate (sub-millimeter). The downside is that you can't use color in the image. Color is created by burning so you can only create grayscale images and the shade is not particularly precise, though the shapes are very accurate.

After figuring this out, the procedure became simple. All that was needed was some Python, Cairo and 3mm plywood. Here is the machine doing the engraving.

After the image had been burned, it was time to turn the laser to FULL POWER and cut the pieces. First sideways

then lengthwise.

And here is the final result all assembled up.

This is a 256 piece puzzle showing a Hilbert curve. It is a space-filling curve, that is, it travels through each "pixel" in the image exactly once in a continuous fashion and never intersects itself. As you can (hopefully) tell, there is also a gradient, so that the further along the curve you get, the lighter the printing gets. So in theory you could assemble this jigsaw puzzle by first ordering the pieces from darkest to lightest and then just joining them one after the other.

The piece cut in this puzzle is custom. The "knob" shape is parameterized by a bunch of variables and each cut between two pieces has been generated by picking random values for said parameters. So in theory you could generate an arbitrarily large jigsaw puzzle with this method (it does need to be a square with the side length being a power of two, though).
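The curve ordering behind the gradient can be sketched in code. Here is a minimal Python version of the classic Hilbert distance-to-coordinate conversion (an illustrative sketch, not the actual script used for the puzzle):

```python
def d2xy(n, d):
    """Map distance d along a Hilbert curve to (x, y) on an n x n grid.

    n must be a power of two. Classic iterative bit-manipulation version.
    """
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:
            # Rotate the quadrant so the sub-curves connect end to end
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# A 16 x 16 grid gives the 256 piece positions in curve order;
# printing piece d with brightness proportional to d yields the gradient.
order = [d2xy(16, d) for d in range(16 * 16)]
assert len(set(order)) == 16 * 16  # every cell is visited exactly once
```

Consecutive entries in `order` always differ by exactly one grid step, which is what makes the darkest-to-lightest assembly strategy work.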