September 21, 2021

Properties, introspection, and you

It is a truth universally acknowledged, that a GObject class in possession of a property, must be in want of an accessor function.

The main issue with that statement is that it’s really hard to pair a GObject property with the accessor functions that set and retrieve its value.

From a documentation perspective, tools might not establish any relation (gtk-doc), or they might require some additional annotation to do so (gi-docgen); but at the introspection level there’s nothing in the XML or the binary data that lets you go from a property name to a setter, or a getter, function. At least, until now.

GObject-introspection 1.70, released alongside GLib 2.70 and GNOME 41, introduced various annotations for both properties and methods that let you go from one to the other; additionally, new API was added to libgirepository to allow bindings to dynamic languages to establish that relation at run time.


If you have a property, and you document it as you should, you’ll have something like this:

/**
 * YourWidget:your-property:
 *
 * A property that does something amazing.
 */

If you want to associate the setter and getter functions to this property, all you need to do is add the following identifier annotations to it:

/**
 * YourWidget:your-property: (setter set_your_property) (getter get_your_property)
 *
 * A property that does something amazing.
 */

The (setter) and (getter) annotations take the name of the method that is used to set, and get, the property, respectively. The method name is relative to the type, so you should not pass the C symbol.

On the accessor methods side, you have two additional annotations:

/**
 * your_widget_set_your_property: (set-property your-property)
 * @self: your widget
 * @value: the value to set
 *
 * Sets the given value for your property.
 */


/**
 * your_widget_get_your_property: (get-property your-property)
 * @self: your widget
 *
 * Retrieves the value of your property.
 *
 * Returns: the value of the property
 */


Of course, you’re now tempted to go and add those annotations to all your properties and related accessors. Before you do that, though, you should know that the introspection scanner will try and match properties and accessors by itself, using appropriate heuristics:

  • if your object type has a writable, non-construct-only property, and a method that is called set_<property>, then the property will have a setter and the method will be matched to the property
  • if your object type has a readable property, and a method that is called get_<property>, then the property will have a getter and the method will be matched to the property
  • additionally, if the property is read-only and its type is boolean, the scanner will also look for a method with the same name as the property; this is meant to catch getters like gtk_widget_has_focus(), which accesses the read-only property has-focus


All of the above ends up in the introspection XML, which is used by documentation tools and code generators. Bindings for dynamic languages using libgirepository can also access this information at run time, by using the API in GIPropertyInfo to retrieve the setter and getter function information for a property; and the API in GIFunctionInfo to retrieve the property being set.
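In C, a binding might use that new libgirepository API along these lines. This is only a sketch (not compiled here), using the Gtk namespace purely as an illustration; the calls to g_property_info_get_setter() and g_property_info_get_getter() are the 1.70 additions mentioned above, and g_function_info_get_property() goes in the other direction:

```c
/* Sketch: resolving a property's C accessors at run time with
 * libgirepository (gobject-introspection >= 1.70). The "Gtk"/"Widget"
 * lookup is just an example; any introspected type works the same way. */
#include <girepository.h>

static void
print_property_accessors (const char *ns, const char *version, const char *type)
{
  GError *error = NULL;

  if (!g_irepository_require (NULL, ns, version, 0, &error))
    g_error ("%s", error->message);

  GIObjectInfo *obj = (GIObjectInfo *) g_irepository_find_by_name (NULL, ns, type);

  for (gint i = 0; i < g_object_info_get_n_properties (obj); i++)
    {
      GIPropertyInfo *prop = g_object_info_get_property (obj, i);

      /* New in 1.70: go from the property to its accessor functions */
      GIFunctionInfo *setter = g_property_info_get_setter (prop);
      GIFunctionInfo *getter = g_property_info_get_getter (prop);

      g_print ("%s: setter=%s getter=%s\n",
               g_base_info_get_name ((GIBaseInfo *) prop),
               setter != NULL ? g_base_info_get_name ((GIBaseInfo *) setter) : "(none)",
               getter != NULL ? g_base_info_get_name ((GIBaseInfo *) getter) : "(none)");

      g_clear_pointer (&setter, g_base_info_unref);
      g_clear_pointer (&getter, g_base_info_unref);
      g_base_info_unref ((GIBaseInfo *) prop);
    }

  g_base_info_unref ((GIBaseInfo *) obj);
}
```

With this in hand, a binding can call the native accessor directly instead of routing everything through g_object_set_property().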


Ideally, with this information, language bindings should be able to call the accessor functions instead of going through the generic g_object_set_property() and g_object_get_property() API, except as a fallback. This should speed up the property access in various cases. Additionally, bindings could decide to stop exposing C accessors, and only expose the property, in order to make the API more idiomatic.

On the documentation side, this will ensure that tools like gi-docgen will be able to bind the properties and their accessors more reliably, without requiring extra attributes.

And one more thing

One thing that did not make it in time for the 1.70 release, but will land early in the next development cycle of gobject-introspection, is validation for properties. Language bindings don’t really like it when the C API exposes properties that have the same name as methods or virtual functions; we already have a validation pass ready to land, so expect warnings in the near future.

Another feature that will land early in the cycle is the (emitter) annotation, which will bind a method that emits a signal to the signal’s name. This is a feature taken from Vala’s metadata, and should improve the quality of life of people using introspection data with Vala, as well as remove the need for another attribute in gi-docgen.

Finally, if you maintain a language binding: please look at !204, and make sure you’re not calling g_assert_not_reached() or g_error() when encountering a new scope type. The forever scope cannot land if it breaks every single binding in existence.

September 20, 2021

Maps and GNOME 41


It's been a while since my last blog post, and in the meantime GNOME 41 was released. So I thought it would be time for a new post, although there hasn't been that much news to show in Maps itself since the last update (where I showcased the refreshed icons for search results).

But a few small visual improvements have been done since.

Already in 40.0, we made the display of population numbers for places (such as towns, cities, and similar) locale-aware, so it now uses localized digits and decimal separators.

Now in 41.0 another small refinement has been made: elevations below mean sea level are expressed in words, rather than just showing a negative number (which, although correct, may look a bit technical):

Also along the lines of visual polish, we now show population numbers in a rounded format when the value is an exact multiple of 100,000, since such a figure is most likely not an exact count but rather an approximation.

This utilises the localization API from ES (JavaScript), which gives a localized unit suffix, as can be seen here. In the case of Japanese, shown in the last example, the multiple is 10,000, as Japanese numbering is based on traditional Chinese numerals, with denominations of 10⁴, 10⁸, and so on. So in this case it would read as “800 ten-thousands (man)”.
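The ES API in question is Intl.NumberFormat with compact notation; a minimal sketch of the behaviour described above (the exact options Maps passes are not shown here):

```javascript
// Compact ("rounded") number formatting, localized per locale.
const en = new Intl.NumberFormat('en', { notation: 'compact' });
console.log(en.format(8000000));  // "8M"

// With full ICU locale data, Japanese groups by 10^4: "800万"
const ja = new Intl.NumberFormat('ja', { notation: 'compact' });
console.log(ja.format(8000000));
```

The formatter picks the denomination and suffix for you, which is exactly what makes the Japanese 10⁴-based grouping come out right without any special-casing.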

And over in libshumate (the new GTK4-based map rendering library we're working on to eventually replace libchamplain and enable migrating to GTK4), James and Corentin have been busy.

Among other things, James has implemented rotation support (for pinch gestures on touch screens, among others) and fractional scaling (which should make smoother pinch zooming possible, something that has been quite poor in libchamplain, and thus in Maps). He has also started working on a renderer for vector-format tiles:

Using vector tiles is something that's been in the long-term plans for a long time. One thing it could enable is downloading map data for offline usage, something that is not really feasible with bitmap tiles. But actually I think something perhaps even more useful could be the possibility to render names in the user's language. This has always been an area where compromises had to be made. For example, Mapbox's street tiles use English names in the default tile set, which has the benefit that place names whose native reading is in a script you can't read render as common Romanized transliterations. The downside is that you always see place names near your home in English, even though you could read (and might be more familiar with) the native form. On the other hand, the default tiles (which Maps now uses) render the native names, which is better for your home location, but conversely makes any place where the native script is unfamiliar to you hard to understand.

And I myself have started on a little side project, GNOME Streets: a style for rendering bitmap tile maps using the GNOME color palette (though only some parts of the map use these colors so far):

Eventually such a style could either be deployed on a GNOME-hosted bitmap tile server, or perhaps be used as the base of a stylesheet for rendering vector tiles client-side in Maps.

So, over and out for now, until next time!

Glyphtracer 2.0

Ages ago I wrote a simple GUI app called Glyphtracer to simplify the task of creating fonts from scanned images. It seems people are still using it. The app is written in Python 2 and Qt 4, so getting it running becomes harder and harder as time goes by.

Thus I spent a few hours porting it to Python 3 and PyQt 5 and bumped the major version to 2.0. The program can be obtained from this GitHub repo.

September 19, 2021

Creating Quality Backtraces for Crash Reports

Hello Linux users! Help developers help you: include a quality backtrace taken with gdb each and every time you create an issue report for a crash. If you don’t, most developers will request that you provide a backtrace, then ignore your issue until you manage to figure out how to do so. Save us the trouble and just provide the backtrace with your initial report, so everything goes smoother. (Backtraces are often called “stack traces.” They are the same thing.)

Don’t just copy the lower-quality backtrace you see in your system journal into your issue report. That’s a lot better than nothing, but if you really want the crash to be fixed, you should provide the developers with a higher-quality backtrace from gdb. Don’t know how to get a quality backtrace with gdb? Read on.

Future of Crash Reporting

Here are instructions for getting a quality backtrace for a crashing process on Fedora 35, which is scheduled to be released in October:

$ coredumpctl gdb
(gdb) bt full

Press ‘c’ (continue) when required. When it’s done printing, press ‘q’ to quit. That’s it! That’s all you need to know. You’re done. Two points of note:

  • When a process crashes, a core dump is caught by systemd-coredump and stored for future use. The coredumpctl gdb command opens the most recent core dump in gdb. systemd-coredump has been enabled by default in Fedora since Fedora 26, which was released four years ago. (It’s also enabled by default in RHEL 8.)
  • After opening the core dump, gdb uses debuginfod to automatically download all required debuginfo packages, ensuring the generated backtrace is useful. debuginfod is a new feature in Fedora 35.

If you’re not an inhabitant of the future, you are probably missing at least debuginfod today, and possibly also systemd-coredump depending on which operating system you are using, so we will also learn how to take a backtrace without these tools. It will be more complicated, of course.


If your operating system enables systemd-coredump by default, then congratulations! This makes reporting crashes much easier because you can easily retrieve a core dump for any recent crash using the coredumpctl command. For example, coredumpctl alone will list all available core dumps. coredumpctl gdb will open the core dump of the most recent crash in gdb. coredumpctl gdb 1234 will open the core dump corresponding to the most recent crash of a process with pid 1234. It doesn’t get easier than this.
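As a quick sketch of that workflow (coredumpctl info, which prints metadata about a crash, is a handy intermediate step before launching gdb):

```
$ coredumpctl list        # list all stored core dumps
$ coredumpctl info        # show details of the most recent crash
$ coredumpctl gdb         # open the most recent core dump in gdb
$ coredumpctl gdb 1234    # open the core dump for pid 1234
```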

Core dumps are stored under /var/lib/systemd/coredump. systemd-coredump will automatically delete core dumps that exceed configurable size limits (2 GB by default). It also deletes core dumps if your free disk space falls below a configurable threshold (15% free by default). Additionally, systemd-tmpfiles will delete core dumps automatically after some time has passed (three days by default). This ensures your disk doesn’t fill up with old core dumps. Although most of these settings seem good to me, the default 2 GB size limit is way too low in my opinion, as it causes systemd to immediately discard crashes of any application that uses WebKit. I recommend raising this limit to 20 GB by creating an /etc/systemd/coredump.conf.d/50-coredump.conf drop-in containing the following:

[Coredump]
ProcessSizeMax=20G
ExternalSizeMax=20G

The other settings are likely sufficient to prevent your disk from filling up with core dumps.

Sadly, although systemd-coredump has been around for a good while now and many Linux operating systems enable it by default, many still do not. Most notably, the Debian ecosystem is not yet on board. To check if systemd-coredump is enabled on your system:

$ cat /proc/sys/kernel/core_pattern

If you see systemd-coredump, then you’re good.

To enable it in Debian or Ubuntu, just install it:

# apt install systemd-coredump

Ubuntu users, note this will cause apport to be uninstalled, since it is currently incompatible. Also note that I switched from $ (which indicates a normal prompt) to # (which indicates a root prompt).

In other operating systems, you may have to manually enable it:

# echo "kernel.core_pattern=|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h" > /etc/sysctl.d/50-coredump.conf
# /usr/lib/systemd/systemd-sysctl --prefix kernel.core_pattern

Note the exact core pattern to use changes occasionally in newer versions of systemd, so these instructions may not work everywhere.

Detour: Manual Core Dump Handling

If you don’t want to enable systemd-coredump, life is harder and you should probably reconsider, but it’s still possible to debug most crashes. First, enable core dump creation by removing the default 0-byte size limit on core files:

$ ulimit -c unlimited

This change is temporary and only affects the current instance of your shell. For example, if you open a new tab in your terminal, you will need to set the ulimit again in the new tab.
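You can see this per-shell scoping for yourself; lowering the limit in a subshell leaves the parent shell untouched (a sketch using plain POSIX shell):

```shell
# The core file size limit is per-process and inherited by children,
# so changing it only affects the current shell and what it starts.
parent=$(ulimit -c)                 # remember this shell's current limit
sub=$( (ulimit -c 0; ulimit -c) )   # lower it in a subshell only
echo "subshell limit: $sub"         # prints "subshell limit: 0"
echo "parent limit: $(ulimit -c)"   # unchanged from before
```

The same applies in reverse: `ulimit -c unlimited` in one terminal tab does nothing for a service that systemd started.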

Next, run your program in the terminal and try to make it crash. A core file will be generated in the current directory. Open it by starting the program that crashed in gdb and passing the filename of the core file that was created. For example:

$ gdb gnome-chess ./core

This is downright primitive, though:

  • You’re going to have a hard time getting backtraces for services that are crashing, for starters. If the service is started normally, how do you set the ulimit? I’m sure there’s a way to do it, but I don’t know how! It’s probably easier to start the service manually, but then what command line flags are needed to do so properly? It will be different for each service, and you have to figure this all out for yourself.
  • Special situations become very difficult. For example, if a service is crashing only when run early during boot, or only during an initial setup session, you are going to have an especially hard time.
  • If you don’t know how to reproduce a crash that occurs only rarely, it’s inevitably going to crash when you’re not prepared to manually catch the core dump. Sadly, not all crashes will occur on demand when you happen to be running the software from a terminal with the right ulimit configured.
  • Lastly, you have to remember to delete that core file when you’re done, because otherwise it will take up space on your disk until you do. You’ll probably notice if you leave core files scattered around your home directory, but you might not notice if you’re working someplace else.

Seriously, just enable systemd-coredump. It solves all of these problems and guarantees you will always have easy access to a core dump when something crashes, even for crashes that occur only rarely.

Debuginfo Installation

Now that we know how to open a core dump in gdb, let’s talk about debuginfo. When you don’t have the right debuginfo packages installed, the backtrace generated by gdb will be low-quality. Almost all Linux software developers deal with low-quality backtraces on a regular basis, because most users are not very good at installing debuginfo. Again, if you’re reading this in the future using Fedora 35 or newer, you don’t have to worry about this anymore because debuginfod will take care of everything for you. I would be thrilled if other Linux operating systems would quickly adopt debuginfod so we can put the era of low-quality crash reports behind us. But since most readers of this blog today will not yet have debuginfod enabled, let’s learn how to install debuginfo manually.

As an example, I decided to force gnome-chess to crash using the command killall -SEGV gnome-chess, then I ran coredumpctl gdb to open the resulting core dump in gdb. After a bunch of spam, I saw this:

Missing separate debuginfos, use: dnf debuginfo-install gnome-chess-40.1-1.fc34.x86_64
--Type <RET> for more, q to quit, c to continue without paging--
Core was generated by `/usr/bin/gnome-chess --gapplication-service'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00007fa23d8b55bf in __GI___poll (fds=0x5636deb06930, nfds=2, timeout=2830)
    at ../sysdeps/unix/sysv/linux/poll.c:29
29  return SYSCALL_CANCEL (poll, fds, nfds, timeout);
[Current thread is 1 (Thread 0x7fa23ca0cd00 (LWP 140177))]

If you are using Fedora, RHEL, or related operating systems, the line “missing separate debuginfos” is a good hint that debuginfo is missing. It even tells you exactly which dnf debuginfo-install command to run to remedy this problem! But this is a Fedora ecosystem feature, and you won’t see this on most other operating systems. Usually, you’ll need to manually locate the right debuginfo packages to install. Debian and Ubuntu users can do this by searching for and installing -dbg or -dbgsym packages until each frame in the backtrace looks good. You’ll just have to manually guess the names of which debuginfo packages you need to install based on the names of the libraries in the backtrace. Look here for instructions for popular operating systems.

How do you know when the backtrace looks good? When each frame has file names, line numbers, function parameters, and local variables! Here is an example of a bad backtrace, if I continue the gnome-chess example above without properly installing the required debuginfo:

(gdb) bt full
#0 0x00007fa23d8b55bf in __GI___poll (fds=0x5636deb06930, nfds=2, timeout=2830)
    at ../sysdeps/unix/sysv/linux/poll.c:29
        sc_ret = -516
        sc_cancel_oldtype = 0
#1 0x00007fa23eee648c in g_main_context_iterate.constprop () at /lib64/
#2 0x00007fa23ee8fc03 in g_main_context_iteration () at /lib64/
#3 0x00007fa23e4b599d in g_application_run () at /lib64/
#4 0x00005636dd7b79a2 in chess_application_main ()
#5 0x00007fa23d7e7b75 in __libc_start_main (main=0x5636dd7aaa50 <main>, argc=2, argv=0x7fff827b6438, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fff827b6428)
    at ../csu/libc-start.c:332
        self = <optimized out>
        result = <optimized out>
        unwind_buf = 
              {cancel_jmp_buf = {{jmp_buf = {94793644186304, 829313697107602221, 94793644026480, 0, 0, 0, -829413713854928083, -808912263273321683}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x2, 0x7fff827b6438}, data = {prev = 0x0, cleanup = 0x0, canceltype = 2}}}
        not_first_call = <optimized out>
#6 0x00005636dd7aaa9e in _start ()

This backtrace has seven frames (#0 through #6), showing where the code was during program execution when the crash occurred. You can see line numbers for frame #0 (poll.c:29) and #5 (libc-start.c:332), and these frames also show the values of function parameters and variables on the stack, which are often useful for figuring out what went wrong. These frames have good debuginfo because I already had debuginfo installed for glibc. But frames #1 through #4 do not look so useful, showing only function names and the library and nothing else. This is because I’m using Fedora 34 rather than Fedora 35, so I don’t have debuginfod yet, and I did not install proper debuginfo for libgio, libglib, and gnome-chess. (The function names are actually only there because programs in Fedora include some limited debuginfo by default. In many operating systems, you will see ??? instead of function names.) A developer looking at this backtrace is not going to know what went wrong.

Now, let’s run the recommended debuginfo-install command:

# dnf debuginfo-install gnome-chess-40.1-1.fc34.x86_64

When the command finishes, we’ll start gdb again, using coredumpctl gdb just like before. This time, we see this:

Missing separate debuginfos, use: dnf debuginfo-install avahi-libs-0.8-14.fc34.x86_64 colord-libs-1.4.5-2.fc34.x86_64 cups-libs-2.3.3op2-7.fc34.x86_64 fontconfig-2.13.94-2.fc34.x86_64 glib2-2.68.4-1.fc34.x86_64 graphene-1.10.6-2.fc34.x86_64 gstreamer1-1.19.1- gstreamer1-plugins-bad-free-1.19.1- gstreamer1-plugins-base-1.19.1- gtk4-4.2.1-1.fc34.x86_64 json-glib-1.6.6-1.fc34.x86_64 krb5-libs-1.19.2-2.fc34.x86_64 libX11-1.7.2-3.fc34.x86_64 libX11-xcb-1.7.2-3.fc34.x86_64 libXfixes-6.0.0-1.fc34.x86_64 libdrm-2.4.107-1.fc34.x86_64 libedit-3.1-38.20210714cvs.fc34.x86_64 libepoxy-1.5.9-1.fc34.x86_64 libgcc-11.2.1-1.fc34.x86_64 libidn2-2.3.2-1.fc34.x86_64 librsvg2-2.50.7-1.fc34.x86_64 libstdc++-11.2.1-1.fc34.x86_64 libxcrypt-4.4.25-1.fc34.x86_64 llvm-libs-12.0.1-1.fc34.x86_64 mesa-dri-drivers-21.1.8-1.fc34.x86_64 mesa-libEGL-21.1.8-1.fc34.x86_64 mesa-libgbm-21.1.8-1.fc34.x86_64 mesa-libglapi-21.1.8-1.fc34.x86_64 nettle-3.7.3-1.fc34.x86_64 openldap-2.4.57-5.fc34.x86_64 openssl-libs-1.1.1l-1.fc34.x86_64 pango-1.48.9-2.fc34.x86_64

Yup, Fedora ecosystem users will need to run dnf debuginfo-install twice to install everything required, because gdb doesn’t list all required packages until the second time. Next, we’ll run coredumpctl gdb one last time. There will usually be a few more debuginfo packages that are still missing because they’re not available in the Fedora repositories, but now you’ll probably have enough to get a quality backtrace:

(gdb) bt full
#0  0x00007fa23d8b55bf in __GI___poll (fds=0x5636deb06930, nfds=2, timeout=2830)
    at ../sysdeps/unix/sysv/linux/poll.c:29
        sc_ret = -516
        sc_cancel_oldtype = 0
#1  0x00007fa23eee648c in g_main_context_poll
    (priority=, n_fds=2, fds=0x5636deb06930, timeout=, context=0x5636de7b24a0)
    at ../glib/gmain.c:4434
        ret = 
        errsv = 
        poll_func = 0x7fa23ee97c90 
        max_priority = 2147483647
        timeout = 2830
        some_ready = 
        nfds = 2
        allocated_nfds = 2
        fds = 0x5636deb06930
        begin_time_nsec = 30619110638882
#2  g_main_context_iterate.constprop.0
    (context=context@entry=0x5636de7b24a0, block=block@entry=1, dispatch=dispatch@entry=1, self=)
    at ../glib/gmain.c:4126
        max_priority = 2147483647
        timeout = 2830
        some_ready = 
        nfds = 2
        allocated_nfds = 2
        fds = 0x5636deb06930
        begin_time_nsec = 30619110638882
#3  0x00007fa23ee8fc03 in g_main_context_iteration
    (context=context@entry=0x5636de7b24a0, may_block=may_block@entry=1) at ../glib/gmain.c:4196
        retval = 
#4  0x00007fa23e4b599d in g_application_run
    (application=0x5636de7ae260 [ChessApplication], argc=-2105843004, argv=)
    at ../gio/gapplication.c:2560
        arguments = 0x5636de7b2400
        status = 0
        context = 0x5636de7b24a0
        acquired_context = 
        __func__ = "g_application_run"
#5  0x00005636dd7b79a2 in chess_application_main (args=0x7fff827b6438, args_length1=2)
    at src/gnome-chess.p/gnome-chess.c:5623
        _tmp0_ = 0x5636de7ae260 [ChessApplication]
        _tmp1_ = 0x5636de7ae260 [ChessApplication]
        _tmp2_ = 
        result = 0

I removed the last two frames because they are triggering a strange WordPress bug, but that’s enough to get the point. It looks much better! Now the developer can see exactly where the program crashed, including filenames, line numbers, and the values of function parameters and variables on the stack. This is as good as a crash report is normally going to get. In this case, it crashed when running poll() because gnome-chess was not actually doing anything at the time of the crash, since we crashed it by manually sending a SIGSEGV signal. Normally the backtrace will look more interesting.

Special Note for Arch Linux Users

Sadly, Arch does not ship debuginfo packages. To prepare a quality backtrace, you need to rebuild each package yourself with debuginfo enabled. This is a chore, to say the least. You will probably need to rebuild several system libraries in addition to the application itself if you want to get a quality backtrace. This is hardly impossible, but it’s a lot of work, too much to expect users to do for a typical bug report. You might want to consider attempting to reproduce your crash on another operating system in order to make it easier to get the backtrace. What a shame!

debuginfod for Fedora 34

Again, all of that manual debuginfo installation is no longer required as of Fedora 35, where debuginfod is enabled by default. It’s actually all ready to go in Fedora 34, just not enabled by default yet. You can try it early using the DEBUGINFOD_URLS environment variable:

$ DEBUGINFOD_URLS=https://debuginfod.fedoraproject.org/ coredumpctl gdb

Then you can watch gdb download the required debuginfo for you! Again, this environment variable will no longer be necessary in Fedora 35. (Technically, it will still be needed, but it will be configured by default.)

debuginfod for Debian Users

Debian users can use debuginfod, but it has to be enabled manually by pointing the DEBUGINFOD_URLS environment variable at Debian’s debuginfod server:

$ export DEBUGINFOD_URLS=https://debuginfod.debian.net

See here for more information. This requires Debian 11 “bullseye” or newer. If you’re using Ubuntu or other operating systems derived from Debian, you’ll need to wait until a debuginfod server for your operating system is available.


If your application uses Flatpak, you can use the flatpak-coredumpctl script to open core dumps in gdb. For most runtimes, including those distributed by GNOME or Flathub, you will need to manually install (a) the debug extension for your app, (b) the SDK runtime corresponding to the platform runtime that you are using, and (c) the debug extension for the SDK runtime. For example, to install everything required to debug Epiphany 40 from Flathub, you would run:

$ flatpak install org.gnome.Epiphany.Debug//stable
$ flatpak install org.gnome.Sdk//40
$ flatpak install org.gnome.Sdk.Debug//40

(flatpak-coredumpctl will fail to start if you don’t have the correct SDK runtime installed, but it will not fail if you’re missing the debug extensions. You’ll just wind up with a bad backtrace.)

The debug extensions need to exactly match the versions of the app and runtime that crashed, so backtrace generation may be unreliable after you install them for the very first time, because you would have installed the latest versions of the extensions, but your core dump might correspond to an older app or runtime version. If the crash is reproducible, it’s a good idea to run flatpak update after installing to ensure you have the latest version of everything, then reproduce the crash again.

Once your debuginfo is installed, you can open the backtrace in gdb using flatpak-coredumpctl. You just have to tell flatpak-coredumpctl the app ID to use:

$ flatpak-coredumpctl org.gnome.Epiphany

You can pass matches to coredumpctl using -m. For example, to open the core dump corresponding to a crashed process with pid 1234:

$ flatpak-coredumpctl -m 1234 org.gnome.Epiphany

Thibault Saunier wrote flatpak-coredumpctl because I complained about how hard it used to be to debug crashed Flatpak applications. Clearly it is no longer hard. Thanks Thibault!

Update: On newer versions of Debian and Ubuntu, flatpak-coredumpctl is included in the libflatpak-dev subpackage rather than the base flatpak package, so you’ll have to install libflatpak-dev first. But on older OS versions, including Debian 10 “buster” and Ubuntu 20.04, it is unfortunately installed as /usr/share/doc/flatpak/examples/flatpak-coredumpctl rather than /usr/bin/flatpak-coredumpctl due to a regrettable packaging choice that has been corrected in newer package versions. As a workaround, you can simply copy it to /usr/local/bin. Don’t forget to delete your copy after upgrading to a newer OS version, or it will shadow the packaged version.

Fedora Flatpaks

Flatpaks distributed by Fedora are different from those distributed by GNOME or by Flathub because they do not have debug extensions. Historically, this has meant that debugging crashes was impractical. The best solution was to give up.

Good news! Fedora’s Flatpaks are compatible with debuginfod, which means debug extensions will no longer be missed. You do still need to manually install the org.fedoraproject.Sdk runtime corresponding to the version of the org.fedoraproject.Platform runtime that the application uses, because this is required for flatpak-coredumpctl to work, but nothing else is required. For example, to get a backtrace for Fedora’s Epiphany Flatpak using a Fedora 35 host system, I ran:

$ flatpak install org.fedoraproject.Sdk//f34
$ flatpak-coredumpctl org.gnome.Epiphany
(gdb) bt full

(The f34 is not a typo. Epiphany currently uses the Fedora 34 runtime regardless of what host system you are using.)

That’s it!


At this point, you should know enough to obtain a high-quality backtrace on most Linux systems. That will usually be all you really need, but it never hurts to know a little more, right?

Alternative Types of Backtraces

At the top of this blog post, I suggested using bt full to take the backtrace because this type of backtrace is the most useful to most developers. But there are other types of backtraces you might occasionally want to collect:

  • bt on its own without full prints a much shorter backtrace without stack variables or function parameters. This form of the backtrace is more useful for getting a quick feel for where the bug is occurring, because it is much shorter and easier to read than a full backtrace. But because there are no stack variables or function parameters, it might not contain enough information to solve the crash. I sometimes like to paste the first few lines of a bt backtrace directly into an issue report, then submit the bt full version of the backtrace as an attachment, since an entire bt full backtrace can be long and inconvenient if pasted directly into an issue report.
  • thread apply all bt prints a backtrace for every thread. Normally these backtraces are very long and noisy, so I don’t collect them very often, but when a threadsafety issue is suspected, this form of backtrace will sometimes be required.
  • thread apply all bt full prints a full backtrace for every thread. This is what automated bug report tools generally collect, because it provides the most information. But these backtraces are usually huge, and this level of detail is rarely needed, so I normally recommend starting with a normal bt full.

If in doubt, just use bt full like I showed at the top of this blog post. Developers will let you know if they want you to provide the backtrace in a different form.

gdb Logging

You can make gdb print your session to a file. For longer backtraces, this may be easier than copying the backtrace from a terminal:

(gdb) set logging on
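By default gdb writes the log to gdb.txt in the current directory. You can pick a different file name before enabling logging (and note that recent gdb versions spell the enable command set logging enabled on instead):

```
(gdb) set logging file backtrace.txt
(gdb) set logging on
(gdb) bt full
(gdb) set logging off
```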

Memory Corruption

While a backtrace taken with gdb is usually enough information for developers to debug crashes, memory corruption is an exception. Memory corruption is the absolute worst. When memory corruption occurs, the code will crash in a location that may be far removed from where the actual bug occurred, rendering gdb backtraces useless for tracking down the bug. As a general rule, if you see a crash inside a memory allocation routine like malloc() or g_slice_alloc(), you probably have memory corruption. If you see magazine_chain_pop_head(), that’s called by g_slice_alloc() and is a sure sign of memory corruption. Similarly, crashes in GTK’s CSS machinery are almost always caused by memory corruption somewhere else.

Memory corruption is generally impossible to debug unless you are able to reproduce the issue under valgrind. valgrind is extremely slow, so it’s impractical to use it on a regular basis, but it will get to the root of the problem where gdb cannot. As a general rule, you want to run valgrind with --track-origins=yes so that it shows you exactly what went wrong:

$ valgrind --track-origins=yes my_app

When valgrinding a GLib application, including all GNOME applications, always use the G_SLICE=always-malloc environment variable to disable GLib’s slice allocator, to ensure the highest-quality diagnostics. Correction: the slice allocator can now detect valgrind and disable itself automatically.

When valgrinding a WebKit application, there are some WebKit-specific environment variables to use. Malloc=1 will disable the bmalloc allocator, GIGACAGE_ENABLED=0 will disable JavaScriptCore’s Gigacage feature (Update: turns out this is actually implied by Malloc=1), and WEBKIT_FORCE_SANDBOX=0 will disable the web process sandbox used by WebKitGTK or WPE WebKit:

$ Malloc=1 WEBKIT_FORCE_SANDBOX=0 valgrind --track-origins=yes epiphany

If you cannot reproduce the issue under valgrind, you’re usually totally out of luck. Memory corruption that only occurs rarely or under unknown conditions will lurk in your code indefinitely and cause occasional crashes that are effectively impossible to fix.

Another good tool for debugging memory corruption is Address Sanitizer (asan), but this is more complicated to use. Experienced users who are comfortable rebuilding applications with special compiler flags may find asan very useful. However, because it can be very difficult to use, I recommend sticking with valgrind if you’re just trying to report a bug.

Apport and ABRT

There are two popular downstream bug reporting tools: Ubuntu has Apport, and Fedora has ABRT. These tools are relatively easy to use — no command line knowledge required — and produce quality crash reports. Unfortunately, while the tools are promising, the crash reports go to downstream packagers who are generally either not watching bug reports, or else not interested in or capable of fixing upstream software problems. Since downstream reports are very often ignored, it’s better to report crashes directly to upstream if you want your issue to be seen by the right developers and actually fixed. Of course, only report issues upstream if you’re using a recent software version. Fedora and Arch users can pretty much always safely report directly to upstream, as can Ubuntu users who are using the very latest version of Ubuntu. If you are an Ubuntu LTS user, you should stick with reporting issues to downstream only, or at least take the time to verify that the issue still occurs with a more recent software version.

There are a couple more problems with these tools. As previously mentioned, Ubuntu’s apport is incompatible with systemd-coredump. If you’ve read this far, you know you really want systemd-coredump enabled, so I recommend disabling apport until it learns to play ball with systemd-coredump.

The technical design of Fedora’s ABRT is currently better because it actually retrieves your core dumps from systemd-coredump, so you don’t have to choose between one or the other. Unfortunately, ABRT has many serious user experience bugs and warts. I can’t recommend it for this reason, but if it works well enough for you, it does create some great downstream crash reports. Whether a downstream package maintainer will look at those reports is hit or miss, though.

What is a crash, really?

Most developers consider crashes on Unix systems to be program termination via a Unix signal that triggers creation of a core dump. The most common of these are SIGSEGV (segmentation fault, “invalid memory reference”) or SIGABRT (usually an intentional crash due to an assertion failure). Less-common signals are SIGBUS (“bad memory access”) or SIGILL (“illegal instruction”). Sandboxed applications might occasionally see SIGSYS (“bad system call”). See the manpage signal(7) for a full list. These are cases where you can get a backtrace to help with tracking down the issue.
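As a quick aside, you can see these signal terminations reflected in a process’s exit status: a process killed by a signal exits with status 128 plus the signal number. This is just an illustration, not something you need for crash reporting:

```shell
# SIGABRT is signal 6, so a process that aborts exits with status 128 + 6 = 134
sh -c 'kill -ABRT $$' 2>/dev/null; echo $?
# prints "134"
```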

What is not a crash? If your application is hanging or just not behaving properly, that is not a crash. If your application is killed using SIGTERM or SIGKILL — this can happen when systemd-oomd determines you are low on memory,  or when a service is taking too long to stop — this is also not a crash in the usual sense of the word, because you’re not going to be able to get a backtrace for it. If a website is slow or unavailable, the news might say that it “crashed,” but it’s obviously not the same thing as what we’re talking about here. The techniques in this blog post are no use for these sorts of “crashes.”


If you have systemd-coredump enabled and debuginfod installed and working, most crash reports will be simple. Memory corruption is a frustrating exception. Encourage your operating system to enable systemd-coredump and debuginfod if it doesn’t already. Happy crash reporting!

September 18, 2021

The Truth they are not telling you about “Themes”

Before we start, let’s get this out of the way because the week long delirium on social media has dragged enough.

Yes, libadwaita “hardcodes” Adwaita. Yes, applications, as is, will not follow a custom system theme. Yes, this does improve the default behavior of applications designed for GNOME when they run on other platforms like elementary OS. However, this is the result of a technical limitation, and not some evil plot as Twitter will keep telling you…

The reason is that in order for High Contrast (and the upcoming Dark Style) to work, libadwaita needs to override the theme name property so it doesn’t fall back to GTK’s “Default” High Contrast style. The “Default” style is an older version of Adwaita, not your system style.

Compared to GTK 3, there isn’t any new mechanism enforcing the “hardcoded” style. The GTK_THEME variable still works, as does gtk.css and probably 3 other ways of doing this. The process to theme your system might be a bit different compared to GTK 3, but it will still work. Likewise, if you are developing a distribution, you have control of the end product and can do anything you want with the code. There is a plethora of options available. Apparently complaining on social media and bullying volunteers into submission was one such option…

And I guess this also needs to be stated: this change only affects apps that choose to use libadwaita and adopt the GNOME Design Guidelines, not “every” GTK 4 application.

As usual, the fact that the themes keep working doesn’t mean they are supported. The same issues about restyling applications when they don’t expect it apply, and GNOME cannot realistically support arbitrary stylesheets that none of the contributors develop against or test.

Now onto the actual blogpost.

There seems to be some confusion when it comes to libadwaita’s stylesheet and coloring APIs. It’s easy to mix them up when you haven’t heard of libadwaita before, so here is a short introduction on what they are and how they differ.

Keep in mind that the features discussed below are not guaranteed to land. After libadwaita 1.0 the stylesheet will be frozen and treated as an API. That means that if a feature doesn’t make it by 1.0 it will be a breaking change and will have to wait for libadwaita 2.0.

An Application Coloring API / Accent Colors

The idea here is that you can define “accent” colors to be applied for various parts of widgets. You can also recolor any part of a widget however you like. Take a look at Epiphany’s private mode header bar for an example. For this to be possible the whole stylesheet had to be reworked. Extra care was needed to ensure that the functionality wouldn’t conflict with the high contrast preference and wouldn’t need special handling. I hope Alexander will blog about this work in more detail, as it was truly fascinating. I am very excited to see what developers do with the coloring API.

For now the colors can be controlled with the GTK-specific @define-color, similar to CSS variables. Programmatic API will be added later on as the dust settles. The API will be based on AdwStyleManager which is getting introduced by the Dark style preference MR and hasn’t landed yet.

Here’s a quick example:

@define-color accent_color @yellow_5;
@define-color accent_bg_color @yellow_2;
@define-color accent_fg_color black;

.controls {
    color: white;
    background: linear-gradient(to right, shade(@blue_3, .8), @purple_2);
}

.slider > trough > highlight {
    background: linear-gradient(to left, shade(@red_1, .8), @yellow_4);
}

.controls textview text {
  background: none;
}

.controls entry,
.controls spinbutton,
.controls textview {
  background-color: alpha(black, .15);
  color: white;
}

headerbar {
  background: alpha(white, .1);
  color: white;
}

GNOME Patterns application showcasing the capabilities of the CSS styles.

For a more detailed example of what you can do check Federico’s recent blogpost.

System Accents

This is heavily inspired by system accent settings in elementary OS, and it’s similar in function. Think of it like a way to set the accent color system-wide, then applications can read it and decide to follow or override it. A case where you want to override would be if your app had a Sepia mode for example.

The coloring API mentioned above is designed in a way that makes this feature easy to implement. The interface and UI for this are not yet fleshed out completely, and it’s debatable if it’s going to be implemented/merged at all. There are a couple of design issues and concerns that need further research. It’s a possibility, but don’t bet on it.

Picture of the Elementary OS 6 Appearance Settings panel.

Vendor Styling

The story behind this idea is extensive and best left for another post, so here’s the current status on this infamous topic.

There have been great accomplishments in reducing the possible fallout of restyling applications with brand colors. Nowadays vendors recognize that arbitrary restyling can be damaging to application developers and have taken some precautions.

Yaru reworked its style and rebased it on Adwaita; this helped reduce the changes to mostly the color palette and minor stylistic tweaks. This got rid of a lot of bugs surfacing in applications, as Yaru now at least has the same spacing, margins and padding as Adwaita. Pop!_OS followed suit shortly after; I believe it’s now based on Yaru.

However, both Ubuntu and Pop also introduced “dark modes”, with Pop making it the default, which broke applications’ expectations. They did this despite being warned about it. As a result, this ended up increasing the issues with theming by about an order of magnitude, as you would now frequently end up with black on black, grey on grey, and other fun coloring bugs. It should also be noted that neither Ubuntu nor System76 approached any contributor I know of about properly implementing a Dark Style preference upstream, even though GNOME and elementary contributors had been collaborating in public for the last 3 years.

Screenshot of gedit with Yaru Dark stylesheet, where the selected text becomes invisible.

Yaru developers did some research on the topic and there was a call for engagement by GNOME, but unfortunately ever since the last theming BoF in 2019, the conversation has dried up. The interested parties haven’t provided any details on what the scope of the API would need to be, what it would look like, or the detailed requirements. Nobody stepped up to help with the Adwaita changes that were required either, or with dark mode, or to work on the QA tooling, or to figure out the implementation details. Now we are sadly out of time for libadwaita 1.0 and there isn’t much hope for such a complex thing to be ready in the next 4 months.


For libadwaita 1.0 and GNOME 42 the work on recoloring widgets will likely be completed. A proper Dark Style setting will likely also be implemented by then. System-wide accent colors are being discussed and looked at, but there are design related concerns about them, so it’s possible that they will never land. And there won’t be any “Theming API” for libadwaita 1.0. Maybe there will be renewed interest from the vendors that want it in the future, but given the story so far, I won’t hold my breath. I hope to be proven wrong.

September 16, 2021

Cool happenings in Fedora Workstation land

Been some time since my last update, so I felt it was time to flex my blog writing muscles again and provide some updates on some of the things we are working on in Fedora in preparation for Fedora Workstation 35. This is not meant to be a comprehensive what’s new article about Fedora Workstation 35, more a listing of some of the things we are doing as part of the Red Hat desktop team.

NVidia support for Wayland
One thing we have spent a lot of effort on for a long time now is getting full support for the NVidia binary driver under Wayland. It has been a recurring topic in our bi-weekly calls with the NVidia engineering team ever since we started looking at moving to Wayland. There has been basic binary driver support for some time, meaning you could run a native Wayland session on top of the binary driver, but the critical missing piece was that you could not get accelerated graphics when running applications through XWayland, our compatibility layer. That basically meant that any application requiring 3D support which wasn’t yet a native Wayland application wouldn’t work.

Over the last months we have had a great collaboration with NVidia around closing this gap, with them working closely with us to fix issues in their driver while we fixed bugs and missing pieces in the rest of the stack. We have been reporting and discussing issues back and forth, allowing a very quick turnaround on issues as we find them, which of course all resulted in the NVidia 470.42.01 driver with XWayland support. I am sure we will find new corner cases that need to be resolved in the coming months, but I am equally sure we will be able to quickly resolve them due to the close collaboration we have now established with NVidia.

I know some people will wonder why we spent so much time working with NVidia on their binary driver, but the reality is that NVidia is the market leader, especially in the professional Linux workstation space, and there are a lot of people who would either end up not using Linux, or using Linux with X, without it, including a lot of Red Hat customers and Fedora users. And that is what I and my team are here for at the end of the day: to make sure Red Hat customers are able to get their job done using their Linux systems.

Lightweight kiosk mode
One of the wonderful things about open source is the constant flow of code and innovation between all the different parts of the ecosystem. For instance, one thing we on the RHEL side have often been asked about over the last few years is a lightweight and simple-to-use solution for people wanting to run single-application setups, like information boards, ATMs, cash registers, information kiosks and so on. For many use cases people felt that running a full GNOME 3 desktop underneath their application was either too resource hungry or created a risk that people would accidentally end up in the desktop session. At the same time, from our viewpoint as a development team, we didn’t want a completely separate stack for this use case, as that would just increase our maintenance burden by forcing us to do a lot of things twice. So to solve this problem Ray Strode spent some time writing what we call GNOME Kiosk mode, which makes setting up a simple session running a single application easy, without running things like the GNOME Shell, Tracker, Evolution etc. This gives you a window manager with full support for the latest technologies such as compositing, libinput and Wayland, but coming in at about 18MB, which is about 71MB less than a minimal GNOME 3 desktop session. You can read more about the new Kiosk mode and how to use it in this great blog post from our savvy Edge Computing Product Manager Ben Breard. The kiosk mode session described in Ben’s article about RHEL will be available with Fedora Workstation 35.

High-definition mouse wheel support
A major part of what we do is making sure that Red Hat Enterprise Linux customers and Fedora users get hardware support on par with what you find on other operating systems. We try our best to work with our hardware partners, like Lenovo, to ensure that such hardware support arrives at the same time as those features are enabled on other systems, but some things end up taking longer for various reasons. Support for high-definition mouse wheels was one of those. Peter Hutterer, our resident input expert, put together a great blog post explaining the history and status of high-definition mouse wheel support. As Peter points out in his blog post, the feature is not yet fully supported under Wayland, but we hope to close that gap in time for Fedora Workstation 35.

Mouse with hires mouse

Mouse with HiRes scroll wheel

I feel I can’t do one of these posts without talking about the latest developments in PipeWire, our unified audio and video server. Wim Taymans keeps working with the rapidly growing PipeWire community to fix issues as they are reported and add new features. Most recently Wim’s focus has been on implementing S/PDIF passthrough over both S/PDIF and HDMI connections. This will allow us to send undecoded data over such connections, which is critical for working well with surround sound systems and soundbars. The PipeWire community has also been working hard on further improving the Bluetooth support, with battery status support for the headset profile using Apple extensions, and aptX-LL and FastStream codec support was added as well. And of course a huge amount of bug fixes; it turns out that when you replace two different sound servers that have been around for close to two decades, there are a lot of corner cases to cover :). Make sure to check out the two latest release notes, for 0.3.35 and 0.3.36, for details.

Screenshot of Easyeffects

EasyEffects is a great example of a cool new application built with PipeWire

Privacy screen
Another feature we have been working on as a result of our Lenovo partnership is privacy screen support. For those not familiar with this technology, it basically allows you to reduce the readability of your screen when viewed from the side, so that if you are using your laptop at a coffee shop, for instance, a person sitting close by will have a much harder time reading what is on your screen. Hans de Goede has been shepherding the kernel side of this forward, working with Marco Trevisan from Canonical on the userspace part (which also makes this a nice example of cross-company collaboration), allowing you to turn this feature on or off. This feature is not likely to fully land in time for Fedora Workstation 35, though, so we are looking at whether we will bring it in as an update to Fedora Workstation 35 or make it a Fedora Workstation 36 feature.


zink inside

Zink inside the penny

As most of you know, the future of 3D graphics on Linux is the Vulkan API from the Khronos Group. This doesn’t mean that OpenGL is going away anytime soon though, as there is a large host of applications out there using this API, and for certain types of 3D graphics development developers might still choose OpenGL over Vulkan. Of course for us that creates a bit of a challenge, because maintaining two 3D graphics interfaces is a lot of work, even with the great help and contributions from the hardware makers themselves. So we have been eyeing the Zink project for a while, which aims to re-implement OpenGL on top of Vulkan, as a potential candidate for solving our long-term need to support the OpenGL API without drowning us in work while doing so. The big advantage of Zink is that it allows us to support one shared OpenGL implementation across all hardware and then focus our hardware support efforts on the Vulkan drivers. As part of this effort Adam Jackson has been working on a project called Penny.

Zink implements OpenGL in terms of Vulkan as far as the drawing itself is concerned, but presenting that drawing to the rest of the system is currently system-specific (GLX). For hardware that already has a Mesa driver, we use GBM. On NVIDIA’s Vulkan (and probably any other binary stack on Linux, and probably also environments like WSL or macOS + MoltenVK) we download the image from the GPU back to the CPU and then use the same software upload/display path as llvmpipe, which as you can imagine is Not Fast.

Penny aims to extend Zink by replacing both of those paths, and instead using the various Vulkan WSI extensions to manage presentation. Even for the GBM case this should enable higher performance since zink will have more information about the rendering pipeline (multisampling in particular is poorly handled atm). Future window system integration work can focus on Vulkan, with EGL and GLX getting features “for free” once they’re enabled in Vulkan.

3rd party software cleanup
Over time we have been working on adding more and more 3rd party software for easy consumption in Fedora Workstation. The problem we discovered, though, was that because this was done over time, with changing requirements and expectations, the functionality did not behave in a very intuitive way, and there were also new questions that needed to be answered. So Allan Day and Owen Taylor spent some time this cycle reviewing all the bits and pieces of this functionality and working to clean it up. The goal is that when you enable third-party repositories in Fedora Workstation 35, it behaves in a much more predictable and understandable way, and also includes a lot of applications from Flathub. Yes, that is correct: you should be able to install a lot of applications from Flathub in Fedora Workstation 35 without having to first visit the Flathub website to enable it; instead they will show up once you turn the knob for general 3rd party application support.

Power profiles
Another item we spent quite a bit of time on for Fedora Workstation 35 is making sure we integrate the Power Profiles work that Bastien Nocera has been doing as part of our collaboration with Lenovo. Power Profiles is basically a feature that allows your system to behave in a smarter way when it comes to power consumption and thus prolongs your battery life. So for instance, when we notice you are getting low on battery, we can offer to switch you into a strong power saving mode to prolong how long you can use the system until you can recharge. There is a more in-depth explanation of Power Profiles in the official README.

I usually also end up talking about Wayland in my posts, but I expect to do that less going forward, as we have now covered the major gaps we saw: Jonas Ådahl got the headless support merged, which was one of our big missing pieces, and as mentioned above Olivier Fourdan, Jonas and others worked with NVidia on getting the binary driver with XWayland support working with GNOME Shell. Of course, this being software, we are never truly done; there will be new issues discovered, random bugs that need to be fixed, and new features that need to be implemented. We already have our next big team focus in place: HDR support, which will need work from the graphics drivers, up through Mesa, into the window manager and the GUI toolkits, and in the applications themselves. We have been investigating and trying out some things for a while already, but we are now ready to make this a main focus for the team. In fact we will soon be posting a new job listing for a full-time engineer to work on HDR vertically through the stack, so keep an eye out for that if you are interested. The job will be open to candidates who wish to work remotely, so as long as Red Hat has a business presence in the country where you live, we should be able to offer you the job if you are the right candidate for us. Update: The job listing is now online for our HDR engineer.

BTW, if you want to see future updates and keep on top of other happenings from Fedora and Red Hat in the desktop space, make sure to follow me on twitter.

Chafa 1.8: Terminal graphics with a side of everything

The Chafa changelog was growing long again, owing to about half a year's worth of slow accretion. Hence, a release. Here's some stuff that happened.

High-end protocols

With existing choices of the old text mode standby and its friend, the most unreasonably efficient sixel encoder known to man, I threw Kitty and iTerm2 on the pile, bringing our total number of output formats to four. I think that's all the terminal graphics anyone could want (unless you want ReGIS; in which case, tough tty).

Moar terminals

Modern terminal emulators are generally less fickle than their pre-y2k ancestors. However, sometimes it takes a little sleuthing to figure out which extended features might be hiding behind e.g. some mysterious xterm-256color façade so we can do the right thing.

Comparison of Chafa graphics in various terminals
Chafa, friend to all terminals (sample picture mine: Las Canicas, Santa María del Tule)

Luckily, Chafa has a steadily improving handle on terminals of the Unix/GNU/Linux world. A few examples:

Of course, this is forever a work in progress and an area where I receive regular, highly appreciated contributions *chef's kiss – somehow still not an actual emoji*.

Funky lo-fi features

Øyvind Kolås (of GIMP and GEGL fame) swooped in with new builtins for the legacy computing block, meaning Chafa is now PETSCII Ready™ – or as ready as you can be with Unicode 13.0. The standard has a few annoying issues, such as not declaring any code points for the four permutations of the black triangle, relying instead on existing code points in the geometric shapes block (U+25E2..U+25E5), which are typically represented by fonts as sitting on the baseline surrounded by empty space, and are therefore useless next to the legacy computing and block elements blocks.

Still, it's got a sweet 2×3 dot matrix (--symbols sextant) and all those nifty wedge shapes (--symbols wedge).

Symbols for Legacy Computing, excerpt
Can't not have these. Well, most of them, anyway

Øyvind also added an 8-color mode. Used together (-c 8 --symbols legacy+space), these features enable visual emulation of Teletext Level 1 and similar systems widely deployed in the late 1970s until roughly 1990 (technically into the present, albeit perhaps not so widely anymore).
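Putting those flags together, a full Teletext-style invocation might look like this (parrots.png is a placeholder filename):

```
$ chafa -c 8 --symbols legacy+space parrots.png
```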

PETSCII parrots rendered by Chafa
PETSCII parrots; left: full color, right: 8 colors

Somewhere along the way I discovered that Øyvind has a Patreon page – and if you're a GIMP user and/or care about the free software graphics ecosystem, you may want to read this article and consider its implications.

A bit of background austerity

I followed up in the retro vein with a foreground-only (--fg-only) modifier, which allows emulation of vintage systems that could only specify a single color per cell against a uniform background color. A popular example is the Commodore 64's standard character mode. It's also useful in terminals where block symbols don't render correctly (for example due to missing font support), since it prevents background color variation from drowning out details in low-coverage symbols used in their place. The Linux console tends to be among these due to hardware and font limitations that are somehow still in play today.

This is also how many classic ASCII art packages did things; so I guess I am once again asking you to party like it's 1999 (…and stay up all night trying to make your Napster killer render with AAlib).

ASCII parrots rendered by Chafa
Left: 16-color ASCII on black, right: same, but on light gray using --invert

When used with --fg-only, the existing --bg option has a greater impact than usual; in addition to being the fade color for alpha blending, it determines the relationship between symbols and blank space, including symbols where the background color "wins" part of a cell. A dark image on a bright background will have more high-coverage symbols that cover up the background as much as possible.

If your terminal has a bright background color, --invert is a shortcut to inverting the sense of --fg and --bg; the white-on-black default then becomes black-on-white.

Since foreground-only mode leaves the background color alone, you can easily experiment with setting it yourself, e.g:

echo -e '\033[41m'; \
chafa -c 16 --symbols ascii --fg-only --bg darkred birbs.png

Weird and wonderful forum art redux

If you read Steam reviews, you may be familiar with this guy:

Oh, hello there to you too

There are layers to this, but I'll keep it brief and simply observe that people seem to like braille. Braille is popular in this context for at least four reasons. It has:

  • A luxurious 2×4 dot matrix at your fingertips.
  • Widespread font support.
  • Consistent glyph width even in variable-width fonts.
  • A special blank symbol (U+2800 BRAILLE PATTERN BLANK) for consistent spacing.

Chafa's supported this kind of output for a long time (-c none --symbols braille), but in some circumstances it could replace cells having identical foreground and background colors with a hardcoded U+0020 as an optimization. This could result in inconsistent spacing, making braille (and probably other symbol combinations) less useful. Fortunately the issue is now a thing of the past; the latest version will instead use a visually blank symbol from the user's symbol set, falling back to the lowest-coverage symbol it can find.

The GPL doesn't regulate netiquette: Please use for good, or in extreme cases, awesome.

The ever elusive practical application in the wild

hb-view screenshot

It's good to be useful. Neofetch was the first project to avail itself of Chafa's incredible power, and the latest is HarfBuzz' hb-view. And – er – I think that's all of them. For now!


September 15, 2021

2021-09-15 Wednesday

  • Mail chew, catch-up with Miklos, Kendy, Eloy; sales call. COOL days call, poked at the COOL developer days schedule to get that straightened a little. Lots of conference related admin at this time of year.

Introducing Emblem 🔥

Emblem is a new design tool that generates project avatars, or emblems if you will, for your git forge or Matrix room. To set a GitLab project avatar, you can put a logo.png file at the root of the project; if there is no manually set project avatar, it will be picked up automatically.

Emblem is powered by gtk4-rs, librsvg and libadwaita; you can get it on Flathub.

Special thanks to Federico Mena Quintero, who helped me create the SVG template.

PS: The logo is only temporary, and a reference to Zelda OOT.

September 14, 2021

Insulating a suspended timber floor

In a departure from my normal blogging, this post is going to be about how I’ve retrofitted insulation to some of the flooring in my house and improved its airtightness. This has resulted in a noticeable increase in room temperature during the cold months.

Setting the scene

The kitchen floor in my house is a suspended timber floor, built over a 0.9m tall sealed cavity (concrete skim floor, brick walls on four sides, air bricks). This design is due to the fact the kitchen is an extension to the original house, and it’s built on the down-slope of a hill.

The extension was built around 1984, shortly before the UK building regulations changed to (basically) require insulation. This meant that the floor was literally some thin laminate flooring, a 5mm underlay sheet for that, 22mm of chipboard, and then a ventilated air cavity at outside temperature (which, in winter, is about 4°C).

In addition to that, there were 10mm gaps around the edge of the chipboard, connecting the outside air directly with the air in the kitchen. The kitchen is 3×5m, so that gives an air gap of around 0.16m². That’s equivalent to leaving a window open all year round. The room has been this way for about 36 years! The UK needs a better solution for ongoing improvement and retrofit of buildings.
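That figure is easy to sanity-check: the gap area is the room’s perimeter multiplied by the gap width:

```shell
# Perimeter of a 3m x 5m room is 2*(3+5) = 16m; a 10mm gap all the way round:
awk 'BEGIN { printf "%.2f m2\n", 2 * (3 + 5) * 0.010 }'
# prints "0.16 m2"
```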

I established all this initial information fairly easily by taking the kickboards off the kitchen units and looking into the corners of the room; and by drilling a 10mm hole through the floor and threading a small camera (borescope) into the cavity beneath.

Making a plan

The fact that the cavity was 0.9m high and in good structural shape meant that adding insulation from beneath was a fairly straightforward choice. Another option (which would have been the only option if the cavity was shallower) would have been to remove the kitchen units, take up all the floorboards, and insulate from above. That would have been a lot more disruptive and labour intensive. Interestingly, the previous owners of the house had a whole new kitchen put in, and didn’t bother (or weren’t advised) to add insulation at the same time. A very wasted opportunity.

I cut an access hatch in one of the floorboards, spanning between two joists, and scuttled into the cavity to measure things more accurately and check the state of things.

Under-floor cavity before work began (but after a bit of cleaning)

The joists are 145×45mm, which gives an obvious 145mm depth of insulation which can be added. Is that enough? Time for some calculations.

I chose several potential insulation materials, then calculated the embodied carbon cost of insulating the floor with each of them, their financial cost, and the net carbon and financial costs of heating the house with them in place over 25 years. I made a number of assumptions, documented in the workings spreadsheet, largely due to the lack of EPDs for different components. Here are the results:

Heating scenario                          Insulation assembly   U-value (W/m²K)   Energy loss (W)   Net cost, 25y (£)   Net carbon, 25y (kgCO2e)
Current gas tariff                        Current floor         2.60              382               3080                17980
(3.68p/kWh, 0.22kgCO2e/kWh)               Thermojute 160mm      0.22              32                730                 1700
                                          Thermoflex 160mm      0.21              30                860                 1450
                                          Thermojute 300mm      0.12              18                1020                1190
                                          Thermoflex 240mm      0.11              17                910                 820
                                          Mineral wool 160mm    0.24              35                540                 1680
ASHP estimate                             Current floor         (as above)        (as above)        11370               1140
(13.60p/kWh, 0.01kgCO2e/kWh)              Thermojute 160mm                                          1420                290
                                          Thermoflex 160mm                                          1520                110
                                          Thermojute 300mm                                          1410                410
                                          Thermoflex 240mm                                          1280                80
                                          Mineral wool 160mm                                        1290                150
Average future estimate (hydrogen grid)   Current floor         (as above)        (as above)        7020                25090
(8.40p/kWh, 0.30kgCO2e/kWh)               Thermojute 160mm                                          1060                2290
                                          Thermoflex 160mm                                          1170                2010
                                          Thermojute 300mm                                          1200                1520
                                          Thermoflex 240mm                                          1090                1140
                                          Mineral wool 160mm                                        890                 2320
Costings for different floor assemblies; see the spreadsheet for full details
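The net-cost cells fall out of a simple formula: continuous heat loss, converted to kWh over 25 years, times the tariff, plus the embodied cost of the insulation (zero for the uninsulated floor). A sketch of the current-floor/gas figure, assuming year-round continuous loss:

```python
# Reproduce the "current floor, gas tariff" net cost: 382W of continuous
# loss over 25 years at 3.68p/kWh (no embodied insulation cost here).
loss_w = 382.0
tariff_gbp_per_kwh = 0.0368
hours = 24 * 365 * 25  # 219,000 hours in 25 years (ignoring leap days)

energy_kwh = loss_w * hours / 1000          # ≈ 83,658 kWh
cost_gbp = energy_kwh * tariff_gbp_per_kwh  # ≈ £3,079, matching the ~£3,080 cell

print(round(cost_gbp))
```

The insulated assemblies add the embodied cost of the materials on top of a much smaller heating-loss term, which is why their net cost is dominated by the up-front spend.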

In retrospect, I should also have considered multi-layer insulation options, such as a 20mm layer of closed-cell foam beneath the chipboard, and a 140mm layer of vapour-open insulation below that. More on that below.

In the end, I went with 160mm of Thermojute, packed between the joists and held in place with a windproof membrane stapled to the underside of the joists. This has a theoretical U-value of 0.22W/m2K and hence an energy loss of 32W over the floor area. Over 25 years, with a new air source heat pump (which I don’t have, but it’s a likelihood soon), the net carbon cost of this floor (embodied carbon + heating loss through the floor) should be at most 290kgCO2e, of which around 190kgCO2e is the embodied cost of the insulation. Without changing the heating system it would be around 1700kgCO2e.
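The quoted U-value can be sanity-checked with a standard layered thermal-resistance calculation. The conductivities and surface resistances below are my assumptions (typical datasheet values), not figures from the author's spreadsheet:

```python
# Rough U-value check for 160mm of jute insulation under 22mm chipboard.
# Assumed values (not from the spreadsheet): jute batt conductivity
# ~0.038 W/mK, chipboard ~0.13 W/mK, internal/external surface
# resistances 0.17 and 0.04 m²K/W.
layer_resistances_m2k_per_w = [
    0.17,           # internal surface resistance (downward heat flow)
    0.022 / 0.13,   # 22mm chipboard
    0.160 / 0.038,  # 160mm Thermojute packed between the joists
    0.04,           # external surface resistance
]
u_value = 1 / sum(layer_resistances_m2k_per_w)
print(f"U ≈ {u_value:.2f} W/m²K")  # close to the quoted 0.22 W/m²K
```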

The embodied cost of the insulation is an upper bound: I couldn’t find an embodied carbon cost for Thermojute, but its Naturplus certification puts an upper bound on what’s allowed. It’s likely that the actual embodied cost is lower, as the jute is recycled in a fairly simple process.

Three things swung the decision: the availability of Thermojute over Thermoflex, the joist loading limiting the depth of insulation I could install, and the ease of not having to support insulation installed below the depth of the joists.

This means that the theoretical performance of the floor is not Passivhaus standard (around 0.10–0.15W/m2K), although this is partially mitigated by the fact that the kitchen is not a core part of the house, and is separated from it by a cavity wall and some tight doors, which means it should not be a significant heat sink for the rest of the house when insulated. It’s also regularly heated by me cooking things.

Hopefully the attention to detail when installing the insulation, and the careful tracing of airtightness and windtightness barriers through the design should keep the practical performance of the floor high. The windtightness barrier is to prevent wind-washing of the insulation from below. The airtightness barrier is to prevent warm, moisture-laden air from the kitchen escaping into the insulation and building structure (particularly, joists), condensing there (as they’re now colder due to the increased insulation) and causing damp problems. An airtightness barrier also prevents convective cooling around the floor area, and reduces air movement which, even if warm, increases our perception of cooling.

I did not consider thermal bridging through the joists. Perhaps I should have done?

Insulation installation

Installation was done over a number of days and evenings, sped up by the fact the UK was in lockdown at the time and there was little else to do.

Cross sections of the insulation details

The first step in installation was to check the blockwork around each joist end and seal that off to reduce draughts from the wall cavity into the insulation. Thankfully, the blockwork was in good condition so no work was necessary.

The next step was to add an airtightness seal around all pipe penetrations through the chipboard, as the chipboard was to form the airtightness barrier for the kitchen. This was done with Extoseal Magov tape.

Sealing pipe penetrations through the chipboard floor using Extoseal Magov.

The next step in installation was to tape the windproof membrane to the underside edge of the chipboard, to separate the end of the insulation from the wall. This ended up being surprisingly quick once I’d made a cutting template.

The next step was to wedge the insulation batts in the gap between each pair of joists. This was done in several layers with offset overlaps. Each batt was slightly wider than the gap between joists, so could easily be held in place with friction. This arrangement shouldn’t be prone to gaps forming in the insulation as the joists expand and contract slightly over time.

One of the positives of using jute-based insulation is that it smells of coffee and sugar (which is what the bags which the jute fibres came from were originally used to transport). One of the downsides is that the batts need to be cut with a saw and the fibres get everywhere.

Some of the batts needed to be carefully packed around (insulated) pipework, and I needed to form a box section of windproof membrane around the house’s main drainage stack in one corner of the space, since it wasn’t possible to fit insulation or the membrane behind it. I later added closed-cell plastic bubblewrap insulation around the rest of the drainage stack to reduce the chance of it freezing in winter, since the under-floor cavity should now be significantly colder.

As more of the insulation was installed, I could start to staple the windproof membrane to the underside of the joists, and seal the insulation batts in place. The room needed three runs of membrane, with 100mm taped overlaps between them.

With the insulation and membrane in place and taped, the finishing touches in the under-floor cavity were to reinstall the pipework insulation and seal it to the windproof membrane to prevent any (really minor) wind washing of the insulation from draughts through the pipe holes; to label everything; insulate the drainage stack; re-clip the mains wiring; and tie the membrane into the access hatch.

Airtightness work in the kitchen

With the insulation layer complete under the chipboard floor, the next stage in the job was to ensure a continuous airtightness layer between the kitchen walls (which are plasterboard, and hence airtight apart from penetrations for sockets which I wasn’t worried about at the time) and the chipboard floor. Each floor board is itself airtight, but the joints between each of them and between them and the walls are not.

The solution to this was to add a lot of tape: cheaper paper-based Uni tape for joining the floor boards, and Contega Solido SL for joining the boards to the walls (Uni tape is not suitable as the walls are not smooth and flat, and there are some complex corners where the flexibility of a fabric tape is really useful).

Tediously, this involved removing all the skirting board and the radiator. Thankfully, though, none of the kitchen units needed to be moved, so this was actually a fairly quick job.

Finally, with some of the leftover insulation and windproof membrane, I built an insulation plug for the access hatch. This is attached to the underside of the hatch, and has a tight friction fit with the underfloor insulation, so should be windtight. The hatch itself is screwed closed onto a silicone bead, which should be airtight and replaceable if the hatch is ever opened.

The final step was to reinstall the kitchen floor, which was fairly straightforward as it’s interlocking laminate strips. And, importantly, to print out the plans, cross-sections, data sheets, a big warning about the floor being an air tightness barrier, and a sign to point towards the access hatch, and put them in a wallet under the kitchen units for someone to find in future.


This was a fun job to do, and has noticeably improved the comfort of my kitchen.

I can’t give figures for how much of an improvement it’s made, or whether its actual performance matches the U-value calculations I made in planning, as I don’t have reliable measured energy loss figures from the kitchen from before installing the insulation. Perhaps I’d try and measure things more in advance of a project like this next time, although that does require an extra level of planning and preparation which can be hard to achieve for a job done in my spare time.

I’m happy with the choice of materials and installation method. Everything was easy to work with and the job progressed without any unexpected problems.

If I were to do the planning again, I might put more thought into how to achieve a better U-value while being limited by the joist height. Extending the joists to accommodate more depth of insulation was something I explored in some detail, but it hit too many problems: the air bricks would need to be ducted (as otherwise they’d be covered up), the joist loading limits might be hit, and the method for extending the joists would have to be careful not to introduce thermal bridges. The whole assembly might have bridged the damp proof course in the walls.

It might, instead, have worked to consider a multi-layer insulation approach, where a thin layer of high performance insulation was used next to the chipboard, with the rest of the joist depth taken up with the thermojute. I can’t easily change to that now, though, so any future improvements to this floor will either have to add insulation above the chipboard (and likely another airtightness layer above that), or extend below the joists and be careful about it.

2021-09-14 Tuesday

  • Mail chew, catch up with Cor, lunch, sync with Tor, admin, catch up with Tracie, TDF milestone call, helped H. with some programming exercises.

Unlocking the bootloader and disabling dm-verity on Android-X86 devices

For the hw-enablement for Bay- and Cherry-Trail devices which I do as a side project, sometimes it is useful to play with the Android which comes pre-installed on some of these devices.

Sometimes the Android-X86 bootloader (kernelflinger) is locked and the standard "Developer-Options" -> "Enable OEM Unlock" -> "Run 'fastboot oem unlock'" sequence does not work (e.g. I got the unlock yes/no dialog, and could move between yes and no, but I could not actually confirm the choice).

Luckily there is an alternative: kernelflinger checks an "OEMLock" EFI variable to see if the device is locked or not. As with some of my previous adventures changing hidden BIOS settings, this EFI variable is hidden from the OS as soon as the OS calls ExitBootServices, but we can use the same modified grub to change it. After booting from a USB stick with the relevant grub binary installed as "EFI/BOOT/BOOTX64.EFI" (or "BOOTIA32.EFI"), entering the following command on the grub cmdline will unlock the bootloader:

setup_var_cv OEMLock 0 1 1

Disabling dm-verity support is pretty easy on these devices because they can just boot a regular Linux distro from a USB drive. Note that booting a regular Linux distro may cause the Android "system" partition to get auto-mounted, after which dm-verity checks will fail! Once a regular Linux distro is running, step 1 is to find out which partition is the android_boot partition. To do this, as root run:

blkid /dev/mmcblk?p#

Replace the ? with the mmcblk number of the internal eMMC, and try # from 1 to n until one of the partitions is reported as having 'PARTLABEL="android_boot"'. Usually "mmcblk?p3" is the one you want, so you could try that first.
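That search is easy to script; here is a small Python helper (hypothetical, purely for illustration: it just wraps blkid and string-matches its output):

```python
import subprocess

ANDROID_BOOT_MARKER = 'PARTLABEL="android_boot"'

def is_android_boot(device: str) -> bool:
    """Return True if blkid reports PARTLABEL="android_boot" for this device."""
    result = subprocess.run(["blkid", device], capture_output=True, text=True)
    return ANDROID_BOOT_MARKER in result.stdout

# The matching logic itself, shown on a captured blkid-style line:
sample = '/dev/mmcblk1p3: PARTLABEL="android_boot" PARTUUID="..."'
print(ANDROID_BOOT_MARKER in sample)  # → True
```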

Now make an image of the partition by running e.g.:

dd if=/dev/mmcblk1p3 of=android_boot.img

And then copy the "android_boot.img" file to another computer. On that computer, extract the image and then the initrd like this:

abootimg -x android_boot.img
mkdir initrd
cd initrd
zcat ../initrd.img | cpio -i

Now edit the fstab file and remove "verify" from the line for the system partition. After this, update android_boot.img like this:

find . | cpio -o -c -R 0.0 | gzip -9 > ../initrd.img
cd ..
abootimg -u android_boot.img -r initrd.img

The easiest way to test the new image is using fastboot. Boot the tablet into Android and connect it to the PC, then run:

adb reboot bootloader
fastboot boot android_boot.img

And then from an "adb shell", run "cat /fstab" and verify that the "verify" option is gone. After this you can (optionally) dd the new android_boot.img back to the android_boot partition to make the change permanent.

Note if Android is not booting you can force the bootloader to enter fastboot mode on the next boot by downloading this file and then under regular Linux running the following command as root:

cat LoaderEntryOneShot > /sys/firmware/efi/efivars/LoaderEntryOneShot-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f

September 11, 2021

Null Amusement

Null Amusement

Been old-school-pixel-pushin’ recently, for a project I’ll hopefully reveal soon. But because I’ve also been really keen on music recently, a little side diversion happened that I’d like to share.

It started with generating some samples using the good old SFXR, a tiny synthesizer for generating cute oldschool chiptune/game sound effects. Sadly it doesn’t exist as an app, which is a shame.


Then it was off to Polyend Tracker, even if in this case I was sitting in front of my computer so the benefit of having a physical machine to take anywhere was sort of diminished.

Polyend Tracker

In any case the included instrument editor, with all its effects and filters, came in handy. I exported the full song as well as a single pattern to use in the Instagram loop. This time I did no mastering at all; I just uploaded the track to SoundCloud and called it done.

As for the visuals, I used far more tools than you’d expect. The icon assets were mainly done in Pixaki, a fairly polished pixel editor for iPad. I have a number of beefs with it for its premium price, mainly with the way it does layered animation, but it absolutely delivers on immediacy, in contrast with the filesystem diving of Aseprite, which otherwise beats it bar none. Usually I convert GIFs exported from Aseprite or Pixaki using ffmpeg, but this time I needed to sync the animation to the sound, so I loaded the GIF and the exported pattern from Tracker into Blender and, with the sound wave preview (which somehow isn’t on by default), it was a quick job in the VSE.

Blender VSE

Previously, Previously, Previously, Previously, Previously, Previously.

Swappable faceplates for laptops

On the whole, laptops are boring. There are three different types.

  1. Apple aluminium laptops that all look the same.
  2. Other manufacturers' laptops that try to look like Apple laptops, and all look the same.
  3. RGB led studded gamer laptops that all look the same.

There is a cottage industry manufacturing stickers the size of the entire laptop, but those are inconvenient and not particularly slick. You only have one chance to apply one, and any misalignment or air bubbles will be there forever.

But what if some laptop manufacturer chose to do things differently and designed their laptops so that the entire face plate were detachable and replaceable? This would allow everybody to customize their own laptops in exactly the way they want to (this should naturally extend to the inner keyboard plate, but we'll ignore that for now). For example you could create a laptop with the exact Pantone color that you want rather than what the laptop manufacturer saw fit to give you.

Or maybe you'd like to have a laptop cover which lights up not the manufacturer's branding but instead the logo of your favourite band?

Or perhaps you are a fan of classic muscle cars?

The thing that separates these covers from what we currently have is that they would not be pictures of things or stickers. The racing stripe cover could be manufactured in exactly the same way as real car hoods. It would shine like real metal. It would feel different when touched. You could, one imagines, polish it with real car wax. It would be the real thing. Being able to create experimental things in isolation makes all sorts of experimentation possible so you could try holograms, laser engravings, embossings, unusual metals and all sorts of funky stuff easily. In case of failure the outcome is just a lost cover plate you can toss into the recycling bin rather than a very expensive laptop that is either unusably ugly or flat out broken.

But wait, there's more. Corporate PR departments should be delighted with this. Currently, whenever people do presentations at conferences, their laptops are clearly visible and advertise the manufacturer. The same goes for sales people visiting customers (well, eventually, once Covid-19 passes) and so on. Suppose you could have this instead:

Suddenly you have reclaimed a ton of prime advertising real estate that can be used for brand promotion (or something like that, not really a PR person so I'm not intimately familiar with the terminology). If you are a sufficiently large customer and order thousands of laptops at a time, it might be worth it to get all the custom plates fitted at the main factory during assembly. Providing this service would also increase the manufacturer's profit margin.

September 10, 2021

Cleaning up header bars

Examples of app header bars after the redesign

You might have noticed that a lot of icons in GNOME 41 have been redrawn to be larger. They are still 16×16, but take up more of the space within the canvas. Compare:

Examples of symbolic icons in GNOME 40 and 41

This was a preparation for a larger change that has just landed in the main branch in libadwaita: buttons in header bars and a few other widgets have no background now, matching this mockup.

For example, this is how the recent GTK4 branch of GNOME Software looks now:

GNOME Software using GTK4 and libadwaita

Making the style feel lighter and reducing visual noise is a major goal for the style refresh we’re doing for libadwaita. While we’ve done lots of smaller changes and cleanups across the stylesheet to bring us closer to that goal, this is probably the highest-impact part of it due to how prominent header bars are in GNOME apps.

This is not a new idea either — pretty much everyone else is doing it, e.g. macOS, Windows, iOS, Android, elementary OS, KDE.

In fact, we’ve been doing this for a long time in view switchers. So this just extends it to the whole header bar.

However, if applied carelessly, it can also make certain layouts ambiguous. For example, a text-only button with no background would look exactly the same as a window title. To prevent that, we only remove the background from buttons that we can be confident won’t look confusing without it, for example buttons containing a single icon.

While we avoid ambiguous situations, this also means that apps will need changes to have consistent buttons. In my opinion this is the better tradeoff: since the API is not stable yet, we can break behavior, and if an app hasn’t been updated, it will just look inconsistent rather than gain accessibility issues.


The exact rules of what has background and what doesn’t are as follows:

The following buttons get no background:

  • Buttons that contain icons (specifically the .image-button style class).
  • Buttons that contain an icon and a label, or rather, the .image-text-button style class.
  • Buttons with the .flat style class.
  • UPDATE: Any GtkMenuButtons with a visible arrow (the .arrow-button style class).
  • UPDATE: Any split buttons (more on that later).

Flat button examples: icon buttons; buttons with icons and text

The following buttons keep their default appearance:

  • Text-only buttons.
  • Buttons with custom content.
  • Buttons with .suggested-action or .destructive-action style classes.
  • Buttons inside a widget with the .linked style class.
  • A new addition: buttons with the .raised style class, as inspired by the elementary OS stylesheet.

Raised button examples: text buttons, linked buttons, custom buttons, suggested and destructive buttons

The appearance of GtkMenuButton and AdwSplitButton (more on that later) is decided as if they were regular buttons.

Icon-only, icon+text and text-only arrow and split buttons

UPDATE: menu buttons with visible arrows and split buttons don’t have a background anymore regardless of their content.

Icon-only, icon+text and text-only arrow and split buttons, updated. Text buttons are flat too now

This may look a lot like the old GtkToolbar, and in a way it is. While GTK4 doesn’t have GtkToolbar, it has the .toolbar style class to replace it, and this style is now shared with header bars and also GtkActionBar.
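For reference, the style class is applied like any other in GtkBuilder; a minimal sketch (the widget and properties here are illustrative, not from any particular app):

```xml
<object class="GtkBox">
  <property name="spacing">6</property>
  <style>
    <class name="toolbar"/>
  </style>
</object>
```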

Special cases

While the simple icon-only buttons are easy, a lot of applications contain more complex layouts. In that case we fall back to the default appearance and apps will need changes to opt into the new style for them. Let’s look at some of those patterns and how to handle them:

Menu buttons with icons and dropdown arrows

A menu button with an icon, GTK3

This case works as is if you use the standard widgets — namely, GtkMenuButton with an icon set via icon-name and always-show-arrow set to TRUE.

A menu button with an icon

The only reason this case is special is because always-show-arrow is relatively new, having only been added in GTK 4.4, so a lot of apps will have custom menu buttons, or, if porting from GTK3, GtkMenuButton containing a GtkBox with an icon and an arrow. Since we don’t remove backgrounds from buttons with custom content, both of them will have backgrounds.

Text-only buttons

A button with text, GTK3

This is the most common case outside icon-only buttons. For these buttons the solution, rather counter-intuitively, is to add an icon. Since the icon has a label next to it, it doesn’t have to be very specific, so if an action is hard to describe with an icon, an only tangentially related icon is acceptable. If you still can’t find anything fitting — open an issue against the GNOME icon development kit.

With GTK widgetry, the only way to create such buttons is to pack a GtkBox inside, and create the icon and the label manually. Then you’ll also have to add the .image-button and .text-button style classes manually, and will need to set the mnemonic-widget property on the label so that mnemonics work.
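Done by hand, that approach looks roughly like this (a sketch; the ids are illustrative, and note everything that has to be wired up manually):

```xml
<object class="GtkButton" id="open_button">
  <style>
    <class name="image-button"/>
    <class name="text-button"/>
  </style>
  <property name="child">
    <object class="GtkBox">
      <property name="spacing">6</property>
      <child>
        <object class="GtkImage">
          <property name="icon-name">document-open-symbolic</property>
        </object>
      </child>
      <child>
        <object class="GtkLabel">
          <property name="label" translatable="yes">_Open</property>
          <property name="use-underline">True</property>
          <!-- point the mnemonic at the button so Alt+O activates it -->
          <property name="mnemonic-widget">open_button</property>
        </object>
      </child>
    </object>
  </property>
</object>
```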

Since this is tedious and parts like connecting the mnemonic are easy to miss, libadwaita now provides the AdwButtonContent widget to do exactly that. It’s intended to be used as a child widget of GtkButton, GtkMenuButton or AdwSplitButton (more on that below), as follows:

<object class="GtkButton">
  <property name="child">
    <object class="AdwButtonContent">
      <property name="icon-name">document-open-symbolic</property>
      <property name="label" translatable="yes">_Open</property>
      <property name="use-underline">True</property>
    </object>
  </property>
</object>

A button with an icon and text

If it’s a GtkMenuButton, it would also make sense to show a dropdown arrow, as follows:

<object class="GtkMenuButton">
  <property name="menu-model">some_menu</property>
  <property name="always-show-arrow">True</property>
  <property name="child">
    <object class="AdwButtonContent">
      <property name="icon-name">document-open-symbolic</property>
      <property name="label" translatable="yes">_Open</property>
      <property name="use-underline">True</property>
    </object>
  </property>
</object>

A menu button with an icon and text

UPDATE: Menu buttons with visible arrows don’t have a background by default anymore, so the step above is not necessary.

Note: the child property in GtkMenuButton is fairly new, and is not in a stable release as of the time of writing. It should land in GTK 4.6.

Notice we didn’t have to add any style class to the buttons or to connect mnemonics like we would have with GtkLabel. AdwButtonContent handles both automatically.

Split buttons

Split buttons in GTK3: with text and icon

This is a fairly rare case, but also a difficult one. Historically, these were implemented as 2 buttons in a .linked box. Without a background, it’s easy to make it look too similar to a regular menu button with a dropdown arrow, resulting in an ambiguous layout.

While existing split buttons will keep their background thanks to the .linked style class, we now have a way to make consistent split buttons – AdwSplitButton.

Examples of split buttons

Whether they get a background or not depends on the content of the button part, and the dropdown part follows suit: they keep their background if the button part has only a label, have no background if it has an icon, and keep their default appearance outside header bars or toolbars. When they have no background, a separator is shown between the two parts, and they gain a shared background when hovered, pressed, or while the dropdown is open:

They can be adapted the same way as regular buttons — via AdwButtonContent:

<object class="AdwSplitButton">
  <property name="menu-model">some_menu</property>
  <property name="child">
    <object class="AdwButtonContent">
      <property name="icon-name">document-open-symbolic</property>
      <property name="label" translatable="yes">_Open</property>
      <property name="use-underline">True</property>
    </object>
  </property>
</object>

A split button with an icon and text

UPDATE: split buttons with text or custom content don’t get a background by default anymore, so the step above is not necessary.

Meanwhile, buttons like the list/grid selector in Files are as simple as:

<object class="AdwSplitButton">
  <property name="menu-model">some_menu</property>
  <property name="icon-name">view-list-symbolic</property>
</object>

A split button with an icon

Otherwise, AdwSplitButton API is mostly a union of GtkButton and GtkMenuButton – the button part can have a label, an icon or a custom child, an action name and target, and a clicked signal if you prefer to use that. Meanwhile, the dropdown part has a menu model or a popover, and a direction that affects where the popover will be shown, as well as where the dropdown arrow will point.

Finally, in a lot of cases layouts that were using split buttons can be redesigned not to use them – for example, to use a simple menu button for opening files like in Text Editor, instead of a split button as in Apostrophe.

Linked buttons

Linked buttons, GTK3

With visible frames, linking buttons is a nice way to visually group them. For example, we commonly do that for back/forward buttons, undo/redo buttons, mode switching buttons. We also use multiple groups of linked buttons to separate them from each other.

For the most part linked buttons can, well, stop being linked. For example, back/forward buttons like this:

Linked buttons

can become this:

Unlinked buttons

However, when multiple linked groups are present, just unlinking will remove the separation altogether:

Unlinked buttons without spacing, not grouped

In that case, additional spacing can be used. It can be achieved with a GtkSeparator with a new style class .spacer:

<object class="GtkSeparator">
  <style>
    <class name="spacer"/>
  </style>
</object>

Unlinked buttons with spacing, grouped

Action dialog buttons

A dialog, GTK3

This special case is less special than other special cases (or more special, if you prefer), in that apps don’t need to handle it, but I’ll mention it for the sake of completeness.

The two primary buttons in an action dialog or a similar context (for example, when changing display resolution in Settings, or the Cancel button in selection mode) should keep their current style — that is, they don’t have icons and keep their background. Meanwhile any extra buttons follow the new style.

In most situations this will already be the case so no changes are needed.

A dialog


There will undoubtedly be cases not covered here. The .flat and .raised style classes can always be used to override the default appearance if need be.
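Either override is a one-line style class in GtkBuilder; a minimal sketch (button and label are illustrative):

```xml
<object class="GtkButton">
  <property name="label" translatable="yes">Details</property>
  <style>
    <!-- swap "flat" for "raised" to force a background instead -->
    <class name="flat"/>
  </style>
</object>
```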

Finally, not everything has to have no background. For example, the remote selector in Software is probably best kept as is until it’s redesigned to also make it adaptive.

And in rare cases, the existing layout just doesn’t work and may need a redesign.

Bundled icons

In addition to all of that, if you bundle symbolic icons, there’s a good chance there are updated larger versions in the icon library. It would be a good idea to update them to match the new system icons.


Let’s update a few apps. App Icon Preview and Obfuscate should show off most of the edge cases.

App Icon Preview

The version on Flathub is still using GTK3 as of the time of writing, but it’s GTK4 in main. So let’s start from there.

App Icon Preview, before libadwaita update

App Icon Preview, New App Icon dialog

App Icon Preview has 2 windows, each with its own header bar — the main window and the "New App Icon" dialog.

After the libadwaita update, the dialog hasn’t changed, meanwhile the main window looks like this:

App Icon Preview, no adjustments

It has a custom split button, as well as a text-only Export button when a file is open.

First, let’s replace the split button with an AdwSplitButton:

<object class="GtkBox">
  <child>
    <object class="GtkButton">
      <property name="label" translatable="yes">_Open</property>
      <property name="use_underline">True</property>
      <property name="tooltip_text" translatable="yes">Open an icon</property>
      <property name="action_name"></property>
    </object>
  </child>
  <child>
    <object class="GtkMenuButton" id="recents_btn">
      <property name="tooltip_text" translatable="yes">Recent</property>
      <property name="icon_name">pan-down-symbolic</property>
    </object>
  </child>
  <style>
    <class name="linked"/>
  </style>
</object>

This will become:

<object class="AdwSplitButton" id="open_btn">
  <property name="label" translatable="yes">_Open</property>
  <property name="use_underline">True</property>
  <property name="tooltip_text" translatable="yes">Open an icon</property>
  <property name="action_name"></property>
</object>

Since we’ve changed the class and the GtkBuilder id, we also need to update the code using it. Hence this:

pub recents_btn: TemplateChild<gtk::MenuButton>,

becomes this:

pub open_btn: TemplateChild<adw::SplitButton>,

and the other recents_btn occurrences are replaced accordingly.

App Icon Preview, using AdwSplitButton

UPDATE: after the menu button and split button change, the Open and Export buttons don’t get a background anymore, so only the previous step is necessary.

Now we need to actually remove the background. For that we’ll add an icon, and it’s going to be just document-open-symbolic.

So we’ll remove the label and use-underline and instead add an AdwButtonContent child with the same lines together with icon-name inside it:

<object class="AdwSplitButton" id="open_btn">
  <property name="tooltip_text" translatable="yes">Open an icon</property>
  <property name="action_name"></property>
  <property name="child">
    <object class="AdwButtonContent">
      <property name="icon_name">document-open-symbolic</property>
      <property name="label" translatable="yes">_Open</property>
      <property name="use_underline">True</property>
    </object>
  </property>
</object>

App Icon Preview with an icon on the open button

Now, let’s look at the Export button. It needs an icon as well, but adwaita-icon-theme doesn’t have anything fitting for it. So instead, let’s check out Icon Library (which doesn’t have a lot of edge cases itself).

While it doesn’t have icons for export either, it has a share icon instead:

Icon Library showing the share icon

So that’s what we’ll use. We’ll need to bundle it in the app, and let’s rename it to export-symbolic while we’re here. Now we can do the same thing as for the Open button:

<object class="GtkMenuButton" id="export_btn">
  <property name="always_show_arrow">True</property>
  <property name="child">
    <object class="AdwButtonContent">
      <property name="icon_name">export-symbolic</property>
      <property name="label" translatable="yes">_Export</property>
      <property name="use_underline">True</property>
    </object>
  </property>
</object>

App Icon Preview with an icon on the export button

So far so good. See the merge request for the complete changes.


Obfuscate

This app has only one header bar, but it can change its state depending on whether a file is open:

Obfuscate, with no open file, before libadwaita update

Obfuscate, with an open file, before libadwaita update

After building with newer libadwaita, we see there’s quite a lot to update.

First, we need to add an icon to the Open button. It’s done exactly the same way as in App Icon Preview, so I won’t repeat it.

Obfuscate, with no open file, adapted

Instead, let’s look at the other buttons:

Obfuscate, with an open file, after libadwaita update

Here we have two groups of linked buttons, so .linked is used both to group related actions together and to separate the two groups from each other.

So, first we need to unlink those buttons. Since that just means removing the two GtkBox widgets and putting the buttons directly into the header bar, I won’t go into detail.

Obfuscate, with an open file, with unlinked buttons, no spacing

However, now we’ve lost the separation between the undo/redo group and the tools. So let’s add some spacing:

<object class="GtkSeparator">
  <style>
    <class name="spacer"/>
  </style>
</object>
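For context, the stylesheet side of this is roughly a separator that draws nothing but still takes up space. A sketch only; the actual Adwaita rule and its exact size may differ:

```css
/* Hypothetical approximation of the .spacer style class */
separator.spacer {
  background: none;   /* draw no separator line */
  min-width: 6px;     /* assumed spacing value */
  min-height: 6px;
}
```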

And the end result is the following:

Obfuscate, with an open file, with unlinked buttons and spacing

This information is also present in the libadwaita migration guide, and will be in the stylesheet documentation once all changes are finalized.

For now, happy hacking!

UPDATE (on the same day as published): Menu buttons with visible arrows and split buttons don’t get background by default anymore. The steps and examples have been updated accordingly.

September 09, 2021

Software 41: Context Tiles

GNOME 41 is going to be released in a few weeks, and as you may have heard it will come with a major refresh to Software’s interface.

Our goals for this initiative included making it a more appealing place to discover and install new apps, exposing app information more clearly, and making it more reliable overall. We’ve made big strides in all these areas, and I think you’ll be impressed how much nicer the app feels in 41.

There’s a lot of UI polish all across the app, including a cleaner layout for app cards, more consistent list views, a new simplified set of categories, a better layout for category pages, and much more.

Most of the groundwork for adaptiveness is also in place now. There are still a few views in need of additional tweaks, but for the most part the app is adaptive in 41.

However, the most visible change in this release is probably the near-complete overhaul of the app details pages. This includes a prettier header section, a more prominent screenshot carousel, and an all-new way of displaying app metadata.

Introducing Context Tiles

For the app details page we wanted to tackle a number of long-standing tricky questions about how to best communicate information about apps. These include:

  • Communicating app download size in a more nuanced way, especially for Flatpak apps where downloading additional shared runtimes may be required as part of installing an app
  • Showing the benefits of software freedom in a tangible way rather than just displaying the license
  • Making it clearer which files, devices, and capabilities apps have access to on the system
  • Incentivizing app developers to use portals rather than poking holes in the sandbox
  • Warning people about potential security problems
  • Providing information on whether the app will work on the current hardware (especially relevant for mobile)
  • Exposing age ratings more prominently and with more context

The solution we came up with is what we call context tiles. The idea is that each of these tiles provides the most important information about a given area at a glance, and clicking it opens a dialog with the full details.

Context tiles on the app details page


Storage

The storage tile has two different states. When the app is not installed, it shows the download size of the app, as well as any additional required downloads (e.g. runtimes). When the app is installed, it changes to show the size the app takes up on disk.


Safety

The Safety tile combines information from a number of sources to give people an overall idea of how safe an app is to install and use.

At the most basic level this is about how technically secure an app is. Two important questions here are whether an app is sandboxed (i.e. whether it’s flatpaked or running on the host), and whether it uses Wayland.

However, even if an app is sandboxed it can still have unlimited access to e.g. your home folder or the webcam, if these are defined as static permissions in the Flatpak manifest.

While for some apps there is no alternative to this (e.g. IDEs are probably always going to need access to the file system), in many cases there are more secure portal APIs through which people can allow limited one-time access to various resources.

For example, if you switch an app from using the old GtkFileChooser to the portal-based GtkFileChooserNative you can avoid requiring a sandbox hole for the file system.

All of the above is of course a lot worse if the app also has internet access, since that can make security issues remotely exploitable and allows malicious apps to potentially exfiltrate user data.

While very important, sandboxing is not the entire story here though. Public auditability of the code is also very important for ensuring the security of an app, especially for apps which have a lot of permissions. This is also taken into consideration to assess the overall safety of an app, as a practical advantage of software freedom.

Folding all of these factors into a single rating at scale isn’t easy. I expect we’ll continue to iterate on this over the next few cycles, but I think what we have in 41 is a great step in the right direction.

Hardware Support

With GNOME Mobile progressing nicely and large parts of our app ecosystem going adaptive it’s becoming more important to be able to check whether an app is adaptive before installing it. However, while adaptiveness is the most prominent use case for the hardware support tile at the moment, it’s not the only one.

The hardware support tile is a generic way to display which input and output devices an app supports or requires, and whether they match the currently available hardware. For example, this can also be used to communicate whether an app is fully keyboard-accessible or requires a gamepad.

Age Rating

Age ratings (via OARS) have been in Software for years, but we’ve wanted to present this information in a better way for some time.

The context tile we’re introducing in 41 shows the reasons for the rating at a glance, rather than just a rating.

The dialog shows more information on the exact types of content the app includes, though the current implementation is not quite the design we’d like here eventually. Due to technical constraints we currently list every single type of content and whether or not the app contains it, but ideally it would only show broad categories for things the app doesn’t contain. This will hopefully be improved next cycle to make the list easier to scan.

Metadata Matters

No matter how good an app store itself is, its appeal for people ultimately comes from the apps within it. Luckily GNOME has a sizable ecosystem of cool third party apps these days, exactly the kinds of apps people are looking to discover when they open Software.

However, not all of these apps look as good as they could in Software currently due to incomplete, outdated, or low quality metadata.

If the version of Adwaita on your screenshots is so old that people get nostalgic it’s probably time to take new ones ;)

Additionally, Software 41 comes with some changes to how app metadata is presented (e.g. context tiles, larger screenshots), which make it more prominently visible than before.

This means now is the perfect moment to review and update your app metadata and make a new release ahead of the GNOME 41 release in a few weeks.

Lucky for you I already wrote a blog post walking you through the different kinds of metadata your app needs to really shine in Software 41. Please check it out and update your apps!


Software 41 was a real team effort, and I’d like to thank everyone who helped make it happen, especially Philip Withnall, Phaedrus Leeds, Adrien Plazas, and Milan Crha for doing most of the implementation work, but also Allan Day and Jakub Steiner for helping with various aspects of the design.

This is also a cool success story for cross-company upstream collaboration with people from Endless, Purism, and Red Hat all working together on an upstream-first product initiative. High fives all around!

September 07, 2021

Get your apps ready for Software 41

Software 41 will be released with the rest of GNOME 41 in a few weeks, and it brings a number of changes to how app metadata is presented, including the newly added hardware support information, larger screenshots, more visible age ratings, and more.

If you haven’t updated your app’s metadata in a while this is the perfect moment to review what you have, update what’s missing, and release a new version ahead of the GNOME 41 release!

In this blog post I’ll walk you through the different kinds of metadata your app needs to really shine in Software 41, and best practices for adding it.

App Summary

The app summary is a very short description that gives people an idea of what the app does. It’s often used in combination with the app name, e.g. on banners and app tiles.

If your summary is ellipsized on the app tile, you know what to do :)

In Software 41 we’re using the summary much more prominently than before, so it’s quite important for making your app look good. In particular, make sure to keep it short (we recommend below 35 characters, but the shorter the better), or it will look weird or be ellipsized.

Writing a good summary

The summary should answer the question “What superpower does this app give me?”. It doesn’t need to comprehensively describe everything the app does, as long as it highlights one important aspect and makes it clear why it’s valuable.

Some general guidance:

  • Keep it short (less than 35 characters)
  • Be engaging and make people curious
  • Use imperative if possible (e.g. “Browse the web” instead of “A web browser”)
  • Use sentence case

Things to avoid:

  • Technical details (e.g. the toolkit or programming language)
  • Structures like “GUI for X” or “Client for Y”
  • Mentioning the target environment (e.g. “X for GNOME”)
  • Repeating the app’s name
  • Overly generic adjectives like “simple”, “easy”, “powerful”, etc.
  • Articles (e.g. “A …” or “An …”)
  • Punctuation (e.g. a period at the end)
  • Title case (e.g. “Chat With Your Team”)

Good examples:

  • Maps: “Find places around the world”
  • Shortwave: “Listen to internet radio”
  • Byte: “Rediscover your music”

Code Example

The app summary is set in your appdata XML file, and looks like this:

<summary>Listen to internet radio</summary>

Appstream documentation

Device Support

Hardware support metadata describes what kinds of input and output devices an app supports, or requires to be useful. This is a relatively recent addition to appstream, and will be displayed in Software 41 for the first time.

The primary use case for this at the moment is devices with small displays needing to query whether an app will fit on the screen, but there are all sorts of other uses for this, e.g. to indicate that an app is not fully keyboard-accessible or that a game needs a gamepad.

Code Examples

Appstream has a way for apps to declare what hardware they absolutely need (<requires>), and things that are known to work (<recommends>). You can use these two tags in your appdata XML to specify what hardware is supported.

For screen size, test the minimum size your app can scale to and put that in as a requirement. The “ge” stands for “greater or equal”, so this is what you’d do if your app can scale to phone sizes (360px or larger):

<requires>
  <display_length compare="ge">360</display_length>
</requires>

Note: The appstream spec also specifies some named sizes (xsmall, small, large, etc.), which are broken and should not be used. It’s likely that they’ll be removed in the future, but for now just don’t use them.

Input devices can be specified like so:


Appstream documentation


Screenshots

If you want your app to make a good impression, good screenshots are a must-have. This is especially true in 41, because screenshots are much larger and more prominent now.

The new, larger screenshot carousel

Some general guidance for taking good app screenshots:

  • Provide multiple screenshots showing off the main areas of the app
  • Use window screenshots with a baked-in shadow (you can easily take them with Alt+PrintScr).
  • For apps that show content (e.g. media apps, chat apps, creative tools, file viewers, etc.) the quality of the example content makes the screenshot. Setting up a great screenshot with content takes a ton of time, but it’s absolutely worth it.
  • If you’re only doing screenshots in a single language/locale, use en-US.
  • Don’t force a large size if your app is generally used at small sizes. If the app is e.g. a small utility app a tiny window size is fine.

Before taking your screenshots make sure your system is using GNOME default settings. If your distribution changes these, an easy way to make sure they are all correct is to take them in a VM with Fedora, Arch, or something else that keeps to defaults. In particular, make sure you have the following settings:

  • System font: Cantarell
  • GTK stylesheet: Adwaita
  • System icons: Adwaita Icon Theme
  • Window controls: Close button only, on the right side

Things to avoid:

  • Fullscreen screenshots with no borders or shadow.
  • Awkward aspect ratios. Use what feels natural for the app, ignore the 16:9 recommendation in the appstream spec.
  • Huge window sizes. They make it very hard to see things in the carousel. Something like 800×600 is a good starting point for most apps.

Code Example

Screenshots are defined in the appdata XML file and consist of an image and a caption which describes the image.

<screenshots>
  <screenshot type="default">
    <image>https://example.org/screenshot.png</image>
    <caption>Screenshot caption</caption>
  </screenshot>
</screenshots>

Appstream documentation

Other Metadata

The things I’ve covered in detail here are the most prominent pieces of metadata, but there are also a number of others which are less visible and less work to add, but nevertheless important.

These include links to various websites for the project, all of which are also defined in the appstream XML.

  • App website (or code repository if it has no dedicated website)
  • Issue tracker
  • Donations
  • Translations
  • Online help or user documentation

When making releases it’s also important to add release notes for the new version to your appdata file, otherwise the version history box on your details page looks pretty sad:


I hope this has been useful, and inspired you to update your metadata today!

Most of these things can be updated in a few minutes, and it’s really worth it. It doesn’t just make your app look good, but the ecosystem as a whole.

Thanks in advance :)

September 06, 2021

Wrap-up: All about my Outreachy journey

Hello everyone! It’s officially the end of my Outreachy internship. I can’t believe this is the last blog post I am writing on it. It seems like yesterday, when I received the selection mail and was about to begin my journey as an Outreachy intern with the GNOME organisation.

Outreachy gave me the platform to network with many like-minded fellows who were just as enthusiastic about open source. I met a lot of people and have formed lifelong connections! I can’t be more thankful to the organisers for making the environment so growth-friendly and inclusive. I loved sharing my thoughts in the biweekly chats and listening to fellow interns’ experiences.

At the beginning of the internship, I was not familiar with writing blogs, which scared me. I thought that the internship would be very hectic with all of this, and that I would not manage my time properly. But the opportunity to document my internship in the form of blog posts, and everyone’s appreciation, has motivated me to carry on writing and having people read my work.

About my project at GNOME – Make GNOME asynchronous!

The goal of the internship was to allow language bindings to use async/await-style programming with GNOME’s asynchronous operations.

Here, the idea of asynchronous operations is to execute a task “in the background” without the user waiting for the job to finish. As we already know, GNOME’s programming platform is based on C libraries, and in C we can only use callbacks to implement asynchronous operations. To complete the goal of the internship, GObject Introspection played an important role: it connects GNOME’s platform libraries, written in C, to other programming languages like JavaScript, Python, and Vala, which use async/await-style programming with GNOME asynchronous operations. This work will eventually allow even C to use async/await-style programming with GNOME asynchronous operations.

In my internship period, the first issue I started with was “Exception handling in promisify function“!

A Promise is an object that represents the intermediate state of an operation; it lets callers use the async/await keywords without the underlying function actually being written in an async/await style.

This issue helped me understand how asynchronous programming works in JavaScript and how to transform callback style into async/await style. Below are basic examples of both styles.

Before - Callback style
After - Async/Await style

I completed this task successfully, and it was closed in MR-631. Then I started on the central issue of the internship; all of those changes have been pushed to MR-278.

To enable async/await style in all the programming languages, GObject Introspection has to know how to pair load_contents_finish with load_contents_async. As the “after” picture above shows, in the async/await style the user no longer calls load_contents_finish explicitly.

So, to pair load_contents_finish and load_contents_async, I added three annotations to GObject Introspection that provide this information: ‘FINISH_FUNC’, ‘SYNC_FUNC’, and ‘ASYNC_FUNC’. I also added corresponding test cases to check the correctness of all the annotations.

Next steps to complete the project!

The remaining work on this task is to add code to GJS that uses these annotations, and to make the following changes:

  • To add two 10-bit-wide slots to FunctionBlob and VFuncBlob in girepository/gitypelib-internal.h, and
  • To add a 1-bit flag to distinguish between the case of an async function and a sync function. (probably)
    • The case of an async function is where slot 1 is ‘finish’ and slot 2 is ‘sync’.
    • And the case of a sync function is where slot 1 is ‘async’ and slot 2 is unused.
  • Then the corresponding C accessor APIs will have to be added. As suggested by @ptomato, they could be called g_callable_info_get_async_function(), g_callable_info_get_sync_function(), and g_callable_info_get_finish_function() (each returning NULL if there is no annotation, or in the case of a callback).

The above changes will enable the async/await style in all the programming languages, including C, allowing C to use async/await-style programming with GNOME asynchronous operations.

My growth through the experience!

The Outreachy internship has added tremendous value to my upskilling. I can now comfortably collaborate with diverse teams working remotely through effective communication. I realised how easy things get, especially when you keep at them. Immersing myself in such a large codebase seemed intimidating at first, but I believe it was all worth it. I also learnt how to write clean code, document changes, and improve the quality of my Git commit logs, each of which is required to become a good software developer.

I am very thankful to my mentor Philip Chimento for being so supportive. He always guided me in the right direction and gave constructive feedback on my work. Initially, I was hesitant about coordinating because of the vast timezone gap, but we successfully met and were productive during the internship tenure. He guided me through multiple roadblocks and helped me adapt better approaches to tasks throughout the internship. He helped me in the project and provided inputs to help me improve the blog content.

I am also immensely grateful to the entire GNOME community for providing me with a platform to overcome my fears and giving me a fantastic opportunity to be a speaker at the international conference of GNOME (GUADEC 2021) to present my project.

Outreachy has been a great opportunity for me to interact with the real-world FOSS development environment. I got to expand my network and have formed lifelong connections! It has been a very fulfilling and wonderful experience. I learned so much and went through experiences that are, and will continue to be, helpful in my career development, both as a person and as a professional.

September 05, 2021

Magic Bricks

My first encounter with a digital camera was around 1996, when my university IT department acquired the Apple QuickTake 100. While the quality of the output was laughable compared to its analog counterparts, the convenience of such a device was clear. It wasn’t too long after that when I got my first digital camera, the Ricoh RDC-5000, which I used to capture the new worlds I visited. Many devices followed.

Ever since I switched from photo to video as my default media format of capturing places and moments, there have been three major attributes I would seek out of a camera. A nice separation of subjects using shallow depth of field, a dreamy slow motion look using high frame rates and a pleasing image using optical or electronic stabilization.

All of these fields have been getting steady improvements, and I always fantasize about how my young self would react to seeing footage taken from a tiny little rectangle I carry around in my pocket. It is just incredible that the footage below is hand-held and the cameraman is not riding a onewheel or using a gimbal, steadycam, or some other aid. Just ninja walking. When do we stop calling these magic bricks phones?

Skate Your Name: Patrik from jimmac on Vimeo. Music arranged on Polyend Tracker.

GNOME Radio 0.4.0 (NPR) for GNOME 41

GNOME Radio 0.4.0 for GNOME 41 is available with National Public Radio (NPR) live audio broadcasts.

GNOME Radio 0.4.0 will be the successor to GNOME Internet Radio Locator built for GNOME 41 with Cairo, Clutter, Champlain, Maps, GStreamer, and GTK+ 4.0.

The core idea behind GNOME Radio is

Map Audio
Locate and select audio from a map

Play Radio
Play and listen to radio from the map

Design Philosophy
C, Cairo, Clutter, Champlain, Maps, GStreamer, GTK+

Stable source release of GNOME Radio 0.4.0 is available from

Fedora Core 34 Binary RPM for x86_64 is available from

Fedora Core 34 Source RPM is available from

Alternatively, you can clone the development source code from GNOME Gitlab at

git clone
cd gnome-radio
sudo make install

Three options for running GNOME Radio 0.4.0 on GNOME 41 from GNOME Shell and GNOME Terminal:

1. Click on Activities and select the GNOME Radio blue dot icon.
2. Search for “gnome-radio” in the search box.
3. Type “gnome-radio” and hit Enter in GNOME Terminal if you are unable to find the GNOME Radio blue dot icon in GNOME 41 and GNOME Shell.

September 01, 2021

GNOME themes, an incomplete status report, and how you can help

"Themes in GNOME" is a complicated topic in technical and social terms. Technically there are a lot of incomplete moving parts; socially there is a lot of missing documentation to be written, a lot of miscommunication and mismatched expectations.

The following is a brief and incomplete, but hopefully encouraging, summary of the status of themes in GNOME. I want to give you an overall picture of the status of things, and more importantly, an idea of how you can help. This is not a problem that can be solved by a small team of platform developers.

I wish to thank Alexander Mikhaylenko for providing most of the knowledge in this post.

Frame of reference

First, I urge you to read Cassidy James Blaede's comprehensive "The Need for a FreeDesktop Dark Style Preference". That gives an excellent, well-researched introduction to the "dark style" problem, the status quo on other platforms, and exploratory plans for GNOME and Elementary from 2019.

Go ahead, read it. It's very good.

There is also a GUADEC talk about Cassidy's research if you prefer to watch a video.

Two key take-aways from this: First, about this being a preference, not a system-enforced setting:

I’m explicitly using the language “Dark Style Preference” for a reason! As you’ll read further on, it’s important that this is treated as a user “preference,” not an explicit “mode” or strictly-enforced “setting.” It’s also not a “theme” in the sense that it just swaps out some assets, but is a way for the OS to support a user expressing a preference, and apps to respond to that preference.

Second, about the accessibility implications:

Clearly there’s an accessibility and usability angle here. And as with other accessibility efforts, it’s important to not relegate a dark style preference to a buried “Universal Access” or “Accessibility” feature, as that makes it less discoverable, less tested, and less likely to be used by folks who could greatly benefit, but don’t consider themselves “disabled.”

Libadwaita and the rest of the ecosystem

Read the libadwaita roadmap; it is very short, but links to very interesting issues on gitlab.

For example, this merge request is for an API to query the dark style and high-contrast preferences. It has links to pending work in other parts of the platform: libhandy, gsettings schemas, portals so that containerized applications can query those preferences.

As far as I understand it, applications that just use GTK3 or libhandy can opt in to supporting the dark style preference; it is opt-in because doing that unconditionally in GTK/libhandy right now would break existing applications. If your app uses libadwaita, it is assumed that you have opted into supporting that preference: libadwaita's widgets already make that assumption, and since libadwaita is not API-stable yet, it can make that assumption from the beginning.

There is discussion of the accessibility implications in the design mockups.

CSS parity across implementations

In GNOME we have three implementations of CSS:

  • librsvg uses servo's engine for CSS selector matching, and micro-parsers for CSS values based on servo's cssparser.

  • GTK has its own CSS parser and processor.

  • Gnome-shell uses an embedded version of libcroco for parsing, but it does most of the selector matching and cascading with gnome-shell's own Shell Toolkit code.

None of those implementations supports @media queries nor custom properties with var(). That is, unlike in the web platform, GNOME applications cannot have this in their CSS:

@media (prefers-color-scheme: dark) {
  /* styles for dark style */
}

@media (prefers-color-scheme: light) {
  /* styles for light style */
}

Or even declaring colors in a civilized fashion:

:root {
  --main-bg-color: pink;
}

some_widget {
  background-color: var(--main-bg-color);
}

Or combining the two:

@media (prefers-color-scheme: dark) {
  :root {
    --main-bg-color: /* some nice dark background color */;
    --main-fg-color: /* a contrasty light foreground */;
  }
}

@media (prefers-color-scheme: light) {
  :root {
    --main-bg-color: /* some nice light background color */;
    --main-fg-color: /* a contrasty dark foreground */;
  }
}

some_widget {
  background-color: var(--main-bg-color);
}

Boom. I think this would remove some workarounds we have right now:

  • Just like GTK, libadwaita generates four variants of the system's stylesheet using scss (regular, dark, high-contrast, high-contrast-dark). This would be obviated with @media queries for prefers-color-scheme, prefers-contrast, inverted-colors as in the web platform.

  • GTK has a custom @define-color keyword, but neither gnome-shell nor librsvg support that. This would be obviated with CSS custom properties - the var() mechanism. (I don't know if some "environmental" stuff would be better done as env(), but none of the three implementations support that, either.)

Accent colors

They are currently implemented with GTK's @define-color, which is not ideal if the colors have to trickle down from GTK to SVG icons, since librsvg doesn't do @define-color - it would rather have var() instead.

Of course, gnome-shell's libcroco doesn't do @define-color either.

Look for @accent_color, @accent_bg_color, @warning_color, etc. in the default stylesheet, or better yet, write documentation!

The default style:

Default blue style

Accent color set to orange (e.g. tweak it in GTK's CSS inspector):

Orange accents for widgets

/* Standalone, e.g. the "Page 1" label */
@define-color accent_color @orange_5;

/* background+text pair */
@define-color accent_bg_color @orange_4;
@define-color accent_fg_color white;

Custom widgets

Again, your app's custom stylesheet for its custom widgets can use the colors defined through @define-color from the system's stylesheet.

Recoloring styles

You will be able to do this after it gets merged into the main branch, e.g. recolor everything to sepia:

Adwaita recolored to sepia

@define-color headerbar_bg_color #eedcbf;
@define-color headerbar_fg_color #483a22;

@define-color bg_color #f9f3e9;
@define-color fg_color #483a22;

@define-color dark_fill_color shade(#f9f3e9, .95);

@define-color accent_bg_color @orange_4;
@define-color accent_color @orange_5;

Of course shade() is not web-platform CSS, either. We could keep it, or redo it by implementing a calc()-like function for color values.
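As an illustration, newer web-platform CSS can express a similar derived color with color-mix(). This is a sketch of a rough equivalent, not something GTK's CSS supports:

```css
/* GTK-specific: multiply the color's channels by 0.95 */
@define-color dark_fill_color shade(#f9f3e9, .95);

/* Web-platform sketch: mixing in 5% black approximates the same shade */
:root {
  --dark-fill-color: color-mix(in srgb, #f9f3e9 95%, black);
}
```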

Recoloring icons

Currently GTK takes some defined colors and creates a chunk of CSS to inject into SVG for icons. This has some problems.

There is also some discussion about standardizing recolorable icons across desktop environments.

How you can help

Implement support for @media queries in our three CSS implementations (librsvg, gnome-shell, GTK). Decide how CSS media features like prefers-color-scheme, prefers-contrast, inverted-colors should interact with the GNOME's themes and accessibility, and decide if we should use them for familiarity with the web platform, or if we need media features with different names.

Implement support for CSS custom properties - var() in our three CSS implementations. Decide if we should replace the current @define-color with that (note that @define-color is only in GTK, but not in librsvg or gnome-shell).

See the libadwaita roadmap and help out!

Port applications to use the proposed APIs for querying the dark style preference. There are a bunch of hacky ways of doing it right now; they need to be migrated to the new system.

Personally, I would love help finishing the port of gnome-shell's styles to Rust - this is part of unifying librsvg's and gnome-shell's CSS machinery.

August 31, 2021

libinput and high-resolution wheel scrolling

Gut Ding braucht Weile - good things take time, as the German saying goes. Almost three years ago, we added high-resolution wheel scrolling to the kernel (v5.0). The desktop stack however was first lagging and eventually left behind (except for an update a year ago or so, see here). However, I'm happy to announce that thanks to José Expósito's efforts, we have now pushed it across the line. So - in a socially distanced manner and masked up to your eyebrows - gather round children, for it is storytime.

Historical History

In the beginning, there was the wheel detent. Or rather there were 24 of them, dividing a 360 degree [1] movement of a wheel into a neat set of clicks, 15 degrees each. libinput exposed those wheel clicks as part of the "pointer axis" namespace and you could get the click count with libinput_event_pointer_get_axis_discrete() (announced here). The degree value is exposed via libinput_event_pointer_get_axis_value(). Other scroll backends (finger-scrolling or button-based scrolling) expose the pixel-precise value via that same function.

In a "recent" Microsoft Windows version (Vista!), MS added the ability for wheels to trigger more than 24 clicks per rotation. The MS Windows API now treats one "traditional" wheel click as a value of 120, anything finer-grained will be a fraction thereof. You may have a mouse that triggers quarter-wheel clicks, each sending a value of 30. This makes for smoother scrolling and is supported(-ish) by a lot of mice introduced in the last 10 years [2]. Obviously, three small scrolls are nicer than one large scroll, so the UX is less bad than before.

Now it's time for libinput to catch up with Windows Vista! For $reasons, the existing pointer axis API couldn't be changed to accommodate the high-res values, so a new API was added for scroll events. Read on for the details; you will believe what happens next.

Out with the old, in with the new

As of libinput 1.19, libinput has three new events: LIBINPUT_EVENT_POINTER_SCROLL_WHEEL, LIBINPUT_EVENT_POINTER_SCROLL_FINGER, and LIBINPUT_EVENT_POINTER_SCROLL_CONTINUOUS. These events reflect, perhaps unsurprisingly, scroll movements of a wheel, a finger, or along a continuous axis (e.g. button scrolling). And they replace the old event LIBINPUT_EVENT_POINTER_AXIS. Those familiar with libinput will notice that the new event names now encode the scroll source in the event name. This makes them slightly more flexible and saves callers an extra call.

In terms of actual API, the new events come with two new functions. The first is libinput_event_pointer_get_scroll_value(): for the FINGER and CONTINUOUS events, the value returned is in "pixels" [3]; for the new WHEEL events, the value is in degrees. IOW this is a drop-in replacement for the old libinput_event_pointer_get_axis_value() function. The second is libinput_event_pointer_get_scroll_value_v120() which, for WHEEL events, returns the 120-based logical units the kernel uses as well. libinput_event_pointer_has_axis() returns true if the given axis has a value, just as before. With those calls you now get the data for the new events.

Backwards compatibility

To ensure backwards compatibility, libinput generates both old and new events so the rule for callers is: if you want to support the new events, just ignore the old ones completely. libinput also guarantees new events even on pre-5.0 kernels. This makes the old and new code easy to ifdef out, and once you get past the immediate event handling the code paths are virtually identical.

When, oh when?

These changes have been merged into the libinput main branch and will be part of libinput 1.19, which is due to be released over the next month or so - feel free to work backwards from that for your favourite distribution.

Having said that, libinput is merely the lowest block in the Jenga tower that is the desktop stack. José linked to the various MRs in the upstream libinput MR, so if you're on your seat's edge waiting for e.g. GTK to get this, well, there's an MR for that.

[1] That's degrees of an angle, not Fahrenheit
[2] As usual, on a significant number of those you'll need to know whatever proprietary protocol the vendor deemed to be important IP. Older MS mice stand out here because they use straight HID.
[3] libinput doesn't really have a concept of pixels, but it has a normalized pixel that movements are defined as. Most callers take that as real pixels except for the high-resolution displays where it's appropriately scaled.

Flatpak portals - how do they work?

I've been working on portals recently and one of the issues for me was that the documentation just didn't quite hit the sweet spot. At least the bits I found were either too high-level or too implementation-specific. So here's a set of notes on how a portal works, in the hope that this is actually correct.

First, portals are supposed to be a way for sandboxed applications (flatpaks) to trigger functionality they don't have direct access to. The prime example: opening a file without the application having access to $HOME. This is done by the applications talking to portals instead of doing the functionality themselves.

There is really only one portal process: /usr/libexec/xdg-desktop-portal, started as a systemd user service. That process owns a DBus bus name (org.freedesktop.portal.Desktop) and an object on that name (/org/freedesktop/portal/desktop). You can see that bus name and object with D-Feet; from DBus' POV there's nothing special about it. What makes it the portal is simply that the application running inside the sandbox can talk to that DBus name and thus call the various methods. Obviously the xdg-desktop-portal needs to run outside the sandbox to do its thing.

There are multiple portal interfaces, all available on that one object. Those interfaces have names like org.freedesktop.portal.FileChooser (to open/save files). The xdg-desktop-portal implements those interfaces and thus handles any method calls on them. So when an application is sandboxed, it doesn't implement the functionality itself; instead it calls e.g. the OpenFile() method on the org.freedesktop.portal.FileChooser interface. It then gets an fd back and can read the content of that file without needing full access to the file system.

Some interfaces are fully handled within xdg-desktop-portal. For example, the Camera portal checks a few things internally and pops up a dialog for the user to confirm access if needed [1], but otherwise there's nothing else involved with this specific method call.

Other interfaces have a backend "implementation" DBus interface. For example, the org.freedesktop.portal.FileChooser interface has a org.freedesktop.impl.portal.FileChooser (notice the "impl") counterpart. xdg-desktop-portal does not implement those impl.portals; it instead routes the DBus calls to the respective "impl.portal". Your sandboxed application calls OpenFile(), xdg-desktop-portal now calls OpenFile() on org.freedesktop.impl.portal.FileChooser. That interface returns a value, xdg-desktop-portal extracts it and returns it back to the application in response to the original OpenFile() call.

What provides those impl.portals doesn't matter to xdg-desktop-portal, and this is where things are hot-swappable [2]. There are GTK- and Qt-specific portals provided by xdg-desktop-portal-gtk and xdg-desktop-portal-kde, and another one is provided by GNOME Shell directly. You can check the files in /usr/share/xdg-desktop-portal/portals/ and see which impl.portal is provided on which bus name. The reason those impl.portals exist is so they can be native to the desktop environment - regardless of what application you're running, and with a generic xdg-desktop-portal, you see the native file chooser dialog for your desktop environment.

So the full call sequence is:

  • At startup, xdg-desktop-portal parses the /usr/share/xdg-desktop-portal/portals/*.portal files to know which impl.portal interface is provided on which bus name
  • The application calls OpenFile() on the org.freedesktop.portal.FileChooser interface on the object path /org/freedesktop/portal/desktop. It can do so because the bus name this object sits on is not restricted by the sandbox
  • xdg-desktop-portal receives that call. This is a portal with an impl.portal backend, so xdg-desktop-portal calls OpenFile() on the bus name that provides the org.freedesktop.impl.portal.FileChooser interface (as previously established by reading the *.portal files)
  • Assuming xdg-desktop-portal-gtk provides that portal at the moment, that process now pops up a GTK FileChooser dialog that runs outside the sandbox. User selects a file
  • xdg-desktop-portal-gtk sends back the fd for the file to the xdg-desktop-portal, and the impl.portal parts are done
  • xdg-desktop-portal receives that fd and sends it back as reply to the OpenFile() method in the normal portal
  • The application receives the fd and can read the file now
A few details here aren't fully correct, but it's correct enough to understand the sequence - the exact details depend on the method call anyway.

Finally: because of DBus restrictions, the various methods in the portal interfaces don't just reply with values. Instead, the xdg-desktop-portal creates a new org.freedesktop.portal.Request object and returns the object path for that. Once that's done the method is complete from DBus' POV. When the actual return value arrives (e.g. the fd), that value is passed via a signal on that Request object, which is then destroyed. This roundabout way is done for purely technical reasons, regular DBus methods would time out while the user picks a file path.
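The shape of that round trip can be modeled with plain function pointers. The following is a toy sketch with made-up types and a made-up fd value - not the real DBus or portal API - just to illustrate "the method returns a handle now, the value arrives via a callback later":

```c
/* Toy model of the Request pattern: hypothetical types, no real
 * DBus involved. */
typedef void (*response_cb) (int fd, void *user_data);

struct request {
    response_cb callback;
    void       *user_data;
};

/* OpenFile() analogue: completes immediately from the caller's
 * point of view (like returning a Request object path); the actual
 * value comes later. */
static struct request
open_file_async (response_cb callback, void *user_data)
{
    struct request req = { callback, user_data };
    return req;
}

/* Invoked once the user has picked a file: "emit" the Response
 * signal with the resulting fd. */
static void
request_complete (struct request *req, int fd)
{
    req->callback (fd, req->user_data);
}

/* Usage: the "application" asks for a file, gets the request back
 * at once, and only receives the fd when the response fires. */
static void
on_response (int fd, void *user_data)
{
    *(int *) user_data = fd;
}

static int
demo (void)
{
    int received = -1;
    struct request req = open_file_async (on_response, &received);

    /* ...time passes, the user picks a file outside the sandbox...
     * (5 is an arbitrary stand-in for the returned fd) */
    request_complete (&req, 5);
    return received;
}
```

The real protocol adds object paths and signal subscriptions on top, but the control flow is the same: no caller blocks in a DBus method while the user thinks.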

Anyway. Maybe this helps someone understand how the portal bits fit together.

[1] it does so using another portal but let's ignore that
[2] not really hot-swappable though. You need to restart xdg-desktop-portal but not your host. So luke-warm-swappable only

Edit Sep 01: clarify that it's not GTK/Qt providing the portals, but xdg-desktop-portal-gtk and -kde

August 27, 2021

Learning Rust: Interfacing with C

Why Rust? I had spent the last two rainy days of my summer vacation on learning Rust. Rust is becoming ever more popular and is even making its way into the Linux kernel – so it feels like something I should know a little about. There have been a lot of new languages in recent years, like Kotlin or Go. None of them are particularly attractive to me personally, as their strengths and “selling points” just don’t apply enough to what I do – so far, that has been covered rather well between C, Python, and JavaScript.

August 26, 2021

Pango updates

I’ve spent some time on Pango, recently. Here is a little update on the feature work that I’ve done there. All of these changes will appear in Pango 1.50 and GTK 4.6.

The general directions of this work are:

  • Take advantage of the fact that we are now using harfbuzz on all platforms. Among other things, this gives us much easier access to font information.
  • Match CSS where it makes sense. If nothing else, this makes it much easier to connect new Pango features to the CSS machinery in GTK.

CSS features

Let’s start with the second point: matching CSS.

Line spacing has historically been a bit painful in GtkTextView. You can set distances before and after paragraphs, and between wrapped lines inside a paragraph. But this does not take font sizes into account—it is a fixed number of pixels.

A while ago, I added a line-spacing factor to Pango, which was meant to help with the font size dependency. You basically tell Pango: I want the baselines of this paragraph spaced 1.33 times as far apart as they would normally be. The remaining problem is that Pango handles text one paragraph at a time. So as far as it is concerned, there is no previous baseline above the first line in a paragraph, and it does not increase the spacing between paragraphs.

The CSS solution to this problem is to just make the lines themselves taller, and place them flush next to each other.  With this approach, you need to be a little careful to make sure that you still get consistent baseline-to-baseline distances. But at least it solves the paragraph spacing issue.

Pango recently gained a line-height attribute that does just that, and GTK now supports the corresponding CSS property.
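To see why the taller-lines approach keeps baseline-to-baseline distances consistent, here is the arithmetic as a sketch (made-up numbers and CSS-style half-leading, not Pango code): each line box is factor × (ascent + descent) tall, the extra space is split above and below, and flush stacking makes every baseline distance equal to the box height - across paragraph boundaries too.

```c
/* Logical height of a line box under a CSS-style line-height
 * factor. */
static double
line_logical_height (double ascent, double descent, double factor)
{
    return factor * (ascent + descent);
}

/* Baseline position of the nth line (0-based), lines stacked
 * flush. Half the leading goes above the ascent, half below the
 * descent, so the baseline sits at ascent + leading / 2 within
 * each line box. */
static double
baseline_of_line (int n, double ascent, double descent, double factor)
{
    double height  = line_logical_height (ascent, descent, factor);
    double leading = height - (ascent + descent); /* extra space */

    return n * height + leading / 2.0 + ascent;
}
```

With ascent 8, descent 2, and factor 1.5, every line box is 15 units tall, and consecutive baselines are exactly 15 units apart - whether or not a paragraph boundary sits between them.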

Another feature that has come to Pango from the CSS side is support for text transformation (also in the screenshot). This lets you change the capitalization of text. Note that this is just for presentation purposes—if you select STRASSE in the example and copy it to the clipboard,  you get the original straße.

And again, GTK supports the corresponding CSS property.

Font features

As I said, harfbuzz makes it much easier for us to access features of the fonts we use. One of these that I have recently looked into is caret metrics. High-quality italic fonts can contain information about the slope at which the text caret is best drawn to match the text:

I’ve added a new api to get this information, and made GTK use it, with this result:

Another useful bit of font information concerns the placement of carets inside ligatures.

Historically, Pango has just divided the width of the glyph evenly among the characters forming the ligature (w and i, in this example), but high-quality fonts can provide this information. This is most relevant for scripts using many ligatures, such as Arabic. I’ve made Pango use the ligature caret data if it is available from the font, and got this result:

The wi ligature in this test is what I could come up with after struggling for a few hours with fontforge (clearly, font design is not in my future). The only other fonts I’ve found with ligature caret information are Arabic, and I sadly can’t read or write that script.

The last feature closed a 15 year old bug – not something you get to do every day!

“” is Online!

Our Apps for GNOME website is now available at! It features the best applications in the GNOME ecosystem. Let’s quickly get into the most exciting aspects of Apps for GNOME.

    • Focus on participation. The app pages are designed with a focus on getting users involved in the development of the application. Whether it is feedback, translation, or financial support of the project. Apps for GNOME offers a lot of ways to get involved in app development.
    • Internationalization. This website is the first source that provides translated information about apps in the GNOME ecosystem. This is another small step towards lowering the barrier for getting into using and contributing to GNOME.
    • Up-to-date information. Apps for GNOME almost exclusively relies on existing metadata that, for example, are used in Software or on Flathub. Therefore it does not require extra work for app maintainers to keep information up-to-date. Better yet, Apps for GNOME is an additional incentive for maintainers and translators to optimize and translate that information.
    • Featuring apps that don’t fit on Flathub.  It’s not technically feasible to distribute apps like Software, Files, or Terminal on Flathub. Apps for GNOME gives those apps the web presence they are missing out on Flathub.

We certainly hope to extend those aspects in the future, as this announcement only concludes a rapid development cycle of fewer than eight weeks from the first idea until today. We are looking forward to your feedback, ideas, and contributions to this project!

If you are interested in some background information you can check out my previous blog posts. Let’s conclude with an attempt to catch up with acknowledgments: Kind thanks to Bilal and Felix for providing an AppStream library and libflatpak bindings for Rust. Thanks also to all the translators who make such an internationalized project possible in the first place. Last but not least, a huge shoutout to Andrea and the rest of the infrastructure team for the prompt support in realizing this project and keeping GNOME online. You folks are the best 🙂

August 25, 2021

Publishing your documentation

The main function of library-web, the tool that published the API reference of the various GNOME libraries, was to take release archives and put their contents in a location that would be visible to a web server. In 2006, this was the apex of automation, of course. These days? Not so much.

Since library-web is going the way of the Dodo, and we do have better ways to automate the build and publishing of files with GitLab, how do we replace library-web in 2021? The answer is, unsurprisingly: continuous integration pipelines.

I will assume that you’re already building—and testing—your library using GitLab’s CI; if you aren’t, then you have bigger problems than just publishing your API.

So, let’s start with these preconditions:

If your project doesn’t satisfy these preconditions you might want to work on doing so; alternatively, you can implement your own CI pipeline.

Let’s start with a simple job template:

# Expected variables:
# PROJECT_DEPS: the dependencies for your own project
# MESON_VERSION: the version of Meson you depend on
# MESON_EXTRA_FLAGS: additional Meson setup options
#   you wish to pass to the configuration phase
# DOCS_FLAGS: the Meson setup option for enabling the
#   documentation, if any
# DOCS_PATH: the path of the generated reference,
#   relative to the build root
.gidocgen-build:
  image: fedora:latest
  script:
    - export PATH="$HOME/.local/bin:$PATH"
    - >
      dnf install -y
    - dnf install -y ${PROJECT_DEPS}
    - >
      pip3 install
    - meson setup ${MESON_EXTRA_FLAGS} ${DOCS_FLAGS} _docs .
    - meson compile -C _docs
    - |
      pushd "_docs/${DOCS_PATH}" > /dev/null
      tar cfJ ${CI_PROJECT_NAME}-docs.tar.xz .
      popd > /dev/null
    - mv _docs/${DOCS_PATH}/${CI_PROJECT_NAME}-docs.tar.xz .
  artifacts:
    when: always
    name: 'Documentation'
    expose_as: 'Download the API reference'
    paths:
      - ${CI_PROJECT_NAME}-docs.tar.xz

This CI template will:

  • download all the required dependencies for building the API reference using gi-docgen
  • build your project, including the API reference
  • create an archive with the API reference
  • store the archive as a CI artefact that you can easily download

Incidentally, by adding a meson test -C _docs step to the script section, you can easily test your build as well; and if you have a test() target in your build that runs gi-docgen check, then you can verify that your documentation is always complete.

Now, all you have to do is create your own CI job that inherits from the template inside its own stage. I will use JSON-GLib as a reference:

stages:
  - docs

api-reference:
  stage: docs
  extends: .gidocgen-build
  needs: []
  variables:
    MESON_VERSION: 0.55.3
    DOCS_FLAGS: -Dgtk_doc=true
    DOCS_PATH: docs/json-glib-1.0

“What about gtk-doc!”, I hear from the back of the room. Well, fear not, because there’s a similar template you can use if you’re still using gtk-doc in your project:

# Expected variables:
# PROJECT_DEPS: the dependencies for your own project
# MESON_VERSION: the version of Meson you depend on
# MESON_EXTRA_FLAGS: additional Meson setup options you
#   wish to pass to the configuration phase
# DOCS_FLAGS: the Meson setup option for enabling the
#   documentation, if any
# DOCS_TARGET: the Meson target for building the
#   documentation, if any
# DOCS_PATH: the path of the generated reference,
#   relative to the build root
.gtkdoc-build:
  image: fedora:latest
  script:
    - export PATH="$HOME/.local/bin:$PATH"
    - >
      dnf install -y
    - dnf install -y ${PROJECT_DEPS}
    - pip3 install meson==${MESON_VERSION}
    - meson setup ${MESON_EXTRA_FLAGS} ${DOCS_FLAGS} _docs .
    # This is exceedingly annoying, but sadly it's how
    # gtk-doc works in Meson
    - ninja -C _docs ${DOCS_TARGET}
    - |
      pushd "_docs/${DOCS_PATH}" > /dev/null
      tar cfJ ${CI_PROJECT_NAME}-docs.tar.xz .
      popd > /dev/null
    - mv _docs/${DOCS_PATH}/${CI_PROJECT_NAME}-docs.tar.xz .
  artifacts:
    when: always
    name: 'Documentation'
    expose_as: 'Download the API reference'
    paths:
      - ${CI_PROJECT_NAME}-docs.tar.xz

And now you can use extends: .gtkdoc-build in your api-reference job.

Of course, this is just half of the job: the actual goal is to publish the documentation using GitLab’s Pages. For that, you will need another CI job in your pipeline, this time using the deploy stage:

stages:
  - docs
  - deploy

# ... the api-reference job goes here...

pages:
  stage: deploy
  needs: ['api-reference']
  script:
    - mkdir public && cd public
    - tar xfJ ../${CI_PROJECT_NAME}-docs.tar.xz
  artifacts:
    paths:
      - public
  only:
    - master
    - main

Now, once you push to your main development branch, your API reference will be built by your CI pipeline, and the results published in your project’s Pages space—like JSON-GLib.

The CI pipeline and GitLab Pages are also useful for building complex, static websites presenting multiple versions of the documentation; or presenting multiple libraries. An example of the former is libadwaita’s website, while an example of the latter is the GTK documentation website. I’ll write a blog post about them another time.

Given that the CI templates are pretty generic, I’m working on adding them into the GNOME ci-templates repository, so you will be able to use something like:

include: ''


include: ''

without having to copy-paste the template in your own .gitlab-ci.yml file.

The obvious limitation of this approach is that you will need to depend on the latest version of Fedora to build your project. Sadly, we cannot use Flatpak and the GNOME run time images for this, mainly because we are building libraries, not applications; and because extracting files out of a Flatpak build after it has completed isn’t entirely trivial. Another side effect is that if you bump up the dependencies of your project to something on the bleeding edge and currently not packaged on the latest stable Fedora, you will need to have it included as a Meson sub-project. Of course, you should already be doing that, so it’s a minor downside.

Ideally, if GNOME built actual run time images for the SDK, we could install gtk-doc, gi-docgen, and all their dependencies into the SDK itself, and avoid depending on a real Linux distribution for the libraries in our platform.

Private Flatpak installations in Builder

Builder needs to deal with many SDKs and SDK extensions for applications built upon Flatpak.

One thing I never liked about how we did this up until now was that we needed to install Flatpak remotes into the user’s personal Flatpak installation. First, because we needed to add Flathub and gnome-nightly repositories. Secondly, once a year we need to add the flathub-beta remote due to post-branch SDKs relying on beta extensions.

Previously this would pollute things like GNOME Software with versions of applications that you might not care about as a user.

In Builder 41, a private Flatpak installation is used in $XDG_DATA_DIRS which contains those remotes. Additionally, we set a filter to only allow runtimes, specifically ones matching certain globs.

Milonga in Flathub!

Cambalache 0.7.4 is now available in Flathub here

For the impatient, simply run this to install it:

flatpak install --user flathub ar.xjuan.Cambalache

Cambalache is a new RAD tool that enables the creation of user interfaces for Gtk and the GNOME desktop environment. Its main target is Gtk 4, but it has been designed from the ground up to support other versions. It is released under the LGPL v2.1 license and you can get the source code and file issues here

These are the relevant new features:

  • Interactive introduction
  • Workspace Gtk theme selection
  • Template support
  • New translations, German and Czech
  • Sponsors credit section

Interactive Introduction

Even though the workflow is similar to Glade, there are some key differences, like support for multiple UI files in the same project, which means new concepts like import and export were introduced. Since I do not like writing documentation (who does?), I made an interactive tutorial to show the workflow.

Workspace Gtk theme

Wouldn’t it be nice to have a way to easily change the theme of your app to see how it looks?

Thanks to rendering widgets out of process, setting a different theme for the workspace is trivial to implement. All I had to do was set the theme name in the renderer’s GtkSettings. Keep in mind that a Flatpak application cannot access your host themes; you have to install flatpak Gtk theme extensions!

Template support

All modern applications should use templates extensively! So 0.7.4 has basic support for templates: all you have to do is toggle the Object Id check button of the toplevel widget you want as your template. In the near future I will add the option to use all the templates in a project as just another widget class.

New translations

Thanks to two new contributors, Phil and Vojtěch, Cambalache now has 3 translations.

  • de by PhilProg
  • cs by Vojtěch Perník
  • es

If you want to see it translated into your language, open an issue with a new MR and I will merge it asap.

Sponsors Credits Section

You can financially support development on Patreon, other options like Liberapay are under consideration.

My goal is to have a steady stream of donations so that I can dedicate at least one day a week to development and hire a part-time QA engineer. This would help ensure continuous improvements and keep up with bug reports.

So many, many thanks to all the people who support Cambalache!

  • Patrick Griffis
  • Platon workaccount
  • Sonny Piers
  • Felipe Borges
  • Javier Jardón


Call for Logo of GNOME.Asia Summit 2021

August 23, 2021

Multi-account support in Fractal-next: GSoC final report

After another month of work and getting a bit of a deeper hang of some GTK4 mechanisms like GtkExpression, the 2021 edition of Google Summer of Code comes to an end.

In previous posts I explained my journey towards being able to implement an account switcher, using the new ListModel machinery in the end. While I already worked on Fractal in 2020, this time I did my task over a clean slate, given that a complete rewrite of Fractal was started.

The design outlined by Tobias Bernard has been implemented as he described, with the exception of multi-window support. However, it is not fully functional yet, given the appearance of two weird bugs. The most notable one is that clicking on any user entry does not change the GtkStack page, even though the signal handler calls the appropriate method. Initially it worked right, but this bug crept into the code out of nowhere in the middle of the development process. Another issue is that multiple user entries in the switcher can appear as selected at the same time. Julian Sparber and I have spent days diagnosing both problems and found nothing conclusive. We are not even sure if we are hitting a bug in GTK4. I will update this section when we discover what’s going on and fix the issues. Once that is tackled, the main MR will be merged and this work will be part of Fractal.

My greatest discovery in all this process is the one-way data binding mechanism that has been introduced in GTK4: GtkExpression. It is a very convenient way of expressing data relationships between application data and the widget tree, or between widgets in unrelated places of the hierarchy. However, sometimes it becomes a bit uncomfortable due to the lack of error reporting when it does not behave as expected.

As I said in my previous post, GTK from Rust does not fully exploit the type system and makes development more error prone, having to rely more on manual testing of edge cases which are difficult to think of when you are new to a library/framework.

Notwithstanding these shortcomings, a few auxiliary MRs have been accepted. Here is the full list:

I cannot thank enough the opportunity and help given by GNOME, Julian, Tobias and Daniel García, my mentor.

GTK 4.4

GTK 4.4.0 is now available for download in the usual places. Here are some highlights of the work that has gone into it.

The NGL renderer and GL support

The NGL renderer has continued to see improvements. This includes speedups, fixes for transformed rendering, avoiding huge intermediate textures, and correct handling of partial color fonts. After some help from driver developers, NGL now works correctly with the Mali driver. We are planning to drop the original GL renderer in the next cycle.

Outside of GSK, our OpenGL setup code has been cleaned up and simplified. We increasingly rely on EGL, and require EGL 1.4 now. On X11 we use EGL, falling back to GLX if needed. On Windows, we default to using WGL.

Our GL support works fine with the latest NVidia driver.


The included themes have been reorganized and renamed. We now ship themes that are called Default, Default-dark, Default-hc and Default-hc-dark. The Adwaita theme is moving to libadwaita.

Among the smaller theme improvements are new error underlines (they are now dotted instead of squiggly) and support for translucent text selections.


Input handling has seen active development this cycle. We’ve matched the behavior of the built-in input method with IBus for displaying and handling compose sequences and dead keys. As part of this, we now support multiple dead keys and dead key combinations that don’t produce a single Unicode character (such as ẅ).

We fully support 32-bit keysyms now, so using Unicode keysyms (e.g. for combining marks) works.


Our Emoji data has been updated to CLDR 39, and we now look for translated Emoji data by both language and territory (e.g. it-ch).


The Inspector is now enabled by default, so debugging GTK applications should be a little easier.


Apart from the WGL improvements that were already mentioned, we now use GL for media playback on Windows. A big change that landed late in 4.4 is that we use the WinPointer API for tablets and other input devices now, replacing the outdated wintab API. DND support on Windows is also improved, and the local DND protocol has been dropped.

The numbers

GTK 4.4 is the result of 5 months of development, with 838 individual commits from 71 developers; a total of 88133 lines were added and 63094 removed.

Developers with the most changesets
Matthias Clasen 456 54.4%
Benjamin Otte 82 9.8%
Emmanuele Bassi 48 5.7%
Alexander Mikhaylenko 35 4.2%
Chun-wei Fan 30 3.6%
Christian Hergert 18 2.1%
Luca Bacci 17 2.0%
Carlos Garnacho 10 1.2%
Bilal Elmoussaoui 10 1.2%
Florian Müllner 7 0.8%
Yuri Chornoivan 6 0.7%
Maximiliano Sandoval R 6 0.7%
Marc-André Lureau 5 0.6%
Marco Trevisan (Treviño) 5 0.6%
Pawan Chitrakar 5 0.6%
Piotr Drąg 4 0.5%
Timm Bäder 4 0.5%
Xavier Claessens 4 0.5%
Zhi 4 0.5%
Sebastian Cherek 4 0.5%

Wrapping up GSoC 2021

This year’s GSoC has been a great opportunity to learn and to contribute to the GNOME project. Let’s recapitulate what has been done in the libadwaita animations project, what is left to do, and what the future looks like.

What has been done


We started by designing how the animation API should look, taking into account all the pieces we wanted to fit in and the constraints and limitations of the ecosystem. Part of this work was researching how third parties have solved this issue, studying how the APIs of Android, Flutter, and Core Animation work.

We ended up with this UML diagram, which would loosely guide the development efforts from then on:

UML diagram of the API design


Libadwaita already had code for managing timed animations, but it was all private and not prepared to be exposed as a public API. It also had code Alexander and I prototyped for spring animations, but the same applies: it was not ready to be exposed yet. Refactoring the existing code was a good exercise for learning the intricacies of GLib and GTK. That refactor was already merged and can be checked here:


With that ground work already laid, I started working on implementing the AdwTimedAnimation class:

To test the new code I implemented a quick test in the libadwaita demo:

Demo of the TimedAnimation implementation

The demo is fully interactive, although it is not hooked up to the controls on display; it can be tweaked with the GTK Inspector (the animation is exposed as a property of the parent AdwWindow). The code for this can be checked at:


Once everything seemed to work without issues I started moving all the relevant pieces of code from the base AdwAnimation class to the AdwTimedAnimation subclass:

And while doing so I had to port all the libadwaita codebase to use the new TimedAnimation implementation:

Once all of that was done I started abstracting the target object into its own class, taking inspiration from GtkShortcutAction. The animation target is a delegate that gets called on each animation tick, and it is responsible for applying whatever value the animation has at that moment. Right now we only have a callback target, but this refactoring will make it possible to implement, for example, a property target that automatically binds the animation to an arbitrary property of an arbitrary widget:
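To illustrate the delegate idea, here is a rough language-neutral Python sketch — not libadwaita’s actual C API; the class and method names are hypothetical:

```python
class CallbackAnimationTarget:
    """Target that forwards each tick's interpolated value to a callback."""
    def __init__(self, callback):
        self._callback = callback

    def set_value(self, value):
        # Called by the animation on every tick.
        self._callback(value)


class PropertyAnimationTarget:
    """Hypothetical target that writes each value straight to an object's property."""
    def __init__(self, obj, prop_name):
        self._obj = obj
        self._prop_name = prop_name

    def set_value(self, value):
        setattr(self._obj, self._prop_name, value)
```

An animation would call `set_value()` on its target every tick; swapping targets changes what the value drives without touching the animation itself.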

There is an open MR with all these changes in


While it usually is not a good idea to document things before they’re fully implemented, I deemed it convenient to do so in this case to wrap things up for GSoC. With that in mind I’ve been adding docstrings to everything that will be part of the public API:

Note this is also the last commit in the GSoC context, so if you want to test the work done, that’s the best point to check out. The project compiles successfully and without warnings, and should be trivial to test following the instructions at

The work is not yet finished, though.

What’s next

My involvement with the GNOME project and with this particular task is not done yet. I have a clear roadmap to finish the animation API, which loosely looks like:

  • Finish writing the other Animation Targets
  • Open the API for Timed Animations
  • Refactor the physics based animations to use the base AdwAnimation
  • Expose them in the public API too
  • Implement Implicit and Multi animations in the same manner
  • Ask GNOME designers for mockups on the relevant demos for the animations
  • Document everything

This was a big project from the beginning, but I’m really happy to have laid the groundwork to implement everything else this summer.

Some final words

What GSoC meant for me

C is a scary language. It should be: it has a very steep learning curve, and while a lot of us are quite decent at teaching ourselves things, I personally couldn’t have learned what I did without someone experienced mentoring me and teaching me all the intricacies of C and GLib/GTK. I’m certainly not new to GTK, but in these last few months I’ve learned more about how it works than in years. And I’m glad I did, as I’ll be able to involve myself more in the project from now on. This leads me to grant my final words to my mentor, Alexander.

My mentor

I think it’s no secret that GNOME is very, very lucky to have someone like Alexander Mikhaylenko among its contributors, and I’m not only talking about their incredible capacity to bring the most amazing projects to life in a matter of minutes (each time they tweet, the price of bread rises, I swear). They were incredibly patient and incredibly helpful, going above and beyond every single time I requested their help. So, thanks, Alexander, for all you do for this project and the human quality you always show.

August 22, 2021

GSoC 2021 | Faces of GNOME

As Google Summer of Code ’21 comes to an end, the 3-month-long journey has been nothing short of amazing: from developing the UI and reading documentation to adding new features and fixing issues. I am ecstatic to share that almost all of the milestones for the development of Faces of GNOME are complete, and the entire source code can be found on GitLab. This is a summary of all the work done during and before the GSoC period, and of the plans post-GSoC.

Project Abstract 💡

Faces of GNOME is an initiative started by the Engagement team of GNOME to celebrate all kinds of contributions to GNOME, with the aim of creating a much stronger, people-centric community.

Faces of GNOME is a website built using the Ruby-based site generator Jekyll and JavaScript, showcasing past and current GNOME contributors. It allows contributors to add personal custom information in YAML files used for data serialization. Plugins like jekyll-data-page-generator are used to parse the files and generate pages based on the individual records.

It serves as a personal space, allowing contributors with unique IDs to even host and create Markdown-supported blog posts with various search functionalities. An example of such a profile can be found here.
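For illustration, a contributor record in one of those YAML data files might look like the following — the field names here are hypothetical; the actual schema is defined by the project:

```yaml
# Hypothetical contributor entry; the real field names live in the project's schema.
- name: Jane Doe
  nickname: jdoe
  teams:
    - Engagement
  social_networks:
    gitlab: jdoe
  open_to_work: true
```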

The project aimed for a full code-complete solution, with documentation, guidelines, and all the pages complete and ready to be launched, which would allow Faces of GNOME to succeed not only as a project but also as a program.

GUADEC 2021 🎥

GUADEC is the GNOME community’s largest conference, bringing together hundreds of users, contributors, community members, and enthusiastic supporters for a week of talks and workshops. GSoC interns are given an opportunity to talk about their projects for 3–5 minutes in the intern lightning talks, which were planned to take place in Zacatecas, Mexico. Unfortunately, due to COVID, the conference was held online. You can hear my project talk here.

Coding Period 👨‍💻

List of all Merge Requests which got merged in the Faces of GNOME project:

  • Refactor: Populated social_networks.json (!24)
  • Feat: Add scripts for Pagination & search feature (!27)
  • Feat: Add slick carousel feature to display projects (!37)
  • Performance: Defer off screen loading of images (!48)
  • Feat/Fix: Blog posting & RSS (!51)
  • Style: Design improvements of Types of Contributors section (!52)
  • Feat: FAQs accordions (!53)
  • Refactor: Rename author_name variables (!54)
  • Feat: Outreach mentors showcase (!55)
  • Feat: Add open to work option for members profile (!58)
  • Feat: Outreach interns/students showcase (!59)
  • Feat: Add voice features to pronounce names (!60)

Merge requests · Teams / Engagement / Websites / People of GNOME

Some project glimpses 📸

Sample member’s profile
Project Maintainers showcase
GSoC Mentors/Students showcase

Roadmap ⏳

I would love to continue supporting the project, working on new features and bug fixes. The first things on the agenda are improving the search engine optimization and completing the documentation guidelines for members to create or update profile information and post blogs. This also includes deploying the website to a new and stable URL, and replacing the prototype image banners with the final images. Lots of exciting features are planned for the future, including localization.

Acknowledgements 💯

It was a great experience working with the GNOME Engagement team for GSoC’21. Thanks to my mentor Claudio Wunder and Caroline Henriksen for giving me this wonderful opportunity and mentoring me. I look forward to continuing being a part of the GNOME community, along with helping new contributors find their way around the community.

August 21, 2021

GSoC 2021 Final Report

I have been working on the Tracker project for the past 10 weeks to improve its support for custom ontologies. It has been a great journey, and I gained valuable software engineering experience by exploring the project and its architecture. The project mentors also helped me a lot along the way. In this article I’m going to summarize the work done in the project and the future work.

Work Done

Fix crashes caused by comments in the ontology file

Tracker only supported comments in the ontology file if they appeared at the beginning of triple statements; a comment in the middle of a statement would crash Tracker. This problem is fixed in MR !443 (Merged), which makes Tracker support comments anywhere in the ontology file outside an IRIREF or String (as stated in the RDF Turtle specification). I also added test cases for this feature.
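For example, a Turtle snippet like the following (illustrative, with a made-up prefix) used to crash the parser because of the mid-statement comments, and now parses cleanly:

```turtle
@prefix ex: <http://example.org/ontology#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:Widget a rdfs:Class ;            # a comment in the middle of a triple statement
    rdfs:label "# not a comment" .  # a '#' inside a String or IRIREF is not a comment
```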

Track the current line being parsed in the ontology file

In MR !447 (Merged), I added tracking of the current line and column being parsed in the ontology file. This will be used later to tell the user the specific location of the statement that caused an ontology parsing error.
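The bookkeeping itself is simple; here is a Python sketch of the idea (not Tracker’s actual C implementation):

```python
def advance(line, column, ch):
    """Advance a 1-based (line, column) position past one character."""
    if ch == "\n":
        return line + 1, 1
    return line, column + 1

def position_after(text):
    """Position the parser would report after consuming `text`."""
    line, column = 1, 1
    for ch in text:
        line, column = advance(line, column, ch)
    return line, column
```

An error message can then be prefixed with the position tracked this way, e.g. `ontology.ttl:12:34: unknown prefix`.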

Fix crash caused by unknown prefixes in the ontology file

In the same MR !447 (Merged), I fixed the crash that happened when a statement in the ontology used a prefix that had not been defined (which may happen if the user misspells the prefix). Now, if the ontology contains an unknown prefix, a descriptive error message is propagated to the process that established the connection with Tracker, instead of crashing. The MR also contains test cases for both the line/column tracking feature and the propagation of errors that may happen while parsing the ontology.

Properly roll back the database when an error occurs while building or updating it

Tracker was saving a half-completed database if an error occurred while building a new one. This made Tracker crash the next time it started, even if the user had fixed the problem that caused the error in the first place, as Tracker tried to read from the inconsistent database. Similarly, if an error occurred while the database was being updated to incorporate changes in the ontology, some changes might not be completely rolled back, which also left the database in an inconsistent state. Some errors were also being ignored while updating the database, leading to the same problem.

These problems are fixed in MR !457 (Merged), and test cases were added to ensure that the changes in the database are completely rolled back whether the error occurred while building or updating it.
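The general pattern is to wrap the whole build or update in a single transaction, so that a failure leaves the previous consistent state untouched. A minimal Python/SQLite sketch of that idea (Tracker’s actual implementation is C on top of SQLite; this assumes a connection in autocommit mode):

```python
import sqlite3

def apply_ontology_update(conn, statements):
    """Apply all schema statements atomically; roll back everything on any error.

    Expects a connection opened with isolation_level=None (manual transactions).
    """
    conn.execute("BEGIN")
    try:
        for sql in statements:
            conn.execute(sql)
        conn.execute("COMMIT")
        return True
    except sqlite3.Error:
        # Undo every statement of this batch; the database stays consistent.
        conn.execute("ROLLBACK")
        return False
```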

Properly handle errors that may happen while parsing the ontology

The ontology parser didn’t properly handle many parsing errors that could appear in the ontology file. The parser ignored these errors; it only showed warnings to the user, but continued parsing the ontology and built the database described by the malformed ontology file.

These errors are properly handled in MR !452 (Open): it shows warnings to the user about all the parsing errors in the ontology file and propagates a descriptive error message to the process that established the connection with Tracker. A test case is also added to test the error propagation.

Prefix errors and warnings with the line and column numbers that caused them

In the same MR !452 (Open), all errors and warnings printed while parsing the ontology file are prefixed with the position of the statement that caused them. This makes it very easy for the user to find the problems in the ontology file (as you can see in the following video).

Future Work

Adding support for out of order definitions in the ontology file

Currently, Tracker doesn’t support out-of-order definitions, i.e. using an object before defining it. This will be done by defining priorities for each type of statement in the ontology file: the definition statements (which define a new object) are parsed first, then the statements at the next lower priority, and so on, until the statements at all priority levels have been parsed.
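In pseudocode terms, this is a multi-pass parse ordered by statement priority. A hypothetical Python sketch (the statement kinds and priority values are made up, not Tracker’s):

```python
# Hypothetical priorities: definitions first, then statements that reference them.
PRIORITY = {"definition": 0, "property": 1, "constraint": 2}

def parse_in_priority_order(statements, parse_fn):
    """Parse definition statements before the statements that use them."""
    for stmt in sorted(statements, key=lambda s: PRIORITY.get(s["kind"], 99)):
        parse_fn(stmt)
```

Because definitions are handled first, a later-priority statement can safely refer to an object whose definition appears further down in the file.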

Final Words

It was a great experience working with the GNOME community. I learned a lot about exploring, debugging, and understanding the structure of an existing project, and picked up many best practices in software engineering and in C specifically. I would like to thank my mentors, Carlos and Sam, who helped me a lot before and after I got accepted into the project. I also enjoyed giving a talk at the GUADEC conference. Looking forward to continuing to contribute to GNOME projects.

GSoC 2021: Overview

Over the summer I worked on implementing the new screenshot UI for GNOME Shell as part of Google Summer of Code 2021. This post is an overview of the work I did and work still left to do.

The project was about adding a dedicated UI to GNOME Shell for taking screenshots and recording screencasts. The idea was to unify related functionality in a discoverable and easy to use interface, while also improving on several aspects of existing screenshot and screencast tools.

Over the summer, I implemented most of the functionality:

  • Capturing screen and window snapshots immediately, letting the user choose what to save later.
  • Area selection, which can be resized and dragged after the first selection.
  • Screen selection.
  • Window selection presenting an Overview-like view.
  • Mouse cursor capturing which can be toggled on and off inside the UI.
  • Area and screen video recording.
  • Correct handling of HiDPI and mixed DPI setups.

I opened several merge requests:

I expect that Mutter merge requests won’t require many further changes before merging. The screenshot UI however still has some work that I will do past GSoC, detailed in the main merge request. This work includes adding window selection support for screen recording, ensuring all functionality is keyboard- and touch-accessible, and working with the designers to polish the final result. GNOME 41 is already past the UI freeze, but GNOME 42 seems to me like a realistic target for finishing and landing the screenshot UI.

For the purposes of GSoC, I additionally made two frozen snapshots of work done over the GSoC period that I will not update further: three commits in this mutter tag and 16 commits in this gnome-shell tag.

I also wrote several blog posts about my work on the screenshot UI:

Additionally, I gave a short presentation of my work at GUADEC, GNOME’s annual conference.

Over the course of this GSoC project I learned a lot about GNOME Shell’s UI internals which will help me with GNOME Shell contributions in the future. I enjoyed working on an awesome upgrade to taking screenshots and screencasts in GNOME. For me participating in the GNOME community is a fantastic experience and I highly recommend everyone to come hang out and contribute.

I would like to once again thank my mentor Jonas Dreßler for answering my questions, as well as Tobias Bernard, Allan Day and Jakub Steiner for providing design feedback.

Screen recording in the new screenshot UI

GSoC final submission

It has been a great journey working on the Tracker project. In the past 10 weeks, I got to learn a lot about the project and its architecture. This is the final submission of the project. For the weekly updates, check out my previous posts here.

Proposed project goals

✔️ Add support for file-creation time in Tracker-miners.
✔️ Add search by file-creation time in Nautilus.
✔️ Improve the nautilus-search engine test suite.


Major contributions were done in Tracker-miners and Nautilus projects during the coding period.

Add support for file-creation time in Tracker-miners

This was the primary goal of the project. After adding this feature, Tracker-miners now supports storing and querying file-creation time. While working on it, I also discovered and eventually fixed a double-free bug in the indexer.
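With the feature in place, a SPARQL query along these lines can filter by creation time — a sketch assuming the NEPOMUK `nfo:fileCreated` property; check the Tracker ontology documentation for the exact property name:

```sparql
SELECT ?file ?created WHERE {
    ?file a nfo:FileDataObject ;
          nfo:fileCreated ?created .
    FILTER (?created > "2021-08-01T00:00:00Z"^^xsd:dateTime)
}
```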

Pull Requests !440 (Merged), !340 (Merged)

Issues #158 (Closed)

Add search by file-creation time in Nautilus

This feature depends on previous work done in Tracker-miners. After adding this feature, Nautilus now supports searching files by file-creation time.

Pull Requests !693 (Open under review)

Issues #1761 (Open)

Improve the nautilus-search engine test suite

Initially, there were only two proposed goals. After a chat with the mentors, we decided to extend the project by adding one more goal: improving the nautilus-search engine test suite. While writing tests I found a bug in the nautilus-tracker-search-engine due to an improper date-time format. After fixing this bug, I added tests for searching files by modification and access time in all search engines.

Pull Requests !697 (Merged), !701 (Open under review)

Issues #1933 (Closed)

Other miscellaneous contributions

This work was done during the GSoC period but is not part of the project goals.

Pull Requests !446 (Merged), !336 (Merged)

Issues #317 (Closed)

Future Goals

Resolve any unresolved threads in the open Merge Requests. Write more tests for search by file creation time in Nautilus. Continue contributing to the projects.

Closing thoughts

It was an amazing experience working in the GNOME community. I would like to thank my mentors Carlos and Sam for giving constructive suggestions and guiding me through the program; thanks also to António for quick code reviews. I enjoyed attending and giving a talk at the GUADEC conference, and got to learn about different projects in GNOME Circle. Looking forward to continuing to contribute to GNOME projects.

August 20, 2021

GSoC 2021 · Part IV - Final Review


For the past 10 weeks, I have been working on implementing active resource management in GNOME as part of Google Summer of Code 2021. On the surface level, this entailed setting up mechanisms to track the states of different applications and then making allocation decisions based on this information. To give a brief idea of my contributions throughout this period, I have presented them as tasks along with relevant code and their current status.

Work Product - Task List

Task 1 - Creating a GNOME extension

My first task was to track the active, or currently focused, window in gnome-shell and allocate more CPU weight to it. After discussing with my mentors, I built an extension which, on being notified that the focused window has changed, gets the PID of the respective window. Based on this PID we find the cgroup directory for that particular application. cgroups (control groups) are a Linux kernel feature that limits and isolates the resource usage of a collection of processes.

We then set an “inactive-since” timestamp, which can be used to determine when the application was last active or whether it is currently active (-1), as an extended attribute (xattr) on the cgroup directory. An extended attribute is a name:value pair associated permanently with a file or directory. Resource management daemons can then monitor the cgroup directories for these changes and make allocation decisions accordingly.
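A Python sketch of the stamping logic — the attribute name and encoding here are illustrative, not necessarily what the extension uses:

```python
import os
import time

XATTR_NAME = "user.inactive-since"  # hypothetical attribute name

def encode_inactive_since(active, now=None):
    """-1 while the window is focused, otherwise the time focus was lost."""
    if active:
        return b"-1"
    return str(int(now if now is not None else time.time())).encode()

def stamp_cgroup(cgroup_dir, active):
    """Set the xattr on the cgroup directory; watching daemons pick up the change."""
    try:
        os.setxattr(cgroup_dir, XATTR_NAME, encode_inactive_since(active))
    except OSError:
        pass  # the filesystem may not support user xattrs
```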

Window Tracker Extension Gitlab Repo:

Current Status: Complete

Task 2 - Updating uresourced

uresourced is the place where most of the changes have taken place, from implementing the basic structure to working with experimental ideas. Hence I have given a more detailed module-wise breakdown for individual changes. However, you can check out the overall Merge request below.

uresourced Gitlab Merge Request:

Current Status: Code review completed, pending merge.

Task 2A - App monitoring (RAppMonitor)

This change allows us to recursively monitor changes to the app.slice directory and its sub-directories, i.e. the cgroups inside app.slice, and emit a changed signal whenever the xattr on a directory has changed. Such a notification can also happen programmatically from code monitoring other parts of the system, such as audio playback. It tracks all information using watch descriptors and app paths, stored in hash tables for convenience and fast access.

RAppMonitor code commit:

Task 2B - Resource allocation policy (RAppPolicy)

It is responsible for actually making the policy decisions and setting resources accordingly, allowing for a cleaner separation between tracking and resource allocation. It acts only when a changed signal is received from RAppMonitor; on receiving one, it calculates the CPU weight and makes a systemd D-Bus method call. Currently, only CPUWeight is adjusted for a particular cgroup, and the allocation decision is based on two indicators:

  1. Timestamp - The active window (timestamp = -1) gets a weight of 1000, while non-active windows get the default weight of 100.
  2. Boosted - If boosted, the application gets an additional weight of 500 irrespective of its current state (focused or not).
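Putting those two indicators together, the weight calculation amounts to something like this — a Python sketch of the policy described above, not uresourced’s actual C code:

```python
def cpu_weight(active, boosted):
    """CPU weight for a cgroup: 1000 if focused, 100 otherwise, plus 500 when boosted."""
    weight = 1000 if active else 100
    if boosted:
        weight += 500
    return weight
```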

RAppPolicy code commit:

Task 2C - Audio monitoring using pipewire (RPwMonitor)

After having the basic structure in place, I started working on using an additional indicator to boost applications. In this case, PipeWire is monitored for audio playback so that all applications using audio have their boosted flag set. This serves as a heuristic for detecting realtime applications, so that they aren’t throttled as much as non-active applications.

It adds a custom PipeWire GSource to the main loop and listens to node events from the pipewire-pulse API; after receiving a state change from idle or suspended to running, we set the boosted flag on the RAppinfo for the associated cgroup and emit a changed signal.

RPwMonitor code commit:

Task 2D - Boosting games using GameMode (RGameMonitor)

Another area where we can provide an additional boost is games. We use a mechanism similar to boosting applications playing audio, taking advantage of a background utility called GameMode by Feral Interactive.

Every time a game is registered or unregistered, we receive a signal from the D-Bus interface; we then boost or unboost the app depending on the respective signal.

RGameMonitor code commit:

Task 3 - Updates to mutter

Having an extension that sets the xattr on cgroup directories is fine for experimentation, but we want these changes to happen more transparently, and that’s where mutter comes into play. Like the PID associated with every MetaWindow, we now have a cgroup associated with it: for now, a GFile identifying the cgroup directory for that particular MetaWindow and hence the application. Whenever a focus update is detected, the code takes care of updating the timestamp xattr on that application’s cgroup directory.
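Resolving the cgroup for a window boils down to reading /proc/&lt;pid&gt;/cgroup and joining the recorded path onto the cgroup mount point. A Python sketch assuming cgroup v2 (mutter’s real code is C, and the mount point may differ):

```python
def cgroup_dir_for_line(proc_cgroup_line, root="/sys/fs/cgroup"):
    """Turn a /proc/<pid>/cgroup line into a cgroup directory path (cgroup v2)."""
    # A v2 line looks like: 0::/user.slice/.../app-org.example.App-1234.scope
    _hierarchy, _controllers, path = proc_cgroup_line.strip().split(":", 2)
    return root + path
```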

mutter Merge Request:

Current Status: Code review completed, pending merge.

Other Tasks and possible Future work

Talking about more places where we can utilize these cgroup features: we have put up a proposal to implement a way for large applications to manage multiple worker processes using an xdg-portal. This will be beneficial in providing better resource distribution and isolating bad actors.

xdg-desktop-portal Issue:

There are also other ideas for experimentation suggested by my mentor, like detecting battery-draining applications using CPU pressure information and collecting hints from .desktop files to make more policy decisions. I could not address these during the stipulated time, but thanks to the basic structure now in place, addressing them should be easy, and it is something I look forward to doing in the future.

Events - GUADEC

I gave my very first presentation at this year’s GUADEC intern lightning talks, and the whole event was an amazing experience for me: from getting to know what other interns have been working on, to the positive feedback from people in this community. I also attended a few other talks and BoFs and was truly fascinated by the work that has been going on.

GUADEC presentation:

Slides used:

Previous Blog Posts

If you want to know more, you can also check out the following blog posts where I have documented my journey throughout GSoC 2021.

Blog Title Link
Part III - Merge Requests and GUADEC
Part II - Tracking windows and monitoring files
Part I - Turning over a new leaf with GNOME

Final thoughts

The entire GSoC experience was surreal. It has helped me learn a lot about the internals of how various parts of a desktop environment work, along with how I can utilise some kernel features. I feel part of a larger community in GNOME and feel more compelled to contribute to the various projects under it.

None of this would have been possible without the support of my mentors; there were a lot of areas that would have been difficult to understand without the right guidance, and I would like to thank them for it.

August 19, 2021

The GTK Documentation

As you may have noticed, there have been various changes in the GNOME developer documentation website, as of late. These changes also affected the API references for GTK and its core dependencies.

What has changed

The main change is that GTK moved to a new documentation tool for its API reference and ancillary documentation, called gi-docgen. Unlike the previous documentation tool, gtk-doc, gi-docgen uses the introspection data that is generated by GObject-based libraries to build the API reference. This has multiple benefits:

  • gi-docgen is simpler to run and integrate within an existing library, as it only has a single project description file and relies on the introspection data for everything else; additionally, it can be easily included as a Meson sub-project
  • gi-docgen uses Markdown everywhere, instead of DocBook
  • gi-docgen is considerably faster, as it does not perform an additional source code parsing step; it does not have the bottleneck of the XML to HTML conversion via xsltproc; and it does not have to parse Devhelp files to fix cross-references to other libraries after generating the reference
  • gi-docgen can infer much more information about an API, as it has access to the entire introspection data for a library, including its dependencies; this allows the automatic generation of more accurate and consistent documentation, instead of relying on humans to do this job
  • gi-docgen generates stable URLs for all the API entry points and additional documentation, which means it’s easier to link to and from it without using obscure references
  • the default documentation template is usable on different form factors and layouts; it respects the dark theme options on web browsers that support it; and provides an in-tree live search functionality that does not depend on third party services
  • gi-docgen can also be run out of tree—this will come in handy later
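On the Meson sub-project point, the integration can look roughly like this — the target names, options, and variables here are hypothetical; consult the gi-docgen documentation for the exact invocation:

```meson
# Hypothetical: generate the reference with gi-docgen from the introspection data
gidocgen = find_program('gi-docgen',
  required: get_option('documentation'))

custom_target('yourlib-doc',
  input: 'YourLib.toml',
  output: 'yourlib-1.0',
  command: [gidocgen, 'generate',
            '--config=@INPUT@',
            '--output-dir=@OUTPUT@',
            yourlib_gir[0]],
  depends: yourlib_gir,
  build_by_default: true,
)
```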

Outside of these improvements, using the introspection data as the source for the documentation has additional benefits: it keeps us honest in the type of API we expose to non-C users; and it makes the C API reference closer to the reference in other languages that consume the same data.

GTK4, Pango, and GdkPixbuf have been migrated to this new tool, and while we were at it, we also reviewed the documentation to improve its consistency and accuracy—especially for the older sections of the API.

The new API references can be used offline through Devhelp 41, which will be released next September alongside GNOME 41.

Online documentation

The canonical online location for the GTK references is now There you will find the API references for:

The API references for GTK3 and ATK have been moved to as well.

The website is generated by the GTK CI pipeline, so it is always up to date with the state of the repository; thanks to gi-docgen supporting out of tree builds, the website can also generate documentation for various libraries that have not been ported to gi-docgen yet, like GLib, GTK3, and ATK.

Known issues

Of course, with any large change come side effects.

The main issue is the change in the URLs for the documentation; existing documentation referencing locations on will have to be fixed. Thanks to the GNOME system administrators we have some redirects in place, and there are ideas on how to improve them without creating an unmaintainable mess of static redirections.

The new documentation website is in the process of being indexed by various search engines; the more you use it, and link to it, the easier it will be for the new references to rise in the rankings. In any case, we strongly encourage you to use the search feature: simply press ‘s’ to start searching for symbols and types, or even content inside the extra documentation pages.

Unfortunately, GLib’s introspection data has some issues, given how low level the C API is; we are working on improving that, which will have an impact not only in the documentation but also in the overall bindability of the API in other languages.

The documentation for GLib, GObject, GIO, and GTK3 is also still written for gtk-doc; this means that cross-links in the documentation may not work; the content may not be rendered as nicely; or there can be redundant paragraphs. This will be fixed in the future, both by changes in gi-docgen (wherever possible) and by updating the documentation inside the libraries themselves. This will also improve the language bindings documentation, as they consume the same introspection data as gi-docgen. Help in this effort is very much welcome.

GSoC 2021 Final Report

Porting GNOME Design tools to GTK 4

As described in a previous post, the goal in this GSoC was to port Icon Library and App Icon Preview to the GTK 4 toolkit, with a corresponding port from libhandy to libadwaita.

Work Done

Both applications were successfully ported (icon-library/!16, app-icon-preview/!62). As far as we know, there is only one known regression in App Icon Preview, where symbolic icons do not follow the icon theme; there is already an MR in place to address this.

Some additional work was done in icon-library/!17, app-icon-preview/!64, and app-icon-preview/!63; the last MR is in the final stages of submission and does a small refactor of how icons are loaded and cached, using a proper icon theme. This allows icons to be loaded directly by name rather than passing around images, pixbufs, or textures, and enforces the icon theme on symbolic icons, fixing the last remaining regression.

Future Work

Now that both apps are ported, and some widgets have been subclassed, it would be easier to implement mockups, and in the case of Icon Library, implement a system to update the icons at runtime. We plan to release new versions of both apps around the release of GNOME 41.

Final Words

Working on the GNOME design tools proved to be a very positive experience which allowed me to work on two very interesting projects that will surely help fellow developers and designers. I want to thank Bilal Elmoussaoui, who has been guiding and helping me since I joined the GNOME community.