WebKitGTK and WPEWebKit recently released a new stable version, 2.46. This version includes important changes in the graphics implementation.
Skia
The most important change in 2.46 is the introduction of Skia to replace Cairo as the 2D graphics renderer. Skia supports rendering using the GPU, which is now the default, but we also use it for CPU rendering using the same threaded rendering model we had with Cairo. The architecture hasn’t changed much for GPU rendering: we use the same tiled rendering approach, but buffers for dirty regions are rendered in the main thread as textures. The compositor waits for textures to be ready using fences and copies them directly to the compositor texture. This was the simplest approach, and it already resulted in much better performance, especially on desktops with more powerful GPUs. On embedded systems, where GPUs are not so powerful, it’s still better to use the CPU with several rendering threads in most cases. It’s still too early to announce anything, but we are already experimenting with different models to improve the performance even more and make better use of the GPU on embedded devices.
Skia has received several GCC-specific optimizations lately, but it’s still better optimized when built with Clang. The difference is most noticeable when using the CPU for rendering. For this reason, since version 2.46 we recommend building WebKit with Clang for the best performance. GCC is still supported, of course, and performance when built with GCC is quite good too.
HiDPI
Even though there are no HiDPI-specific changes in 2.46, users of high-resolution screens with a device scale factor greater than 1 will notice much better performance, thanks to scaling being a lot faster on the GPU.
Accelerated canvas
The 2D canvas can be accelerated independently of whether the CPU or the GPU is used for painting layers. In 2.46 there’s a new setting, WebKitSettings:enable-2d-canvas-acceleration, to control 2D canvas acceleration. On some embedded devices the combination of CPU rendering for layer tiles and the GPU for the canvas gives the best performance. The 2D canvas is normally rendered into an image buffer that is then painted into the layer as an image. We changed that for the accelerated case, so that the canvas is now rendered into a texture that is copied to a compositor texture and composited directly, instead of being painted into the layer as an image. In 2.46 the offscreen canvas is enabled by default.
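As an illustration, this is roughly how an application could toggle the new setting at runtime; a minimal sketch that sets the property through the generic GObject API, assuming a WebKitGTK 2.46 build (the header is webkit/webkit.h for the 6.0 API, webkit2/webkit2.h for 4.1):

#include <webkit/webkit.h>

/* Sketch: pick CPU rendering for layer tiles but keep the canvas on the
 * GPU (or the other way around) by flipping the 2.46 setting. */
static void
set_canvas_acceleration (WebKitWebView *web_view, gboolean enabled)
{
    WebKitSettings *settings = webkit_web_view_get_settings (web_view);
    g_object_set (settings, "enable-2d-canvas-acceleration", enabled, NULL);
}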
There are also cases where accelerating the canvas is not desired: for example, when the canvas is not big enough it’s faster to use the CPU, and likewise when there are going to be many operations that “download” pixels from the GPU. Since this is not always easy to predict, in 2.46 we added support for the willReadFrequently canvas setting, so that when the application sets it at canvas creation, the canvas is always left unaccelerated.
Filters
All the CSS filters are now implemented using Skia APIs, and accelerated when possible. The most noticeable change here is that sites using blur filters are no longer slow.
Color spaces
Skia brings native support for color spaces, which allows us to greatly simplify the color space handling code in WebKit. WebKit uses color spaces in many scenarios, but especially for SVG and filters. For some filters, color spaces are necessary because some operations are simpler to perform in linear sRGB. A good example is the feDiffuseLighting filter: it yielded wrong visual results for a very long time in the Cairo-based implementation, as Cairo has no support for color spaces. At some point the Cairo-based implementation was fixed by converting pixels to linear in place before applying the filter and converting them back to sRGB afterwards. Such workarounds are no longer necessary: with Skia, all pixel-level operations are handled in a color-space-transparent way as long as proper color space information is provided. This not only makes the results of some filters correct, but also improves performance and opens new possibilities for acceleration.
Font rendering
Font rendering is probably the most noticeable visual change after the Skia switch with mixed feedback. Some people reported that several sites look much better, while others reported problems with kerning in other sites. In other cases it’s not really better or worse, it’s just that we were used to the way fonts were rendered before.
Damage tracking
WebKit already tracks the areas of the layers that have changed in order to paint only the dirty regions. This means we only repaint the areas that changed, but the compositor still composites the whole frame and passes it to the system compositor. In 2.46 there’s experimental code to track the damage regions and pass them to the system compositor along with the frame. Since this is experimental it’s disabled by default, but it can be enabled with the runtime feature PropagateDamagingInformation. There’s also the UnifyDamagedRegions feature, which can be used in combination with PropagateDamagingInformation to unify the damage regions into one before passing it to the system compositor. We still need to analyze the impact of damage tracking on performance before enabling it by default. We have also started an experiment to use the damage information in the WebKit compositor and avoid compositing the entire frame every time.
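For the curious, here’s a sketch of how an embedder could flip such an experimental flag through the WebKitFeature API (available since 2.42); the identifier string is the feature name mentioned above, and error handling is omitted:

/* Enable the experimental damage propagation feature at runtime. */
static void
enable_damage_propagation (WebKitSettings *settings)
{
    WebKitFeatureList *features = webkit_settings_get_all_features ();
    for (gsize i = 0; i < webkit_feature_list_get_length (features); i++) {
        WebKitFeature *feature = webkit_feature_list_get (features, i);
        if (g_strcmp0 (webkit_feature_get_identifier (feature),
                       "PropagateDamagingInformation") == 0)
            webkit_settings_set_feature_enabled (settings, feature, TRUE);
    }
    webkit_feature_list_unref (features);
}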
GPU info
Working on graphics can be really hard on Linux: there are too many variables that can produce different results for different users, such as the driver version, the kernel version, the system compositor, the EGL extensions available, etc. When something doesn’t work for some people but works for others, it’s key for us to gather as much information as possible about the graphics stack. In 2.46 we have added more useful information to webkit://gpu, like the DMA-BUF buffer format and modifier used (for the GTK port, and for WPE when using the new API). Very often the symptom is the same, nothing is rendered in the web view, even when the causes can be very different. In those cases it’s even harder to gather the information, because webkit://gpu doesn’t render anything either. In 2.46 it’s possible to load webkit://gpu/stdout to get the information as JSON directly on stdout.
Sysprof
Another common symptom for people having problems is that a particular website is slow to render, while for others it works fine. In these cases, in addition to the graphics stack information, we need to figure out where we are slower and why. This is very difficult to fix when you can’t reproduce the problem. We added initial support for profiling in 2.46 using sysprof. The code already has some marks so that when run under sysprof we get useful information about timings of several parts of the graphics pipeline.
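As an idea of what such marks look like in code, here’s a minimal sketch using the libsysprof-capture collector API (names assumed per sysprof-collector.h; the actual marks WebKit emits differ):

#include <sysprof-capture.h>

static void
paint_tile (void)
{
    gint64 begin = SYSPROF_CAPTURE_CURRENT_TIME;

    /* ... do the actual painting work ... */

    /* Record a named mark with its duration; it shows up on the
     * timeline when the process runs under sysprof. */
    sysprof_collector_mark (begin,
                            SYSPROF_CAPTURE_CURRENT_TIME - begin,
                            "graphics", "paint-tile", NULL);
}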
Next
This is just the beginning: we are already working on changes that will allow us to make better use of both the GPU and CPU for the best performance. We also have plans for other changes in the graphics architecture to improve synchronization, latency and security. And now that we have adopted sysprof for profiling, we are working on improvements and new tools there as well.
I am pleased to announce a new Cambalache stable release, version 0.92.0!
This release comes with two major dependency changes: a basic port to Adwaita, and the replacement of WebKit/Broadway with a custom Wayland compositor widget based on wlroots.
What’s new:
Basic port to Adwaita
Use Casilda compositor widget for workspace
Update widget catalogs to SDK 47
Improved Drag&Drop support
Improve workspace performance
Enable workspace animations
Fix window ordering
Support new desktop dark style
Support 3rd party libraries
Streamline headerbar
Lots of bug fixes and minor improvements
Adwaita
The port to Adwaita gives Cambalache a new, modern look and enables dark mode support. The headerbar was simplified to keep only the most commonly used actions; everything else was moved to the main menu.
Cambalache editing Cambalache’s UI, in light and dark mode
Casilda Compositor
Up until this release, Cambalache showed windows from a different process in its workspace by running the broadwayd or gtk4-broadwayd backend (depending on the GTK version of your project) and using a WebView to connect to it and show the windows in an HTML canvas. All of this was replaced with a simple Wayland compositor widget, which reduces hard dependencies a lot.
On top of that we get all the optimizations from using Wayland instead of a protocol meant to go over the internet.
With Broadway, the client would render the window in memory, the Broadway backend would compress the image and send it over TCP to the WebView, which had to decompress it and render it on an HTML5 canvas.
Now, the client just renders into shared memory which is directly available to the compositor widget. This also leaves the option of further improving performance by adding support for dmabuf, which would allow offloading composition to the host compositor, reducing the number of memory copies needed to show the windows on screen.
This allowed me to re-enable GTK animations, since they no longer impact workspace performance.
Special thanks to emersion, kennylevinsen, vyivel and the wlroots community for their support and awesome project, I would not have been able to do this without wlroots and their help.
Cambalache now loads 3rd party catalogs from GLib.get_system_data_dirs()/cambalache/catalogs and ~/.cambalache/catalogs
These catalog files are generated from GIR data with a new tool bundled in Cambalache called cmb-catalog-gen. This used to be an internal tool and still lacks proper documentation, but you can see an example of how it’s used internally here
So what is a catalog anyway?
A catalog is an XML file with all the necessary data for Cambalache to produce UI files with widgets from a particular library; this includes the different GTypes, with their properties, signals and everything else except the actual object implementations.
Runtime objects are created in the workspace by loading the GI namespace specified in the catalog.
Feel free to contact me on matrix if you are interested in adding support for a 3rd party library.
Improved Drag&Drop
After the extensive rework of porting the main widget hierarchy from GtkTreeView to GtkColumnView and implementing several GListModel interfaces to avoid maintaining multiple lists, I was able to reimplement and extend the Drag&Drop code, so it’s now possible to drop widgets on different parents.
Data Model
History handling for undo/redo was simplified from multiple history tables (one per tracked table) into a single history table, by adding a few extra columns that store the data changes in JSON format.
AdwNavigationSplitView now has the same sidebar-position property as AdwOverlaySplitView, inverting the navigation when collapsed (content as the root page, sidebar as the subpage)
AdwNavigationView got horizontal and vertical homogeneous properties, meaning it will preallocate the size needed to display any of the added pages, as well as any pages within the navigation stack, rather than just the currently visible page
AdwAboutDialog now has API for linking to your other apps directly from the dialog
GLib
The low-level core library that forms the basis for projects such as GTK and GNOME.
Delineate, formerly known as Dagger, has just been released. This sleek new app is designed for editing and viewing graphs written in the DOT language. For all the details and features, take a look at the release blog post.
Compared to V2024.9.0-beta3, this release adds a new Preferred Video Codec option in Preferences, an improved format selection backend, a new subtitles selection interface for individual downloads, and the ability to copy the command used to run a download when viewing its log.
These are the final changes that we intend to make for this release cycle; we are aiming to release a stable version on Wednesday, October 2.
Here’s the full changelog for this release cycle:
Parabolic has been rewritten in C++ for better performance
The Keyring module was rewritten. As a result, all keyrings have been reset and will need to be reconfigured
Audio languages with audio description are now correctly recognized and handled separately from audio languages without audio description
Audio download qualities will now list audio bitrates for the user to choose from
Playlist downloads will now be saved in a subdirectory with the playlist’s title within the chosen save folder
When viewing the log of a download, the command used to run the download can now also be copied to the clipboard
The length of the kept download history can now be changed in Preferences
On non-sandboxed platforms, a browser can now be selected in Preferences for Parabolic to fetch cookies from, instead of selecting a txt file
Added an option in Preferences to allow for immediate download after a URL is validated
Added an option in Preferences to pick a preferred video codec for when downloading video media
Fixed validation issues with various sites
Fixed an issue where a specified video password was not being used
Back to school, and Fractal is back too! The leaves are starting to cover the floor in our part of the globe, but you don’t have to shake a tree to get our goodness packed into Fractal 9.beta:
We switched to the glycin library (the same one used by GNOME Image Viewer) to load images, allowing us to fix several issues, like supporting more animated formats and SVGs and respecting EXIF orientation.
The annoying bug where some rooms would stay as unread even after opening them is now a distant memory.
The media cache uses its own database that you can delete if you want to free some space on your system. It will also soon be able to clean up unused media files to prevent it from growing indefinitely.
Sometimes the day separators would show up with the wrong date, not anymore!
We migrated to the new GTK 4.16 and libadwaita 1.6 APIs, including CSS variables, AdwButtonRow and AdwSpinner.
As usual, this release includes other improvements, fixes and new translations thanks to all our contributors, and our upstream projects.
As the version implies, there might be a slight risk of regressions, but it should be mostly stable. If all goes well the next step is the release candidate!
If you have a little bit of time on your hands, you can try to fix one of our newcomers issues. Anyone can make Fractal better!
Shell Extensions
Auto Activities
Show activities overview when there are no windows, or hide it when there are new windows.
Hey all, I had a fun bug this week and want to share it with you.
numbers and representations
First, though, some background. Guile’s numeric operations are defined over the complex numbers, not
over e.g. a finite field of integers. This is generally great when
writing an algorithm, because you don’t have to think about how the
computer will actually represent the numbers you are working on.
In practice, Guile will represent a small exact integer as a
fixnum,
which is a machine word with a low-bit tag. If an integer doesn’t fit
in a word (minus space for the tag), it is represented as a
heap-allocated bignum. But sometimes the compiler can realize that
e.g. the operands to a specific bitwise-and operation are within (say)
the 64-bit range of unsigned integers, and so therefore we can use
unboxed operations instead of the more generic functions that do
run-time dispatch on the operand types, and which might perform heap
allocation.
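As a rough illustration of the idea (a sketch, not Guile’s actual macros or tag values), a fixnum is just an integer shifted up to leave room for a tag in the low bits of the word:

#include <stdint.h>

typedef uintptr_t scm_word;
#define TAG_BITS   2          /* hypothetical low-bit tag width */
#define FIXNUM_TAG 0x2

static inline scm_word make_fixnum (intptr_t n) {
  return ((scm_word) n << TAG_BITS) | FIXNUM_TAG;
}

static inline intptr_t fixnum_value (scm_word w) {
  return (intptr_t) w >> TAG_BITS;  /* arithmetic shift restores the sign */
}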
Unboxing is important for speed. It’s also tricky: under what
circumstances can we do it? In the example above, there is information
that flows from defs to uses: the operands of logand are known to be
exact integers in a certain range and the operation itself is closed over
its domain, so we can unbox.
But there is another case in which we can unbox, in which information
flows backwards, from uses to defs: if we see (logand n #xff), we know:
the result will be in [0, 255]
that n will be an exact integer (or an exception will be thrown)
we are only interested in a subset of n‘s bits.
Together, these observations let us transform the more general logand
to an unboxed operation, having first truncated n to a u64. And
actually, the information can flow from use to def: if we know that n
will be an exact integer but don’t know its range, we can transform the
potentially heap-allocating computation that produces n to instead
truncate its result to the u64 range where it is defined, instead of
just truncating at the use; and potentially this information could
travel farther up the dominator tree, to inputs of the operation that
defines n, their inputs, and so on.
needed-bits: the |0 of scheme
Let’s say we have a numerical operation that produces an exact integer,
but we don’t know the range. We could truncate the result to a u64
and use unboxed operations, if and only if only u64 bits are used. So
we need to compute, for each variable in a program, what bits are needed
from it.
I think this is generally known as a needed-bits analysis, though both
Google and my textbooks are failing me at the moment; perhaps this is
because dynamic languages and flow analysis don’t get so much attention
these days. Anyway, the analysis can be local (within a basic block),
global (all blocks in a function), or interprocedural (larger than a
function). Guile’s is global. Each CPS/SSA variable in the function
starts as needing 0 bits. We then compute the fixpoint of visiting each
term in the function; if a term causes a variable to flow out of the
function, for example via return or call, the variable is recorded as
needing all bits, as is also the case if the variable is an operand to
some primcall that doesn’t have a specific needed-bits analyser.
Currently, only logand has a needed-bits analyser, and this is because
sometimes you want to do modular arithmetic, for example in a hash
function. Consider Bob Jenkins’ lookup3 string hash
function:
#define rot(x,k) (((x)<<(k)) | ((x)>>(32-(k))))
#define mix(a,b,c) \
{ \
a -= c; a ^= rot(c, 4); c += b; \
b -= a; b ^= rot(a, 6); a += c; \
c -= b; c ^= rot(b, 8); b += a; \
a -= c; a ^= rot(c,16); c += b; \
b -= a; b ^= rot(a,19); a += c; \
c -= b; c ^= rot(b, 4); b += a; \
}
...
(define (jenkins-lookup3-hashword2 str)
(define (u32 x) (logand x #xffffFFFF))
(define (shl x n) (u32 (ash x n)))
(define (shr x n) (ash x (- n)))
(define (rot x n) (logior (shl x n) (shr x (- 32 n))))
(define (add x y) (u32 (+ x y)))
(define (sub x y) (u32 (- x y)))
(define (xor x y) (logxor x y))
(define (mix a b c)
(let* ((a (sub a c)) (a (xor a (rot c 4))) (c (add c b))
(b (sub b a)) (b (xor b (rot a 6))) (a (add a c))
(c (sub c b)) (c (xor c (rot b 8))) (b (add b a))
...)
...))
...
These u32 calls are like the JavaScript |0
idiom,
to tell the compiler that we really just want the low 32 bits of the
number, as an integer. Guile’s compiler will propagate that information
down to uses of the defined values but also back up the dominator tree,
resulting in unboxed arithmetic for all of these operations.
All that was just prelude. So I said that needed-bits is a fixed-point
flow analysis problem. In this case, I want to compute, for each
variable, what bits are needed for its definition. Because of loops, we
need to keep iterating until we have found the fixed point. We use a
worklist to represent the conts we need to visit.
Visiting a cont may cause the program to require more bits from the
variables that cont uses.
Consider:
(define-significant-bits-handler
((logand/immediate label types out res) param a)
(let ((sigbits (sigbits-intersect
(inferred-sigbits types label a)
param
(sigbits-ref out res))))
(intmap-add out a sigbits sigbits-union)))
This is the sigbits (needed-bits) handler for logand when one of its
operands (param) is a constant and the other (a) is variable. It
adds an entry for a to the analysis out, which is an intmap from
variable to a bitmask of needed bits, or #f for all bits. If a
already has some computed sigbits, we add to that set via
sigbits-union. The interesting point comes in the sigbits-intersect
call: the bits that we will need from a are first the bits that we
infer a to have, by forward type-and-range analysis; intersected with
the bits from the immediate param; intersected with the needed bits
from the result value res.
If the intmap-add call is idempotent—i.e., out already contains
sigbits for a—then out is returned as-is. So we can check for a
fixed-point by comparing out with the resulting analysis, via eq?.
If they are not equal, we need to add the cont that defines a to the
worklist.
The bug? The bug was that we were not enqueuing the def of a, but
rather the predecessors of label. This works when there are no
cycles, provided we visit the worklist in post-order; and regardless, it
works for many other analyses in Guile where we compute, for each
labelled cont (basic block), some set of facts about all other labels
or about all other
variables.
In that case, enqueuing a predecessor on the worklist will cause all
nodes up to and including the variable’s definition to be visited,
because each step adds more information (relative to the analysis
computed on the previous visit). But it doesn’t work for this case,
because we aren’t computing a per-label analysis.
GUADEC was in Denver this year! I meant to write an update right after the conference, but Real Life got in the way and it took a while to finish this post. I finally found a little spare time to collect my thoughts and finish writing this.
It was a smaller crowd than normal this year. There were ~100 people registered, though unfortunately a number of people were unable to make it at the last minute due to CrowdStrike- and visa-related issues.
Denver City Hall
I gave two talks: Crosswords, Year Three (slides) and a spur-of-the-moment lightning talk on development docs. The first talk was nominally about authoring crosswords, but I also presented the architecture we used to create the game. Although rushed, I hope I got most of the points about our design across. It’s definitely worth a full blog post at a future date.
Other highlights of the conference included Martin’s very funny (and brave) live demo of gameeky, Scott’s talk about being bold with design, the AGM, and a fabulous Thunderbird keynote about the power of money. That last one spurred conversations about putting a fundraising request popup in GNOME itself to raise funds. The yearly popup in Thunderbird appears to continue being wildly successful. Since GUADEC, I see that KDE has attempted to do that as well. I’d love for GNOME to do something similar. Maybe this is something the new board can pick up.
Original Nikola Tesla generator in the Tivoli Brewing Co.
It was a very chill GUADEC, and I enjoyed the change of pace. I had never spent time in Denver (other than at the airport), and found it to be a surprisingly intimate city with a very walkable downtown. The venue was absolutely fabulous. Every conference should have a pub on-site, and the Tivoli Brewing Co definitely surpassed expectations. It even has an original Nikola Tesla generator in its basement.
Reflections
It was really nice having GUADEC relatively close to me for once. There was a different crowd than normal: there were long-time GNOME people I haven’t seen in a very long time (Hi Owen, Behdad, and Michael!) as well as numerous new folks (welcome, Richard!). Holding it in North America opened us up to different contributors, and maybe let us re-engage with long-time gnomies.
Let’s not pretend that a video conference or a hybrid BOF is the same as an in-person meetup. Once you’ve sung karaoke with someone, or explored Meow Wolf, or camped in the desert in Utah together, your relationship is richer than when you only interacted via Gitlab pull requests and BigBlueButton. You have more empathy and you can resolve conflicts better.
Fragmentation is always a danger with distributed endeavors and any group bigger than two will have politics, but it feels like our best tool to deal with those issues is fragmenting too.
Personally, as someone who has schlepped across the Atlantic for over two decades to meet with other folks, it doesn’t feel great to have comparatively few people come the other direction. There are plenty of good individual decisions that led to this, but collectively it felt like a misfire.
I also really appreciate the commitment of our South American / Asian / African developers who have tough travel routes to get to the Euro/American events.
The first GUADEC poster
In some sense, it feels like we’ve gone full circle. When GNOME started, development was strongly centered in North America. The GIMP started in Berkeley, GNOME itself was founded in Mexico, and there were quite a few other pockets of GNOME activity (Boston, North Carolina, etc.). Proportionally, Europe was underrepresented, so GUADEC was proposed as a way to build a European community. It took sustained engagement to build it up. Twenty-four years on, it appears we need to do the reverse.
What’s next? Well for me, it’s time to look more local. We used to have a Bay Area GNOME community and it has fallen on hard times. Maybe it’s worth trying to push some local enthusiasm. If you’re a Bay Area GNOME person, drop me a note. We should hold a release party!
Nonograms
While in Denver, ptomato and I nerd-sniped each other into writing a nonogram game. Nonograms are a popular puzzle-type, and are quite common on existing mobile platforms. Conceptually, they’re pen-and-paper grid-based games and could easily be implemented as an .ipuz extension.
I’ve been slowly changing the libipuz API over the summer to work with gobject-introspection, and was excited at the chance to get someone to test it out. Meanwhile, Philip had been wanting to write an app with typescript. So, I sketched out an extension and put together an API for Philip to use. With a little back-and-forth, he got something to render. Exciting!
I don’t think it is playable yet but it’s lovely to see the potential emerging. There’s a lot of great pixel art floating around GNOME. Some of it might make the basis for a really fun nonogram game.
As a bonus, Philip has been experimenting with using the stateless design we use in Crosswords. I’m hoping he’ll be able to provide additional validation and feedback to our architectural approach.
Let’s take a Fedora beta for a spin and see what looks out of place!
This time, I made the mistake of typing. Oops!
Input Performance
That led me to an unreasonable amount of overhead in ibus-daemon. It seems odd that something so critical to our typing latency would require so many CPU samples.
So I dove into the source of ibus to see what is going on and made a few observations.
It is doing a lot of D-Bus. That’s okay though, because if it were busy in logic that’d be a different thing to optimize.
It uses a GDBusMessageFilterFunction on the GDBusConnection, which can run on the D-Bus I/O thread. At that point the GDBusMessages it wants to modify are locked and require a copy.
After recording the messages on the private ibus D-Bus socket, I can tell it’s doing a lot of communication with GNOME Shell, which implements the Wayland text-input protocol for applications.
There seems to be a huge number of memory allocations involved, seen when running sysprof-cli --memprof, and that needs investigating.
Quite a bit of time spent in GVariantTypeInfo and hashtables.
TL;DR
Before I go any further, there are many merge requests awaiting review now which improve the situation greatly.
Preparing by Prototyping
The first thing I do when diving into a problem like this is to record a few things with Sysprof and just have a look around the flamegraphs. That helps me get a good feel for the code and how the components interact.
The next thing I do after finding what I think is a major culprit, is to sort of rewrite a minimal version of it to make sure I understand the problem with some level of expertise.
Last time, I did this by writing a minimal terminal emulator so I could improve VTE (and apparently that made people … mad?). This time, my eyes were set on GVariantBuilder.
A faster GVariantBuilder
I was surprised to learn that GVariantBuilder does not write to a serialized buffer while building. Instead, it builds a tree of GVariant*.
That sets off my antennae because I know from experience that GVariant uses GBytes and GBytes uses malloc and that right there is 3 separate memory allocations (each aligned to 2*sizeof(void*)) just to maybe store a 4-byte int32.
So for a rectangle of (iiii) serialized to GVariant and built with GVariantBuilder you’re looking at 3 * 4 + 3 minimum allocations, if you ignore the handful of others in the builder structures too.
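To make that concrete, here’s the kind of innocuous-looking code we’re talking about; every value added below becomes a heap-allocated GVariant (with its GBytes and buffer) before the container is assembled:

static GVariant *
build_rect (gint32 x, gint32 y, gint32 w, gint32 h)
{
    GVariantBuilder builder;

    g_variant_builder_init (&builder, G_VARIANT_TYPE ("(iiii)"));
    g_variant_builder_add (&builder, "i", x);
    g_variant_builder_add (&builder, "i", y);
    g_variant_builder_add (&builder, "i", w);
    g_variant_builder_add (&builder, "i", h);
    return g_variant_builder_end (&builder);
}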
So first step, prototype the same API but writing to a single buffer.
It’s about 3-4x faster in a tight loop. There are things you could do with a different API that would easily get you into the 10x range but what use is that if no software out there is using it.
So that is what we’re shooting for to guide our changes. I’ll stop when I get in that range as it’s likely diminishing returns.
A faster (slower) GVariantBuilder
I hesitate to just replace something like GVariantBuilder for the same reason we don’t just rewrite the world in a new language. There are so many corner cases and history built up here that if you can get close without a complete rewrite, that is the less risky solution.
Instead, let’s just shake out a bunch of things and see how fast we can make the venerable GVariantBuilder.
GVariantTypeInfo
I saw a lot of time spent in g_variant_type_info_get() specifically inside a GHashTable. A quick printf() later we can see that for some reason the GVariantTypeInfo which contains information about a GVariantType (a glorified type string) is being re-created over and over.
There is a cache, so that cache must not be working very well. Specifically, I see types being created a dozen times or more on every key-press or release.
Caches generally don’t work well if you release items from them when the last reference is dropped. This is because the only way for them to last across uses is for multi-threading to be in play keeping them alive.
So here we patch it to keep a number of them alive to improve the serial use-case.
Side Quests through HashTables
While looking at that code it was clear we were making copies of strings to do lookups in the hashtable. It’s not great when you have to malloc() to do a hashtable lookup, so maybe we can write custom hash/equal functions to handle that.
We have knobs to keep release builds fast. Sometimes that means compiling out expensive assertions which are there to help us catch bugs during development or to make the next bug hunter’s life easier.
For example, in my career I’ve done that to assert correctness of B-trees on every change, ensure file-system state is correct, validate all sorts of exotic data-structures and more. These are things that are helpful in debug builds that you very much don’t want in release code.
So I made some old-GVariant code which didn’t really follow the standard macros we use elsewhere do what you’d expect. Here and here.
Format String Fast-Paths
Sometimes you have a code-path that is so extremely hot that it warrants some sort of fast-path in and out. The format strings you use for building a GVariant using GVariantBuilder is one such example. We can see that normally that function will malloc() and it would be a whole lot faster if it didn’t in our fast-path.
Another fun fact about GVariantType is that a simple definite type is a sub-type of itself. That is fancy lingo for saying "i" is an "i".
Given that all GVariant are eventually made up of these sort of basic definite types, they probably get hammered a lot when type-checking things at runtime.
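In code, that fancy lingo amounts to this always being true:

/* A definite type is a subtype of itself: "i" is an "i". */
g_assert (g_variant_type_is_subtype_of (G_VARIANT_TYPE_INT32,
                                        G_VARIANT_TYPE_INT32));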
It is extremely common in the GVariant code-base to walk strings multiple times. Sometimes, those are type strings, sometimes those are user-provided UTF-8 encoded strings.
The API boundary checks for valid UTF-8, which requires walking the string and pulling it into memory. You can see the shape of it as basically (a reconstruction of the idea, not the actual GLib source):
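const char *endptr;
if (!g_utf8_validate (string, -1, &endptr))
    return FALSE;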
Then later, it does strlen() on string. Maybe just skip the strlen() and do (endptr - string).
Playing Nicely with Malloc
GVariantBuilder does this nice thing where, if you’re building an array of children, it will grow your destination array by powers of two so that it doesn’t need to call realloc() each time. That is great for dynamically sized arrays.
At the end, it does this even nicer thing in that it shrinks the allocation to only what you needed by calling realloc() again with the final size. Nice not to waste space, surely!
However, in extremely common cases we know the exact number of children up front so the realloc() is pure overhead. Just check for that condition and save some more cycles.
Locks, Everywhere
I was surprised to find that GVariant uses the g_bit_lock()/g_bit_unlock() API to manage whether or not a GVariant is “floating”. I have things to say about floating references but that is for another day.
What I noticed though is that I see way too many g_bit_lock() samples on my profiles, considering there’s virtually never a valid reason to have a race condition in sinking a GVariant’s floating reference. In fact, g_variant_take_ref() doesn’t even bother with it.
So save a heaping amount of time doing nothing by being clever.
More Ref Counting
With g_variant_ref_sink() faster, we still spend a lot of samples in g_bytes_ref(). That sounds odd, because we really only need a single reference and then ultimately g_bytes_unref() when the variant is released.
A quick look and yup, we’re referencing only to immediately unref afterwards. Pick up a few more percent by transferring ownership to the callee.
Reducing Allocations
We still haven’t gotten rid of all those allocations, and that makes me sad. Let’s do something about those three allocations for a simple 32-bit integer.
Step One, GBytes
Our GVariant structures are referencing a GBytes which makes buffer management easy. It allows you to slice into smaller and smaller buffers and reference count the same larger data buffer safely.
What if we made it embed small allocations within its own allocation using flexible arrays? We just need to be careful to try to match the whole 2*sizeof(void*) expectations that people get from malloc() so we don’t break any code with that assumption.
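Schematically, the idea looks something like this (a sketch, not GLib’s actual struct layout):

/* One malloc instead of two: small payloads live inline after the
 * bookkeeping fields. */
typedef struct {
    gatomicrefcount ref_count;
    gsize           size;
    guint8          data[];  /* flexible array member holds the bytes */
} InlineBytes;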
Finally, from 3 allocations to two. This one was a whopping 10% off wallclock time alone.
But that of course should give us a good idea of what would happen if we did the same thing to GVariant directly to go from 2 allocations to 1.
Step Two, GVariant
So here we go again and that’s another 10% on top of the previously stacked patches. Not bad.
Bonus Round
Calling g_variant_builder_init() makes a copy of the GVariantType, because theoretically you could free that type string before you call g_variant_builder_end(). Realistically nobody does that, and it’s probably just there to deal with bindings that may use g_variant_builder_new().
Either way, we don’t break ABI here so add g_variant_builder_init_static() with different requirements for a few more percent by doing less malloc().
This is the one spot where if you want a few extra cycles, you gotta change your code.
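In practice the change is a one-liner; this sketch assumes the new API from these patches:

GVariantBuilder builder;

/* Before: copies the GVariantType on every init. */
g_variant_builder_init (&builder, G_VARIANT_TYPE ("(iiii)"));

/* After: promises the type outlives the builder, skipping the copy. */
g_variant_builder_init_static (&builder, G_VARIANT_TYPE ("(iiii)"));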
How’d we do?
For my contrived benchmarks this gets us a 2.5x speedup over what was released in GNOME 47 when everything is built in “release mode fashion”. If you implement the bonus round you can get closer to 2.75x.
If you’d like to recreate the benchmark, just create a loop with a few million iterations generating some rather small/medium sized GVariant with GVariantBuilder.
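Something along these lines, reusing the build_rect() sketch from earlier (a contrived micro-benchmark; absolute numbers will vary):

gint64 begin = g_get_monotonic_time ();
for (guint i = 0; i < 5000000; i++)
    g_variant_unref (build_rect (1, 2, 3, 4));
g_print ("%" G_GINT64_FORMAT " usec\n", g_get_monotonic_time () - begin);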
Still a bit off from that 4x though, so if you like challenges perhaps replacing GVariantBuilder with a serialized writer is just the project for you.
Does the fediverse have a vibe? I think that yes, there’s a flave, and
with reason: we have things in common. We all left Twitter, or refused
to join in the first place. Many of us are technologists or
tech-adjacent, but generally not startuppy. There is a pervasive do-it-yourself ethos. This last point often expresses itself as a
reaction: if you don’t like it, then do it yourself, a different way.
Make your own Mastoverse agent. Defederate. Switch instances. Fedi is
the “patches welcome” of community: just fork it!
Fedi is freedom, in the sense of “feel free to send a patch”, which is
also hacker-speak for “go fuck yourself”. We split; that’s our thing!
Which, you know, no-platform the nazis and terfs, of course. It can
be good and even necessary to cut ties with the bad. And yet, this is
not a strategy for winning. What’s worse, it risks creating a feedback
loop with losing, which is the topic of this screed.
alembics
Fedi distills losers: AI, covid, class war, climate, free software, on
all of these issues, the sort of people that fedi attracts are those
that lost. But, good news everyone: in fedi you don’t have to engage
with the world, only with fellow losers! I know. I include myself in
these sets. But beyond the fact that I don’t want to be a loser, it is
imperative that we win: we can’t just give up on climate or class war.
Thing is, we don’t have a plan to win, and the vibe I get from fedi is
much more disengaged than strategic.
Twitter—and I admit, I loved Twitter, of yore—Twitter is now for the
winners: the billionaires, the celebrities, the politicians. These
attract the shills, the would-be’s. But winner is just another word for
future has-been; nobody will gain power in the world via Twitter any
more. Twitter continues to be a formidable force, but it wanes by the
day.
Still, when I check my feed, there are some people I follow doing
interesting work on Twitter: consider Tobi Haslett, Erin Pineda, Bree
Newsome, Cédric Herrou, Louis Allday, Gabriel Winant, Hamilton Nolan,
James Butler, Serge Slama: why there and not fedi? Sure, there is
inertia: the network was woven on Twitter, not the mastoverse. But I am
not sure that fedi is right for them, either. I don’t know that fedi is
the kind of loser that is ready to get back in the ring and fight to
win.
theories of power
What is fedi’s plan to win? If our model is so good, what are we
doing to make it a dominant mode of social discourse, of using it as a
vector to effect the changes we want to see in the world?
From where I sit, I don’t see that we have a strategy. Fedi is fine and
all, but it doesn’t scare anyone. That’s not good enough. Twitter was
always flawed but it was a great tool for activism and continues to be
useful in some ways. Bluesky has some of that old-Twitter vibe, and
perhaps it will supplant the original, in time; inshallah.
In the meantime, in fedi, I would like to suggest that with regards to
the network itself, that we stop patting ourselves on the back. What we
have is good but not good enough. We should aim to make the world a
better place and neither complacency nor splitting are going to get us there. If fedi is to thrive, we need to get out of our own heads and make our community a place to be afraid of.
With GNOME 47 out, it’s time for my bi-annual wallpaper deep dive. For many, these may seem like simple background images, but GNOME wallpapers are the visual anchors of the project, defining its aesthetic and identity. The signature blue wallpaper with its dark top bar remains a key part of that.
In this release, GNOME 47 doesn’t overhaul the default blue wallpaper. It’s more of a subtle tweak than a full redesign. The familiar rounded triangles remain, but here’s something neat: the dark variant mimics real-world camera behavior. When it’s darker, the camera’s aperture widens, creating a shallower depth of field. A small but nice touch for those who notice these things.
The real action this cycle, though, is in the supplemental wallpapers.
We haven’t had to remove much this time around, thanks to the JXL format keeping file sizes manageable. The focus has been on variety rather than cutting old designs. We aim to keep things fresh, though you might notice that photographic wallpapers are still missing (we’ll get to that eventually, promise).
In terms of fine-tuning, the classic Pixels has been updated to feature newer apps from GNOME Circle.
The dark variant of Pills also got some love with lighting and shading tweaks, including a subtle subsurface scattering effect.
As for the new wallpapers, there are a few cool additions this release. I collaborated with Dominik Baran to create a tube-map-inspired vector wallpaper, which I’m particularly into. There’s also Mollnar, a nod to Vera Molnar, using simple geometric shapes in SVG format.
Most of our wallpapers are still bitmaps, largely because our rendering tools don’t yet handle color banding well with vectors. For now, even designs that would work better as vectors—like mesh gradients—get converted to bitmaps.
We’ve introduced some new abstract designs as well – meet Sheet and Swoosh. And for fans of pixel art, we’ve added LCD and its colorful sibling, LCD-rainbow. Both give off that retro screen vibe, even if the color gradient realism isn’t real-world accurate.
Lastly, there’s Symbolic Soup, which is, well… a bit chaotic. It might not be everyone’s cup of tea, but it definitely adds variety.
Preview
If you’re wondering about the strange square aspect ratio, take a look at the wallpaper sizing guide in our GNOME Interface Guidelines.
Also worth noting is that all of these wallpapers have been created by humans. While I’ve experimented with image generation for some parts of the workflow in some of my personal projects, all of this work is free of AI generation and is explicitly credited.
As a JavaScript engine developer at Igalia I don’t find myself writing much plain C code anymore. I’m either writing JS or TypeScript, or hacking on large compiler codebases in C++1, or writing ECMAScript specification language. Frankly, that is fine with me. C’s time may not be over yet, but I wouldn’t be sad if I never had to write another line of it. (Hopefully this post conveys why.)
However, while working on modernizing an app written in C for the GNOME platform, which I hack on in my spare time, I wanted to copy a folder recursively using the GIO async APIs. Like cp -R at the shell, but without freezing up your GUI while it works.
C’s callback style for async programming, combined with the lack of closures that capture variables, is like going back to the dark ages if you’ve gotten used to languages with async/await style or even C++’s lambdas. I would’ve avoided writing this if I could, but apparently no one else had done it publicly on the internet that I could find.2 So here it is for your enjoyment.
typedef struct {
GFile *dest_folder;
GQueue *files_to_copy;
GQueue *folders_to_copy;
GFileCopyFlags flags;
} CopyRecursiveClosure;
/* Pre-declare so we can read them in the order they are executed: */
static void on_recursive_make_dir_finish(GFile *file, GAsyncResult *res, GTask *data);
static void on_recursive_file_enumerate_finish(GFile* file, GAsyncResult *res, GTask *data);
static void on_recursive_file_next_files_finish(GFileEnumerator *children, GAsyncResult *res, GTask *data);
static void copy_file_queue_async(GTask *task);
static void on_recursive_file_copy_finish(GFile *file, GAsyncResult *result, GTask *data);
static void on_recursive_folder_copy_finish(GFile *file, GAsyncResult *result, GTask *data);
static void copy_folder_queue_async(GTask *task);
static void copy_recursive_closure_free(CopyRecursiveClosure *ptr);
/**
* copy_recursive_async:
* @src: The source folder
* @dest: Destination folder in which to place the copy of @src
* @flags: #GFileCopyFlags to apply to copy operations
* @prio: I/O priority, e.g. #G_PRIORITY_DEFAULT
* @cancel: #GCancellable that will interrupt the operation when triggered
* @done_cb: Function to call when the operation is finished
* @data: Pointer to pass to @done_cb
*
* Copy the folder @src and all of the files and subfolders in it into the
* folder @dest, asynchronously.
*
* The only @flags supported are #G_FILE_COPY_NONE and #G_FILE_COPY_OVERWRITE.
*/
void
copy_recursive_async(GFile *src, GFile *dest, GFileCopyFlags flags, int prio, GCancellable *cancel,
GAsyncReadyCallback done_cb, void *data)
{
g_return_if_fail(G_IS_FILE(src));
g_return_if_fail(G_IS_FILE(dest));
g_return_if_fail(flags == G_FILE_COPY_NONE || flags == G_FILE_COPY_OVERWRITE);
g_return_if_fail(!cancel || G_IS_CANCELLABLE(cancel));
g_autoptr(GTask) task = g_task_new(src, cancel, done_cb, data);
g_task_set_priority(task, prio);
CopyRecursiveClosure *task_data = g_new0(CopyRecursiveClosure, 1);
g_autofree char *basename = g_file_get_basename(src);
task_data->dest_folder = g_file_get_child(dest, basename);
task_data->files_to_copy = g_queue_new();
task_data->folders_to_copy = g_queue_new();
task_data->flags = flags;
g_task_set_task_data(task, task_data, (GDestroyNotify)copy_recursive_closure_free);
g_file_make_directory_async(task_data->dest_folder, prio, cancel,
(GAsyncReadyCallback)on_recursive_make_dir_finish, g_steal_pointer(&task));
}
/**
* copy_recursive_finish:
* @src: The source folder
* @result: The #GAsyncResult passed to the callback
* @error_out: (nullable): Return location for a #GError
*
* Complete the asynchronous copy operation started by copy_recursive_async().
*
* Returns: %TRUE if the operation completed successfully, %FALSE on error.
*/
bool
copy_recursive_finish(GFile *src, GAsyncResult *result, GError **error_out)
{
g_return_val_if_fail(G_IS_FILE(src), false);
g_return_val_if_fail(G_IS_TASK(result), false);
g_return_val_if_fail(g_task_is_valid(result, src), false);
return g_task_propagate_boolean(G_TASK(result), error_out);
}
static void
on_recursive_make_dir_finish(GFile *file, GAsyncResult *result, GTask *task_ptr)
{
g_autoptr(GTask) task = g_steal_pointer(&task_ptr);
g_autoptr(GError) error = NULL;
GCancellable *cancel = g_task_get_cancellable(task);
int prio = g_task_get_priority(task);
if (!g_file_make_directory_finish(G_FILE(file), result, &error)) {
/* With the OVERWRITE flag, don't error out when the folder already
* exists. (Hopefully plopping all the files in the existing folder is
* sufficient. If not, another way to do this would be to delete the
* existing folder recursively, so that extra existing files not in the
* source don't remain in the destination.) */
CopyRecursiveClosure *data = g_task_get_task_data(task);
bool overwrite = !!(data->flags & G_FILE_COPY_OVERWRITE);
if (!overwrite || !g_error_matches(error, G_IO_ERROR, G_IO_ERROR_EXISTS)) {
g_autofree char *path = g_file_get_path(file);
g_task_return_prefixed_error(task, g_steal_pointer(&error),
"Error creating destination folder %s: ", path);
return;
}
}
GFile *src = g_task_get_source_object(task);
g_file_enumerate_children_async(src, "standard::*", G_FILE_QUERY_INFO_NONE, prio, cancel,
(GAsyncReadyCallback)on_recursive_file_enumerate_finish, g_steal_pointer(&task));
}
static void
on_recursive_file_enumerate_finish(GFile *file, GAsyncResult *result, GTask *task_ptr)
{
g_autoptr(GTask) task = g_steal_pointer(&task_ptr);
g_autoptr(GError) error = NULL;
GCancellable *cancel = g_task_get_cancellable(task);
int prio = g_task_get_priority(task);
g_autoptr(GFileEnumerator) children = g_file_enumerate_children_finish(G_FILE(file), result, &error);
if (!children) {
g_autofree char *path = g_file_get_path(file);
g_task_return_prefixed_error(task, g_steal_pointer(&error),
"Error reading folder %s: ", path);
return;
}
g_file_enumerator_next_files_async(children, 10, prio, cancel,
(GAsyncReadyCallback)on_recursive_file_next_files_finish, g_steal_pointer(&task));
}
static void
on_recursive_file_next_files_finish(GFileEnumerator *children, GAsyncResult *result, GTask *task_ptr)
{
g_autoptr(GTask) task = g_steal_pointer(&task_ptr);
g_autoptr(GError) error = NULL;
GCancellable *cancel = g_task_get_cancellable(task);
int prio = g_task_get_priority(task);
g_autolist(GFileInfo) next_files = g_file_enumerator_next_files_finish(children, result, &error);
if (error) {
g_autofree char *path = g_file_get_path(g_file_enumerator_get_container(children));
g_task_return_prefixed_error(task, g_steal_pointer(&error),
"Error reading files from folder %s: ", path);
return;
}
CopyRecursiveClosure *data = g_task_get_task_data(task);
if (next_files) {
for (GList *iter = next_files; iter != NULL; iter = g_list_next(iter)) {
GFileInfo *info = G_FILE_INFO(iter->data);
GFileType type = g_file_info_get_file_type(info);
g_autoptr(GFile) file = g_file_enumerator_get_child(children, info);
switch (type) {
case G_FILE_TYPE_DIRECTORY:
g_queue_push_tail(data->folders_to_copy, g_steal_pointer(&file));
break;
case G_FILE_TYPE_REGULAR:
g_queue_push_tail(data->files_to_copy, g_steal_pointer(&file));
break;
default:
g_warning("Unhandled file type %d in recursive copy: %s", type, g_file_info_get_name(info));
continue;
}
}
g_file_enumerator_next_files_async(children, 10, prio, cancel,
(GAsyncReadyCallback)on_recursive_file_next_files_finish, g_steal_pointer(&task));
return;
}
copy_file_queue_async(g_steal_pointer(&task));
}
static void
copy_file_queue_async(GTask *task_ptr)
{
g_autoptr(GTask) task = task_ptr;
CopyRecursiveClosure *data = g_task_get_task_data(task);
g_autoptr(GFile) file = g_queue_pop_head(data->files_to_copy);
if (file) {
GCancellable *cancel = g_task_get_cancellable(task);
int prio = g_task_get_priority(task);
g_autofree char *basename = g_file_get_basename(file);
g_autoptr(GFile) dest = g_file_get_child(data->dest_folder, basename);
g_file_copy_async(file, dest, data->flags, prio, cancel,
/* progress_callback = */ NULL, NULL,
(GAsyncReadyCallback)on_recursive_file_copy_finish, g_steal_pointer(&task));
return;
}
copy_folder_queue_async(g_steal_pointer(&task));
}
static void
on_recursive_file_copy_finish(GFile *file, GAsyncResult *result, GTask *task_ptr)
{
g_autoptr(GTask) task = task_ptr;
g_autoptr(GError) error = NULL;
if (!g_file_copy_finish(file, result, &error)) {
g_autofree char *path = g_file_get_path(file);
g_task_return_prefixed_error(task, g_steal_pointer(&error),
"Error copying file %s: ", path);
return;
}
copy_file_queue_async(g_steal_pointer(&task));
}
static void
copy_folder_queue_async(GTask *task_ptr)
{
g_autoptr(GTask) task = task_ptr;
CopyRecursiveClosure *data = g_task_get_task_data(task);
g_autoptr(GFile) folder = g_queue_pop_head(data->folders_to_copy);
if (folder) {
GCancellable *cancel = g_task_get_cancellable(task);
int prio = g_task_get_priority(task);
copy_recursive_async(folder, data->dest_folder, data->flags, prio, cancel,
(GAsyncReadyCallback)on_recursive_folder_copy_finish, g_steal_pointer(&task));
return;
}
g_task_return_boolean(task, true);
}
static void
on_recursive_folder_copy_finish(GFile *folder, GAsyncResult *result, GTask *task_ptr)
{
g_autoptr(GTask) task = task_ptr;
g_autoptr(GError) error = NULL;
if (!copy_recursive_finish(folder, result, &error)) {
g_autofree char *path = g_file_get_path(folder);
g_task_return_prefixed_error(task, g_steal_pointer(&error),
"Error copying folder %s: ", path);
return;
}
copy_folder_queue_async(g_steal_pointer(&task));
}
static void
copy_recursive_closure_free(CopyRecursiveClosure *ptr) {
g_object_unref(ptr->dest_folder);
g_queue_free_full(ptr->files_to_copy, g_object_unref);
g_queue_free_full(ptr->folders_to_copy, g_object_unref);
g_free(ptr);
}
You are welcome to take this code and customize it to your needs. I’m putting it into the public domain so hopefully nobody else has to go through this.
Although if you really want to, it could be improved by implementing progress callbacks like g_file_copy_async() has.
Just so you can understand what’s going on at a glance, here’s what it would look like in about 30 lines of JavaScript, with async/await style:
(This excludes the imports and calls to Gio._promisify that you would have to do; hopefully we’ll get native async operations in GNOME 48!)
[1] C++ before C++11 used to be a worse experience than C. However, I don’t have to deal with that because the three major JS engines use C++17. It’s … its own category of special, but better.
[2] No, ChatGPT couldn’t do it either; it made up GIO APIs that don’t exist. If that programming technique is on the table, then sure, it’d have been a lot easier.
I have resurrected my old camera raw thumbnailer so that I can browse
directories full of camera raw images in Nautilus. This is version
47.0.1, because GNOME 47 is out.
Like the old one, it uses libopenraw to extract the previews from the
raw files.
But, it now supports more raw formats, and if needed will render the
raw image to generate a preview, like it has to for my old Ricoh GR
Digital II images (the one from 2007). This leverages libopenraw
0.4.0 (still in alpha stage), which has been rewritten in Rust.
Sadly, to get it into the hands of users, the only good solution is a
distribution package. At the time of writing there is none, but I put
together something that allowed me to build a package for Fedora 40 to
install on my big rig.
If you feel like it you can download the source
code from
GNOME, and the
repository is
on GNOME gitlab.
This is how it looks with Nautilus 46 with a bunch of images from my
old Olympus E-P1:
Short update this month, which has been full of travels and new things.
I spent a few weeks in the UK. Most importantly I got to see Altin Gün in the Manchester Psych Festival, but I also visited family and friends, spent a while in the Codethink offices, and so on.
It’s been a month since I started using the Fairphone 5 so I wrote up a short review the other day.
My trip back from the UK was by train, it took over two days, and I posted a bunch of photos in a Mastodon thread. It all went relatively smoothly, much better than my last attempt where I missed the train in Irún through my own incompetence. (This time I rented a room on the same street as the train station to avoid any risk).
There’s still *plenty* of room for improvement to this journey on the part of the train operators. It’s hard to enjoy a twenty-minute metro ride across Paris during rush hour when you’re travelling with a huge rucksack. Very few organisations care to improve the state of cross-border rail. The French national train operator SNCF certainly don’t give a shit (and are even making cross-border travel harder), and while they do run night trains, all of them terminate at remote French border towns with poor connections into Spain. The Spanish train operator Renfe abandoned night trains completely in 2020, on the assumption that you can get to Madrid quickly from anywhere and that’s all that matters.
The European Sleeper company are still trying to introduce a night train across France to Barcelona, but SNCF don’t want them to. So we’ll see what happens. I’m happy that I have an alternative to flying to get to the UK, especially having come home to a Galicia where the sky is full of smoke from forest fires.
Loupe is GNOME’s default image viewer since GNOME 45. It is powered by the newly written safe image loading and editing library glycin.
What’s new in 47
With GNOME 47, Loupe version 47 is available as well. This release mostly consists of a lot of subtle changes. For JPEGs, the image rotation feature now writes the new orientation to the image file. While Loupe 46 was still defaulting to an older GTK renderer, the new version is using the same defaults as all other apps. Thanks to work by Benjamin Otte in GTK, Loupe now also handles very large images (larger than 256 megapixels) reliably on systems with limited VRAM while also increasing the loading speed.
Loupe and the underlying image loading and editing library glycin now support much better error reporting when an image cannot be loaded. The new glycin version uses a different decoder for JPEG images, improving loading speed and fixing all known compatibility issues. As part of my work on the GNOME STF grant, glycin now also provides bindings for programming languages other than Rust, including C, GJS, Python, and Vala. If necessary, glycin now automatically disables its sandbox features in Flatpak development environments, simplifying development.
What we are working on
But there is more! We have already merged the first GNOME 48 features, which will be released in March 2025. Allan Day worked on a new design for overlay controls, especially zoom. This is already implemented and merged. It allows the selection of zoom levels like 100% without using keyboard shortcuts like Ctrl+1 and additionally gives the option to select arbitrary zoom levels. There is also a new experimental design for dragging images into the Loupe window.
On the more technical side, Hubert Figuière has written an initial loader implementation for raw image formats which is now merged into glycin. Last but not least, I’m planning to finally have some initial image editing features beyond image rotation in Loupe 48. I’m currently working on all the basics and an image cropping feature.
A huge thanks goes out to everyone who contributed to this work including all the people that are kind enough to support my work financially! If you want to get weekly behind-the-scenes development updates or just support my work financially, you can do so via Patreon, Ko-Fi, GitHub, or OpenCollective.
Focus stealing prevention exists for two main reasons: One is security, since we need to prevent rogue apps from deceiving users into e.g. typing their password into another window. If apps can silently claim keyboard focus and open their own window over the currently focused one, this enables phishing and other similar attacks. The other is user experience: Even if an app isn’t maliciously taking over your focus, it can be annoying to have a new window popping up while you’re typing something and have half your sentence end up in the wrong app.
At the same time there are cases where you want apps to be able to request focus, for example when clicking a link in a chat app and wanting it to open in the browser. In this case you want the focus to move to the browser window.
This is why our compositor library mutter implements focus stealing prevention mechanisms, which allow the currently focused app to request that a specific other app be allowed to claim focus now.
<App> is ready??
Most users have probably seen an “<App> is ready” notification in GNOME Shell at some point. Unfortunately this notification doesn’t really explain why it’s being shown and what’s happening, which may cause confusion.
Because of this there have been proposals to disable focus stealing prevention until it works better (mutter issue 673), and there are a number of GNOME Shell extensions that disable it.
These are the main cases where the notification is shown:
An app requests focus for one of its windows, but was not activated in a valid way (e.g. because it wasn’t started by a user action)
An app requests focus for a new window, but it’s slow to start and in the meantime there are additional user interactions. In this case we don’t want to interrupt, and show the notification so people can switch at their convenience.
An app is launched from an environment that isn’t able to use the XDG Activation protocol (e.g. a terminal)
The protocol responsible for this, XDG Activation (the Wayland equivalent of the X11-specific startup notification spec), was introduced relatively recently (2020) and needs to be adopted by UI toolkits. GNOME 46 and 47 saw a few fixes and polish, both on the client side (GTK and xdg-desktop-portal) and in the compositor implementation (mutter), but there are still cases where XDG Activation isn’t hooked up properly.
How XDG activation works
XDG activation flow for moving focus between two existing windows
The way the protocol works is that the currently focused app asks the compositor to create a token linked to the focused window (Wayland surface) and the most recent user interaction (an input event serial associated with a seat).
This token is then used by the app that should receive focus when it requests to be activated. In GNOME Shell, activation means that the window receives focus and is placed on top of other windows. An activation token may still be rejected, for example if the window linked to the token doesn’t have focus or when the linked user interaction isn’t recent enough.
In addition to handling focus, GNOME Shell also tracks app launching. Until the new app window is actually shown, GNOME Shell uses a “loading spinner” mouse cursor to indicate to the user that the app is loading. If the app doesn’t implement the XDG Activation protocol, the loading indicator only disappears after a timeout because GNOME Shell doesn’t know that the application finished loading and has presented the target window.
The protocol doesn’t define how tokens are given to the target app. One reason for this is that it depends on how the app is started. The main options are:
Setting the XDG_ACTIVATION_TOKEN environment variable
D-Bus Activation using the platform-data field, which contains the activation token
XDG portals that will launch an app (e.g. the OpenURI or OpenFile portals)
The target app then needs to collect the token and use it to have its window activated to receive focus and to signal to the compositor that it started successfully.
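For illustration, here is a minimal sketch of the receiving side in a GTK4 app (GTK and GLib normally handle all of this automatically; this sketch assumes the token arrived via the environment variable):

#include <gtk/gtk.h>

static void
present_with_activation_token (GtkWindow *window)
{
  /* The launcher may have handed us a token via the environment */
  const char *token = g_getenv ("XDG_ACTIVATION_TOKEN");

  if (token != NULL)
    gtk_window_set_startup_id (window, token);

  /* GTK uses the startup id as the XDG Activation token when asking
     the compositor to focus and raise the window */
  gtk_window_present (window);
}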
Not smart enough
When I started looking into how our focus prevention mechanism works to investigate the issues mentioned above, I was initially pretty confused. There were a lot of cases where the focus window switch worked fine, but other times it wouldn’t. I quickly realized that with existing windows the “<App> is ready” notification is shown, but a new window would get focus immediately.
This struck me as odd: Why are new windows allowed to do whatever, but existing windows are restricted in the way they can take over focus?
I first thought this was some sort of bug, but then I discovered that the behavior was by design: Mutter has a gsettings property called focus-new-windows that controls the focus stealing prevention mechanism. This property can be strict or smart (the latter being the default).
smart means that in most cases new windows get focus (even without asking for it) and are raised to the top of the window stack
strict means they get focus (are “activated”, in technical terms) only when they are actually supposed to
The smart mode exists in part because there are some cases where our current focus prevention system does not work well. These issues include:
Launching apps via terminal (vte issue #2788). The main issue is that the terminal executing a command does not know whether that process will present a window or not. For example, if you launch vim there’s no new window, but if you launch firefox there is.
Launching apps via Run a Command in GNOME Shell (gnome-shell issue #7704) shares similar issues as running apps from the terminal
Apps launched via custom keyboard shortcut (e.g. set up in Settings > Keyboard > Keyboard Shortcuts)
The lack of implementation of the appropriate protocols in apps or toolkits
Because the cases where a new window is opened are a significant percentage of the overall cases where focus prevention is triggered, this smart mode is making it appear as though apps actually implement the XDG Activation protocol, even if they don’t. While it does somewhat reduce annoyance for users, it gives developers the false impression that they don’t have to do anything.
It also makes it harder to debug issues where something doesn’t work as expected or is missing the correct implementation. For example, even in GTK4 the focus transferring is broken in some cases and took a long time to be discovered (gtk issue #6711).
Security implications
Unfortunately the current situation with smart as the default means that we’re not getting most of the benefits of focus stealing prevention. Apps are able to spawn a new window over your current one and grab keyboard focus, because the smart mode just gives the new window focus, circumventing the safety measures. This is trivial to exploit by malicious apps: All they need to do is open a new window, and focus stealing prevention doesn’t apply.
Next steps
While some people have asked for focus stealing prevention to be disabled completely until it’s implemented by most apps and toolkits, I’m not sure this is the best way forward. If we did that, nobody would notice which apps don’t implement it, so there’d be no reason for toolkits to do so.
On the other hand, there are some remaining issues around terminal applications and similar use cases that we don’t have a plan for yet, so just switching to strict to flush out app bugs isn’t ideal either at the moment.
There is currently no consensus in the team as to how to proceed. The two main directions we could take are:
Switch to strict mode by default (mutter issue #3486) once a few remaining issues are resolved, perhaps with a “flag day” deadline so apps have time to implement it.
Slowly make the smart mode stricter over time.
Either way we need to raise more awareness of the issue to get app and toolkit developers interested in improving things in this area; this blogpost is a part of that effort.
It’d also be helpful if more people (especially developers) turn on strict mode on their system, so we get more testing for which apps work and which don’t. This is the relevant gsetting:
gsettings set org.gnome.desktop.wm.preferences focus-new-windows 'strict'
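To go back to the default behavior later:
gsettings reset org.gnome.desktop.wm.preferences focus-new-windows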
Thanks
Thanks to the Sovereign Tech Fund for allowing me to take the time to properly work through this as part of my broader effort around improving notifications. Thanks also to Sonny Piers and Tobias Bernard for organizing the STF project, Florian Müllner, Sebastian Wick, Carlos Garnacho, and the rest of the GNOME Shell team for reviewing my MRs, and Jonas Dreßler and Jonas Ådahl for reviewing the blogpost.
Update on what happened across the GNOME project in the week from September 13 to September 20.
This week we released GNOME 47!
This new major release of GNOME is full of exciting changes, including accent colours, better open/save dialogs, an improved Files app, better support for small screen sizes, new dialog styles, and much more! See the GNOME 47 release notes and developer notes for more information.
Readers who have been following this site will already be aware of some of the new features. If you’d like to follow the development of GNOME 48 (Spring 2025), keep an eye on this page - we’ll be posting exciting news every week!
The changes for Loupe 47 have mostly been subtle and in the background. But for Loupe 48, we are already full steam ahead of making a lot of more noticeable changes, including work on image editing. You can learn more in my latest blog post or even get weekly updates as a backer on Patreon or Ko-fi.
The new development cycle has started, so libadwaita now has toggle groups as a replacement for linked boxes of exclusive toggle buttons. Having a dedicated widget not only provides an easier-to-use API, but also allows a less ambiguous style that wouldn’t be possible with a generic box.
There’s also an inline view switcher using a toggle group. It works with AdwViewStack instead of GtkStack, and so AdwViewStack has an optional crossfade transition now, as it’s commonly needed in contexts where inline view switchers are used.
Meanwhile, the bottom bar in AdwBottomSheet can now be hidden, which may be useful for empty states in music players.
Additionally, James Westman added a property to add a banner to a preferences page, while Emmanuele Bassi added a few cubic bezier easing functions for AdwTimedAnimation.
GNOME Shell
Core system user interface for things like launching apps, switching windows, system search, and more.
Search activation is now delayed, which avoids spamming the search backends; this improves overall performance, reduces power consumption, and eliminates flickering in the interface.
The Event popover was reworked and redesigned. It introduces a padlock icon for read-only events, properly separates each section, and it wraps/ellipsizes text properly.
The about dialog was ported to AdwAboutDialog.new_from_appdata, which makes it easy to include release notes without extra effort. Starting from 47, release notes can be viewed directly in Calendar.
This week Binary was accepted into GNOME Circle. Binary makes working with numbers of different bases (e.g. binary, hexadecimal) a breeze. Congratulations!
The first version of Flood It has been released! It is a simple strategy game in which you need to flood the entire board with a single color in as few moves as possible.
Linux App Summit 2024 is two weeks away! This year’s conference will take place on Oct 4-5 in Monterrey, Mexico and all main track talks will be live-streamed for remote attendees. Registration is still open for both in-person and remote attendance, make sure to let us know how you plan to attend. More event details including the full talk schedule can be found on linuxappsummit.org.
The GNOME Asia 2024 Call for Participation is still open! If you would like to submit a talk or workshop for this year’s summit make sure to apply online by September 30. This year’s conference will take place in Bengaluru, India from Dec 6-8 and allow attendees and speakers to participate remotely. Learn more about GNOME Asia 2024.
The GNOME Foundation is searching for applicants for our open Executive Director position. We’ve extended the application deadline until September 25 and encourage qualified individuals who share our vision of promoting software freedom and innovation to apply. Learn more about the position and how to apply here.
That’s all for this week!
See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
The GNOME Project is proud to announce the release of GNOME 47, ‘Denver’.
This release brings support for customizable Accent Colors, improved support for small screens, persistent remote sessions, and new-style dialog windows. Like many other core apps, Files has received improvements and is now also used for the file open and save dialogs. Once again, a whole slew of new apps have joined the GNOME Circle initiative: find GNOME apps for anything from currency conversion to resource monitoring.
To learn more about the changes in GNOME 47 you can read the release notes:
GNOME 47 will be available shortly in many distributions, such as Fedora 41 and Ubuntu 24.10. If you want to try it today, you can look for their beta releases, which will be available very soon:
We are also providing our own installer images for debugging and testing features. These images are meant for installation in a VM and require GNOME Boxes with UEFI support. We suggest getting Boxes from Flathub.
If you are interested in building applications for GNOME 47, look for the GNOME 47 Flatpak SDK, which is available in the http://www.flathub.org/ repository.
This six-month effort wouldn’t have been possible without the whole GNOME community, made of contributors and friends from all around the world: developers, designers, documentation writers, usability and accessibility specialists, translators, maintainers, students, system administrators, companies, artists, testers, the local GUADEC team in Denver, and last, but not least, our users.
GNOME would not exist without all of you. Thank you to everyone!
We hope to see some of you at GNOME Asia 2024 in Bengaluru, India!
Our next release, GNOME 48, is planned for March 2025. Until then, enjoy GNOME 47.
I mentioned in my last status update post that I had just received a Fairphone 5. Here are my thoughts on it after a month of use.
The predecessor
For the last 4 years I’ve been using this as my main phone, the Oukitel WP5:
The WP5 was quite a device in 2020: for the retail price of 100€, I got a clean, up-to-date Android 10 OS, a battery that lasts a full week, and a metal case which allows the phone to double as a small battering ram.
However, taking a cue from the mainstream phone manufacturers, Oukitel never updated the OS beyond Android 10, making it a laughably insecure place to install bank apps these days. Also, in the last 4 years many Android teams somehow made their apps less efficient, so things that ran fine in 2020 are now frustratingly slow on the WP5.
The newcomer
When I was checking out the Fairphone 5 I saw various reviews and comments criticizing the camera and the loudspeaker. Having now used the phone for a month, I can guess that those commentators were not comparing the camera and loudspeaker against those on the Oukitel WP5, whose loudspeaker sounds like two bees buzzing around in a bucket.
For a mid range smartphone, the camera is excellent and the speaker sounds perfectly fine as well.
What’s good about the Fairphone 5?
After a month of regular use, I have no major complaints at all, everything about the phone is pretty good.
The battery lasts about two days under normal use, which is good enough for me. I deactivate location services unless actually navigating somewhere, which helps to save power (and avoids Google tracking my location all day). Fairphone’s own sales pitch claims one day of battery life but perhaps they tested with the GPS activated. The battery is actually swappable, unlike pretty much any other modern smartphone, so I did buy a second one but haven’t used it yet.
The Android 14 OS is great, everything works, with minimal crapware. There are regular updates, although the Linux kernel version is currently 5.4.242 from April 2023. I guess they are limited by the chipset manufacturer here as to what they can do, but I would like to see something that’s less than one year out of date.
It’s great to know that I could run postmarketOS on the Fairphone, if I wasn’t so attached to being able to make calls with my phone and use regular apps. Unfortunately, life in 2024 more or less requires various proprietary Android or iOS apps, unless you get a kick from accessing services in the least convenient way every time. Hopefully we’ll make some progress on this as a society over the next 10 years.
Talking of which, the “elevator pitch” of the Fairphone 5 is that it’s manufacturer supported for the next 10 years, which is way more than companies such as Apple, Google or Samsung are willing to provide for their phones. The 700€ price tag is much more reasonable if you figure that it’s 70€ a year. Let’s see how the phone holds up into the 2030s, but this is a huge selling point for me.
What’s not so good about the Fairphone 5?
I’m happy using this as my main phone but a few things piss me off. Firstly, it’s slightly too big. Older phones are just a much more comfortable size unless you have huge monkey paws. My hands aren’t even that small. This seems to be a problem in the wider mobile phone industry though. Anyway, soon enough phones will go back to being small enough that we can accidentally swallow them, as was foreseen in Futurama.
There is a pricey protective case available from Fairphone, which bizarrely does not have any bevel to protect the screen when dropped on the floor. As a person who spends the day just dropping my stuff onto the floor repeatedly, I took the time to research alternative cases and got a more practical one off Aliexpress for a fraction of the price.
The fingerprint reader is built into the power button, which has some advantages, but it’s a lot more finicky about reading fingerprints than the fingerprint reader on the back of the WP5. Also, if you’re left handed then there isn’t a comfortable way to unlock the phone with one hand. Again, I think many modern phones have this issue, not just the Fairphone 5.
Finally, it’s great that the phone has dual SIM support, which is pretty much a must-have when your life is split between two different countries, but I only noticed on receiving the phone that the 2nd SIM slot is not a physical slot but an eSIM. I can get an eSIM from my Spanish phone provider but so far I’ve been too lazy to do that, or let’s just say I have higher priorities, so I still have one SIM card in the old phone. Probably a sensible design choice on the part of Fairphone but something to be aware of.
Should I buy a Fairphone 5?
If you’re looking for a good quality, mid range ethical phone, and you’re in Europe, then I can certainly recommend it. Consider that it works out at 70€ per year over 10 years, so it’s not really fair to compare it to other phones in the 700€ price range that will be declared obsolete by the manufacturer within 3 or 4 years.
It’s not as mind-blowing as the latest iPhone and it’s not as cheap as a 70€ no-brand phone from Aliexpress, but I think it stands up on its own terms, even before you consider that Fairphone are doing more than any other company to avoid child slavery and conflict minerals during the phone production, and that there are significant CO₂ emissions associated with buying a new smartphone vs. keeping the same one running for a decade.
So far I’m very impressed with what Fairphone have managed to achieve here. Hopefully they aren’t too far away from world domination. Meanwhile, who is up for adding support in postmarketOS for the old Oukitel WP5?
The problem with plymouth and AMD GPUs is that the amdgpu driver is a really, really big driver, which can easily take up to 10 seconds to load on older PCs. The delay caused by this may cause plymouth to time out while waiting for the GPU to be initialized, causing it to fall back to the 3-dot text-mode boot splash.
There are two workarounds for this, depending on the PC’s configuration:
1. With older AMD GPUs the radeon driver is actually used to drive the GPU, but the amdgpu driver still loads even though it is unused, slowing things down.
To check if this is the case for your PC, start a terminal in a graphical login session and run "lsmod | grep -E '^radeon|^amdgpu'". This will output something like this:
amdgpu   17829888  0
radeon    2371584  37
The second number after each driver is its usage count. As you can see in this example, the amdgpu driver is not used. In this case you can disable loading of the amdgpu driver by adding "modprobe.blacklist=amdgpu" to your kernel commandline:
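On Fedora-style systems this can be done with grubby, for example (other distributions have their own mechanisms for editing the kernel commandline):
sudo grubby --update-kernel=ALL --args="modprobe.blacklist=amdgpu"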
2. If the amdgpu driver is actually used on your PC, then plymouth not showing can be worked around by telling plymouth to use the simpledrm drm/kms device created from the EFI framebuffer early in boot, rather than waiting for the real GPU driver to load. Note this depends on your PC booting in EFI mode. To do this run:
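The exact command isn't preserved here, but it presumably boils down to adding the plymouth.use-simpledrm option to the kernel commandline, e.g. with grubby:
sudo grubby --update-kernel=ALL --args="plymouth.use-simpledrm"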
This is a regular meson package and can be installed the usual way.
# Configure project in _build directory
meson setup --wipe --prefix=~/.local _build .
# Build and install in ~/.local
ninja -C _build install
How to use it
To add a Wayland compositor to your application, all you have to do is create a CasildaCompositor widget. You can specify which UNIX socket the compositor will listen on for client connections, or let it choose one automatically.
compositor = casilda_compositor_new ("/tmp/casilda-example.sock");
gtk_window_set_child (GTK_WINDOW (window), GTK_WIDGET (compositor));
Once the compositor is running you can connect to it by specifying the socket in the WAYLAND_DISPLAY environment variable.
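For example, assuming a Wayland-capable client such as gtk4-demo is installed:
GDK_BACKEND=wayland WAYLAND_DISPLAY=/tmp/casilda-example.sock gtk4-demo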
Last cycle wasn’t particularly exciting, only featuring the new dialogs and a few smaller changes, but this one should be more interesting. So let’s look at what’s new.
Bottom sheet
Last cycle libadwaita got new dialogs, which can be presented as bottom sheets on mobile, and I mentioned that they will also be available as a standalone widget in future – so AdwBottomSheet exists and is public now.
As a standalone widget, bottom sheets work a bit differently from dialogs – they are persistent instead of being destroyed upon closing, more like the sidebar of AdwOverlaySplitView.
They also have a few new features, such as a drag handle, or a bottom bar presentation. This is useful for apps like music players.
AdwHeaderBar also integrates with bottom sheets – it hides the title when used in a bottom sheet with a drag handle.
Spinner
Libadwaita also has a new spinner widget – AdwSpinner. It both refreshes visuals and addresses various problems with GtkSpinner.
GtkSpinner is a really simple widget. Both the spinner itself and the animation are set in CSS. The spinner is just a symbolic icon, and the animation is a CSS animation. This approach has a few problems, however.
First, the old spinner has a gradient. Symbolic icons don’t actually support gradients, so it has to resort to dithering, as Jakub Steiner explained in his blog a few years ago. This works well if the spinner is small enough (16×16 – 32×32), but becomes very noticeable at larger sizes. This means that the spinner didn’t work well for loading screens or status pages.
Meanwhile, CSS animations are entirely disabled when system animations are off. Usually that makes sense, except here it means the spinner freezes, defeating the entire point of having it (indicating that the app isn’t frozen during long operations).
And, while CSS animations are pretty sophisticated, you can only do so much with a single element – so it’s literally a spinning icon. elementary OS does a more interesting thing – it spins it in steps, while the icon consists of 12 dashes, so it looks like they change color instead. Even then, more complex animations are impossible.
AdwSpinner avoids all of these issues. Since it’s in libadwaita and not in GTK, it can be more opinionated with regard to styling, so instead of using an icon and CSS, it’s just custom drawing. And since it’s not using CSS animations, it can keep spinning with animations off, and can animate in a more involved way than a simple spinning icon.
It still has a size limit – 64×64 pixels. While it can scale further, we don’t really need larger sizes, and capping the size makes it easier to use – to make a loading screen using GtkSpinner, you have to set the :halign and :valign properties to CENTER, as well as the :width-request and :height-request properties to 32. If you fail to do these steps, the spinner will be either too large or too small, respectively.
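For reference, this is roughly the boilerplate being described (a sketch using plain GTK4 API):

GtkWidget *spinner = gtk_spinner_new ();

gtk_widget_set_halign (spinner, GTK_ALIGN_CENTER);
gtk_widget_set_valign (spinner, GTK_ALIGN_CENTER);
gtk_widget_set_size_request (spinner, 32, 32);
gtk_spinner_start (GTK_SPINNER (spinner));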
Meanwhile if you just put an AdwSpinner into a large bin, it will look right by default.
Oh, and GtkSpinner is invisible by default and you have to set the :spinning property to true as well. This made sense back in the age of foot and dinosaur spinners, where the spinner would stay in place when not animating, but that’s not really a thing anymore.
(though Nautilus wasn’t actually using GtkSpinner)
It also didn’t help that until this cycle, GtkSpinner would continue to consume CPU cycles even when not visible if the :spinning property is left enabled, so you had to start the spinner in the ::map signal and stop it in ::unmap. That is fixed now, but it was a major source of lag in, say, Epiphany in the past (which had a spinner in every tab, another spinner in every mobile tab switcher row and another one in the floating bar that shows URLs on hover, copied from Nautilus).
Spinner paintable
In addition to AdwSpinner, there’s also AdwSpinnerPaintable. It can be used with GtkImage, any other place that accepts paintables (such as status pages), or just drawn manually. It is a bit more awkward to use than the widget, as it needs to reference another widget in order to animate (since paintables cannot access the frame clock on their own), but it allows using spinners in contexts that wouldn’t be possible otherwise.
AdwStatusPage even has a special style for spinner paintable – similar to the .compact style, but applied automatically.
Button row
Another widget we have now is AdwButtonRow – a list row that looks more or less like a button. It has a label, optionally icons on either side, and can use destructive and suggested style classes.
This pattern isn’t new – it has been used in mockups for a while (at least as early as 2021) – but it varied quite a bit between different mockups and implementations and so having a standard widget for it wasn’t viable. This cycle Jamie Gravendeel and kramo took time to standardize the existing designs into a tangible proposal – so it exists as a standard widget now.
Most of the time these rows aren’t meant to be linked together, so AdwPreferencesGroup has a new property :separate-rows. When enabled, the rows within will appear separately. This is mostly useful for button rows, but also e.g. entry rows. When not using AdwPreferencesGroup, the same effect can be achieved by using the .boxed-list-separate style class instead of .boxed-list.
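A rough sketch of how this fits together (function names as per the libadwaita 1.6 docs, so treat it as illustrative):

GtkWidget *group = adw_preferences_group_new ();
GtkWidget *row = adw_button_row_new ();

adw_preferences_row_set_title (ADW_PREFERENCES_ROW (row), "Delete Account");
gtk_widget_add_css_class (row, "destructive-action");

/* Show the rows separately instead of linked together */
adw_preferences_group_set_separate_rows (ADW_PREFERENCES_GROUP (group), TRUE);
adw_preferences_group_add (ADW_PREFERENCES_GROUP (group), row);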
Multi-layout view
Libadwaita 1.4 introduced AdwBreakpoint, which made it easy to set properties on window size changes. However, a lot of apps need layout changes that can’t be expressed via simple properties – say, switching between a sidebar and a bottom sheet. While it is possible to do this programmatically anyway, it’s fairly involved and not a lot of apps went to those lengths.
Back then I also mentioned a future widget for automatically reparenting children between different layouts, and now it’s finished and available for use as AdwMultiLayoutView.
It has changed somewhat since the prototype, e.g. it doesn’t dynamically create or destroy layouts anymore, just parents/unparents them, but the gist is still the same:
Put one or more AdwLayoutSlot into each layout, give them IDs
Define children matching those IDs
Then those children will be placed into the slots for the current layout. When you switch the layout, they will be reparented into slots from that layout instead.
So now it’s possible to define completely different layouts for desktop and mobile entirely via UI files.
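The same can also be done programmatically; here is a rough C sketch (API names as per the libadwaita 1.6 docs for AdwMultiLayoutView; treat it as illustrative rather than canonical):

GtkWidget *view = adw_multi_layout_view_new ();

/* A "wide" layout: slots placed inside a split view */
GtkWidget *split = adw_overlay_split_view_new ();
adw_overlay_split_view_set_sidebar (ADW_OVERLAY_SPLIT_VIEW (split),
                                    adw_layout_slot_new ("sidebar"));
adw_overlay_split_view_set_content (ADW_OVERLAY_SPLIT_VIEW (split),
                                    adw_layout_slot_new ("content"));

AdwLayout *wide = adw_layout_new (split);
adw_layout_set_name (wide, "wide");
adw_multi_layout_view_add_layout (ADW_MULTI_LAYOUT_VIEW (view), wide);

/* A second, e.g. mobile, layout would be added the same way. Children
   are defined once and reparented into the current layout's slots */
adw_multi_layout_view_set_child (ADW_MULTI_LAYOUT_VIEW (view), "sidebar", sidebar_widget);
adw_multi_layout_view_set_child (ADW_MULTI_LAYOUT_VIEW (view), "content", content_widget);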
CSS variables and colors
I’ve already talked about this in a lot of detail in my last blog post, but GTK has a lot of new CSS goodies, and libadwaita 1.6 makes full use of them.
Libadwaita now provides CSS variables for all of its old named colors, with a docs page to go with it, as well as new variables: --dim-opacity, --disabled-opacity, --border-opacity and --window-radius.
This also allowed matching the focus ring color on .destructive-action buttons, as well as matching accent colors for the .error, .warning and .success style classes. And because overriding the accent color for a specific widget is now possible, the .opaque button style class has been deprecated in favor of overriding accent colors on .suggested-action. Meanwhile, the white accent color of .osd is now more reliable and automatically works for custom widgets, instead of trying (and often failing) to manually override it for every standard widget.
I mentioned that it might be possible to generate standalone accent/error/etc colors from their respective background colors. However, the question was how to make that automatic, so at the time we didn’t actually integrate that. Now it is integrated, though it’s not completely automatic – only for :root.
Specifically, there’s a new variable: --standalone-color-oklab, corresponding to the correct color transformation for the current style.
So, when overriding accent color for a specific widget, there is a bit of boilerplate to copy:
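The snippet isn't reproduced here verbatim, but based on the description it looks something like this (the widget selector and the chosen accent are placeholders):

my-widget {
  --accent-bg-color: var(--accent-purple);
  --accent-color: oklab(from var(--accent-bg-color) var(--standalone-color-oklab));
}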
It’s still an improvement over calculating the color manually, both for light and dark styles (which a lot of apps didn’t do at all, resulting in poor contrast), so still worth it. Maybe one day we’ll be able to make it completely automatic – e.g. by ensuring that using variables with wildcards doesn’t regress performance.
Another big feature is system accent color support. While it’s not a strictly libadwaita change, this is the developer-facing part, so it makes sense to talk about it here.
Behind the scenes it’s using the settings portal, which provides a standardized key for the system accent color. Many other environments support it as well, so libadwaita apps will follow their accent color preferences too, while non-GNOME apps that follow the preference will follow it on GNOME too. Note that while the portal exposes arbitrary sRGB colors, libadwaita will pick the closest color from a list of nine predefined colors. This is done in the Oklch color space, mostly based on hue, so it should work even for really dull colors.
Accent colors are also supported when running on Windows and macOS, and like with the color scheme and high contrast, the libadwaita page in GTK inspector allows to toggle the system accent color now.
Apps are still free to set their own accent color. CSS always takes priority over the system accent.
A lot of people helped push this over the finish line, with particular thanks to Jamie Murphy, kramo and Jamie Gravendeel.
API
AdwStyleManager provides new properties for fetching the system color – :accent-color and :accent-color-rgba, as well as :system-supports-accent-colors for querying whether the system has an accent color preference – same as for color scheme.
The :accent-color property returns a color from the AdwAccentColor enum, so that individual colors can be special cased (say, when using bitmap assets). This color can be converted both to background color RGBA (using adw_accent_color_to_rgba()) and to standalone color (adw_accent_color_to_standalone_rgba()).
All of these colors use white foreground color, so there’s no API for fetching it, at least for now.
Note that :accent-color-rgba will still return the system color even if the app overrides its accent color using CSS. It only exists for convenience and is equivalent to calling adw_accent_color_to_rgba() on the :accent-color value.
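Putting the API together, a small sketch (names as per the libadwaita 1.6 docs):

AdwStyleManager *manager = adw_style_manager_get_default ();

if (adw_style_manager_get_system_supports_accent_colors (manager)) {
    AdwAccentColor accent = adw_style_manager_get_accent_color (manager);
    GdkRGBA bg;

    adw_accent_color_to_rgba (accent, &bg);
    /* ... use bg in custom drawing ... */
}

/* Queue a redraw of custom drawing whenever the accent changes */
g_signal_connect_swapped (manager, "notify::accent-color",
                          G_CALLBACK (gtk_widget_queue_draw), my_widget);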
While we still don’t have a general replacement for deprecated gtk_style_context_lookup_color(), the new accent color API can replace at least some of its uses.
On CSS side, there are new variables corresponding to each accent color: --accent-blue for blue and so on. Additionally, every system color, along with their standalone colors for both light and dark, is documented and can be used as a reference.
Destructive buttons
Having an accent color that’s not always blue means having to rethink other style choices. In particular, .destructive-action buttons were just a red version of .suggested-action, the same as in GTK3. This was already questionable from an accessibility perspective, but breaks entirely with accent colors, since suggested buttons would look exactly the same as destructive ones with a red accent. And so .destructive-action has a distinct style now, less prominent than suggested.
Alert dialogs
Old and new alert dialogs side by side
Another area that needed updates was AdwAlertDialog – it was also using color for differentiating suggested and destructive buttons.
Coincidentally, the alert dialog style went almost unchanged from GTK3 days, and looked rather out of place with the rest of the platform. So kramo came up with an updated design.
AdwMessageDialog and GtkAlertDialog received the same style, or at least an approximation – it’s not possible to replicate it entirely in GTK dialogs. Even though neither is recommended for use (when using libadwaita, anyway – nothing wrong with using GtkAlertDialog in plain GTK), regressing apps that aren’t fully up to date with the platform wouldn’t be very good.
Adapting apps
Accent colors are supported automatically, and in most cases apps don’t need any changes to make use of them. However, here’s a checklist to ensure it works well:
Make use of the accent color variables in custom CSS, like --accent-bg-color. Using the old named colors like @accent_bg_color works as well. Don’t assume accent color will be blue.
Conversely, don’t use accent color when you mean blue. We have variables like --blue-3 for that – or even --accent-blue.
When using accent color in custom drawing (say, drawing a graph), make sure to redraw it when AdwStyleManager:accent-color value changes – same as for color scheme and high contrast.
Deprecations
Last cycle we introduced new dialog widgets that are based on AdwDialog rather than GtkWindow. However, that happened right at the end of the cycle, without giving apps a lot of time to port their existing dialogs. Because of that, the old widgets (AdwMessageDialog, AdwPreferencesWindow, AdwAboutWindow) weren’t deprecated and I mentioned that they will be deprecated in future instead. So, they are now.
If you haven’t migrated to the new dialogs yet, see the migration guide for how to do so.
Other changes
As always, there are smaller changes that don’t warrant their own sections, so let’s look at those:
AdwWindow and AdwApplicationWindow now have a default minimum size (360×200 px), meaning you don’t have to set it manually to use breakpoints or dialogs anymore. Apps can still override it if they need a different size, but it works out of the box now.
GtkTextView now supports the .inline style class, removing its background and resetting its foreground color. This allows using it in contexts like cards.
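For example:

GtkWidget *text_view = gtk_text_view_new ();

/* Remove the default background so the text view blends into a card */
gtk_widget_add_css_class (text_view, "inline");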
Future
As usual, there are changes that didn’t make it this cycle and will land the next cycle instead. Most notably, the old toggle groups branch by Maximiliano is finally finished and will land early next cycle.
Big thanks to STF for funding a lot of this work (GTK CSS improvements, bottom sheets, finishing multi-layout view and toggle groups, general maintenance), as well as people organizing the initiative and all contributors who made this release happen.
The GNOME Foundation is excited to announce that we have officially opened the search for a new Executive Director. This is an exciting time for our organization as we seek a dynamic leader to guide us into the future, continuing our mission to foster the growth of GNOME and the wider free software community.
As the cornerstone of our leadership team, the Executive Director will play a critical role in shaping the strategic direction of the Foundation, working closely with staff, community members, and partners to expand our reach and impact. The ideal candidate will have professional experience working with nonprofits, a strong passion for open-source software, a deep commitment to our community values, and the vision to drive the next phase of GNOME’s growth and development.
The position offers a unique opportunity to lead a pivotal project in the open-source ecosystem, collaborating with a global network of contributors and partners. Interested candidates can find more details on the role and how to apply on our careers page.
We encourage qualified individuals who share our vision of promoting software freedom and innovation to apply. We are looking forward to finding the next Executive Director who will carry forward the mission of the GNOME Foundation, driving positive change within the tech world and beyond.
A couple of weeks ago, I went to see Alien: Romulus. While many of my friends were disappointed, I actually enjoyed it. In fact, it exceeded my expectations — mainly because I didn’t expect much! :)
Fede Alvarez delivered exactly what producer Ridley Scott asked of him, leaning heavily on the nostalgia of the original masterpiece while skirting the edge of a reboot. The world of Prometheus wasn’t ignored, but he purposely avoided referencing it too deeply.
The dystopian world of corporate feudalism set a tone even darker than the original, to the point where the xenomorph didn’t seem like the worst thing that could happen. I’m still holding out hope for 90-minute movies as the gold standard, but the two-hour runtime was manageable—though my aging buttocks may disagree. The slow-burn first act was actually the most enjoyable part, as that’s where the fresh world-building took center stage. Even as the familiar plot unfolded, Alvarez delivered memorable suspense and action scenes.
Of course, it’s never going to feel the same as seeing Alien or Aliens as a teenager. I can’t fully dive into my minor criticisms without spoilers, but let’s just say the movie understood that “less is more” — except in one area. Other than that, Alien: Romulus proved that going to the movies can still be a pretty great experience.
I wrote a post about so-called "M type" and "S type" processes in software development. Unfortunately it discusses the concept of human sexuality. Now, just to be sure, it does not have any of the "good stuff", as the kids might say. Nonetheless this blog is syndicated in places where such topics might be considered controversial or even unacceptable.
Thus I can't really post the text here. Those of you who are of legal age (whatever that means in your jurisdiction) and out of their own free will want to read such material, can access the PDF version of the article via this link.
As we're now approaching mid-September and the autumn release of GNOME, a new release of Maps for GNOME 47 is approaching as well.
Switch to Vector-based Map
The biggest change since the last release is that we now use the vector-based map by default, and the old raster map has been retired. We wanted to move forward with things like enabling, and relying on, clickable POIs directly in the map view, so we could remove the old tedious “What's here?” context menu, which performed a reverse geocoding to get details about a place (and was also a bit hit-and-miss with regards to how close the actual result was to where you pointed).
Apart from this, other benefits we get (already mentioned in earlier posts) are localized names (when tagged in OpenStreetMap) and, finally, a proper dark mode with our new GNOME map style.
Light (default) theme variant of the map in 47
Dark theme variant of the map in 47
Redesigned Search Bar
The “explore” button, which opens the search for nearby POIs by category, has now been integrated into the search entry. This avoids a theme issue with the linked-button style, where the rounded corners disappear on the “other side” while the results popover is showing.
Search bar with explore button
This also looks a bit sleeker I think…
Improved Public Transit Routing
Public transit routing is now using the Transitous project (https://transitous.org) to provide transit routing for new regions. And as this is a crowd-sourced initiative, you can also help out by adding missing GTFS feeds to the project, and they should automatically get supported by Maps.
For the time being, the regions we already support via third-party APIs (such as Resrobot for Sweden, and OpenData.ch for Switzerland) will still use those providers to avoid possible regressions. It also gives us some leeway to improve MOTIS (the backend used by Transitous). The implementation in Maps also lacks e.g. support for specifying via locations.
Showing some travel itinerary options in Prague
Showing a sample of an itinerary from Lund, Sweden to Hamburg, Germany
Showing a sample of an itinerary in Denver, Colorado
Some changes had to be made to our internal shield rendering library (we couldn't use the OSM Americana implementation directly, as we had to implement ours using Cairo rendering and so on) to support the new convenience shortcut for a “pill” shield shape, and also to allow defining hard-coded route references (“ref”) directly in the shield definition rather than getting them from the tile data.
Rochester Inner Loop in Rochester, New York, using a fixed “LOOP” reference label
And on that note, a funny bug I discovered during testing: we currently always assume the “pointsUp” attribute in shield definitions defaults to true when not explicitly set, while the OSM Americana code uses different defaults depending on the shape that follows. Specifically, for the “fishhead” shape it should actually be false when not set. It seems a bit odd to assume different defaults depending on following JSON elements, but…
Highway shields in New Zealand
It was a bit funny that I discovered this bug while “browsing around” in New Zealand, considering Northern Hemisphere-centric jokes about people “Down Under” walking upside-down 😀. But this actually also affects some highways in the US…
I guess this should be fixed before the 47.0 release.
The WebRTC nerds among us will remember the first thing we learn about WebRTC, which is that it is a specification for peer-to-peer communication of media and data, but it does not specify how signalling is done.
Or put more simply, if you want to call someone on the web, WebRTC tells you how you can transfer audio, video and data, but it leaves out the bit about how you make the call itself: how do you locate the person you’re calling, let them know you’d like to call them, and complete a few further steps before you can see and talk to each other.
WebRTC signalling
While this allows services to provide their own mechanisms to manage how WebRTC calls work, the lack of a standard mechanism means that general-purpose applications need to individually integrate each service that they want to support. For example, GStreamer’s webrtcsrc and webrtcsink elements support various signalling protocols, including Janus Video Rooms, LiveKit, and Amazon Kinesis Video Streams.
However, having a standard way for clients to do signalling would help developers focus on their application and worry less about interoperability with different services.
(author’s note: the puns really do write themselves :))
As the names suggest, the WHIP (WebRTC-HTTP Ingestion Protocol) and WHEP (WebRTC-HTTP Egress Protocol) specifications provide a way to perform signalling using HTTP. WHIP gives us a way to send media to a server, to ingest into a WebRTC call or live stream, for example.
Conversely, WHEP gives us a way for a client to use HTTP signalling to consume a WebRTC stream – for example to create a simple web-based consumer of a WebRTC call, or tap into a live streaming pipeline.
WHIP and WHEP
With this view of the world, WHIP and WHEP can be used both for calling applications, but also as an alternative way to ingest or play back live streams, with lower latency and a near-ubiquitous real-time communication API.
We know GStreamer already provides developers two ways to work with WebRTC streams:
webrtcbin: provides a low-level API, akin to the PeerConnection API that browser-based users of WebRTC will be familiar with
webrtcsrc and webrtcsink: provide high-level elements that can respectively produce/consume media from/to a WebRTC endpoint
At Asymptotic, my colleagues Tarun and Sanchayan have been using these building blocks to implement GStreamer elements for both the WHIP and WHEP specifications. You can find these in the GStreamer Rust plugins repository.
Our initial implementations were based on webrtcbin, but have since been moved over to the higher-level APIs to reuse common functionality (such as automatic encoding/decoding and congestion control). Tarun covered our work in a talk at last year’s GStreamer Conference.
Today, we have four elements implementing WHIP and WHEP.
Clients
whipclientsink: This is a webrtcsink-based implementation of a WHIP client, using which you can send media to a WHIP server. For example, streaming your camera to a WHIP server is as simple as:
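The pipeline itself isn't preserved here, but it would look something like this (the endpoint URL is a placeholder; the signaller::whip-endpoint property follows the gst-plugins-rs documentation):

gst-launch-1.0 v4l2src ! videoconvert ! whipclientsink signaller::whip-endpoint="https://example.com/whip/room1"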
whepclientsrc: This is work in progress and allows us to build player applications to connect to a WHEP server and consume media from it. The goal is to make playing a WHEP stream as simple as:
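Again, a sketch with a placeholder endpoint (the element is still in progress, so details may change):

gst-launch-1.0 whepclientsrc signaller::whep-endpoint="https://example.com/whep/room1" ! videoconvert ! autovideosink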
The client elements fit quite neatly into how we might imagine GStreamer-based clients could work. You could stream arbitrary stored or live media to a WHIP server, and play back any media a WHEP server provides. Both pipelines implicitly benefit from GStreamer’s ability to use hardware-acceleration capabilities of the platform they are running on.
GStreamer WHIP/WHEP clients
Servers
whipserversrc: Allows us to create a WHIP server to which clients can connect and provide media, each of which will be exposed as GStreamer pads that can be arbitrarily routed and combined as required. We have an example server that can play all the streams being sent to it.
whepserversink: Finally we have ongoing work to publish arbitrary streams over WHEP for web-based clients to consume this media.
The two server elements open up a number of interesting possibilities. We can ingest arbitrary media with WHIP, and then decode and process, or forward it, depending on what the application requires. We expect that the server API will grow over time, based on the different kinds of use-cases we wish to support.
GStreamer WHIP/WHEP server
This is all pretty exciting, as we have all the pieces to create flexible pipelines for routing media between WebRTC-based endpoints without having to worry about service-specific signalling.
If you’re looking for help realising WHIP/WHEP based endpoints, or other media streaming pipelines, don’t hesitate to reach out to us!
GNOME is interested in participating in the Outreachy December-March cohort, and while we already have a few great projects, we are looking for experienced mentors with a couple more project ideas. Hurry up, we have until September 11 to conclude our list of ideas.
As of today, Mutter will style legacy titlebars (i.e. of X11 / Xwayland apps that don’t use client-side decorations) using Adwaita on GNOME.
Shadows match the Adwaita style as well, including shadows of unfocused windows. These titlebars continue to follow the system dark and light mode, even when apps don’t.
Should make using legacy apps a little less unpleasant.
We have finally reached the final week of GSoC. It has been an amazing journey! Let’s summarize what was done, the current state of the project and what’s next.
Introduction
This summer, I had the opportunity to work as a student developer under the Google Summer of Code 2024 program with the GNOME Community. I focused on creating a web-based Integrated Development Environment (IDE) specifically designed for writing and executing SPARQL queries within TinySPARQL (formerly Tracker).
This user-friendly interface empowers developers by allowing them to compose and edit multiline SPARQL queries directly in a code editor, eliminating the need for the traditional terminal approach. Once a query is written, it can be easily executed via the HTTP SPARQL endpoint, and the results will be displayed in a visually appealing format, enhancing readability and user experience.
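As a sketch of what that HTTP interaction looks like, a query can be sent to the endpoint with curl (the host, port, and /sparql path here are illustrative, following the standard SPARQL protocol):

curl -X POST http://localhost:8080/sparql \
     --data-urlencode "query=SELECT ?s WHERE { ?s a rdfs:Resource } LIMIT 5"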
By lowering the barrier to entry for newcomers, boosting developer productivity with visual editing, and fostering collaboration through easier query sharing, this web IDE aims to significantly improve the experience for those using libtracker-sparql to interact with RDF databases.
I would like to express my sincere gratitude to my mentors, Carlos and Sam, for their guidance and support throughout the internship period. Their expertise was invaluable in helping me navigate the project and gain a deeper understanding of the subject matter. I would also like to thank my co-mentee Rachel, for her excellent collaboration and contributions to making this project a reality and fostering a fast-paced development environment.
I’m excited to announce that as the internship concludes, we have a functional web IDE that enables users to run SPARQL queries and view the results directly in their web browser. Here is the working demo of the web IDE that was developed from scratch in this GSoC Project.
Working of TinySPARQL Web IDE
What was done
This project was divided into two primary components: the backend C code, which enabled the web IDE to be served and run from the command line, and the frontend JavaScript code, which enhanced the web IDE’s visual appeal and added all user-facing functionalities. I primarily focused on the backend C side of the project, while Rachel worked on the frontend. Therefore, this blog post will delve into the backend aspects of the project. To learn more about the frontend development, please check out Rachel’s blog.
The work done by me, could be divided into three major phases:
Pre-Development Phase
During the pre-development phase, I focused on familiarizing myself with the existing codebase and preparing it for easier development. This involved removing support for older versions of libraries, such as Libsoup.
TinySPARQL previously supported both Libsoup 2 and Libsoup 3 libraries, but these versions had different function names and macros.
This compatibility requirement could significantly impact development time. To streamline the process, we decided to drop support for Libsoup 2.
The following merge requests document the work done in this phase:
In this phase, I extended the HTTP endpoint exposed by the tinysparql endpoint command to also serve the web IDE. The goal was to enable the endpoint to serve HTML, CSS, and JavaScript files, in addition to RDF data. This was a crucial step, as frontend development could only begin once the basic web IDE was ready.
During this phase, the HTTP module became more complex. To aid in debugging and diagnosing errors, we added debugging functionality. By running with TRACKER_DEBUG=http, one can now view logs of all GET and POST requests, providing valuable insight into the HTTP module’s behavior.
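For example, an endpoint can be served with request logging enabled like this (flags as per the tinysparql CLI; check tinysparql endpoint --help):

TRACKER_DEBUG=http tinysparql endpoint --ontology nepomuk --http-port 8080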
The following merge requests document the work done in this phase:
The web IDE added significant size (around 800KB-1MB) to the libtracker-sparql library. Since not all users might need the web IDE functionality, we decided to separate it from libtracker-sparql. This separation improves efficiency for users who won’t be using the web IDE.
To achieve this isolation, we implemented a dedicated subcommand tinysparql webide for the web IDE, allowing it to run independently from the SPARQL endpoint.
Here’s a breakdown of the process:
Isolating HTTP Code: I started by extracting the HTTP code from libtracker-sparql into a new static library named libtracker-http. This library contains the abstraction TrackerHttpServer over the Libsoup server, which can be reused in the tinysparql webide subcommand.
Creating a Subcommand: Following the isolation of the web IDE into its own library and the removal of the relevant gresources from libtracker-sparql, we were finally able to create a dedicated subcommand for the web IDE. As a result, the size of libtinysparql.so.0.800.0 has been reduced by approximately 700KB.
The following merge requests document the work done in this phase:
This is the web IDE we developed during the internship. Check out this demo video to see some of the latest changes in action.
Screenshots: the TinySPARQL web IDE; a SPARQL query successfully executed; error handling.
Future Work
Despite having a functional web IDE and completing many of the tasks outlined in the proposal (even exceeding the original scope due to the collaborative efforts of two developers), there are still areas for improvement.
I plan to continue working on the web IDE in the future, focusing on the following enhancements:
Multi-Endpoint Support: Implement a mechanism for querying different SPARQL endpoints. This could involve adding a text box input to the frontend for dynamically entering endpoint URLs or providing a connection string option when creating the web IDE instance from the command line.
Unified HTTP Handling: Implement a consistent HTTP handler for all cases, allowing TrackerEndpointHttp to handle requests both inside and outside the /sparql path.
SPARQL Extraction: Extract the SPARQL query from POST requests in TrackerEndpointHttp or pass the raw POST data in the ::request signal, enabling TrackerEndpointHttp to determine if it contains a SPARQL query.
Avahi Configuration: Move the Avahi code for announcing server availability or assign TrackerEndpointHttp responsibility for managing the content and type of broadcasted data.
CORS Configuration: Make CORS settings configurable at the API level, allowing for more granular control and avoiding the default enforcement of the * wildcard.
GUADEC Experience
One of the highlights of my GSoC journey was the opportunity to present my project at GUADEC, the annual GNOME conference. It was an incredible experience to share my work with a diverse audience of developers and enthusiasts. Be sure to check out our presentation on the TinySPARQL Web IDE, delivered by Rachel and me at GUADEC.
Final Remarks
Thank you for taking the time to read this. Your support means a great deal to me. This internship was a valuable learning experience, as it was my first exposure to professional-level C code and working with numerous libraries solely based on official documentation. I am now more confident in my skills than ever. I gained a deeper understanding of the benefits of collaboration and how it can significantly accelerate development while maintaining high code quality.
Add support for the latest GIR attributes and GI-Docgen formatting to Valadoc.
Overview
GSoC 2024 has come to an end, so it's time to wrap up. I got the opportunity to contribute to the Vala Project, which consists of an awesome programming language called Vala, and it gives me an immense sense of accomplishment to know that my work will be beneficial for Vala programmers. I spent the 12 weeks working through the codebase of the Vala compiler, adding features and making the necessary changes to achieve the project goals. It was a valuable experience and I have learnt a lot by working with talented mentors and peers. This has undoubtedly shaped my journey as a developer and I plan to continue working on the project.
Project Summary
This project aimed to add support for the latest features of GObject Introspection to the Vala compiler and Valadoc. The plan was to ensure that the Vala compiler (which generates Vala bindings from the GIR files) parses and utilizes the newer GIR attributes from the introspection data of GObject-based C libraries, and outputs them when generating Vala GIRs. In Valadoc, this was to be implemented by parsing the GI-Docgen documentation format and rendering working GI-Docgen links in the HTML documentation generated by Valadoc. Another important step in improving Valadoc was to redesign https://valadoc.org and give it a modernized look, making this one of the milestones expected to be achieved in the project.
Contributions (Merge requests)
To achieve these objectives, I opened the following merge requests:
Support for sync-func, async-func, and finish-func attributes for methods. (Draft) [!393]
Add support for default-value attribute for properties. [!394]
libvaladoc: Parse backticks in gi-docgen markdown and display the enclosed text as monospaced. [!402]
libvaladoc: Modernize the HTML documentation pages generated by valadoc. [!403]
Redesign https://valadoc.org and make it mobile-responsive. [#419]
Future Plans
Although the coding period of GSoC 2024 is now over, I feel that this is just the beginning of my contributions to GNOME. We still have to implement support for working GI-Docgen links and many other features of GI-Docgen markdown to Valadoc. I will continue working to meet the project objectives, contribute more, and be more involved within the GNOME community. I got to learn a lot over the past 12 weeks and this has certainly made me a better contributor.
Mentor
I extend my heartfelt gratitude to my mentor Lorenz Wildberg for being a constant source of support and motivation throughout the internship. Their expertise and guidance helped me reach this far into the project and I hope to continue working on Vala and other GNOME projects in the coming months.
GUADEC 2024
I got the opportunity to present my project and participate in the Intern Lightning Talks at GUADEC on 20 July 2024. I had a great experience explaining my project and answering questions; you can watch my presentation here: https://youtu.be/chKVTgUUVpk?si=LE46ezQX6q3ZqcZ4&t=2220
After GSoC
As I look back and reflect on my journey over the last 12 weeks, I am filled with gratitude for this opportunity and excitement for future work on Vala and related GNOME projects. I want to learn more about GTK, Vala and the GNOME development process so that I can make more impactful contributions and be a valuable member of the community. I had many interactions with numerous GNOME contributors and I'm grateful to each and every one of them for always being ready to guide me and for their prompt replies to my questions. I was a Linux user for a long time but never really used it as a power user until I started contributing to GNOME. I'm glad to say that now Linux will always be my preferred choice of operating system :). My favourite part of working on this project was being part of a community that is diverse, inclusive, and incredibly welcoming to newcomers. I look forward to being a better GNOME contributor and guiding new contributors in GNOME.
TLDR: GSoC is ending soon and I’ve definitely learned a lot from my time here. If you’re interested in the code I’ve written for my GSoC project, feel free to go straight to the end where I’ve linked all the MRs I’ve been involved in.
Hello GNOME community! Time flies and my time with GSoC working on a new Web-IDE for TinySPARQL is coming to a close. You might have seen my intro post about the project, or the lightning talk my colleague Demigod and I recorded together for GUADEC. In any case, I’m excited to show you guys our final product and talk about the next steps, both in terms of this project and my involvement with open source.
First of all, to reiterate the purpose of this project – we’ve been working over the last few months to create a web IDE to be used with TinySPARQL and LocalSearch for query testing in a more user-friendly environment, our main target audience being fellow developers who for any reason need to interact with LocalSearch or TinySPARQL databases.
My main work during the last few months involved developing a lightweight TypeScript based UI while my colleague Demigod worked mostly on implementing the backend support necessary.
The first big hurdle of the project for me was figuring out how to include the TS code and necessary npm packages in the TinySPARQL codebase without creating bloat for what is supposed to be a very lightweight and low-level package. We ended up using webpack for bundling and then further compressing the bundles into GResources, such that only these GResources need to be included in our releases, to be served when a user starts up the web IDE. This quickly addressed my mentors’ concern about having to include npm packages in our releases and ensured we could work comfortably between the TS code and C backend without any troubles.
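For a sense of what that bundling looks like, here is a minimal sketch of a GResource manifest wrapping the webpack output (the resource prefix and file names here are hypothetical):

<?xml version="1.0" encoding="UTF-8"?>
<gresources>
  <!-- Webpack output, compressed so the release payload stays small -->
  <gresource prefix="/org/freedesktop/TinySparql/WebIde">
    <file compressed="true">dist/index.html</file>
    <file compressed="true">dist/bundle.js</file>
  </gresource>
</gresources>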
In terms of actual design and UX work, this went by relatively smoothly, though it did take quite a few feedback cycles to get it to its current state. Click here for a quick demo of the final product.
Just to go over some of the features that have been fully implemented on the web IDE frontend:
code editor with full support for SPARQL syntax highlighting and common keyboard shortcuts
error highlighting at corresponding editor positions, according to error messages returned by the backend
query “bookmarking” via conversion to links
neat table format for presenting query results, equipped with ontology prefix adaptations and direct linking to the relevant documentation
options to hide/show columns in result tables
clear error reporting
And here are some features still in progress/waiting to be merged:
Options to query in other RDF formats: TriG, Turtle and JSON-LD
Examples box for referencing queries that may be useful for certain endpoints
Quick switching between different SPARQL endpoints from the Web IDE interface itself
In terms of future work, some more work needs to be done on our colour scheme, as well as on the presentation of query results in the formats other than the default cursor format. There were also some discussions in earlier stages of planning about implementing autocomplete and other editor enhancements that we didn’t have enough time for, so there’s definitely still lots of room for improvement. Nonetheless I’m very satisfied and proud of what I’ve achieved in the past few months and will be looking forward to contributing to future improvements of this tool.
Regarding the overall learning experience, one of the most important things I learned, in my opinion, was how to keep my git history clean and work with multiple branches of code at the same time without creating conflicts. I feel like this will completely change the experience of whoever I work with in the future, as well as my own when I need to go back to some old code. Other than that, I’ve had little exposure outside web development, and working with the GNOME ecosystem was definitely a nice challenge – I’m definitely a lot more confident about dabbling outside my area of expertise now.
Lastly, here’s a list of useful links to the work I’ve been doing over the summer. Thanks for reading this far!
As my Outreachy internship comes to a close, I find myself reflecting on the journey with a sense of gratitude. What began with a mix of excitement and fear has turned into a rewarding experience that has shaped my skills, confidence, and passion for open-source contributions.
Overcoming Initial Fears
When I first started, I had some doubts and fears. Among them was whether I would fit into the open-source community, and I worried that my skills might not translate well to user research, an area I was eager to explore but had limited experience in. However, those fears quickly disappeared as I immersed myself in the supportive and inclusive GNOME community. I learned that the community values diverse contributions and that there is always room for growth and learning.
Highlights of the Internship
This internship has been a significant period of growth for me. I’ve developed a stronger understanding of user research methodologies, particularly the importance of crafting neutral questions to avoid bias. This was a concept I encountered early in the internship, and it has since become a cornerstone of my research approach. Additionally, I’ve sharpened my ability to analyze and interpret user feedback, which will be invaluable as I continue to pursue UI/UX design.
Beyond technical skills, I’ve also grown in terms of communication. Learning how to ask the right questions, listen actively, and engage with feedback constructively has been crucial. These skills have given me the confidence to interact more effectively within the open-source community.
Mentorship and Project Achievements
My mentors, Allan Day and Aryan Kaushik, played a critical role in my development throughout this internship. Their guidance, patience, and willingness to share their expertise made a great difference. They encouraged me to think critically about every aspect of the user research process, helping me grow not just as a researcher, but as a contributor to the open-source community.
As for my project, I’m proud of the progress I’ve made. I successfully conducted a series of user research exercises and gathered insights that will help improve the usability of some GNOME Apps. However, my work isn’t finished yet — I’m currently in the process of finalizing the usability research report. This report will be a little resource for the GNOME design team, providing detailed findings and recommendations that will guide future improvements.
Reflecting on My Core Values
Throughout this journey, I’ve leaned heavily on the core values I outlined at the start of the internship: Adventure, Contribution, and Optimism. These values have been my compass, guiding me through challenges and reminding me of the importance of giving back to the community. The adventure of stepping into a new field, the joy of making meaningful contributions, and the optimism that every challenge is an opportunity for growth — these principles have been central to my experience.
As I wrap up my time with Outreachy, I feel both proud of what I’ve learned and excited for what lies ahead. I plan to continue my involvement in open-source projects. The skills and confidence I’ve gained during this internship will undoubtedly serve me well in future projects. Additionally, inspired by the mentorship I received, I hope to help mentor others and help them navigate their journeys in open-source contributions.
Finally, this internship has been a transformative experience that has expanded my skill set, deepened my passion for user-focused design, and strengthened my commitment to open-source work. I’m grateful for the opportunity and look forward to staying connected with the GNOME community as I continue to grow and contribute.
Hey everybody, this is another iteration of my previous posts. It’s been a while since I published any updates about my project.
Before I begin with the updates I’d like to thank all of the people who helped me get this far into the project, it wouldn’t have been as engaging and enjoyable of a ride without your support.
For someone reading this blog for the first time, I am Bharat Tyagi. I am a Computer Science major and I have been contributing to the GNOME Project (Workbench in particular) under Google Summer of Code this year.
Since the updates up to Week 3 have already been written about in greater detail, I will only briefly cover them in this report and focus on the more recent ones.
Project Title
My project is subdivided into three parts:
Port existing demos to Vala
Redesign the Workbench Library and make QoL improvements
Add code search into Workbench
Mentors
Sonny Piers, Andy Holmes
Part 1:
Workbench has a vast library of demos covering every use case for developers or users who would like to learn more about the GTK ecosystem and how each component is connected and works together in unison.
The demos are available in many programming languages including JavaScript, Python, Vala, Rust, and now TypeScript (thanks to Vixalien for bringing this to Workbench :) ). The first part of my project was to port around 30 demos into Vala. This required me to learn many functions, signals, and how widgets use them to relay information. Since I ported over 30 demos, I’ll mention a few that were fun to port along with the list of all the ports that I made; if you’d like a more in-depth review of the process, the Week 3 update is where you should go!
Map (Libshumate)
Maps and CSS gradients don’t just look cool, their code was also fun to port. Support for maps is provided by libshumate, which sources the world view from OSM (OpenStreetMap), and supports features like dragging across the map, showing the location for entered latitudes and longitudes, and letting you put markers at any point on the map.
CSS Gradients
CSS Gradients lets you create custom gradients and generates the corresponding CSS as the parameters are adjusted.
Session Monitor and Inhibit was another interesting demo to port; as the name suggests, it allows you to monitor changes and inhibit the desktop from changing state based on the current state of your application.
You could use the demo for some interesting warnings
After all the ports were done, I moved on to making changes to the Library.
Part 2:
The second part of this project was to redesign the library and bring about quality-of-life improvements.
Sonny prepared a roadmap, including some changes that had already been made, to help break the project down into actionable targets.
Since we wanted to include some filtering based on both language and category, the first step was to move away from the current implementation of the demo rows based on Adw.PreferencesWindow and related widgets, which are easy to use but don’t provide the necessary flexibility.
So I replaced their usage with something more universal that would allow us to reimplement populating the demos. Adw.PreferencesWindow was replaced with Adw.Window, Adw.PreferencesRow with Adw.ActionRow, and Adw.PreferencesGroup and Page were replaced with a simpler Gtk.ScrolledWindow with nested Gtk.Box and Gtk.Label.
This is how the library looked after these changes
Not much different right? That's a good sign :)
With these out of the way, we could work on making the current search more prominent. Since the search bar was activated by using the search button on the top left of the Library, a few people were unaware of a search being present at all. To resolve this I moved the search bar inside the Library, making it directly accessible and quicker to search.
The subsequent code also needed new logic so only the searched demos were visible. I used hash maps to store the currently visible categories and widgets depending on the search term.
// Set a category widget to be visible if it exists in the map
category_map.forEach((category_widget, category_name) => {
    category_widget.visible = visible_categories.has(category_name);
});
Getting the search to function as expected was a relief, as it took a few iterations and changes to polish it enough to merge. I am happy to report the search works just as expected now.
See the search in action!
With these minor improvements, we were ready to add filtering to the demos based on the language and categories.
The logic for filtering was inspired by Sonny’s previous approach towards adding this feature (here, if you want to check it out). We have two dropdowns, one for the category and one for the language. The filtering is based on the input provided in all three widgets (search, language dropdown, and category dropdown): a result is displayed if and only if it matches all three.
// Filtering logic: a demo matches only if all three inputs agree
const is_match = category_match && language_match
    && (search_term === "" || search_match);

// Set visibility if the term matches all three
entry_row.visible = is_match;
if (is_match) {
    results_found = true;
    // Also add it to the visible categories map
    visible_categories.add(category_check[category_index]);
}
This was super close to how we wanted the filtering to work. Here is the final result :D
It works!! If you’ve reached this far into the post, this cookie is for you
These are the commits for this part of the project, for anyone curious.
Having completed the filtering for our Library, we come to the third part of my project, which was to implement code search. Since we have a bunch of demos, storing and accessing search terms efficiently is a challenge. Sonny, Angelo, and I had a meeting to discuss code search, which became the starting point for the feature.
Andy and I looked at a few options that could be used to implement this feature, mainly focusing on tools built for working with large amounts of data. TinySPARQL is one such engine, but it is geared toward files and directories, which is not our goal. We need an API that can interact with the SQLite database and run text searches on it.
There are two major libraries under GNOME, libgom and libgda. libgom is an object-relational mapping library, which allows you to map database tables to GObjects and then run operations on those objects. This is in hindsight simpler than libgda, but it doesn't directly provide text-search functionality on its own like libgda does.
As of writing this article, I have ported a demo example that makes use of libgom and performs a simple text/ID-based search on a single table. This can be scaled to bigger databases like our Library itself, but it starts showing limitations when it comes to more advanced search functions.
Here is a screengrab of the demo, ported into Modern Gjs (GNOME JavaScript) :)
The example this demo is based on was written over 7 years ago
Now that we’ve seen the demo, let's have a look at the libgom magic that is happening in the background
First, we create a custom class that represents an object with properties id and url that we want to store in our table
We then initialize the database using Gom.Adapter, which also opens an SQLite database (for the simplicity of the demo, we’re only storing the contents in memory). A table is set up and mapped to the ItemClass that we previously created, with the id field set as the primary key.
Once all the preliminary setup is done, I added the logic for basic text searching using a handy filter function in Gom
I use this to filter out the elements, store them in filtered_items, and display them in the table itself. Voila!
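As a rough sketch, the search looks something like this in GJS (assuming repository is an open Gom.Repository and ItemClass is the resource class created earlier; search_text holds the user input):

import Gom from "gi://Gom";

// LIKE filter: match any item whose url contains the search text
const filter = Gom.Filter.new_like(ItemClass, "url", `%${search_text}%`);

// Find the matching resources and load them into memory
const group = repository.find_sync(ItemClass, filter);
group.fetch_sync(0, group.get_count());

const filtered_items = [];
for (let i = 0; i < group.get_count(); i++)
    filtered_items.push(group.get_index(i));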
The PR is approved but yet to be merged, so it will take some time before it reaches your Workbench. But if you would like to tinker around and make improvements to it, this is the PR.
The plan right now is to implement search in the Library first using libgom, and then later move to libgda, which is more versatile and provides full-text search using SQL queries without having to route them through GObjects.
Acknowledgments and Learnings
I am very thankful for the insights and guidance of my mentors, Andy and Sonny. They were quick to jump in whenever I encountered a blocker. They are awesome people with a strong passion for what they do, it’s been an honor to be able to contribute however little I was able to. I strive to be at their level someday.
This summer has been fruitful and fun for me. The most important thing I learned is to be curious and always ask questions.
A big thank you to Diego and Lorenz for reviewing all of the ports and providing much necessary improvements!
For the readers, I am pleasantly surprised that you were able to reach the end without scrolling away. Thank you so much for tuning in and taking the time to read through. I hope this was just as fun for you to read as it was for me to write! :D
I’ll continue to stay in touch with everyone I have met and talked to during these few months because they are simply awesome!
This blog post summarizes the discussions and action items from the Infrastructure and Release Engineering workshop held at Flock 2024 in Rochester, New York, USA.
This post is also an experiment in using AI generated summaries to provide useful, at-a-glance summaries of key Fedora topics. Parts of this content may display inaccurate info, including about people, so double-check with the source material.
Standards for OpenShift app deployments: There’s a need for consistency in deploying applications to OpenShift. The group discussed creating best practices documentation and addressing deployment methods across various applications.
Infra SIG packages: The workshop reviewed the “infra-sig” package group and identified a need to:
Find owners for orphaned packages.
Onboard new maintainers using Packit.
Remove inactive members from the group.
Release engineering packages: The group agreed to add a list of release engineering packages to the infra-sig for better management.
Proxy network: Discussion about potentially migrating the proxy network from httpd to nginx or gunicorn remained inconclusive. Further discussion is needed.
AWS management with Ansible: The feasibility of managing AWS infrastructure with Ansible is uncertain due to limitations with the main Amazon account.
Onboarding improvements: The group discussed ways to improve the onboarding process for new contributors, including documentation updates, marketing efforts, and “Hello” days after each release.
OpenShift apps deployment info: A tutorial on deploying applications to OpenShift was presented and will be incorporated into the documentation.
Future considerations: The group discussed upcoming challenges like GitLab Forge migration, Bugzilla migration, and a new Matrix server.
Retiring wiki pages: The group needs to decide where to migrate user-facing documentation from the wiki. Additionally, someone needs to review and archive/migrate/delete existing wiki pages in the “Category:Infrastructure” section.
Datagrepper access for CommOps: A solution was proposed to provide CommOps with access to community metrics data by setting up a separate database in AWS RDS and populating it with recent Datagrepper dumps.
ARA in infrastructure: While AWX deployment offers similar reporting features, setting up ARA remains an option if someone has the time and interest.
AWX deployment: Roadblocks related to the public/private Ansible repository structure were identified. A proof of concept using AWX will be pursued to determine if repository restructuring is necessary.
Zabbix integration: The group discussed moving forward with Zabbix to replace Nagios. Action items include setting up a bot channel for alerts, adjusting alerts based on comparison with Nagios, and considering an upgrade to the next LTS version.
Action Items
Create comments in each application playbook explaining its deployment method.
Move all apps using deploymentconfig to deployment with OpenShift 4.16.
Look into deploying Advanced Cluster Security (ACS) for improved visibility into container images.
Create a “best practices” guide for deploying applications in OpenShift clusters.
Find individuals interested in helping with orphaned packages and onboarding new maintainers for the infra-sig package group.
Create a list of release engineering packages for inclusion in the infra-sig.
Continue discussions on migrating the proxy network and managing AWS infrastructure with Ansible.
Update onboarding documentation, implement marketing strategies for attracting contributors, and organize “Hello” days for new members.
Archive/migrate/delete wiki pages in the “Category:Infrastructure” section.
Work on tickets to set up a separate database for CommOps Datagrepper access.
Investigate the feasibility of setting up ARA in infrastructure.
Stand up a proof of concept for AWX deployment and discuss potential repository restructuring.
Set up a Zabbix bot channel for alerts, adjust alerts based on comparisons with Nagios, and consider upgrading to the next LTS version.
Overall, the workshop was a success, with productive discussions and a clear list of action items to move forward.
Note: The workshop lacked remote participation due to network limitations. The source material encourages readers to express interest in helping with the action items.
Meson has had togglable options from almost the very beginning. These split into two camps. The first one is "common options" like optimizations, warning level, language standard version and so on. The second one is "per project" options that are specific to each project, such as which backend to use. For a long time things were quite nice but as people started using subprojects more and more, the need to configure common options on a per-subproject basis became more and more important.
Meson added a limited way of setting some options per subproject, but it never really felt like a properly integrated solution. Doing it properly turns out to have a lot of requirements, because you want to be able to:
Override any shared option for any subproject
Do this at runtime from the command line
Unset any given override
Convert existing per-project settings to the new override format
Provide a UI that is readable and sensible
Do all of this without needing to edit subproject build files
The last one of these is important. It means that you can use deps directly (e.g. from WrapDB) without any local patches.
What benefits do you get out of it?
The benefits are most easily seen via examples. Let's say you are developing a program that uses a dependency that does heavy number crunching. You need to build that (and only that) subproject with optimizations enabled, otherwise your development experience is intolerably slow. This is done by defining an augment, like so:
meson configure -Acruncher:optimization=2
A stronger version of this would be to compile all subprojects with optimizations but the top level project without them. This is how you'd do it:
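meson configure -Doptimization=2 -A:optimization=0

The plain -D sets the value for every project, while the augment with an empty subproject name overrides it back for the top level project alone.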
This scheme permits you to do all sorts of useful things, like disable -Werror on specific projects, build some subprojects with a different language version (such as gnu99), compiling LGPL deps as shared libraries and everything else as a static library, and so on.
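For example, with some invented subproject names, those cases could look like this:

meson configure -Anoisy:werror=false
meson configure -Aoldlib:c_std=gnu99
meson configure -Ddefault_library=static -Algpldep:default_library=shared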
Implementing
This is a big internal change. How big? Big! This is the largest refactoring operation I have done in my life. It is big enough that it took me over two years of procrastination before I managed to gather enough strength to start work on this. Pretty much all of my Meson work in the last six months or so has been spent on this one issue. The feature is still not done, but the merge request already has 80 commits and 1700+ new lines and even that is an understatement. I have chopped off bits of the change and merged them on their own. All in all this meant that the schedule for most days of my summer vacation went like this:
Wake up
Work on Meson refactoring branch until fed up
Work on my next book until fed up
Maybe do something else
Sleep
FTR I don't recommend this style of working for anyone else. Or even to myself. But sometimes you just gotta.
The main reason this change is so complex lies in the architecture. In existing code each built target "knew" the option settings needed for it (options could and can be overridden in build files on a per-target basis). This does not work any more. Instead the code needs one place that encapsulates all option data and provides methods like "what is the value of option X when building target Y in subproject Z". Option code was everywhere, so changing this meant touching the entire code base, and it meant that the huge change blob had to land in master atomically.
The only thing that made this change even remotely feasible was that Meson has an extensive test suite. The main code changes were done months ago, and all work since then has gone into making existing unit tests pass. They still don't pass, so work continues. Without this test suite there would have been hundreds of regressing projects, people would be angry and everyone would pin their Meson to an old version and refuse to update. These are the sorts of breakages that kill projects dead. So, write tests, even if it does not seem fun. Without them every project will eventually end up in a fork in the road where the choice is between "death by stagnation" and "death by breaking end users". Most projects are not Python 3. They probably won't survive a similar level of breakage.
Refactoring, types and Python
Python is, at the same time, my favourite programming language and very much not my favourite programming language. Python in the small is nice, readable, wonderful and productive. As the project size grows, the lack of static types becomes aggravating, and eventually you end up debugging cases like "why is this argument, which should be a dict, an array one time out of 500 at random". Types make these problems go away and make refactoring easy.
But not always.
For this very specific case the complete lack of types actually made the refactoring easier. Meson currently supports more than one hundred different compilers. I needed to change the way compiler classes work, but I did not know how. Thus I started by just using the GNU C compiler. I could change that (and its base class) as much as I wanted without having to care about any other compiler class. As long as I did not use any other compiler their code was not called and it did not matter that their method signatures were completely different. In a static language all type changes would need to be done up front just to make the dang thing compile.
Still, you can have my types when you drag them from my cold, dead fingers. But maybe this is something for language designers of the future to consider. It would be kind of cool to have a strictly typed language where you could add a compiler flag to say "convert all variables into Python style variant dictionaries and make all type checks, method invocations etc work at runtime". Yes, people would abuse the crap out of this feature, but the same can be said about every new feature.
When will this land?
It is not done yet, so we don't know. At the earliest this will be in the next release, but more likely in the one after that.
If you like trying out new things and living dangerously, you can try the code from this MR. Be sure to post comments on that page if you do.
This blog post has been floating around as a draft for several years. It eventually split off into a presentation at GUADEC 2022, titled Offline learning with GNOME and Kolibri (YouTube). In that presentation, Manuel Quiñones and I explained how Endless OS reaches a unique audience by providing Internet-optional learning resources, and we provided an overview of our work with Kolibri. This post goes into more detail about the technical implementation of the Kolibri desktop app for GNOME, and in particular how it integrates with Endless OS.
Integrating a flatpak app with an immutable OS
In Endless OS, way back with Endless OS 4 in 2021, we added Kolibri, an app created by Learning Equality, as a new way to discover educational content. Kolibri has a rich library of video lessons, games, documents, e-books, and more; as well as tools for guided learning – both for classrooms, and for families learning at home. The curation means it is safe and comfortable to freely explore. And all of this works offline, with everything stored on your device.
Making this all come together was an interesting challenge, but looking back on it with Endless OS 6 alive and well, I can say that it worked out nicely.
The Kolibri app for GNOME
Learning Equality designed Kolibri with offline, distributed learning in mind. While an organization can run a single large Kolibri instance that everyone reaches with a web browser, it is equally possible for a group of people to use many small instances of Kolibri, where those instances connect with each other intermittently to exchange information. The developers are deeply interested in sneaker net-style use cases, and indeed Kolibri’s resilience has allowed it to thrive in many challenging situations.
Despite using Django and CherryPy at its heart, Kolibri often presents itself as a desktop app which expects to run on end user devices. Behind the scenes, the existing Windows and macOS apps each bundle a Kolibri server, running it in the background for as long as the desktop app is running.
We worked with Learning Equality to create a new Kolibri app for GNOME. It uses modern GTK with WebKitGTK to show Kolibri itself. It also includes a desktop search provider, so you can search for Kolibri content from anywhere.
The Kolibri GNOME app is distributed as a flatpak, so its dependencies are neatly organized, it runs in a well-defined sandbox, and it is easy to install it from Flathub. For Endless OS, using flatpak means it is trivial to update Kolibri independent from Endless OS’s immutable base system.
Kolibri Daemon
But Endless OS doesn’t just include Kolibri. One of my favourite parts of Endless OS is that it provides useful content out of the box, which is great for people with limited internet access. So in addition to Kolibri itself, we want a rich library of Kolibri content pre-installed. And with so much already there, ready to be used, we want it to be easy for people to search for that content right away and start Kolibri for the first time.
If we add more users, each with their own Kolibri content, we can imagine the size of that database becoming a problem.
This becomes both a technical challenge and a philosophical challenge. Normally, each desktop user has their own instance of Kolibri, with its own hidden directory full of content. Because it is a flatpak, it normally doesn’t see the rest of the system unless we explicitly give it permission to, and every time we do that we need to think carefully about what it means. Should we really grant a WebView the ability to read and write /run/media? We try to avoid it.
At the same time, we want a way to create new apps which use content from Kolibri, so that library of pre-installed content is visible up front, from the apps grid. But it would be expensive if each of these apps ran its own instance of Kolibri. And whatever solution we employ, we don’t want to diverge significantly from the Kolibri people are using outside of Endless OS.
To solve these problems, we split the code which starts and stops the Kolibri service into a separate component, kolibri-daemon. The desktop app (kolibri-gnome) and the search provider each communicate with kolibri-daemon using D-Bus.
The desktop app communicates through kolibri-daemon, instead of starting Kolibri itself.
This design is exactly what happens when you start the Kolibri app from Flathub. And with the components neatly separated, on Endless OS we add eos-kolibri, which takes it a step further: it adds a kolibri system user and a service which runs kolibri-daemon on the D-Bus system bus. The resulting changes turn out to be straightforward, because D-Bus provides most of what we need for free.
Kolibri on Endless OS is almost the same, except kolibri-daemon is run by the Kolibri system user.
With this in place, every user on the system shares the same Kolibri content, and it is installed to a single well-known location: /var/lib/kolibri. Now, pre-installing Kolibri content is a problem we can solve at the system level, and in the Endless OS image builder. Independent from the app itself.
Channel apps
Now that we have solved the problem of Kolibri content being duplicated, we can come back to having multiple apps share the same Kolibri service. In Endless OS, we want users to easily see the content they have installed, and we do this by adding launchers to the apps grid.
First, we need to create those apps. If someone has installed content from a Kolibri channel like TED-Ed Lessons or Blockly Games, we want Kolibri to generate a launcher for that channel.
But remember, Kolibri on Endless OS is an unprivileged system service. It can’t talk to the DynamicLauncher portal. That belongs to the user’s session, and we want these launchers to be visible before a user ever starts Kolibri in their own session. Kolibri also can’t be creating files in /usr/share/applications. That would be far too much responsibility.
Instead, we add a Kolibri plugin to generate desktop entries for channels. The desktop entries refer to the Kolibri app using a custom URI scheme, a layer of indirection because Kolibri (potentially inside a flatpak) is unaware of how the host system launches it. The URI scheme provides enough information to start in a channel-specific app mode, instead of in its default configuration.
Finally, instead of placing the desktop entry files in one of the usual places, we place them in a well-known location inside Kolibri’s data directory. That way the channel apps are available, but not visible by default.
In Endless OS, the channel launchers end up in /var/lib/kolibri/data/content/xdg, so in our system configuration we add that directory to XDG_DATA_DIRS. This turns out to be a good choice, because it is trivial to start generating search providers for those apps, as well.
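Conceptually, that system configuration amounts to a one-line change along these lines (the exact file Endless OS uses may differ):

# e.g. in a profile snippet such as /etc/profile.d/kolibri.sh
export XDG_DATA_DIRS="/var/lib/kolibri/data/content/xdg:${XDG_DATA_DIRS}"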
Kolibri channels along with other Education apps in Endless OS.
Search providers
To make sure people can find everything we’ve included in Endless OS, we add as many desktop search providers as we can think of, and we encourage users to explore them. The search bar in Endless OS is not just for apps.
That means we need a search provider for Kolibri. It’s a simple enough problem. We extended kolibri-daemon‘s D-Bus interface with its own equivalents for the GNOME Shell search provider interface. It is capable of reading directly from Kolibri’s database, so we can avoid starting an HTTP server. But we also want to avoid dealing with kolibri-daemon as much as possible. It is a Python process, heavy with web server stuff and complicated multiprocessing code. And, besides, the daemon could be connecting to the D-Bus system bus, and the shell only talks to search providers on the session bus. That’s why the search provider itself is a separate proxy application, written in C.
Kolibri returning search results in GNOME Shell.
But in Endless OS, we don’t just need one search provider, either. We want one for each of those channel apps we generated. So, I mentioned that our Kolibri plugin generates a search provider to go with each desktop file. Of course, loading and searching through Kolibri’s sqlite database is already expensive once, so it would be absurd to do it for every channel that is installed. That’s a lot of processes!
Fortunately, those search providers are all the same D-Bus service, with a different object path for each Kolibri channel. That one D-Bus service receives a lot of identical search queries for a lot of different object paths, but at least the system is only starting one process for it all. In the search provider code, I added a bespoke task multiplexer, which allows the service to run a single search in kolibri-daemon for a given query, then group the results and return them to different invocations from the shell.
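The real implementation is in C, but the idea of the multiplexer can be sketched in a few lines of JavaScript (do_backend_search stands in for the actual call into kolibri-daemon):

// Coalesce identical in-flight queries: many object paths asking the
// same question share a single backend search.
const pending = new Map();

function search(query) {
    if (!pending.has(query)) {
        const task = do_backend_search(query)
            .finally(() => pending.delete(query));
        pending.set(query, task);
    }
    return pending.get(query);
}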
Kolibri returning search results through several channel apps in Endless OS.
It is a complicated workaround, but it means search results appear in distinct buckets with meaningful names and icons. For our purpose in Endless OS, it was definitely worth the trouble.
User accounts
There was one last wrinkle here: Kolibri kept asking people to set it up, make a user account (with a password!), and sign in. It is, after all, a standalone learning app with a big complicated database that keeps track of learning progress and understands how to sync content between devices. But this isn’t a great experience if you’re just here to watch that lecture about cats.
What we want is for Kolibri to already know who is accessing it. They’re already signed in as a desktop user. And most of the time, we want to blaze right through that initial “set up your device” step, or at least make it as smooth as possible.
To do that, we added an interface in kolibri-daemon so the desktop app can get an authentication token to use over HTTP. On the other side, kolibri-daemon privately communicates with Kolibri itself to verify an authentication token, and it communicates with logind to build a profile for the authenticating user.
It was ugly at first, with a custom kolibri-desktop-auth-plugin which sat on top of Kolibri’s authentication system. But after some iteration, upstream Kolibri now has its own understanding of desktop users. On the surface, it uses Kolibri’s app interface plugin for platform integration. With the newest version of Kolibri we have been able to solve authentication in a way that I am properly happy with.
My favourite part of the feature has been seeing it come together with Kolibri’s first run wizard. Given a working authentication token, Kolibri knows to skip creating an initial user account, leaving only some simple questions about how the user is planning to use Kolibri; independently or connecting to an existing classroom.
That’s it!
It has been great to work on the Kolibri desktop app, and I expect to take some of the approaches and lessons here over to other projects. It is the first big new Python desktop app I have worked with, and it was interesting using some modern Python tools in tandem with the GNOME ways of doing things. The resulting codebase has some fun details:
The source repository includes a Flatpak manifest, so it builds and runs out of the box in GNOME Builder. As soon as that was working, I used Builder for everything.
Meson is truly indispensable for this kind of thing. We’re sharing build configuration between a bunch of Python modules, all sorts of configuration and data files, and a pair of C projects – one of which is imported by a Python module using GObject introspection. This all works (in a mere 577 lines of meson.build, if you’re counting) because the build system is language-agnostic, and I love it for that. I know that isn’t a lot to ask, but the go-to for Python is decidedly not language-agnostic, and I do not love it.
We added pre-commit to automatically clean up source files and run quick tests against them. It doesn’t actually require you have a Python codebase, but it is written in Python and I think people are afraid of how Pythony it looks? It’s really convenient, and it does a good job taking care of the usual nightmare of setting up a virtual environment to run all its tools. I often don’t bother with the actual git hook part, and instead I remember to run the thing manually, and we use the pre-commit github action to be sure.
At some point, I added Python type hinting to every part of the project. This tremendously improved the development experience with Builder, and it allowed me to add a mypy pre-commit hook to catch mistakes.
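The hook itself is only a few lines of .pre-commit-config.yaml (the rev below is illustrative; pin whatever is current):

repos:
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.11.2
    hooks:
      - id: mypy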
I got annoyed at the problem of needing to write release notes in the appdata file before knowing what the next release is called, so I devised a fun scheme where we add notes under "{current_version}+next", and then bump-my-version (another tool that looks very Pythony but everyone should use it) knows to mark that release entry as released, setting the date and version appropriately. I wish it didn’t involve regex, but as a concept it has been nice to use. I was tempted to write a pre-commit hook which actually insists on an up to date “next release” entry in appdata, but I should find another project to try it with.
With that said, a better workflow probably involves appstream-util news-to-appdata.
Managing history in WebKit can be tricky because the BackForwardList is read-only. That was an issue with the Kolibri app because we (with our UI consisting almost entirely of a WebView) need to communicate about Kolibri’s state before its HTTP server is running. Kolibri upstream provides a static HTML loading screen for this purpose, which is fine, but now we have this file in our WebView’s back / forward list. I solved it by swapping between different WebViews, and later showing one in a dialog just for Kolibri’s setup wizard. At first, that was all to keep the history stack organized, but at the same time I found it made the app feel a little less like a web browser in a trench coat. We can switch from the loading WebView to the real thing with a nice crossfade, and only when the UI is actually for real finished loading.
This whole project uses a lot of GObject throughout. At some point I finally read the pygobject manual and found myself happily doing property binding, signals and async functions and all those good things from Python. It was a much better experience than earlier in the project’s life where there was a type of angry mishmash between vanilla Python and GObject. (The thing that really freed this up was when I moved a lot of D-Bus code over to a C helper library with gdbus-codegen, which allowed me to delete the equivalent duplicative Python code, and also introduced a bunch more GObject). It’s easy to see why GObject works best with a language that doesn’t carry its own big standard library, but I was happy with how productive I could be in Python once I started actively preferring GObject, especially with the various magic helpers provided by PyGObject. In a future starting-from-scratch project, I would be tempted to make that a rule when adding imports and writing new classes.
I have to admit I got carried away with certain aspects of this. In the end there is a certain discontent to be had spending creative energy on what is, from many angles, a glorified web browser. It’s frustrating when the web stack leads us to treat an application as a black box behind an HTTP interface, which makes integration difficult: boot it up (in its own complex runtime environment which is heroically not a Docker container); wait until it is ready (Kolibri is good at this, but sometimes you’re just watching a file or polling some well-known port); authenticate; ask it (over HTTP) some trivial question that amounts to a single SQL command; return None. But look at that nice framework we’re using!
At the same time, it isn’t lost on me that a software stack like Kolibri’s simply is a popular choice for a cross-platform app. It’s worth understanding how to work with it in a way that still does the best we can to be useful, efficient, and comfortable to use.
Beyond all the tech stuff, I want to emphasize that Kolibri is an exceptionally cool project. I truly admire what Learning Equality are doing with it, and if you’re interested in offline-first content, data sovereignty, or just open source learning in general, I highly recommend checking it out – either our app on Flathub, or at learningequality.org/kolibri.
Most end-user platforms have something they call an intent system or something
approximating the idea. Implementations vary somewhat, but these often amount
to a high-level desktop or application action coupled to a URI or mime-type.
There are examples of fancy URIs like sms:555-1234?body=on%20my%20way that can do
intent-like things, but intents are higher-level, more purposeful and certainly
not restricted to metadata shoehorned into a URI.
I'm going to approach this like the original proposal by David Faure and the
discussions that followed, by contrasting it with mime-types and then
demonstrating what the files for some real-world use cases might look like.
Let's start with the mime-apps Specification. For desktop environments
mime-types are, most of all, useful for associating content with applications
that can consume it. Once you can do that, the very next thing you want is
defaults and fallback priorities. Now you can double-click stuff to have your
favourite application open it, or right-click to open it with another of your
choice. Hooray.
We've also done something kind of clever, by supporting URI handlers with the
special x-scheme-handler/* mime-type. It is clever, it does work and it was
good enough for a long time. It's not very impressive when you see what other
platforms are doing with URIs, though.
Moving on to the Implements key in the Desktop Entry Specification, where
applications can define "interfaces" they support. A .desktop file for an
application that supports a search interface might look like this:
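# A hypothetical application; the Implements line is the interesting part
[Desktop Entry]
Type=Application
Name=Example App
Exec=example-app
Implements=org.gnome.Shell.SearchProvider2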
The last line is a list of interfaces, which in this case is the D-Bus interface
used for the overview search in GNOME Shell. In the case of the
org.freedesktop.FileManager1 interface we could infer a default from the
preferred inode/directory mime-type handler, but there is no support for
defining a default or fallback priority for these interfaces.
While researching URI handlers as part of the work funded by the STF, Sonny
reached out to a number of developers, including Sebastian Wick, who has been
helping to push forward sandboxing thumbnailers. The proposed intent-apps
Specification turns out to be a sensible way to frame URI handlers, and other
interfaces have requirements that make it an even better choice.
In community-driven software, we've operated on a scratch-an-itch priority
model for a very long time. At this point we have several, arguably critical,
use cases for an intent system. Some known use cases include:
Default Terminal
This one should be pretty well known and a good example of when you might
need an intent system. Terminals aren't really associated with anything, let
alone a mime-type or URI scheme, so we've all been hard-coding defaults for
decades now. See the proposed terminal-intent Specification for details.
Thumbnailers
If C/C++ are the languages responsible for most vulnerabilities, thumbnailers
have to be high on the list of application code to blame. Intents will allow
using or providing thumbnailing services from a sandboxed application.
URI Handler
This intent is probably of interest to the widest range of developers, since
it allows a lot of freedom for independent applications and provides assurances
relied on by everything from authentication flows to personal banking apps.
Below is a hypothetical example of how an application might declare it can
handle particular URIs:
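# Hypothetical; the interface name and keys follow the draft proposal
[Desktop Entry]
Type=Application
Name=Example Browser
Exec=example-browser %u
Implements=org.freedesktop.UriHandler

[org.freedesktop.UriHandler]
Supports=example.com;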
While the Desktop Entry specification states that interfaces can have a named
group like above, there are no standardized keys shared by all interfaces. The
Supports key proposed by Sebastian is important for both thumbnailers and URI
handlers. Unlike a Terminal which lacks any association with data, these need
the ability to express additional constraints.
So the proposal is to have the existing Implements key work in tandem with
the intentapps.list (similar to the MimeType key and mimeapps.list), while
the Supports key allows interfaces to define their own criteria for defaults
and fallbacks. Below is a hypothetical example of a thumbnailer's .desktop
file:
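# Hypothetical; the interface name here is an assumption
[Desktop Entry]
Type=Application
Name=Example Image Viewer
Exec=example-viewer %f
Implements=org.freedesktop.Thumbnailer

[org.freedesktop.Thumbnailer]
Supports=image/png;image/svg+xml;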
The Supports key will always be a list of strings, but the values themselves
are entirely up to the interface to define. To the intent system, these are
simply opaque tags with no implicit ordering. In the URI handler we may want
this to be a top-level domain to prevent things like link hijacking, while
thumbnailers want to advertise which mime-types they can process.
In the intentapps.list below, we're demonstrating how one could insist that a
particular format, like sketchy SVGs, is handled by Loupe:
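# A guess at the syntax, by analogy with mimeapps.list; the exact
# format is for the specification to settle
[org.freedesktop.Thumbnailer]
image/svg+xml=org.gnome.Loupe.desktop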
We're in a time when Linux users need to do things like pass an untrusted file
attachment, from an unknown contact, to a thumbnailer maintained by an
indepedent developer. So while the intent-apps Specification itself is
superficially quite simple, if we get this right it can open up a lot of
possibilities and plug a lot of security holes.
First a bit of context for the GLib project, which is comprised of three main
parts: GLib, GObject and GIO. GLib contains things you'd generally get from a
standard library, GObject defines the OOP semantics (methods/properties/signals,
inheritance, etc), and GIO provides reasonably high-level APIs for everything
from sockets and files to D-Bus and Gio.DesktopAppInfo.
The GLib project as a whole contains a substantial amount of the XDG
implementations for the GLib/GTK-lineage of desktop environments. It also
happens to be the layer we implement a lot of our cross-platform support, from
OS-level facilities like process spawning on Windows to desktop subsystems like
sending notifications on macOS.
Fig. 1. A GLib Maintainer
The merge request I drafted for the initial implementation received what
might look like Push Back, but this should really be interpreted as a Speed
Bump. GLib goes a lot of places, including Windows and macOS, thus we need
maintainers to make prudent decisions that allow us to take calculated risks
higher in the stack. It may also be a sign that GLib is no longer the first
place we should be looking to carry XDG implementations.
Something that you may be able to help with is impedance-matching our
implementation of the intent-apps Specification with its counterparts in the
Apple and Microsoft platforms. Documentation is available (in varying quality),
but hands-on experience would be a great benefit.
Last year, I was invited by Sonny Piers to co-mentor for both Google Summer
of Code and Outreachy, which was really one of the best times I've had in the
community. He also invited a couple of us Workbenchers from that period to
the kick-off meeting for this year's projects.
Recently, he asked if I could step in and help out with this year's programs.
This is a very unfortunate set of circumstances to arise during an internship
program, but regardless, I'm both honored and thrilled.
I think there's a good chance you've run into one of our mentees this year,
Shem Angelo Verlain (aka vixalien). He's been actively engaging in the GJS
community for some time and contributing to better support for TypeScript,
including his application Decibels which is in incubation to become a part of
GNOME Core. His project to bootstrap TypeScript support in Workbench is going
to play an important role in its adoption by our community.
Our other mentee, Bharat Tyagi, has a familiar origin story. It started as an
innocent attempt to fix a GNOME Shell extension, turned into a merge request
for GNOME Settings, rolled over into porting Workbench demos to Vala and it's
at this point one admits to oneself they've been nerd-sniped. Since then, Bharat
has been porting more demos to Vala and working on an indexed code search for
the demos. As a bonus, we will get a GOM demo that's being used to prototype
and test searching capabilities.
The release notes are not yet finalized for GNOME 47, but there are a few
highlights worth mentioning.
There have been several improvements to the periodic credential checks, fixing
several false positives and now notifying when an account needs to be
re-authenticated. The notification policy in GNOME 47.beta turned out to be overly
aggressive, so it has been amended to ensure you are notified at most once per
account, per session.
Fig. 2. Entirely Reasonable Notification Policy
For Kerberos users, there is rarely any exciting news, however after
resurrecting a merge request by Rishi (a previous maintainer), and with some help,
we now support Linux's general notification mechanism as a very efficient
alternative to the default credential polling. If you're using your Kerberos or
Fedora account on a laptop or GNOME Mobile, this may improve your battery life
noticeably.
The support for Mail Autoconfig and improved handling of app passwords for
WebDAV accounts will ship in GNOME 47. The DAV discovery and Mail Autoconfig
will form the base of the collection provider, but this won't ship until
GNOME 48. Aside from time constraints, this will allow a cycle to shake out
bugs while the existing pieces are stitched together.
The Microsoft 365 provider has enabled support for email, calendar and
contacts, thanks to more work by Jan Michael-Brummer and Milan Crha. This
is available in GNOME OS Nightly now, so it's a great time to get in some
early testing. We've made progress on verifying our application to support more
organizational accounts and, although this is not constrained by our release
schedule, I expect it to be resolved by GNOME 47.
Many thanks again to the Sovereign Tech Fund and everyone who helped make it
possible. I would also like to express my appreciation to everyone who helps me
catch up on the historical context of the various XDG and GLib facilities. Even
when documentation exists, it can be extremely arduous to put the picture
together by yourself.
After many months, I finally found the time to finish the GNOME desktop/application settings migration in the Linux Desktop Migration Tool and made another release. It basically involves exporting the dconf keys on the source machine and importing writable keys on the destination machine. I’ve also added some extra code to handle the desktop background. If the dconf key points to a picture that is not present on the destination machine, the picture is copied as well, be it a custom background or a system-provided one that is no longer shipped (in case you’re doing the migration between OSes of different versions).
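Under the hood this is conceptually close to what you could do by hand with dconf, minus the tool's filtering of non-writable keys and the background handling:

# On the source machine
dconf dump / > desktop-settings.ini

# On the destination machine
dconf load / < desktop-settings.ini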
The list of migrations the tool can do is already fairly long:
Migrate data in XDG directories (Documents, Pictures, Downloads…) and other arbitrary directories in home.
I attended GUADEC 2024 last month in Denver, Colorado. I thought I’d write about some of the highlights for me.
It was definitely the smallest GUADEC I’ve been to, and it was unusual in some other ways too, such as having several hybrid presentations, with remote and in-person presenters sharing the stage. That took some adjusting, but it worked well, even if I missed some of the energy of past events. (I shared some thoughts about hybrid GUADEC on a Discourse thread).
I felt this GUADEC was really defined by the keynotes. They were great!
First, we had Ryan Stipes from Thunderbird telling us all about Thunderbird’s journey from a somewhat neglected but well-loved side project to a thriving self-funded community project: Thunderbird, The Death and Rebirth of an OSS Project (YouTube). He had a lot to say about the value of metrics for measuring the impact of certain features and target platforms, which really resonated with people. (It is interesting to note, for instance, that there appear to be more Thunderbird users on Windows 8.1 than on Linux.) He also had a lot to say about the success Thunderbird has had simply by being direct and asking users for money.
Much of this success comes from Thunderbird doing a good job telling its own story. People clearly understand what Thunderbird is doing for them. And there was plenty of talk for the next few days: what does it mean for GNOME to own its story?
I also really enjoyed Stephanie Taylor’s keynote, all about Google Summer of Code (which started 20 years ago now!): Google Summer of Code 20 years of OSS Mentorship (YouTube). It just made me super happy as a GSoC alumnus (one of thousands!) to see that program continuing to do so much good, and to see how much mentorship in open source has grown over the years.
Scott Jenson’s presentation, How can GNOME explore bigger concepts? (YouTube), is another really important watch. Scott’s advice about breaking free from traps like constraint thinking really resonated with me, especially his suggestion to, at first, treat the software like it is magic and see where that leads.
That approach reminds me of how software improves in implementation, as well. It is natural for a codebase to start off with a whole bunch of “do magic” stub functions, then slowly morph into a chaotic mess until finally it turns into something that actually might just work. And getting to that last step usually involves deleting a lot of code, after it turns out you never needed all that much magic. But you have to be patient with the chaos to get there. You have to believe in it.
Speaking of magic, there is so much around GNOME that is exciting right now, so I spent some time just being excited about things.
Eitan Isaacson talked about Spiel, a modern speech synthesis system: The Whole Spiel – A New Speech Synthesis API (YouTube). I loved his examples showing how important it is to satisfy several very different use cases for speech synthesis. While one user may value the precision of eSpeak at chipmunk speed, another would prefer that their computer talk like a human. And if we can get speech synthesis working well for non-accessibility reasons, there’s a real curb cut effect that should benefit everyone, including people who are just starting to use accessibility tools.
I went to the newest edition of Jonathan Blandford and Federico Mena Quintero’s presentation about Crosswords, GNOME Crosswords, Year Three (YouTube). It was abridged due to the format, but I especially enjoyed learning about the MVC-like data model for the application. It would be neat to see more GNOME apps using the same pattern.
There was a lot to learn about GNOME OS and OpenQA testing. The process for a new developer to get into hacking on a GNOME system component tends to be really awkward – particularly if that developer doesn’t want to mess up their host system. So You’re always breaking GNOME (YouTube) got me pretty excited about what’s coming with GNOME OS and sysext, as well as for testing in general. The OpenQA workshop on Monday was also well attended. Some people were unclear on what openqa.gnome.org does, or what it can do for them. Just stepping through some conveniently broken tests and fixing them together was an excellent way to demystify the thing.
Much of this work is being helped along by the Sovereign Tech Fund. This was the GUADEC where a lot of that work was on display, and I think it’s amazing to see so many quiet but high-impact projects finally getting the attention (and funding) they deserve.
Outside of the event, it was great hanging out around Denver with all sorts of GNOME folks. I loved how many restaurants were perfectly happy to accommodate giant mobs of people. We saw huge mountains, the Colorado Rockies winning a baseball game, surprisingly good karaoke, and some truly unique bars. On the last day, a large contingent of us headed to Meow Wolf, which was just a ridiculously fun way to spend a few hours. It reminded me of a point and click adventure game in the style of Myst and Riven, in all the best ways.
I was also suitably impressed by the 35-minute walk from where I was staying, around Empower Field, over the South Platte River, under some giant highway … which was actually entirely pleasant, for North America. This part of Denver has plenty of pedestrian bridges, which are both nice to walk along and really helpful for guiding pedestrians through certain areas, so for me the obvious walking routes were completely different from (and as efficient as) the obvious driving routes.
The GUADEC dinner was, for me, the ideal GUADEC dinner. It was right there at the venue, at the same brewery people had been going to every day – but this time with free tacos! I truly appreciated the consistency there, for Denver has good beer and good tacos. I also appreciated that we were set up both inside and outside, at nice big tables with plenty of room for people to sit. It helped me to feel comfortable, and it was great for people who were there with families (which meant I got to meet said families!). It reminded me of the GUADEC 2022 taco party. An event like this really shines when people are moving around, and there was a lot of that here.
It turns out I didn’t take many pictures this year, but the official ones are better anyway. I did, however, take far too many pictures from the train ride home: I rode Amtrak, mostly for fun, on the California Zephyr from Denver to Sacramento; then the Coast Starlight from Sacramento to Seattle; and the smaller Cascades train from Seattle to Vancouver. It was beautiful, and I seriously think everyone should have the opportunity to try an overnight roomette on the Zephyr. My favourite part was sitting in the spacious observation car watching the world go by, getting only the tiniest amount of work done. I found tons of fun people to talk to, which I don’t usually do, but something about that space made it oddly comfortable. Everyone there was happy and sociable and relaxed. And I guess I was still in conference mode.
I returned home refreshed and excited for where GNOME is heading, especially with new progress around accessibility and developer tools. And with plenty of ideas for little projects I can work on this year.
Thanks to all the awesome people who make GUADEC happen, as well as my employer, Endless OS Foundation, for giving me the opportunity to spend several work days meeting people from around the GNOME community and wandering around Denver.
It's been a busy year, and our platform and developer community-building efforts are paying off. Let's take a look at what we've been up to over the last six months, and measure its effect.
We're back with some new milestones thanks to the continued growth of Flathub as an app store and the incredible work of both our largely volunteer team and our growing app developer community:
Over 1,000 apps have been verified by their developers on Flathub, including 70% of the top 30 most popular apps. Developers of verified apps are ultimately in charge of their own app listings, and their updates are delivered directly to Flathub users while passing our automated testing and human review of things like permission changes.
Over 100 apps are now passing our quality guidelines that include checks like icon contrast on both light and dark backgrounds, quality screenshots, and consistent app naming and descriptions so users get a better experience browsing Flathub. These guidelines are what enable us to curate and display visually appealing and consistent banners on the new home page, for example.
This means that between late February and July, the developers of over 100 apps went out of their way to improve—and sometimes make significant changes to—their apps' metadata to get ready for these new guidelines and features on Flathub. We're proud of these developers who have gone above and beyond, and we look forward to even more apps opting in over time.
Developers, if you'd like to see your app featured on the home page, please ensure you are following these guidelines! We've heard from app developers that getting your app featured not only gives a bump in downloads, but can also bring an increase in contributions to your project if it's open source.
Six months ago we passed one million active users, based on a simple but conservative estimate: counting the updates we had served for a common runtime version. Using that same methodology, we now estimate we have over 4 million active users!
As a reminder, this data is publicly available and anyone can check our work. In fact, I personally would love it if we could work with a volunteer from the community to automate this statistic so we don't have to do manual collation each time. If you're interested, check out this GitHub issue.
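To give an idea of how small that automation could be, here is a rough Python sketch of the estimate; note that the stats URL and the JSON shape assumed below are invented placeholders for illustration, not Flathub's actual API:

    # Rough sketch of the conservative active-user estimate: sum update
    # downloads of one common runtime version over its update window.
    # STATS_URL and the JSON shape are hypothetical, not Flathub's real API.
    import json
    from urllib.request import urlopen

    STATS_URL = "https://flathub.org/stats/runtime-updates.json"  # hypothetical
    RUNTIME_REF = "org.freedesktop.Platform/x86_64/23.08"  # one common runtime

    with urlopen(STATS_URL) as resp:
        daily = json.load(resp)  # assumed: {"YYYY-MM-DD": {ref: update_count}}

    # Each active install updates the runtime roughly once per window, so
    # summing the update downloads gives a floor on the number of active users.
    estimate = sum(counts.get(RUNTIME_REF, 0) for counts in daily.values())
    print(f"~{estimate:,} active users (conservative)")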
Those users have been busy, too: to date we have served over two billion downloads of different apps to people using different Linux flavors all around the world. This is a huge community of people trusting Flathub as their source of apps for Linux.
Thank you to our download-happy community of users who have put their trust in Flathub as their source of apps on Linux. Thank you to all of the developers of those apps, and in particular those developers who have chosen to follow the quality guidelines to help make Flathub a more consistent and engaging space. And thank you to every contributor to Flathub itself whether you are someone who fixed a typo in the developer documentation, helped translate the store, contributed mockups and design work, or spent countless hours keeping everything running smoothly.
As a grassroots effort, we wouldn't have become the Linux app store without each and every one of you. ❤️