Steven Deobald

@steven

2025-08-08 Foundation Update

## Opaque Things

Very unfortunately, most of my past two weeks have been spent on “opaque things.” Let’s just label that whole bundle “bureaucracy” for now.

 

## Apology

I owe a massive apology to the GIMP team. These folks have been incredibly patient while the GNOME Foundation, as their fiscal host, sets up a bunch of new paperwork for them. It’s been a slow process.

Due to the above bureaucracy, this process has been even slower than usual. The timing is awful: coming in GIMP’s big pearl anniversary year, this has been especially frustrating for them.

As long as I am in this role, I accept responsibility for the Foundation’s struggles in supporting the GIMP project. I’m sorry, folks, and I hope we get past the current round of difficulties quickly so we can provide the fiscal hosting you deserve.

 

## Advisory Board Room

One of my favourite parts of GUADEC was our Advisory Board day. I really enjoyed hearing what all our Advisory Board members are working on, their challenges, and how they might collaborate with one another. It was a really productive day and we all agreed we’d like to continue that feeling throughout the year. We’ve started a new Advisory Board Room, as a result. The medium for this meeting place may change (as required), but it’s my commitment to support it, since the Foundation is the town hall for these organizations. SUSE, Canonical, Red Hat, the Document Foundation, Endless, and Debian were all in attendance at GUADEC — it was an honour to bring these folks together. We recently had postmarketOS join our Advisory Board and, given how much progress they’ve already made, I’m excited to see them breathe new life into the GNOME ecosystem. The desktop is rock solid. Mobile is growing quickly. I can’t wait to listen to more of these conversations.

 

## Draft Budget

Thanks to Deepa’s tremendous work teasing apart our financial reporting, she and I have a draft budget to present to the Board at their August 12th regular meeting.

The board already saw a preliminary version of the budget at their July 27th meeting and an early draft last week. Our schedule has an optional budget review on August 26th, a required budget review on September 9th and, if we need a final meeting to pass the budget, we can do that at the September 23rd board meeting. Last year, Richard passed the first on-time GNOME Foundation budget in many years. I’m optimistic we can pass a clear, uncomplicated budget a month early in 2025.

(Our fiscal year is October – September.)

I’m incredibly grateful to Deepa for all the manual work she’s put into our finances during an already busy summer. Deepa’s also started putting in place a tagging mechanism with our bookkeepers which will hopefully resolve this in a more automatic way in the future. The tagging mechanism will work in conjunction with our chart of accounts, which doesn’t really represent the GNOME Foundation’s operating capital at all, as a 501(c)(3)’s chart of accounts is geared toward the picture we show to the IRS, not the picture we show to the Board or use to pass a budget. It’s the same data, but two very different lenses on it.

Understanding is the first step. After a month of wrestling with the data, we now understand it. Automation is the second step. The board should know exactly where the Foundation stands, financially, every year, every quarter, and every month… without the need for manual reports.

 

## 501(c)(3) Structural Improvements

As I mentioned in my GUADEC keynote, the GNOME Foundation has a long road ahead of it to become an ideal 501(c)(3) … but we just need to make sure we’re focused on continuous improvement. Other non-profits struggle with their own structural challenges too, since charities can take many different shapes and are often held together almost entirely by volunteers. Every organization is different, but every organization wants to succeed.

I had the pleasure of some consultation conversations this week with Amy Parker of the OpenSSL Foundation and Halle Baksh, a 501(c) expert from California. Delightful folks. Super helpful.

One of the greatest things I’m finding about working in the non-profit space is that we have a tremendous support network. Everyone wants us to succeed and every time we’re ready to take the next step, there will be someone there to help us level up. Every time we do, the Foundation will get more mature and, as a result, more effective.

 

## Travel Policy Freeze

I created some confusion by mentioning the travel policy freeze in my AGM slides. A group of hackers asked me at dinner on the last night in Brescia, “does this mean that all travel for contributors is cancelled?”

No, absolutely not. The travel policy freeze means that every travel request must be authorized by the Executive Director and the President; we have temporarily suspended the Travel Committee’s spending authority. We will of course still sponsor visas for interns and other program-related travel. The intention of the travel policy freeze is to reduce administrative travel costs and redirect that money to program travel, not the other way around.

Sorry if I freaked anyone out. The goal (as long as I’m around) will always be to push Foundation resources toward programs like development, infrastructure, and events.

 

## Getting Back To Work

Sorry again for the short update. I’m optimistic that we’ll get over this bureaucratic bump in the road soon enough. Thanks for your patience.

GNOME 49 Backlight Changes

One of the things I’m working on at Red Hat is HDR support. HDR is inherently linked to luminance (brightness, but ignoring human perception) which makes it an important parameter for us that we would like to be in control of.

One reason is rather stupid. Most external HDR displays refuse to let the user control the luminance in their on-screen-display (OSD) if the display is in HDR mode. Why? Good question. Read my previous blog post.

The other reason is that the amount of HDR headroom we have available is determined by the maximum luminance we can achieve relative to the luminance we assign to a sheet of white paper (the reference white level). For power consumption reasons, we want to be able to dynamically change the available headroom, depending on how much headroom the content can make use of. If there is no HDR content on the screen, there is no need to crank up the backlight to give us more headroom, because the headroom will be unused.

To work around the first issue, mutter can change the signal it sends to the display, so that white is not a signal value of 1.0 but somewhere else between 0 and 1. This essentially emulates a backlight in software. The drawback is that we’re no longer using a bunch of bits in the signal and issues like banding might become more noticeable, but since we only do this with 10- or 12-bit HDR signals, this isn’t an issue in practice.
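A minimal sketch of that idea, assuming linear-light pixel values and leaving the display's inverse EOTF abstract; the names are illustrative and this is not mutter's actual code:

/* Sketch of a "software backlight": scale pixels in linear light so that
 * reference white lands below the display's maximum, then encode with the
 * display's inverse EOTF as usual. */

/* The display's inverse EOTF (e.g. PQ encoding), assumed to exist elsewhere. */
extern float inverse_eotf (float linear);

static float
encode_with_software_backlight (float linear_value, float backlight_fraction)
{
  /* backlight_fraction == 1.0: white stays at the full signal level.
   * backlight_fraction == 0.5: white is emitted at half the linear light,
   * i.e. at a signal value somewhere below 1.0, emulating a dimmer panel. */
  return inverse_eotf (linear_value * backlight_fraction);
}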

This was already implemented in GNOME 48, but only as an API that mutter exposed and Settings showed as “HDR Brightness”.

GNOME Settings Display panel showing the HDR Brightness Setting

"HDR Brightness" in GNOME Settings

The second issue requires us to be able to map the backlight value to luminance, and to change the backlight atomically with an update to the screen. We could work towards adding those things to the existing sysfs backlight API, but it turns out that there are a number of problems with it. Mapping the sysfs entry to a connected display is really hard (GNOME pretends that there is only one single internal display that can ever be controlled), and writing a value to the backlight requires root privileges or calling a logind D-Bus API. One internal panel can expose multiple backlights, and a value of 0 can mean either that the display turns off or that it is just really dim.
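For reference, driving that sysfs API looks roughly like the sketch below; the device name is an assumption for illustration, and the privilege and mapping problems described above are exactly what make it awkward:

/* Rough sketch of the existing sysfs backlight interface: read the maximum,
 * then write a scaled value. Writing normally needs root, or a round trip
 * through logind's D-Bus API. "intel_backlight" is only an example name;
 * figuring out which entry belongs to which connector is the hard part. */
#include <stdio.h>

static int
set_sysfs_backlight (double fraction)
{
  FILE *f = fopen ("/sys/class/backlight/intel_backlight/max_brightness", "r");
  long max = 0;

  if (f == NULL)
    return -1;
  if (fscanf (f, "%ld", &max) != 1)
    max = 0;
  fclose (f);
  if (max <= 0)
    return -1;

  f = fopen ("/sys/class/backlight/intel_backlight/brightness", "w");
  if (f == NULL)
    return -1;  /* typically fails with EACCES without elevated privileges */
  fprintf (f, "%ld\n", (long) (fraction * max));
  fclose (f);
  return 0;
}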

So a decision was made to create a new API that will be part of KMS, the API that we use to control the displays.

The sysfs backlight was controlled by gnome-settings-daemon, and GNOME Shell called a D-Bus API when the screen brightness slider was moved.

To recap:

  • There is a sysfs backlight API, which covers basically one internal panel and requires logind or a setuid helper executable
  • gnome-settings-daemon controlled this single-screen sysfs backlight
  • Mutter has a “software backlight” feature
  • KMS will get a new backlight API that needs to be controlled by mutter

Overall, this is quite messy, so I decided to clean this up.

Over the last year, I moved the sysfs backlight handling from gnome-settings-daemon into mutter, added logic to decide which backlight to use (sysfs or the “software backlight”), and made it generic so that any screen can have a backlight. This means mutter is the single source of truth for the backlight itself. The backlight value comes from a number of sources. The user can configure the screen brightness in the quick settings menu and via keyboard shortcuts. Power saving features can kick in and dim the screen. Lastly, an Ambient Light Sensor (ALS) can take control over the screen brightness. To make things more interesting, a single “logical monitor” can have multiple hardware monitors, each of which can have a backlight. All of that logic is now neatly sitting in GNOME Shell, which takes signals from gnome-settings-daemon about the ALS and dimming. I also changed the Quick Settings UI to make it possible to control the brightness on multiple screens, and removed the old “HDR Brightness” from Settings.

All of this means that we can now handle screen brightness on multiple monitors and when the new KMS backlight API makes it upstream, we can just plug it in, and start to dynamically create HDR headroom.

Richard Hughes

@hughsie

LVFS Sustainability Plan

tl;dr: I’m asking the biggest users of the LVFS to sponsor the project.

The Linux Foundation is kindly paying for all the hosting costs of the LVFS, and Red Hat pays for all my time — but as the LVFS grows and grows, that’s going to be less and less sustainable over the longer term. We’re trying to find funding to hire additional resources as a “me replacement” so that there is backup and additional attention on the LVFS (and so that I can go on holiday for two weeks without needing to take a laptop with me).

This year there will be a fair-use quota introduced, with different sponsorship levels having a different quota allowance. Nothing currently happens if the quota is exceeded, although there will be additional warnings asking the vendor to contribute. The “associate” (free) quota is also generous, with 50,000 monthly downloads and 50 monthly uploads. This means that almost all the 140 vendors on the LVFS should expect no changes.

Vendors providing millions of firmware files to end users (and deriving tremendous value from the LVFS…) should really either be providing a developer to help write shared code, design abstractions and review patches (like AMD does), or allocating some funding so that we can pay for resources to take action for them. So far no OEMs provide any financial help for the infrastructure itself, although two have recently offered — and we’re now in a position to “say yes” to the offers of help.

I’ve written an LVFS Project Sustainability Plan that explains the problem and how OEMs should work with the Linux Foundation to help fund the LVFS.

I’m aware funding open source software is a delicate matter and I certainly do not want to cause anyone worry. We need the LVFS to have strong foundations; it needs to grow, adapt, and be resilient – and it needs vendor support.

Draft timeline, which is probably a little aggressive for the OEMs — so the dates might be moved back in the future:

APR 2025: We started showing the historical percentage “fair use” download utilization graph on vendor pages. As time goes on this will also be recorded into per-protocol sections.

downloads over time

JUL 2025: We started showing the historical percentage “fair use” upload utilization, also broken into per-protocol sections:

uploads over time

JUL 2025: We started restricting logos on the main index page to vendors joining as startup or above level — note Red Hat isn’t sponsoring the LVFS with money (but they do pay my salary!) — I’ve just used the logo as a placeholder to show what it would look like.

AUG 2025: I created this blogpost and sent an email to the lvfs-announce mailing list.

AUG 2025: We allow vendors to join as startup or premier sponsors, showing their logo on the main page and a badge on the vendor list

DEC 2025: Start showing over-quota warnings on the per-firmware pages

DEC 2025: Turn off detailed per-firmware analytics to vendors below startup sponsor level

APR 2026: Turn off access to custom LVFS API for vendors below Startup Sponsorship level, for instance:

  • /lvfs/component/{}/modify/json
  • /lvfs/vendors/auth
  • /lvfs/firmware/auth

APR 2026: Limit the number of authenticated automated robot uploads for vendors below the Startup Sponsorship level.

Comments welcome!

This Week in GNOME

@thisweek

#211 Handling Brightness

Update on what happened across the GNOME project in the week from August 01 to August 08.

GNOME Core Apps and Libraries

GNOME Shell

Core system user interface for things like launching apps, switching windows, system search, and more.

swick says

Screen brightness handling has been overhauled! The immediate benefit is that the screen brightness controls in the Quick Settings menu now work in HDR mode and with multiple monitors.

Read more in my blog post: https://blog.sebastianwick.net/posts/gnome-49-backlight-changes/

GLib

The low-level core library that forms the basis for projects such as GTK and GNOME.

Philip Withnall reports

Tobias Stoeckmann has been fixing many corner cases in array handling code in GLib, making it more robust, and has also found time to help with reformatting and improving the documentation.

We could do with help to finish the port of GLib to gi-docgen! If you can spare half an hour to tidy up a piece of the API documentation so it follows the new API doc guidelines then please pick something off #3250, thank you!

GNOME Incubating Apps

Alice (she/her) 🏳️‍⚧️🏳️‍🌈 reports

right before the UI freeze, Papers got a new text selection style, matching the rest of the apps - selection is translucent, and the original text color is visible through it. This required an API addition in Poppler and will only work in nightly for now - if Poppler is too old, it will revert to the previous style

GNOME Circle Apps and Libraries

Tuba

Browse the Fediverse.

GeopJr 🏳️‍⚧️🏳️‍🌈 says

Tuba v0.10.0 is now available, with many new features and bug fixes!

✨ Highlights:

  • New Composer
  • Grouped Notifications
  • Play media from third-party services in-app with Clapper
  • In-app web browser
  • Collapse long posts
  • Mastodon quotes
  • Iceshrimp Drive
  • ‘Featured’ Profile tab
  • Local-only posting
  • Search History
  • Alt text from file metadata

Third Party Projects

Alexander Vanhee says

Gradia now has at least 127% more gradients thanks to the new gradient selector, which now supports radial and conic modes as well as custom color stops. I also took advantage of Gradia being a windowed annotation tool by implementing zooming, making it easier to draw with precision.

Try it out via Flathub.

Pipeline

Follow your favorite video creators.

schmiddi announces

Pipeline version 3.0.0 was released. This release is a major redesign of the UI to be more intuitive as well as adaptive for both desktop and mobile. See the changelog for more information regarding the release. Huge thanks to lo for creating the mockup as well as helping to implement and test this version, as well as Alexander for the help implementing quite a lot of the updated UI and also testing.

If you are running Pipeline on an older device without GLES 3.0 support like the PinePhone, note that due to an update in GTK removing GLES 2.0 support the application will now be software rendered, decreasing performance and breaking the internal video player. I recommend switching to use an external video player, like Clapper, instead. There is also a setting to use cairo software rendering instead of LLVMpipe, which in my testing improves performance a bit on those devices.

GNOME Foundation

barthalion announces

GNOME Foundation members (and SSO account holders in general) have two new services at their disposal:

  • vault.gnome.org, a password manager backed by Vaultwarden. Create an account with your @gnome.org e-mail alias; it is not tied to the SSO and so accounts remain active even when the membership expires. Please keep in mind we cannot recover your password, and thus the content of your vault, unless you are a member of staff or the board.
  • reader.gnome.org, an RSS reader backed by Miniflux. Simply log in with your SSO account, as with other services. It can also be used with Newsflash after generating an API key in settings.

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

Michael Meeks

@michael

2025-08-07 Thursday

  • Packed, hearts set for home, rested by the beach, bus, plane, car - back home. Good to see Leanne; bed early - two hours ahead time-wise.

Andy Wingo

@wingo

whippet hacklog: adding freelists to the no-freelist space

August greetings, comrades! Today I want to bookend some recent work on my Immix-inspired garbage collector: firstly, an idea with muddled results, then a slog through heuristics.

the big idea

My mostly-marking collector’s main space is called the “nofl space”. Its name comes from its historical evolution from mark-sweep to mark-region: instead of sweeping unused memory to freelists and allocating from those freelists, sweeping is interleaved with allocation; “nofl” means “no free-list”. As it finds holes, the collector bump-pointer allocates into those holes. If an allocation doesn’t fit into the current hole, the collector sweeps some more to find the next hole, possibly fetching another block. Space for holes that are too small is effectively wasted as fragmentation; mutators will try again after the next GC. Blocks with lots of holes will be chosen for opportunistic evacuation, which is the heap defragmentation mechanism.

Hole-too-small fragmentation has bothered me, because it presents a potential pathology. You don’t know how a GC will be used or what the user’s allocation pattern will be; if it is a mix of medium (say, a kilobyte) and small (say, 16 bytes) allocations, one could imagine a medium allocation having to sweep over lots of holes, discarding them in the process, which hastens the next collection. Seems wasteful, especially for non-moving configurations.

So I had a thought: why not collect those holes into a size-segregated freelist? We just cleared the hole, the memory is core-local, and we might as well. Then before fetching a new block, the allocator slow-path can see if it can service an allocation from the second-chance freelist of holes. This decreases locality a bit, but maybe it’s worth it.
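A sketch of the shape of that idea, with invented names and constants rather than the actual nofl-space code:

/* Second-chance freelist sketch: when sweeping passes over a hole that is too
 * small for the current allocation, stash it on a size-segregated freelist;
 * the allocator slow path checks that list before fetching a fresh block.
 * Sizes are in granules (the collector's minimum allocation unit). */
#include <stddef.h>

#define SIZE_CLASSES 32   /* only holes up to 32 granules get a second chance */

struct hole { struct hole *next; size_t granules; };

struct second_chance { struct hole *buckets[SIZE_CLASSES]; };

static void
second_chance_push (struct second_chance *sc, void *addr, size_t granules)
{
  if (granules == 0 || granules > SIZE_CLASSES)
    return;                        /* too big: let normal sweeping handle it */
  struct hole *h = addr;           /* reuse the hole's own memory as the link */
  h->granules = granules;
  h->next = sc->buckets[granules - 1];
  sc->buckets[granules - 1] = h;
}

static void *
second_chance_pop (struct second_chance *sc, size_t granules)
{
  if (granules == 0 || granules > SIZE_CLASSES)
    return NULL;
  /* First fit: any bucket at least as large as the request will do. */
  for (size_t i = granules; i <= SIZE_CLASSES; i++) {
    struct hole *h = sc->buckets[i - 1];
    if (h != NULL) {
      sc->buckets[i - 1] = h->next;
      return h;                    /* caller bump-pointer allocates into this hole */
    }
  }
  return NULL;                     /* fall back to sweeping for a new block */
}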

Thing is, I implemented it, and I don’t know if it’s worth it! It seems to interfere with evacuation, in that the blocks that would otherwise be most profitable to evacuate, because they contain many holes, are instead filled up with junk due to second-chance allocation from the freelist. I need to do more measurements, but I think my big-brained idea is a bit of a wash, at least if evacuation is enabled.

heap growth

When running the new collector in Guile, we have a performance oracle in the form of BDW: it had better be faster for Guile to compile a Scheme file with the new nofl-based collector than with BDW. In this use case we have an additional degree of freedom, in that unlike the lab tests of nofl vs BDW, we don’t impose a fixed heap size, and instead allow heuristics to determine the growth.

BDW’s built-in heap growth heuristics are very opaque. You give it a heap multiplier, but as a divisor truncated to an integer. It’s very imprecise. Additionally, there are nonlinearities: BDW is relatively more generous for smaller heaps, because it attempts to model and amortize tracing cost, and there are some fixed costs (thread sizes, static data sizes) that don’t depend on live data size.

Thing is, BDW’s heuristics work pretty well. For example, I had a process that ended with a heap of about 60M, for a peak live data size of 25M or so. If I ran my collector with a fixed heap multiplier, it wouldn’t do as well as BDW, because it collected much more frequently when the heap was smaller.

I ended up switching from the primitive “size the heap as a multiple of live data” strategy to live data plus a square root factor; this is like what Racket ended up doing in its simple implementation of MemBalancer. (I do have a proper implementation of MemBalancer, with time measurement and shrinking and all, but I haven’t put it through its paces yet.) With this fix I can meet BDW’s performance for my Guile-compiling-Guile-with-growable-heap workload. It would be nice to exceed BDW of course!
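In code the heuristic itself is tiny; here is a sketch with made-up tunables rather than whippet's actual constants:

/* Heap sizing sketch: live data plus a square-root term, so smaller heaps get
 * proportionally more slack than larger ones. The constants are placeholders. */
#include <math.h>
#include <stddef.h>

static size_t
compute_heap_size (size_t live_bytes, double multiplier, double sqrt_factor)
{
  double target = multiplier * (double) live_bytes
                + sqrt_factor * sqrt ((double) live_bytes);
  return (size_t) target;
}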

parallel worklist tweaks

Previously, in parallel configurations, trace workers would each have a Chase-Lev deque to which they could publish objects needing tracing. Any worker could steal an object from the top of a worker’s public deque. Also, each worker had a local, unsynchronized FIFO worklist, some 1000 entries in length; when this worklist filled up, the worker would publish its contents.

There is a pathology for this kind of setup, in which one worker can end up with a lot of work that it never publishes. For example, if there are 100 long singly-linked lists on the heap, and the worker happens to have them all on its local FIFO, then perhaps they never get published, because the FIFO never overflows; you end up not parallelising. This seems to be the case in one microbenchmark. I switched to not have local worklists at all; perhaps this was not the right thing, but who knows. Will poke in future.
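Roughly, the old arrangement looked like the sketch below (invented names, deque operations reduced to a declaration); the point is that work only becomes stealable on overflow:

/* Each trace worker has a small unsynchronized FIFO plus a public Chase-Lev
 * deque. If the FIFO never fills up, none of its work is ever visible to idle
 * workers, which is the pathology described above. */
#include <stddef.h>

#define LOCAL_FIFO_SIZE 1000

struct worker {
  void  *local[LOCAL_FIFO_SIZE];   /* private to this worker */
  size_t local_count;
  /* ... plus a public Chase-Lev deque that other workers can steal from */
};

/* Pushes an object onto this worker's public, stealable deque (not shown). */
extern void publish (struct worker *w, void *obj);

static void
worker_push (struct worker *w, void *obj)
{
  if (w->local_count < LOCAL_FIFO_SIZE) {
    w->local[w->local_count++] = obj;   /* invisible to other workers */
    return;
  }

  /* Only on overflow does any of this work become stealable. */
  for (size_t i = 0; i < w->local_count; i++)
    publish (w, w->local[i]);
  w->local_count = 0;
  publish (w, obj);
}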

a hilarious bug

Sometimes you need to know whether a given address is in an object managed by the garbage collector. For the nofl space it’s pretty easy, as we have big slabs of memory; bisecting over the array of slabs is fast. But for large objects whose memory comes from the kernel, we don’t have that. (Yes, you can reserve a big ol’ region with PROT_NONE and such, and then allocate into that region; I don’t do that currently.)

Previously I had a splay tree for lookup. Splay trees are great but not so amenable to concurrent access, and parallel marking is one place where we need to do this lookup. So I prepare a sorted array before marking, and then bisect over that array.

Except a funny thing happened: I switched the bisect routine to return the start address if an address is in a region. Suddenly, weird failures started happening randomly. Turns out, in some places I was testing if bisection succeeded with an int; if the region happened to be 32-bit-aligned, then the nonzero 64-bit uintptr_t got truncated to its low 32 bits, which were zero. Yes, crusty reader, Rust would have caught this!
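Distilled into a tiny self-contained example (with invented addresses), the failure mode looks like this:

/* The bisection returns the region's start address as a uintptr_t, but the
 * caller stores the result in an int. If the region starts on a 4GB boundary,
 * the low 32 bits are zero, so the truncated value is 0 and a hit looks like
 * a miss. */
#include <stdint.h>
#include <stdio.h>

static uintptr_t
lookup (uintptr_t addr)
{
  (void) addr;
  /* Pretend bisection found a region starting at 0x700000000. */
  return (uintptr_t) 0x700000000ull;
}

int
main (void)
{
  int found = lookup ((uintptr_t) 0x700000010ull);  /* truncates to 0 */
  printf ("found? %s\n", found ? "yes" : "no");     /* prints "no" */
  return 0;
}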

fin

I want this new collector to work. Getting the growth heuristic good enough is a step forward. I am annoyed that second-chance allocation didn’t work out as well as I had hoped; perhaps I will find some time this fall to give a proper evaluation. In any case, thanks for reading, and hack at you later!

libinput and Lua plugins (Part 2)

Part 2 is, perhaps surprisingly, a follow-up to libinput and lua-plugins (Part 1).

The moon has circled us a few times since that last post and some update is in order. First of all: all the internal work required for plugins was released as libinput 1.29 but that version does not have any user-configurable plugins yet. But cry you not my little jedi and/or sith lord in training, because support for plugins has now been merged and, barring any significant issues, will be in libinput 1.30, due somewhen around October or November. This year. 2025 that is.

Which means now is the best time to jump in and figure out if your favourite bug can be solved with a plugin. And if so, let us know and if not, then definitely let us know so we can figure out if the API needs changes. The API Documentation for Lua plugins is now online too and will auto-update as changes to it get merged. There have been a few minor changes to the API since the last post so please refer to the documentation for details. Notably, the version negotiation was re-done so both libinput and plugins can support select versions of the plugin API. This will allow us to iterate the API over time while designating some APIs as effectively LTS versions, minimising plugin breakages. Or so we hope.

What warrants a new post is that we merged a new feature for plugins, or rather, ahaha, a non-feature. Plugins now have access to an API that allows them to disable certain internal features that are not publicly exposed, e.g. palm detection. The reason why libinput doesn't have a lot of configuration options has been explained previously (though we actually have quite a few options) but let me recap for this particular use-case: libinput doesn't have a config option for e.g. palm detection because we have several different palm detection heuristics and they depend on device capabilities. Very few people want no palm detection at all[1], so disabling it means you get a broken touchpad, and then we get to add configuration options for every palm detection mechanism. And keep those supported forever because, well, workflows.

But plugins are different, they are designed to take over some functionality. So the Lua API has an EvdevDevice:disable_feature("touchpad-palm-detection") function that takes a string with the feature's name (easier to make backwards/forwards compatible this way). This example will disable all palm detection within libinput and the plugin can implement said palm detection itself. At the time of writing, the following self-explanatory features can be disabled: "button-debouncing", "touchpad-hysteresis", "touchpad-jump-detection", "touchpad-palm-detection", "wheel-debouncing". This list is mostly based on "probably good enough", so as above: if there's something else then we can expose that too.

So hooray for fewer features and happy implementing!

[1] Something easily figured out by disabling palm detection or using a laptop where palm detection doesn't work thanks to device issues

Michael Meeks

@michael

2025-08-06 Wednesday

  • J. out for an intermediate paddleboarding lesson; took H. and M. for a sail in a Quest - the wind veering amazingly around the compass - on-shore, then off-shore, then no wind etc.
  • The team published the next strip: "Spending a surplus"
    The Open Road to Freedom - strip#29 - Spending a surplus
  • Final windsurfing in the afternoon, a somewhat exhausting time of gusts of wind then none, repeated clambering up the board, good practice I guess.
  • Ministry in the evening, dinner, swimming, evening celebration of the week - having met lots of lovely people.

Steven Deobald

@steven

2025-08-01 Foundation Update

This will perhaps be the least-fun Foundation Update of the year. July 27th was supposed to be the “Board Hack Day” (yay?… policy hackfest? everyone’s favourite?), but it ended up consumed with more immediately pressing issues. Somewhat unfortunate, really, as I think we were all looking forward to removing the executive tasks from the Board project wall and boiling their work down to more strategic and policy work. I suppose we’ll just have to do that throughout the year.

Many people are on vacation right now, so the Foundation feels somewhat quiet in its post-GUADEC moment. Budget planning is happening. Doozers are doozering. The forest gnomes are probably taking a nap. There’s a lot of annoying paperwork being shuffled around.

I hope for a more exciting Foundation Update in the next week or two. Can’t win ’em all.

Jussi Pakkanen

@jpakkane

Let's properly analyze an AI article for once

Recently the CEO of Github wrote a blog post called Developers reinvented. It was reposted with various clickbait headings like GitHub CEO Thomas Dohmke Warns Developers: "Either Embrace AI or Get Out of This Career" (that one feels like an LLM generated summary of the actual post, which would be ironic if it wasn't awful). To my great misfortune I read both of these. Even if we ignore whether AI is useful or not, the writings contain some of the absolute worst reasoning and stretched logical leaps I have seen in years, maybe decades. If you are ever in need of finding out how not to write a "scientific" text on any given subject, this is the disaster area for you.

But before we begin, a detour to the east.

Statistics and the Soviet Union

One of the great wonders of statistical science of the previous century was without a doubt the Soviet Union. They managed to invent and perfect dozens of ways to turn data to your liking, no matter the reality. Almost every official statistic issued by the USSR was a lie. Most people know this. But even most of those do not grasp just how much the stats differed from reality. I sure didn't until I read this book. Let's look at some examples.

Only ever report percentages

The USSR's glorious statistics tended to be of the type "manufacturing of shoes grew over 600% this five year period". That certainly sounds a lot better than "In the last five years our factory made 700 pairs of shoes as opposed to 100" or even "7 shoes instead of 1". If you are really forward thinking, you can even cut down shoe production on those five year periods when you are not being measured. It makes the stats even more impressive, even though in reality many people have no shoes at all.

The USSR classified the real numbers as state secrets because the truth would have made them look bad. If a corporation only gives you percentages, they may be doing the same thing. Apply skepticism as needed.

Creative comparisons

The previous section said the manufacturing of shoes has grown. Can you tell what it is not saying? That's right, growth over what? It is implied that the comparison is to the previous five year plan. But it is not. Apparently a common comparison in these cases was the production amounts of the year 1913. This "best practice" was not only used in the early part of the 1900s, it was used far into the 1980s.

Some of you might wonder why 1913 and not 1916, which was the last year before the bolsheviks took over? Simply because that was the century's worst year for Russia as a whole. So if you encounter a claim that "car manufacturing was up 3700%" some year in 1980s Soviet Union, now you know what that actually meant.

"Better" measurements

According to official propaganda, the USSR was the world's leading country in wheat production. In this case they even listed out the production in absolute tonnes. In reality it was all fake. The established way of measuring wheat yields is to measure the "dry weight", that is, the mass of final processed grains. When it became apparent that the USSR could not compete with imperial scum, they changed their measurements to "wet weight". This included the mass of everything that came out from the nozzle of a harvester, such as stalks, rats, mud, rain water, dissidents and so on.

Some people outside the iron curtain even believed those numbers. Add your own analogy between those people and modern VC investors here.

To business then

The actual blog post starts with this thing that can be considered a picture.

What does this choice of image tell us about the person using it in their blog post?

  1. Said person does not have sufficient technical understanding to grasp the fact that children's toy blocks should, in fact, be affected by gravity (or that perspective is a thing, but we'll let that pass).
  2. Said person does not give a shit about whether things are correct or could even work, as long as they look "somewhat plausible".

Are these the sort of traits a person in charge of the largest software development platform on Earth should have? No, they are not.

To add insult to injury, the image seems to have been created with the Studio Ghibli image generator, which Hayao Miyazaki described as an abomination on art itself. Cultural misappropriation is high on the list of core values at Github HQ it seems.

With that let's move on to the actual content, which is this post from Twitter (to quote Matthew Garrett, I will respect their name change once Elon Musk starts respecting his child's).

Oh, wow! A field study. That makes things clear. With evidence and all! How can we possibly argue against that?

Easily. As with a child.

Let's look at this "study" (and I'm using the word in its loosest possible sense here) and its details with an actual critical eye. The first thing is statistical representativeness. The sample size is 22. According to this sample size calculator I found, the required sample size for a population of just one thousand people would be 278, but, you know, one order of magnitude one way or another, who cares about those? Certainly not business big shot movers and shakers. Like Stockton Rush for example.

The math above assumes an unbiased sampling. The post does not even attempt to answer whether that is the case. It would mean getting answers to questions like:

  • How were the 22 people chosen?
  • How many different companies, skill levels, nationalities, genders, age groups etc were represented?
  • Did they have any personal financial incentive on making their new AI tools look good?
  • Were they under any sort of duress to produce the "correct" answers?
  • What was/were the exact phrase(s) that was asked?
  • Were they the same for all participants?
  • Was the test run multiple times until it produced the desired result?

The latter is an age-old trick where you run a test with random results over and over on small groups. Eventually you will get a run that points the way you want. Then you drop the earlier measurements and publish the last one. In "the circles" this is known as data set selection.

Just to be sure, I'm not saying that is what they did. But if someone drove a dump truck full of money to my house and asked me to create a "study" that produced these results, that is exactly how I would do it. (I would not actually do it because I have a spine.)

Moving on. The main headline grabber is "Either you embrace AI or get out of this career". If you actually read the post (I know), what you find is that this is actually a quote from one of the participants. It's a bit difficult to decipher from the phrasing but my reading is that this is not a grandstanding hurrah of all things AI, but more of a "I guess this is something I'll have to get used to" kind of submission. That is not evidence, certainly not of the clear type. It is an opinion.

The post then goes on a buzzword-salad tour of statements that range from the incomprehensible to the puzzling. Perhaps the weirdest is this nugget on education:

Teaching [programming] in a way that evaluates rote syntax or memorization of APIs is becoming obsolete.

It is not "becoming obsolete". It has been considered the wrong thing to do for as long as computer science has existed. Learning the syntax of most programming languages takes a few lessons; the rest of the semester is spent on actually using the language to solve problems. Any curriculum not doing that is just plain bad. Even worse than CS education in Russia in 1913.

You might also ponder that if the author is so out of touch with reality in this simple issue, how completely off base the rest of his statements might be. In fact the statement is so wrong at such a fundamental level that it has probably been generated with an LLM.

A magician's shuffle

As nonsensical as the Twitter post is, we have not yet even mentioned the biggest misdirection in it. You might not even have noticed it yet. I certainly did not until I read the actual post. Try if you can spot it.

Ready? Let's go.

The actual fruit of this "study" boils down to this snippet.

Developers rarely mentioned “time saved” as the core benefit of working in this new way with agents. They were all about increasing ambition.

Let that sink in. For the last several years the main supposed advantage of AI tools has been the fact that they save massive amounts of developer time. This has led to the "fire all your developers and replace them with AI bots" trend sweeping the nation. Now even this AI advertisement of a "study" cannot find any such advantages and starts backpedaling into something completely different. Just like we have always been at war with Eastasia, AI has never been about "productivity". No. No. It is all about "increased ambition", whatever that is. The post then carries on with this even more baffling statement.

When you move from thinking about reducing effort to expanding scope, only the most advanced agentic capabilities will do.

Really? Only the most advanced agentics you say? That is a bold statement to make given that the leading reason for software project failure is scope creep. This is the one area where human beings have decades long track record for beating any artificial system. Even if machines were able to do it better, "Make your project failures more probable! Faster! Spectacularer!" is a tough rallying cry to sell. 

To conclude, the actual findings of this "study" seem to be that:

  1. AI does not improve developer productivity or skills
  2. AI does increase developer ambition

This is strictly worse than the current state of affairs.

GUADEC 2025

I’m back from GUADEC 2025. I’m still super tired, but I wanted to write down my thoughts before they vanish into the eternal void.

First let me start with a massive thank you to everyone who helped organize the event. It looked extremely well rounded, the kind of well rounded that can only be explained by a lot of work from the organizers. Thank you all!

Preparations

For this GUADEC I did something special: little calendars!

3D printed calendars on the table

These were 3D printed from a custom model I made, based on the app icon of GNOME Calendar. I brought a small batch to GUADEC, and to my surprise – and joy – they vanished in just a couple of hours on the first day! It was very cool to see the calendars around after that:

Talks

This year I gave two talks:

The first one was rather difficult. On the one hand, streaming GNOME development and interacting with people online for more than 6 years has given me many anecdotal insights about the social dynamics of free software. On the other hand, it was very difficult to materialize these insights and summarize them in form of a talk.

I’ve received good feedback about this talk, but for some reason I still left it feeling like it missed something. I don’t know what, exactly. But if anyone felt energized to try some streaming, goal accomplished I guess? 🙂

The second talk was just a regular update on the XDG Desktop Portal project. It was made with the sole intention of preparing territory for the Flatpak & Portals BoF that occurred later. Not much to say about it, it was a technical talk, with some good questions and discussions after that.

As for the talks that I’ve watched, to me there is one big highlight for this GUADEC: Emmanuele’s “Getting Things Done in GNOME”.

Emmanuele on stage with "How do things happen in GNOME?" on the projector

Emmanuele published the contents of this talk in article form recently.

Sometimes, when we’re in the inflection point towards something, the right person with the right sensitivities can say the right things. I think that’s what Emmanuele did here. I think myself and others have already been feeling that the “maintainer”, in the traditional sense of the word, wasn’t a fitting description of how things have been working lately. Emmanuele gifted us with new vocabulary for that: “special interest group”. By the end of GUADEC, we were comfortably using this new descriptor.

Photography

It’s not exactly a secret that I started dipping my toes towards a long admired hobby: photography. This GUADEC was the first time I traveled with a camera and a lens, and actually attempted to document the highlights and impressions.

I’m not going to dump all of it here, but I found Brescia absolutely fascinating and fell in love with the colors and urban vision there. It’s not the most walkable or car-free city I’ve ever been to, but there are certain aspects of it that caught my attention!

The buildings had a lovely color palette, sometimes bright, sometimes pastel:

Some of the textures of the material of walls were intriguing:

I fell in love with how Brescia lights itself:

I was fascinated by how many Vespas (and similar) could be found around the city, and decided to photograph them all! Some of the prettiest ones:

The prize for best modeling goes to Sri!

Conclusion

Massive thanks to everyone who helped organize GUADEC this year. It was a fantastic event. It was great to see old friends, and to meet new people there. There were many newcomers attending this GUADEC!

And here’s the GUADEC dinner group photo!

Group photo during the GUADEC dinner party

TIL that you can spot base64 encoded JSON, certificates, and private keys

I was working on my homelab and examined a file that was supposed to contain encrypted content that I could safely commit to a GitHub repository. The file looked like this:

{
  "serial": 13,
  "lineage": "24d431ee-3da9-4407-b649-b0d2c0ca2d67",
  "meta": {
    "key_provider.pbkdf2.password_key": "eyJzYWx0IjoianpHUlpMVkFOZUZKcEpSeGo4UlhnNDhGZk9vQisrR0YvSG9ubTZzSUY5WT0iLCJpdGVyYXRpb25zIjo2MDAwMDAsImhhc2hfZnVuY3Rpb24iOiJzaGE1MTIiLCJrZXlfbGVuZ3RoIjozMn0="
  },
  "encrypted_data": "ONXZsJhz37eJA[...]",
  "encryption_version": "v0"
}

Hm, key provider? Password key? In an encrypted file? That doesn't sound right. The problem is that this file is generated by taking a password, deriving a key from it, and encrypting the content with that key. I don't know what the derived key could look like, but it could be that long indecipherable string.

I asked a colleague to have a look and he said "Oh that? It looks like a base64 encoded JSON. Give it a go to see what's inside."

I was incredulous but gave it a go, and it worked!!

$ echo "eyJzYW[...]" | base64 -d
{"salt":"jzGRZLVANeFJpJRxj8RXg48FfOoB++GF/Honm6sIF9Y=","iterations":600000,"hash_function":"sha512","key_length":32}

I couldn't believe my colleague had decoded the base64 string on the fly, so I asked. "What gave it away? Was it the trailing equal signs at the end for padding? But how did you know it was base64 encoded JSON and not just a base64 string?"

He replied,

Whenever you see ey, that's {" and then if it's followed by a letter, you'll get J followed by a letter.

I did a few tests in my terminal, and he was right! You can spot base64 json with your naked eye, and you don't need to decode it on the fly!

$ echo "{" | base64
ewo=
$ echo "{\"" | base64
eyIK
$ echo "{\"s" | base64
eyJzCg==
$ echo "{\"a" | base64
eyJhCg==
$ echo "{\"word\"" | base64
eyJ3b3JkIgo=

But it gets even better! As tyzbit reported on the fediverse, you can even spot base64 encoded certificates and private keys! They all start with LS, which is reminiscent of the LS in "TLS certificate."

$ echo -en "-----BEGIN CERTIFICATE-----" | base64
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t

Errata

As pointed out by gnabgib and athorax on Hacker News, this actually detects the leading dashes of the PEM format, commonly used for certificates, and a YAML file that starts with --- will yield the same result

$ echo "---\n" | base64
LS0tXG4K

This is not a silver bullet!

Thanks Davide and Denis for showing me this simple but pretty useful trick, and thanks tyzbit for completing it with certs and private keys!

Cordoomceps - replacing an Amiga's brain with Doom

There's a lovely device called a pistorm, an adapter board that glues a Raspberry Pi GPIO bus to a Motorola 68000 bus. The intended use case is that you plug it into a 68000 device and then run an emulator that reads instructions from hardware (ROM or RAM) and emulates them. You're still limited by the ~7MHz bus that the hardware is running at, but you can run the instructions as fast as you want.

These days you're supposed to run a custom built OS on the Pi that just does 68000 emulation, but initially it ran Linux on the Pi and a userland 68000 emulator process. And, well, that got me thinking. The emulator takes 68000 instructions, emulates them, and then talks to the hardware to implement the effects of those instructions. What if we, well, just don't? What if we just run all of our code in Linux on an ARM core and then talk to the Amiga hardware?

We're going to ignore x86 here, because it's weird - but most hardware that wants software to be able to communicate with it maps itself into the same address space that RAM is in. You can write to a byte of RAM, or you can write to a piece of hardware that's effectively pretending to be RAM[1]. The Amiga wasn't unusual in this respect in the 80s, and to talk to the graphics hardware you speak to a special address range that gets sent to that hardware instead of to RAM. The CPU knows nothing about this. It just indicates it wants to write to an address, and then sends the data.

So, if we are the CPU, we can just indicate that we want to write to an address, and provide the data. And those addresses can correspond to the hardware. So, we can write to the RAM that belongs to the Amiga, and we can write to the hardware that isn't RAM but pretends to be. And that means we can run whatever we want on the Pi and then access Amiga hardware.
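As a concrete illustration, poking a custom chip register is just a write to a fixed address: COLOR00, the background colour register, lives at 0xDFF180. This is only a conceptual sketch; how the store actually reaches the Amiga over the pistorm, and the byte-order details, are glossed over here.

/* Writing 0x0F00 (bright red, 4 bits per RGB channel) to COLOR00 changes the
 * Amiga's background colour. From Linux on the Pi the same store has to be
 * routed over the pistorm bus rather than hitting local RAM, but the model
 * is the same: hardware registers are just addresses. */
#include <stdint.h>

#define CUSTOM_BASE 0x00DFF000u
#define COLOR00     (CUSTOM_BASE + 0x180)   /* background colour register */

static void
set_background_red (volatile uint16_t *amiga_bus /* mapped window onto the bus */)
{
  amiga_bus[(COLOR00 - CUSTOM_BASE) / 2] = 0x0F00;
}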

And, obviously, the thing we want to run is Doom, because that's what everyone runs in fucked up hardware situations.

Doom was Amiga kryptonite. Its entire graphical model was based on memory directly representing the contents of your display, and being able to modify that by just moving pixels around. This worked because at the time VGA displays supported having a memory layout where each pixel on your screen was represented by a byte in memory containing an 8 bit value that corresponded to a lookup table containing the RGB value for that pixel.

The Amiga was, well, not good at this. Back in the 80s, when the Amiga hardware was developed, memory was expensive. Dedicating that much RAM to the video hardware was unthinkable - the Amiga 1000 initially shipped with only 256K of RAM, and you could fill all of that with a sufficiently colourful picture. So instead of having the idea of each pixel being associated with a specific area of memory, the Amiga used bitplanes. A bitplane is an area of memory that represents the screen, but only represents one bit of the colour depth. If you have a black and white display, you only need one bitplane. If you want to display four colours, you need two. More colours, more bitplanes. And each bitplane is stored in an independent area of RAM. You never use more memory than you need to display the number of colours you want to.

But that means that each bitplane contains packed information - every byte of data in a bitplane contains the bit value for 8 different pixels, because each bitplane contains one bit of information per pixel. To update one pixel on screen, you need to read from every bitplane, update one bit, and write it back, and that's a lot of additional memory accesses. Doom, but on the Amiga, was slow not just because the CPU was slow, but because there was a lot of manipulation of data to turn it into the format the Amiga wanted and then push that over a fairly slow memory bus to have it displayed.

The CDTV was an aesthetically pleasing piece of hardware that absolutely sucked. It was an Amiga 500 in a hi-fi box with a caddy-loading CD drive, and it ran software that was just awful. There's no path to remediation here. No compelling apps were ever released. It's a terrible device. I love it. I bought one in 1996 because a local computer store had one and I pointed out that the company selling it had gone bankrupt some years earlier and literally nobody in my farming town was ever going to have any interest in buying a CD player that made a whirring noise when you turned it on because it had a fan and eventually they just sold it to me for not much money, and ever since then I wanted to have a CD player that ran Linux and well spoiler 30 years later I'm nearly there. That CDTV is going to be our test subject. We're going to try to get Doom running on it without executing any 68000 instructions.

We're facing two main problems here. The first is that all Amigas have a firmware ROM called Kickstart that runs at powerup. No matter how little you care about using any OS functionality, you can't start running your code until Kickstart has run. This means even documentation describing bare metal Amiga programming assumes that the hardware is already in the state that Kickstart left it in. This will become important later. The second is that we're going to need to actually write the code to use the Amiga hardware.

First, let's talk about Amiga graphics. We've already covered bitmaps, but for anyone used to modern hardware that's not the weirdest thing about what we're dealing with here. The CDTV's chipset supports a maximum of 64 colours in a mode called "Extra Half-Brite", or EHB, where you have 32 colours arbitrarily chosen from a palette and then 32 more colours that are identical but with half the intensity. For 64 colours we need 6 bitplanes, each of which can be located arbitrarily in the region of RAM accessible to the chipset ("chip RAM", distinguished from "fast ram" that's only accessible to the CPU). We tell the chipset where our bitplanes are and it displays them. Or, well, it does for a frame - after that the registers that pointed at our bitplanes no longer do, because when the hardware was DMAing through the bitplanes to display them it was incrementing those registers to point at the next address to DMA from. Which means that every frame we need to set those registers back.

Making sure you have code that's called every frame just to make your graphics work sounds intensely irritating, so Commodore gave us a way to avoid doing that. The chipset includes a coprocessor called "copper". Copper doesn't have a large set of features - in fact, it only has three. The first is that it can program chipset registers. The second is that it can wait for a specific point in screen scanout. The third (which we don't care about here) is that it can optionally skip an instruction if a certain point in screen scanout has already been reached. We can write a program (a "copper list") for the copper that tells it to program the chipset registers with the locations of our bitplanes and then wait until the end of the frame, at which point it will repeat the process. Now our bitplane pointers are always valid at the start of a frame.
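For illustration, a copper list built from the Linux side might look roughly like this. The BPL1PTH offset (0x0E0) and the 0xFFFF/0xFFFE end-of-list wait are standard chipset conventions; the rest of the names and the surrounding setup are invented for the sketch, and a real list programs more registers than this:

/* Build a copper list that points the six bitplane pointer registers at our
 * bitplanes, then waits for an unreachable beam position (the conventional
 * end-of-list marker). Each MOVE is two 16-bit words: a chip register offset
 * and the value to write. */
#include <stddef.h>
#include <stdint.h>

#define BPL1PTH 0x0E0   /* bitplane 1 pointer, high word; low word at +2 */

static size_t
build_copper_list (uint16_t *cl, const uint32_t bitplanes[6])
{
  size_t n = 0;

  for (int plane = 0; plane < 6; plane++) {
    uint16_t reg = BPL1PTH + plane * 4;
    cl[n++] = reg;                                   /* MOVE to BPLxPTH */
    cl[n++] = (uint16_t) (bitplanes[plane] >> 16);
    cl[n++] = reg + 2;                               /* MOVE to BPLxPTL */
    cl[n++] = (uint16_t) (bitplanes[plane] & 0xFFFF);
  }

  cl[n++] = 0xFFFF;   /* WAIT for a beam position that never arrives: */
  cl[n++] = 0xFFFE;   /* effectively "end of copper list" */
  return n;           /* number of 16-bit words written */
}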

Ok! We know how to display stuff. Now we just need to deal with not having 256 colours, and the whole "Doom expects pixels" thing. For the first of these, I stole code from ADoom, the only Amiga Doom port I could easily find source for. This looks at the 256 colour palette loaded by Doom and calculates the closest approximation it can within the constraints of EHB. ADoom also includes a bunch of CPU-specific assembly optimisation for converting the "chunky" Doom graphic buffer into the "planar" Amiga bitplanes, none of which I used because (a) it's all for 68000 series CPUs and we're running on ARM, and (b) I have a quad core CPU running at 1.4GHz and I'm going to be pushing all the graphics over a 7.14MHz bus, so the graphics mode conversion is not going to be the bottleneck here. Instead I just wrote a series of nested for loops that iterate through each pixel and update each bitplane and called it a day. The set of bitplanes I'm operating on here is allocated on the Linux side so I can read and write to them without being restricted by the speed of the Amiga bus (remember, each byte in each bitplane is going to be updated 8 times per frame, because it holds bits associated with 8 pixels), and then copied over to the Amiga's RAM once the frame is complete.
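The naive conversion loop described above might look roughly like this; sizes and names are invented for the sketch, and the real code applies the EHB palette remapping first:

/* Chunky-to-planar conversion: for each pixel, scatter its 6-bit palette
 * index across the 6 bitplanes. One byte of a bitplane holds one bit for 8
 * horizontally adjacent pixels, most significant bit first. */
#include <stdint.h>

#define WIDTH  320
#define HEIGHT 200
#define PLANES 6

static void
chunky_to_planar (const uint8_t *chunky,   /* WIDTH * HEIGHT palette indices */
                  uint8_t planes[PLANES][WIDTH / 8 * HEIGHT])
{
  for (int y = 0; y < HEIGHT; y++) {
    for (int x = 0; x < WIDTH; x++) {
      uint8_t index = chunky[y * WIDTH + x];   /* 0..63 after remapping */
      int byte = y * (WIDTH / 8) + x / 8;
      uint8_t bit = 0x80 >> (x & 7);

      for (int p = 0; p < PLANES; p++) {
        if (index & (1 << p))
          planes[p][byte] |= bit;
        else
          planes[p][byte] &= (uint8_t) ~bit;
      }
    }
  }
}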

And, kind of astonishingly, this works! Once I'd figured out where I was going wrong with RGB ordering and which order the bitplanes go in, I had a recognisable copy of Doom running. Unfortunately there were weird graphical glitches - sometimes blocks would be entirely the wrong colour. It took me a while to figure out what was going on and then I felt stupid. Recording the screen and watching in slow motion revealed that the glitches often showed parts of two frames displaying at once. The Amiga hardware is taking responsibility for scanning out the frames, and the code on the Linux side isn't synchronised with it at all. That means I could update the bitplanes while the Amiga was scanning them out, resulting in a mashup of planes from two different Doom frames being used as one Amiga frame. One approach to avoid this would be to tie the Doom event loop to the Amiga, blocking my writes until the end of scanout. The other is to use double-buffering - have two sets of bitplanes, one being displayed and the other being written to. This consumes more RAM but since I'm not using the Amiga RAM for anything else that's not a problem. With this approach I have two copper lists, one for each set of bitplanes, and switch between them on each frame. This improved things a lot but not entirely, and there's still glitches when the palette is being updated (because there's only one set of colour registers), something Doom does rather a lot, so I'm going to need to implement proper synchronisation.

Except. This was only working if I ran a 68K emulator first in order to run Kickstart. If I tried accessing the hardware without doing that, things were in a weird state. I could update the colour registers, but accessing RAM didn't work - I could read stuff out, but anything I wrote vanished. Some more digging cleared that up. When you turn on a CPU it needs to start executing code from somewhere. On modern x86 systems it starts from a hardcoded address of 0xFFFFFFF0, which was traditionally a long way from any RAM. The 68000 family instead reads its start address from address 0x00000004, which overlaps with where the Amiga chip RAM is. We can't write anything to RAM until we're executing code, and we can't execute code until we tell the CPU where the code is, which seems like a problem. This is solved on the Amiga by powering up in a state where the Kickstart ROM is "overlayed" onto address 0. The CPU reads the start address from the ROM, which causes it to jump into the ROM and start executing code there. Early on, the code tells the hardware to stop overlaying the ROM onto the low addresses, and now the RAM is available. This is poorly documented because it's not something you need to care about if you execute Kickstart, which every actual Amiga does, and I'm only in this position because I've made poor life choices, but ok, that explained things. To turn off the overlay you write to a register in one of the Complex Interface Adaptor (CIA) chips, and things start working like you'd expect.

Except, they don't. Writing to that register did nothing for me. I assumed that there was some other register I needed to write to first, and went to the extent of tracing every register access that occurred when running the emulator and replaying those in my code. Nope, still broken. What I finally discovered is that you need to pulse the reset line on the board before some of the hardware starts working - powering it up doesn't put you in a well defined state, but resetting it does.

So, I now have a slightly graphically glitchy copy of Doom running without any sound, displaying on an Amiga whose brain has been replaced with a parasitic Linux. Further updates will likely make things even worse. Code is, of course, available.

[1] This is why we had trouble with late era 32 bit systems and 4GB of RAM - a bunch of your hardware wanted to be in the same address space and so you couldn't put RAM there so you ended up with less than 4GB of RAM

comment count unavailable comments

Victor Ma

@victorma

It's alive!

In the last two weeks, I’ve been working on my lookahead-based word suggestion algorithm. And it’s finally functional! There’s still a lot more work to be done, but it’s great to see that the original problem I set out to solve is now solved by my new algorithm.

Without my changes

Here’s what the upstream Crosswords Editor looks like, with a problematic grid:

Broken behaviour

The editor suggests words like WORD and WORM, for the 4-Across slot. But none of the suggestions are valid, because the grid is actually unfillable. This means that there are no possible word suggestions for the grid.

The words that the editor suggests do work for 4-Across. But they do not work for 4-Down. They all cause 4-Down to become a nonsensical word.

The problem here is that the current word suggestion algorithm only looks at the row and column where the cursor is. So it sees 4-Across and 1-Down—but it has no idea about 4-Down. If it could see 4-Down, then it would realise that no word that fits in 4-Across also fits in 4-Down—and it would return an empty word suggestion list.

With my changes

My algorithm fixes the problem by considering every slot that intersects the current slot. In the example grid, the current slot is 4-Across, so my algorithm looks at 1-Down, 2-Down, 3-Down, and 4-Down. When it reaches 4-Down, it sees that no letter fits in the empty cell: every possible letter causes 4-Across, 4-Down, or both to contain an invalid word. So my algorithm correctly returns an empty list of word suggestions.

Fixed behaviour
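
To make the lookahead concrete, here is a tiny, self-contained sketch of the idea. This is not the actual Crosswords Editor code: the word list, the slot patterns, and the helper names are all made up for illustration.

WORDS = {"WORD", "WORM", "WARM", "DOWN", "MOOD"}  # made-up word list


def fits(pattern, word):
    """True if word matches a pattern like 'W?RD', where '?' is an empty cell."""
    return len(word) == len(pattern) and all(
        p == "?" or p == c for p, c in zip(pattern, word)
    )


def suggestions(current_pattern, crossings):
    """crossings: (index in current slot, crossing pattern, index in crossing slot)."""
    result = []
    for candidate in sorted(WORDS):
        if not fits(current_pattern, candidate):
            continue
        # Lookahead: place the candidate's letter into each crossing slot and
        # check that the crossing slot still has at least one possible word.
        viable = True
        for i, cross_pattern, j in crossings:
            updated = cross_pattern[:j] + candidate[i] + cross_pattern[j + 1:]
            if not any(fits(updated, word) for word in WORDS):
                viable = False
                break
        if viable:
            result.append(candidate)
    return result


# The across slot is "WOR?"; its last cell is also the first cell of a
# two-letter down slot with pattern "?Z", which no word can complete, so the
# suggestion list is rightly empty.
print(suggestions("WOR?", [(3, "?Z", 0)]))  # -> []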

Christian Hergert

@hergertme

Week 31 Status

Foundry

  • Added a new gutter renderer for diagnostics using the FoundryOnTypeDiagnostics described last week.

  • Wrote another new gutter renderer for “line changes”.

    I’m really happy with how I can use fibers w/ GWeakRef to do worker loops but not keep the “owner object” alive. As long as you have a nice way to break out of the fiber loop when the object disposes (e.g. trigger a DexCancellable/DexPromise/etc) then writing this sort of widget is cleaner/simpler than before w/ GAsyncReadyCallback.

    foundry-changes-gutter-renderer.c

  • Added a :show-overview property to the line changes renderer which conveniently allows it to work both as a per-line change status and, when placed in the right-side gutter, as an overview of the whole document so you can see your place in it. Builder just recently got this feature implemented by Nokse and this is basically just a simplified version of that thanks to fibers.

  • Abstracted TTY auth input into a new FoundryInput abstraction. This is currently used by the git subsystem to acquire credentials for SSH, krb, user, user/pass, etc depending on what the peer supports. However, it became pretty obvious to me that we can use it for more than just Git. It maps pretty well to at least two more features coming down the pipeline.

    Since the input mechanisms are used on a thread for TTY input (to avoid blocking main loops, fiber schedulers, etc), they needed to be thread-safe. Most things are immutable and a few well controlled places are mutable.

    The concept of a validator is implemented externally as a FoundryInputValidator, which allows for re-use and separates the mechanism from policy. I quite like how it turned out, honestly.

    There are abstractions for text, switches, choices, files. You might notice they will map fairly well to AdwPreferenceRow things and that is by design, since in the apps I manage, that would be their intended display mechanism.

  • Templates have finally landed in Foundry with the introduction of a FoundryTemplateManager, FoundryTemplateProvider, and FoundryTemplate. They use the new generalized FoundryInput abstractions that were discussed above.

    That allows for a foundry template list command to list templates and foundry template create to expand a certain template.

    The FoundryInput objects of the templates are queried via the PTY, just like username/password auth works via FoundryInput. Questions are asked, input is received, and template expansion may continue.

    This will also allow for dynamic creation of the “Create Template” widgetry in Builder later on without sacrificing on design.

  • Meson templates from Builder have also been ported over which means that you can actually use those foundry template commands above to replace your use of Builder if that is all you used it for.

    All the normal ones are there (GTK, Adwaita, library, cli, etc).

  • A new license abstraction was created so that libraries and tooling can get access to licenses/snippets in a simple form w/o duplication. That generally gets used for template expansion and file headers.

  • The FoundryBuildPipeline gained a new vfunc for prepare_to_run(). We always had this in Builder but it never came over to Foundry until now.

    This is the core mechanism behind being able to run a command as if it were the target application (e.g. unit tests).

  • After doing the template work, I realized that we should probably just auto initialize the project so you don’t have to run foundry init afterwards. Extracted the mechanism for setting up the initial .foundry directory state and made templates use that.

  • One of the build pipeline mechanisms still missing from Builder is the ability to sit in the middle of a PTY and extract build diagnostics. This is how errors from GCC are extracted during the build (as well as for other languages).

    So I brought over our “PTY intercept” which takes your consumer FD and creates a producer FD which is bridged to another consumer FD.

    Then the JIT’d error-extraction regexes may be run over the middle and create diagnostics as necessary (a rough, generic sketch of the idea follows at the end of this list).

    To make this simple to consume in applications, a new FoundryPtyDiagnostics object is created. You set the PTY to use for that and attach its intercept PTY to the build/run managers' default PTY, and then all the GActions will wire up correctly. That object is also a GListModel, making it easy to display in application UI.

  • A FoundryService is managed by the FoundryContext. They are just subsystems that combine to do useful things in Foundry. One way they can be interacted with is via GAction, as the base class implements GActionGroup.

    I did some cleanup to make this work well and now you can just attach the FoundryContext's GActionGroup, obtained with foundry_context_dup_action_group(), to a GtkWindow using gtk_widget_insert_action_group(). At that point your buttons are basically just "context.build-manager.build" for the action-name property (see the sketch at the end of this list).

    All sorts of services export actions now for operations like build, run, clean, invalidate, purge, update dependencies, etc.

    There is a test GTK app in testsuite/tools/ that you can play with to get ideas and/or integrate into your own app. It also integrates the live diagnostics/PTY code to exemplify that.

  • Fixed the FoundryNoRun tool to connect to the proper PTY in the deployment/run phase.

  • The purge operation now writes information about what files are being deleted to the default build PTY.

  • The new FoundryTextSettings abstraction has landed which is roughly similar to IdeFileSettings in Builder. This time it is much cleaned up now that we have DexFuture to work with.

    I’ve ported the editorconfig support over to use this as well as a new implementation of modeline support which again, is a lot simpler now that we can use fibers/threadpools effectively.

    Plugins can set their text-settings priority in their .plugin file. That way settings can have a specific order such as user-overrides, modelines, editorconfig, gsettings overrides, language defaults, and what-not.

  • The FoundryVcs gained a new foundry_vcs_query_file_status() API which allows querying for the, shocking, file status. That will give you bitflags to know in both the stage or working tree if a file is new/modified/deleted.

    To make this even more useful, you can use the FoundryDirectoryListing class (which is a GListModel of FoundryDirectoryItem) to include vcs::status file-attribute and your GFileInfo will be populated with the uint32 bitflags for a key under the same name.

    It’s also provided as a property on the FoundryDirectoryItem to make writing those git “status icons” dead simple in file panels.
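
The PTY intercept is easier to picture with a toy example. Here is a rough, generic sketch of the idea in Python rather than Foundry's C implementation; the GCC-style error pattern and the gcc broken.c invocation are illustrative assumptions, not Foundry's actual extractors.

import os
import pty
import re
import sys

# GCC-style "file:line:col: error: message" pattern; purely illustrative.
ERROR_RE = re.compile(
    r"^(?P<file>[^:\s]+):(?P<line>\d+):(?P<col>\d+): (?P<kind>error|warning): (?P<msg>.*)$"
)


def run_and_intercept(argv):
    """Run argv on a PTY, pass its output through, and collect diagnostics."""
    pid, parent_fd = pty.fork()
    if pid == 0:
        # Child: exec the build command with the PTY as its controlling terminal.
        os.execvp(argv[0], argv)
    pending = ""
    diagnostics = []
    while True:
        try:
            data = os.read(parent_fd, 4096)
        except OSError:
            break  # EIO on Linux once the child side of the PTY closes
        if not data:
            break
        text = data.decode(errors="replace")
        sys.stdout.write(text)  # bridge: the consumer still sees everything
        sys.stdout.flush()
        pending += text
        *lines, pending = pending.split("\n")
        for line in lines:  # run the extractor over complete lines only
            match = ERROR_RE.match(line.rstrip("\r"))
            if match:
                diagnostics.append(
                    (match["file"], int(match["line"]), match["kind"], match["msg"])
                )
    os.waitpid(pid, 0)
    return diagnostics


if __name__ == "__main__":
    # "broken.c" is a hypothetical file used only to illustrate the flow.
    for diag in run_and_intercept(["gcc", "broken.c"]):
        print("diagnostic:", diag)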
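
And to make the action wiring mentioned above concrete, here is a speculative PyGObject-flavoured sketch. It assumes Foundry exposes GObject introspection bindings (unverified), and the Foundry.Context binding name and its dup_action_group() method are guesses mapped from foundry_context_dup_action_group(); only the GTK calls (insert_action_group, set_action_name) are standard GTK 4 API.

import gi

gi.require_version("Gtk", "4.0")
gi.require_version("Foundry", "1")      # assumed namespace/version, unverified
from gi.repository import Gtk, Foundry  # the Foundry binding is hypothetical


def attach_context_actions(window, context):
    """Expose every FoundryService action on the window under "context"."""
    group = context.dup_action_group()   # maps foundry_context_dup_action_group()
    window.insert_action_group("context", group)


def make_build_button():
    """A button that triggers the build-manager service's build action."""
    button = Gtk.Button(label="Build")
    button.set_action_name("context.build-manager.build")
    return button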

Boxes

  • Found an issue w/ trailing \x00 in paths when new Boxes is opening an ISO from disk on a system with older xdg portals. Sent a pointer on the issue tracker to what Text Editor had to do as well here.

Libpeas

  • GJS gained support for pkgconfig variables and we use that now to determine which mozjs version to link against. That is required to be able to use the proper JS API we need to setup the context.

Ptyxis

  • Merged some improvements to the custom link support in Ptyxis. This is used to allow you to highlight custom URL regexes, so you can turn things like “RHEL-1234” into a link to the RHEL issue tracker.

  • Tracked down an issue filed about tab/window titles not updating. It was just an issue with $PROMPT_COMMAND overwriting what they had just changed.

Text Editor

  • A lot of the maintainership of this program is just directing people to the right place, be that GtkSourceView, GTK, shared-mime-info, etc. Did more of that.

    As an aside, I really wish people spent more time understanding how things work rather than fire-and-forget. The FOSS community used to take pride in ensuring that issue reports landed in the right place to avoid overburdening maintainers, and I’m sad that has been lost in the past decade or so. Probably just a sign of success.

Builder

  • Did a quick and dirty fix for a hang that could slow down startup due to the Manuals code going to the worker process to get the default architecture. Builder doesn’t link against Flatpak in the UI process, hence the round trip. But it’s also super easy to put in a couple-line hard-coded #ifdef and avoid the whole RPC.

Libdex

  • Released 0.11.1 for GNOME 49.beta. I’m strongly considering making the actual 49 release our 1.0. Things have really solidified over the past year with libdex and I’m happy enough to put my stamp of approval on that.

Libspelling

  • Fixed an issue with discovery of the no-spellcheck-tag, which is used to avoid spellchecking things that are general syntax in language specifications. Helps a bunch when loading a large document, where the tag can get out of sync or changed before the worker discovers it.

  • Fixed an LSAN-discovered leak in the testsuite. Still one more to go. Fought LSAN and CI a bit because I can’t seem to reproduce what the CI systems get.

Other

  • Told ChatGPT to spit out a throwaway script that parses my status reports and converts them into something generally usable by WordPress. Obviously there is a lot of dislike/scrutiny/distrust of LLMs and their creators/operators, but I really don’t see the metaphorical cat going back in the bag when you enable people to scratch an itch in a few seconds. I certainly hope we continue to scrutinize and control scope though.

Over engineering my homelab so I don't pay cloud providers

After years of self-hosting on a VPS in a datacenter, I've decided to move my services home. But instead of just porting services over, I'm using this as an opportunity to migrate to a more flexible and robust setup.

I will deploy services on a single mini PC. Since I need to be able to experiment and learn without disrupting my services, I will need to be able to spin up Virtual Machines (VMs). Let's explore how I deployed Proxmox Virtual Environment on a safe host for my specific needs as a homelabber, and how I automated as much of it as possible. In a follow-up post we will explore how to spin up and configure VMs in a reproducible way on that setup.

What I want to do and avoid

Objectives

After realizing that my good old Raspberry Pi 4 was too slow to let me backup or restore on an encrypted disk, I bought a Minisforum UM880 Plus. At €600 it was not extremely expensive, but I don't intend to spend more on hardware in the foreseeable future and I want to make the most of what I have right now.

I love to experiment and would like to do it safely without putting my production set-up at risk. Those are self-hosted services mostly for my personal usage, so I can afford occasional downtime, but I don't want to have to rebuild everything if my experiments go wrong. I also don't want to experiment by spinning up VMs at a cloud provider, because I will not know what I'm doing while learning, and cloud providers can get expensive very quickly.

One of my main objectives as I write these lines is to get up to speed with Kubernetes. I want to stay on a single-node k3s deployment while I get comfortable with operating services on a Kubernetes cluster, but I know I will want to explore deployments with several nodes, and eventually create a full blown k8s cluster based on Talos Linux.

Threat model

My server is in my living room. The most prominent threat in my model is a burglary. If my server gets stolen I will lose access to my infrastructure and my data. I also don't want my data to leak in the wild if the burglars put their hands on the disk in my server.

[!info] I need to have disk encryption and solid backups to keep my data safe

The second biggest threat is hardware failure. All devices can fail, but I'm fairly certain this is particularly true of a €600 mini PC that was not necessarily designed to serve as a home server.

[!info] I need to have a setup that can be automatically installed and configured

I am also a team of only one, I am fallible, and I don't have peers to review my exact set-up. To mitigate this risk I have a group of friends called the Infra Nerds Club, whom I regularly ask for advice.

[!info] I need to have a versioned set-up that can easily be rolled back

My ISP-provided router supports WireGuard. Even when I'm out, I can join the local network of my server. But my server could be shut down because of a power outage or another reason. I might be at work or even on holidays when it happens, and even WireGuard can't solve this.

[!info] I need a KVM on my local network so I can send Wake-on-LAN packets to my server

Unsurprisingly with my objectives and hardware constraints, I need to be able to spin up VMs to play with. The only realistic option on the table for a hobby homelabber is Proxmox Virtual Environment.

It is also important to highlight that if my server gets stolen or fails, I will not be able to spin up a hypervisor on a baremetal server right away, and VPS providers would likely not let me configure a bridged network like I will do below.

The hypervisor and virtual machines I will deploy are just meant to give me flexibility. I consider both disposable, so I will not perform backups of the VMs themselves. I will however perform backups of the data and configuration of the services running on them.

One of my goals is to be able to quickly move my infrastructure to a cloud provider if something happened to my baremetal server, and back to a new baremetal server after it's been delivered.

Implementing it

I will deploy a Proxmox hypervisor on the physical server in my living room. On that hypervisor, I want to be able to statically declare what VMs must be spun up, how they should be configured, how the services (e.g. k3s) are deployed on those VMs, and what DNS records must be set to reach those services. There isn't a single unified tool to do all of this, so I will have to rely on opentofu, cloud-init and ansible.

In this post I will only focus on deploying a rock stable Proxmox hypervisor on my server, but it's worth having a glimpse at how I will manage it.

Opentofu, and Terraform, the project it originated from, are often described as Infrastructure as Code (IaC) tools. In other words, they let you describe in a text file what VMs you want to create on your infrastructure. With cloud-init, you can add a basic configuration for your VM, such as the user credentials, ssh keys to trust, and network configuration.

A typical opentofu snippet to spin up a VM with a Debian OS pre-configured with cloud-init looks like this. We will explain how to actually use opentofu and cloud-init to spin up VMs in a further blog post.

resource "proxmox_virtual_environment_vm" "k3s-main" {
  name        = "k3s-main"
  description = "Production k3s' main VM"
  tags        = ["production", "k3s", "debian"]
  node_name   = "proximighty"

  cpu {
    cores = 4
    type  = "x86-64-v4"
  }

  memory {
    dedicated = 4096
    floating  = 4096
  }

  disk {
    datastore_id = "local"
    interface    = "virtio0"
    iothread     = true
    size         = 50
    file_id      = proxmox_virtual_environment_download_file.debian_cloud_image.id
  }
  
  [...]
}

Opentofu runs from my laptop. It reads the .tf files and will talk to the Proxmox host to spin up VMs and their basic configuration. It can also talk to my registrar (Cloudflare for now) to add new DNS records if I ask it to.

Having a VM pre-configured with network, users and trusted ssh keys is very useful to hook in the second configuration tool: ansible.

An ansible playbook is a text file describing the desired state of a server, often without describing how it must be achieved. For example, instead of describing "Open the file /etc/hosts and add the line 192.168.1.200 myhost.example.com", you describe "the line 192.168.1.200 myhost.example.com must be present in the file /etc/hosts".

It can sound like the same thing, but it's not: running the first description twice would result in the same line being added twice to the /etc/hosts file. Running the second description twice would result in having the desired line only once. A typical playbook will look like this.

---
- name: Set the timezone to UTC
  community.general.timezone:
    name: UTC

- name: Install bridge utils
  ansible.builtin.apt:
    name: bridge-utils
    state: present

- name: Override Debian's default network configuration
  ansible.builtin.copy:
    src: interfaces
    dest: /etc/network/interfaces
    mode: "0644"

- name: Create a bridge interface vmbr0 and give it a static IP
  ansible.builtin.copy:
    src: vmbr0
    dest: /etc/network/interfaces.d/vmbr0
    mode: "0644"

- name: Ensure enp2s0 doesn't have an IP
  ansible.builtin.copy:
    src: enp2s0
    dest: /etc/network/interfaces.d/enp2s0
    mode: "0644"

- name: Add local IP to the hosts file
  ansible.builtin.lineinfile:
    path: /etc/hosts
    line: 192.168.1.200 proximighty.ergaster.org proximighty
    create: true
    mode: "0644"

Ansible also runs from my laptop. It reads the playbook's .yaml files, and uses ssh to log into the target machine and apply the playbook configuration.

Setting up the Proxmox host

Installing an encrypted Debian

Proxmox is based on Debian and can be installed in 3 different ways:

  1. Via the official installer
  2. By creating an automated installer
  3. By installing it on top of an existing Debian install

The second option sounds very appealing, but there is a major issue: the Proxmox (automated) installer doesn't support setting up disk encryption. The simplest way to have disk encryption on the host is to install Debian first, and to install Proxmox on top.

I could automate the Debian install using preseeding. Preseed files contain the answers to the questions asked by the Debian installer. A colleague who wrote a preseed file for Debian 8 told me he hasn't had to update it since. After writing my own preseed file, I could add it to the Debian netinst USB disk and Debian would be installed automatically without human intervention.

But preseed files can only customize the basic install of Debian. To install additional packages (like Proxmox) and configure my machine I need to rely on an ansible playbook.

It is also worth noting that if my server got stolen or if its hardware failed, I wouldn't be able to replace it with a baremetal server right away. I would have to choose a cloud provider and spin up VMs that roughly correspond to the ones I had running on my Proxmox.

[!info] Using a preseed would be a case of XKCD 1205

I shouldn't have to perform regular reinstalls of Debian for the Proxmox host, and I need to write an ansible playbook to configure it properly anyway. I would spend a lot of time automating the Debian install, but I wouldn't save a lot of time in doing so.

I grabbed a Debian netinstall and performed a regular install with disk encryption, with the following specificities:

  • I used the full disk with LVM, and set up disk encryption.
  • I didn't let the installer fill my disk with random data because it takes a lot of time and doesn't match my threat model.
  • I did set a root password. Proxmox is very root centric, and while there are workarounds to use a non-root user, it gets tedious very fast for little extra security.
  • At the package selection step, I disabled everything but SSH Server and standard system utilities. I need both to be able to ssh into my server and let ansible control my Proxmox host.

[!warning] Keyboard required

The disk is encrypted by a password. The server will prompt me for the password when it (re)starts and will not be able to boot if I don't type the password.

My server is connected to a KVM, so I can enter the disk encryption password when the server reboots. If you don't have one, you can install and configure Dropbear to do it over ssh, or create a magic usb stick that LUKS will read to decrypt the disk.

Now, I need to interact with my server. After installing Debian and unlocking the disk, I need to configure the ssh server to let me temporarily log in as root to copy my public key. Via my KVM, I update /etc/ssh/sshd_config as follows

- PermitRootLogin prohibit-password
+ PermitRootLogin yes

And I restart the sshd so I can log in as root

# systemctl restart sshd

On my laptop, I copy my public key to the server with

$ ssh-copy-id root@192.168.1.200

And finally I reverse the change on my server by editing /etc/ssh/sshd_config again so I can only log in by ssh key

- PermitRootLogin yes
+ PermitRootLogin prohibit-password

One restart of the sshd later, my server is safe again

# systemctl restart sshd

Installing and configuring Proxmox

Installing Proxmox

Installing Proxmox on top of an existing Debian is a well supported and documented process. I followed these steps on my encrypted Debian until the Proxmox VE package install and... my machine didn't boot anymore. It was very confusing at first sight, because there was no error. I was prompted for my disk encryption password, the disk was successfully unlocked, and then nothing. The system just didn't boot, was unreachable via SSH, and didn't display anything via the KVM.

Figuring out why installing Proxmox bricks my Debian

I was extremely surprised that installing a vanilla Proxmox on a freshly installed, pristine Debian would completely brick the system!

I initially thought that the issue was that the Proxmox kernel didn't support disk encryption, or that it didn't support my hardware well. After a few reinstalls and rebooting between the install steps, I figured out that booting on the Proxmox kernel without Proxmox VE installed worked perfectly fine. Even from an encrypted disk.

So I installed Proxmox, and asked the computer to tell me what it does when it boots. To do so, I waited for the GRUB screen to appear, and pressed <kbd>e</kbd> to get access to the boot command editor.

I replaced the quiet boot parameter by noquiet, and pressed <kbd>Ctrl</kbd> + <kbd>x</kbd> to save my changes and boot with this altered command. I could see that the machine was stuck on Job networking.service/start running.

Looking up Proxmox job networking start running yielded good results on the Proxmox forums. In this thread and that one users say ntp is causing issues. But I didn't have ntp, ntpsec-ntpdate or any related package installed!

After a few reinstalls and a bit of trial and error, I could figure out that my machine wouldn't boot after installing Proxmox VE if I didn't set up a static IP configuration for it. Configuring a static IP for the machine after a fresh reinstall fixed the issue.

Setting up a bridge network

I only have a single physical enp2s0 network interface card on my host, but I will have several guest VMs. Each VM needs to be able to use my host's network card and make it "impersonate" its virtual card. I'm writing a more detailed post about how this works, but the gist of it is that you need to create a virtual network interface called a bridge, vmbr0. The bridge will be connected both to the host's physical network card and to the VMs' network interfaces.

The physical network card no longer operates at the IP level: it merely serves as a packet sender and receiver. So I need to remove the default IP configuration on enp2s0, and configure vmbr0 to have an IP the host will be able to use instead.

Since I installed a minimal Debian, I need to install the required tools to create bridged networks

# apt install bridge-utils

Then, let's clean up the default network configuration in /etc/network/interfaces to only keep the loopback interface

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

Let's add a file in /etc/network/interfaces.d/ for enp2s0 to be brought up but not try to get an IP

auto enp2s0
iface enp2s0 inet manual

And now let's create and configure vrmb0 to have a static IP

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.200
        gateway 192.168.1.254
        bridge_ports enp2s0
        bridge_stp off
        bridge_fd 0

I can finally restart the network to ensure everything is configured properly

# systemctl restart networking

That was a lot of work, and I'm not sure I will remember how to perform all of these steps if I need to rebuild a Proxmox host. Let's use ansible to automate everything I did after installing a clean Debian!

Automating the Proxmox install with ansible

Installing ansible

The ansible documentation lists several ways to install ansible. I didn't find anything related to homebrew, but the package is still present and seems up to date. Since I regularly upgrade the packages installed with homebrew, I decided it was the simplest way to keep an up-to-date ansible on my laptop, and installed it with

$ brew install ansible

Writing the playbook

I created a ~/Projects/infra folder that will contain everything related to my homelab. In this directory, I created two subdirectories: one called tofu that we will use later to spin up VMs, and one called ansible that will contain my playbooks.

$ cd ~/Projects/infra
$ tree -L 1
.
├── ansible
└── tofu

I want to keep the ansible playbook for my infrastructure in a single place. At the root of my ansible repository, I have created two folders: inventory and proximighty (the name of the Proxmox host).

In the inventory folder I can list all my hosts and organize them how I want. I created a production file that contains the following

[proximighty]
192.168.1.200 ansible_ssh_user=root

Since I don't have a local DNS set-up and I'm not too keen on using my public domain name for my internal network, I'll stick to the host IP. I've put it under the [proximighty] group so I can easily refer to it later in ansible, and specified that ansible must ssh as root into the machine to perform operations.

I then create a proximighty folder under ansible where I will describe everything that must be done on a fresh Debian to get it to the desired state. I create a configure.yaml that will be the root of my playbook.

$ cd ~/Projects/infra/ansible
$ tree -L 2
.
├── inventory
│   └── production
└── proximighty
    └── configure.yaml

In the configure.yaml file I describe the rough steps. In my case, I want to do two things:

  1. Configure the host. That means setting the timezone to UTC, and installing the kitty-terminfo package so I can use kitty with my server.
  2. Install Proxmox.

The basic structure looks like this

---
- name: Configure the host
  hosts: proximighty
  tasks:
    - name: Set timezone to UTC
      community.general.timezone:
        name: UTC
    
    - name: Install kitty files
      ansible.builtin.apt:
        name: kitty-terminfo
        state: present

- name: Install Proxmox
  hosts: proximighty
  tasks:
    - name: Install Proxmox
      ???

I left question marks at the end of the file, because there are quite a few steps to install Proxmox, including reboots. To keep the playbook readable, I will isolate these steps into their own module. Ansible calls this kind of module a role. Let's go to the proximighty folder and create a roles folder in it. Inside it we can create a proxmox folder that will contain all the instructions to install Proxmox.

$ cd ~/Projects/infra/ansible
$ tree -L 3
.
├── inventory
│   └── production
└── proximighty
    ├── configure.yaml
    └── roles
        └── proxmox

The entry point of a role is a main.yaml file nested inside a tasks folder, so let's create the relevant file structure

$ cd ~/Projects/infra/ansible/proximighty
$ tree -L 4
.
├── configure.yaml
└── roles
    └── proxmox
        └── tasks
            └── main.yaml

Finally we can open the main.yaml file and start describing the steps necessary to install Proxmox! The file starts with --- and will then contain the various steps. Let's start by ensuring that the bridge-utils package is present, so we can set up a bridged network

---
- name: Install bridge utils
  ansible.builtin.apt:
    name: bridge-utils
    state: present

Then we will fiddle with the network files. We want to

  1. Override the default configuration in /etc/network/interfaces
  2. Create a bridge interface vmbr0 described by a file in /etc/network/interfaces.d/vmbr0
  3. Configure enp2s0 with a file in /etc/network/interfaces.d/enp2s0 so it doesn't try to get its own IP
  4. Add a local IP into /etc/hosts
  5. Restart the network

Let's describe that in ansible terms

[...]

- name: Remove Debian's default network configuration
  ansible.builtin.copy:
    src: interfaces
    dest: /etc/network/interfaces
    mode: "0644"

- name: Create a bridge interface vmbr0 and give it a static IP
  ansible.builtin.copy:
    src: vmbr0
    dest: /etc/network/interfaces.d/vmbr0
    mode: "0644"

- name: Ensure enp2s0 doesn't have an IP
  ansible.builtin.copy:
    src: enp2s0
    dest: /etc/network/interfaces.d/enp2s0
    mode: "0644"

- name: Add local IP to the hosts file
  ansible.builtin.lineinfile:
    path: /etc/hosts
    line: 192.168.1.200 proximighty.ergaster.org  proximighty
    create: true
    mode: "0644"

- name: Restart the networking service
  ansible.builtin.systemd_service:
    name: networking
    state: restarted

We're asking ansible to copy files over to the server, but we didn't tell it where to take the source files from. By default, ansible looks up files in a files folder at the root of the role. Let's create the relevant files then:

$ cd ~/Projects/infra/ansible/proximighty
$ tree -L 4
.
├── configure.yaml
└── roles
    └── proxmox
        ├── files
        │   ├── enp2s0
        │   ├── interfaces
        │   └── vmbr0
        └── tasks
            └── main.yaml

The content of the files is the same as in the previous section. Now, to install Proxmox we need to add the Proxmox apt repository to our apt sources. For apt to trust it, we need to add the Proxmox signing key, and we need the gpg package to be able to manipulate it. So let's add those steps to our proxmox/tasks/main.yaml file

[...]

- name: Ensure gpg is installed
  ansible.builtin.apt:
    name: gpg
    state: present

- name: Add the Proxmox key
  ansible.builtin.apt_key:
    url: https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg
    state: present
    keyring: /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg

- name: Add pve-no-subscription repository
  ansible.builtin.apt_repository:
    repo: "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription"
    state: present
    update_cache: true
    filename: pve-no-subscription

Finally we can update all the packages, ensure no package related to ntp is present, and reboot. Let's add those instructions to proxmox/tasks/main.yaml

[...]

- name: Update all packages
  ansible.builtin.apt:
    upgrade: full
    update_cache: true
  notify: Reboot

- name: Ensure ntp and related packages are absent
  ansible.builtin.apt:
    name:
      - ntp
      - ntpsec
      - ntpsec-ntpdate
    state: absent
  notify: Reboot

- name: Reboot after upgrading packages
  ansible.builtin.meta: flush_handlers

You probably noticed the notify: Reboot that appears twice. It could look like the machine is going to reboot twice, but that is not the case. It means each step will notify the Reboot handler, but handlers are only called at the end of a play... unless they are flushed before that. We explicitly flush the handlers with ansible.builtin.meta: flush_handlers, so the reboot will only happen here.

We called a handler, but we didn't define it anywhere. Like for files, ansible has a default place to look for handlers: the handlers directory at the root of the role. Let's create the relevant files.

$ cd ~/Projects/infra/ansible/proximighty
$ tree -L 4
.
├── configure.yaml
└── roles
    └── proxmox
        ├── files
        │   ├── enp2s0
        │   ├── interfaces
        │   ├── storage.cfg
        │   └── vmbr0
        ├── handlers
        │   └── main.yaml
        └── tasks
            └── main.yaml

And let's add the Reboot handler in there

---
- name: Reboot
  ansible.builtin.reboot:

We can now finalize the install by

  1. Installing the Proxmox kernel and rebooting
  2. Installing Proxmox VE and dependencies
  3. Removing the Debian kernel and os-prober
  4. Removing the pve-enterprise repository that Proxmox automatically installed
  5. Rebooting one last time

Let's append those steps to the proxmox/tasks/main.yaml file

[...]

- name: Install Proxmox VE Kernel
  ansible.builtin.apt:
    name: "proxmox-default-kernel"
    state: present
  notify: Reboot

- name: Reboot after installing Proxmox VE Kernel
  ansible.builtin.meta: flush_handlers

- name: Install Proxmox VE and dependencies
  ansible.builtin.apt:
    name:
      - proxmox-ve
      - postfix
      - open-iscsi
      - chrony
    state: present

- name: Remove the Debian kernel and os-prober
  ansible.builtin.apt:
    name:
      - linux-image-amd64
      - os-prober
    state: absent
  notify:
    - Update GRUB
    - Reboot

- name: Remove pve-enterprise repository
  ansible.builtin.apt_repository:
    repo: deb https://enterprise.proxmox.com/debian/pve {{ debian_version }} pve-enterprise
    state: absent
    update_cache: true
    filename: pve-enterprise

- name: Reboot after installing Proxmox VE and removing old kernels
  ansible.builtin.meta: flush_handlers

You might notice the extra Update GRUB handler, which we also need to add to our handlers

[...]

- name: Update GRUB
  ansible.builtin.command: update-grub
  changed_when: true
  notify: Reboot

Finally, we can wrap it all together by calling this proxmox role from our main configure.yaml file

---
- name: Configure the host
  hosts: proximighty
  tasks:
    - name: Set timezone to UTC
      community.general.timezone:
        name: UTC
    
    - name: Install kitty files
      ansible.builtin.apt:
        name: kitty-terminfo
        state: present

- name: Install Proxmox
  hosts: proximighty
  tasks:
    - name: Install Proxmox
      ansible.builtin.import_role:
        name: proxmox

It's now time to execute that playbook!

Executing the playbook

To execute this playbook, we need to be able to ssh as root into the Debian host that will get Proxmox installed, with an ssh key and not a password.

As a quick test, running ssh root@192.168.1.200 should log me in without prompting me for a password or a fingerprint verification.

From my laptop, I go to the ansible directory, from which I can run a command to invoke the configure.yaml playbook with the production inventory like so

$ cd ~/Projects/infra/ansible
$ ansible-playbook -i inventory/production proximighty/configure.yaml

Ansible will install everything and occasionally reboot the server when needed. Since my server has an encrypted disk, I need to monitor what's happening on my KVM and unlock the disk with my encryption passphrase when prompted to.

I now have an ansible playbook I can use to quickly spin up a new Proxmox host on a fresh Debian with an encrypted disk! This is a solid foundation for a flexible homelab. I will be able to spin up a long-lived VM for my main k3s node. I will be able to spin up additional k3s workers if need be, or an entirely different cluster to play with, all while keeping my production reasonably isolated and stable.

We'll see in another blog post how to use opentofu, cloud-init and ansible to spin up new VMs on that Proxmox host!

Massive thanks to my colleagues and friends Half-Shot, Davide, and Ark for their insights!

unplug - a tool to test input devices via uinput

Yet another day, yet another need to test a device I don't have. That's fine, and that's why many years ago I wrote libinput record and libinput replay (more powerful successors to evemu and evtest). Alas, this time I had a dependency on multiple devices being present in the system, in a specific order, sending specific events. And juggling this many terminal windows with libinput replay open was annoying. So I decided it was worth the time to fix this once and for all (haha, lolz) and wrote unplug. The target market for this is niche, but if you're in the same situation, it'll be quite useful.

Pictures cause a thousand words to finally shut up and be quiet so here's the screenshot after running pip install unplug[1]:

This shows the currently pre-packaged set of recordings that you get for free when you install unplug. For your use-case you can run libinput record, save the output in a directory, and then start unplug path/to/directory. The navigation is as expected: hitting enter on a device plugs it in, and hitting enter on the selected sequence sends that event sequence through the previously plugged device.

Annotation of the recordings (which must end in .yml to be found) can be done by adding a YAML unplug: entry with a name and optionally a multiline description. If you have recordings that should be included in the default set, please file a merge request. Happy emulating!

[1] And allowing access to /dev/uinput. Details, schmetails...

Julian Hofer

@julianhofer

Git Forges Made Simple: gh & glab

When I set the goal for myself to contribute to open source back in 2018, I mostly struggled with two technical challenges:

  • Python virtual environments, and
  • Git together with GitHub.

Solving the former is nowadays my job, so let me write up my current workflow for the latter.

Most people use Git in combination with modern Git forges like GitHub and GitLab. Git doesn't know anything about these forges, which is why CLI tools exist to close that gap. It's still good to know how to handle things without them, so I will also explain how to do things with only Git. For GitHub there's gh and for GitLab there's glab. Both of them are Go binaries without any dependencies that work on Linux, macOS and Windows. If you don't like any of the provided installation methods, you can simply download the binary, make it executable and put it in your PATH.

Luckily, they also have mostly the same command line interface. First, you have to log in with the command that corresponds to your git forge:

gh auth login
glab auth login

In the case of gh this even authenticates Git with GitHub. With GitLab, you still have to set up authentication via SSH.

Working Solo#

The simplest way to use Git is to use it like a backup system. First, you create a new repository on either GitHub or GitLab. Then you clone it with git clone <REPO>. From that point on, all you have to do is:

  • do some work
  • commit
  • push
  • repeat

On its own there aren't a lot of reasons to choose this approach over a file syncing service like Nextcloud. No, the main reason you do this is that you are either already familiar with the Git workflow, or want to get used to it.

Contributing#

Git truly shines as soon as you start collaborating with others. On a high level this works like this:

  • You modify some files in a Git repository,
  • you propose your changes via the Git forge,
  • maintainers of the repository review your changes, and
  • as soon as they are happy with your changes, they will integrate your changes into the main branch of the repository.

As before, you clone the repository with git clone <REPO>. Change directories into that repository and run git status. The branch that it shows is the default branch which is probably called main or master. Before you start a new branch, you will run the following two commands to make sure you start with the latest state of the repository:

git switch <DEFAULT-BRANCH>
git pull

You switch and create a new branch with:

git switch --create <BRANCH>

That way you can work on multiple features at the same time and easily keep your default branch synchronized with the remote repository.

The next step is to open a pull request on GitHub or merge request on GitLab. They are equivalent, so I will call both of them pull requests from now on. The idea of a pull request is to integrate the changes from one branch into another branch (typically the default branch). However, you don't necessarily want to give every potential contributor the power to create new branches on your repository. That is why the concept of forks exists. Forks are copies of a repository that are hosted on the same Git forge. Contributors can now create branches on their forks and open pull requests based on these branches.

If you don't have push access to the repository, now it's time to create your own fork. Without the forge CLI tools, you first fork the repository in the web interface.

Then, you run the following commands:

git remote rename origin upstream
git remote add origin <FORK>

When you cloned your repository, Git set the default branch of the original repo as the upstream branch for your local default branch. This is preserved by the remote rename, which is why the default branch can still be updated from upstream with git pull and no additional arguments.

Git Forge Tools#

Alternatively, you can use the forge tool that corresponds to your git forge. You still clone and switch branches with Git as shown before. However, you only need a single command to both fork the repository and set up the git remotes.

gh repo fork --remote
glab repo fork --remote

Then, you need to push your local branch. With Git, you first have to tell it that it should create the corresponding branch on the remote and set it as upstream branch.

git push --set-upstream origin <BRANCH>

Next, you open the repository in the web interface, where it will suggest opening a pull request. The upstream branch of your local branch is now configured, which means you can update your remote by running git push without any additional arguments.

Using pr create directly pushes and sets up your branch, and opens the pull request for you. If you have a fork available, it will ask whether you want to push your branch there:

gh pr create
glab mr create

Checking out Pull Requests#

Often, you want to check out a pull request on your own machine to verify that it works as expected. This is surprisingly difficult with Git alone.

First, navigate to the repository where the pull request originates in your web browser. This might be the same repository, or it could be a fork.

If it's the same repository, checking out their branch is not too difficult: you run git switch <BRANCH>, and it's done.

However, if it's a fork, the simplest way is to add a remote for the user who opened the pull request, fetch their repo, and finally check out their branch.

This looks like this:

git remote add <USER> <FORK_OF_USER>
git fetch <USER>
git switch <USER>/<BRANCH>

With the forge CLIs, all you have to do is:

gh pr checkout <PR_NUMBER>
glab mr checkout <MR_NUMBER>

It's a one-liner, it works whether the pull request comes from the repo itself or from a fork, and it doesn't set up any additional remotes.

You don't even have to open your browser to get the pull request number. Simply run the following commands, and it will give you a list of all open pull requests:

gh pr list
glab mr list

If you have push access to the original repository, you will also be able to push to the branch of the pull request unless the author explicitly opted out of that. This is useful for changes that are easier to do yourself than communicating via a comment.

Finally, you can check out the status of your pull request with pr view. By adding --web, it will directly open your web browser for you:

gh pr view --web
glab mr view --web

Conclusion#

When I first heard of gh's predecessor hub, I thought it was merely a tool for people who insist on doing everything in the terminal. I only realized relatively recently that Git forge tools are in fact the missing piece to an efficient Git workflow. Hopefully, you now have a better idea of the appeal of these tools!

Many thanks to Sabrina and Lucas for their comments and suggestions on this article.

You can find the discussion at this Mastodon post.

Emmanuele Bassi

@ebassi

Governance in GNOME

How do things happen in GNOME?

Things happen in GNOME? Could have fooled me, right?

Of course, things happen in GNOME. After all, we have been releasing every six months, on the dot, for nearly 25 years. Assuming we’re not constantly re-releasing the same source files, then we have to come to the conclusion that things change inside each project that makes GNOME, and thus things happen that involve more than one project.

So let’s roll back a bit.

GNOME’s original sin

We all know Havoc Pennington’s essay on preferences; it’s one of GNOME’s foundational texts, we refer to it pretty much constantly both inside and outside the contributors community. It has guided our decisions and taste for over 20 years. As far as foundational text goes, though, it applies to design philosophy, not to project governance.

When talking about the inception and technical direction of the GNOME project there are really two foundational texts that describe the goals of GNOME, as well as the mechanisms that are employed to achieve those goals.

The first one is, of course, Miguel’s announcement of the GNOME project itself, sent to the GTK, Guile, and (for good measure) the KDE mailing lists:

We will try to reuse the existing code for GNU programs as much as possible, while adhering to the guidelines of the project. Putting nice and consistent user interfaces over all-time favorites will be one of the projects. — Miguel de Icaza, “The GNOME Desktop project.” announcement email

Once again, everyone related to the GNOME project is (or should be) familiar with this text.

The second foundational text is not as familiar, outside of the core group of people that were around at the time. I am referring to Derek Glidden’s description of the differences between GNOME and KDE, written five years after the inception of the project. I isolated a small fragment of it:

Development strategies are generally determined by whatever light show happens to be going on at the moment, when one of the developers will leap up and scream “I WANT IT TO LOOK JUST LIKE THAT” and then straight-arm his laptop against the wall in an hallucinogenic frenzy before vomiting copiously, passing out and falling face-down in the middle of the dance floor. — Derek Glidden, GNOME vs KDE

What both texts have in common is subtle, but explains the origin of the project. You may not notice it immediately, but once you see it you can’t unsee it: it’s the over-reliance on personal projects and taste, to be sublimated into a shared vision. A “bottom up” approach, with “nice and consistent user interfaces” bolted on top of “all-time favorites”, with zero indication of how those nice and consistent UIs would work on extant code bases, all driven by somebody with a vision—drug induced or otherwise—who decides to lead the project towards its implementation.

It’s been nearly 30 years, but GNOME still works that way.

Sure, we’ve had a HIG for 25 years, and the shared development resources that the project provides tend to mask this, to the point that everyone outside the project assumes that all people with access to the GNOME commit bit work on the whole project, as a single unit. If you are here, listening (or reading) to this, you know it’s not true. In fact, it is so comically removed from the lived experience of everyone involved in the project that we generally joke about it.

Herding cats and vectors sum

During my first GUADEC, back in 2005, I saw a great slide from Seth Nickell, one of the original GNOME designers. It showed GNOME contributors represented as a jumble of vectors going in all directions, mostly cancelling each other out; the occasional movement in the project was the result of somebody pulling/pushing harder in their direction.

Of course, this is not the exclusive province of GNOME: you could take most complex free and open source software projects and draw a similar diagram. I contend, though, that when it comes to GNOME this is not emergent behaviour but it’s baked into the project from its very inception: a loosey-goosey collection of cats, herded together by whoever shows up with “a vision”, but, also, a collection of loosely coupled projects. Over the years we tried to put to rest the notion that GNOME is a box of LEGO, meant to be assembled by distributors and users in the way they like most; while our software stack has graduated from the “thrown together at the last minute” quality of its first decade, our community is still very much following that very same model; the only way it seems to work is because we have a few people maintaining a lot of components.

On maintainers

I am a software nerd, and one of the side effects of this terminal condition is that I like optimisation problems. Optimising software is inherently boring, though, so I end up trying to optimise processes and people. The fundamental truth of process optimisation, just like software, is to avoid unnecessary work—which, in some cases, means optimising away the people involved.

I am afraid I will have to be blunt, here, so I am going to ask for your forgiveness in advance.

Let’s say you are a maintainer inside a community of maintainers. Dealing with people is hard, and the lord forbid you talk to other people about what you’re doing, what they are doing, and what you can do together, so you only have a few options available.

The first one is: you carve out your niche. You start, or take over, a project, or an aspect of a project, and you try very hard to make yourself indispensable, so that everything ends up passing through you, and everyone has to defer to your taste, opinion, or edict.

Another option: API design is opinionated, and reflects the thoughts of the person behind it. By designing platform API, you try to replicate your thoughts, taste, and opinions into the minds of the people using it, like the eggs of a parasitic wasp; because if everybody thinks like you, then there won’t be conflicts, and you won’t have to deal with details, like “how to make this application work”, or “how to share functionality”; or, you know, having to develop a theory of mind for relating to other people.

Another option: you try to reimplement the entirety of a platform by yourself. You start a bunch of projects, which require starting a bunch of dependencies, which require refactoring a bunch of libraries, which ends up cascading into half of the stack. Of course, since you're by yourself, you end up with a consistent approach to everything. Everything is as it ought to be: fast, lean, efficient, a reflection of your taste, commitment, and ethos. You made everyone else redundant, which means people depend on you, but also nobody is interested in helping you out, because you are now taken for granted, on the one hand, and nobody is able to get a word in edgewise about what you made, on the other.

I purposefully did not name names, even though we can all recognise somebody in these examples. For instance, I recognise myself. I have been all of these examples, at one point or another over the past 20 years.

Painting a target on your back

But if this is what it looks like from within a project, what it looks like from the outside is even worse.

Once you start dragging other people, you raise your visibility; people start learning your name, because you appear in the issue tracker, on Matrix/IRC, on Discourse and Planet GNOME. YouTubers and journalists start asking you questions about the project. Randos on web forums start associating you with everything GNOME does, or does not; with features, design, and bugs. You become responsible for every decision, whether you are or not, and this leads to being the embodiment of all the evil the project does. You’ll get hate mail, you’ll be harassed, your words will be used against you and the project for ever and ever.

Burnout and you

Of course, that ends up burning people out; it would be absurd if it didn’t. Even in the best case possible, you’ll end up burning out just by reaching empathy fatigue, because everyone has access to you, and everyone has their own problems and bugs and features and wouldn’t it be great to solve every problem in the world? This is similar to working for non profits as opposed to the typical corporate burnout: you get into a feedback loop where you don’t want to distance yourself from the work you do because the work you do gives meaning to yourself and to the people that use it; and yet working on it hurts you. It also empowers bad faith actors to hound you down to the ends of the earth, until you realise that turning sand into computers was a terrible mistake, and we should have torched the first personal computer down on sight.

Governance

We want to have structure, so that people know what to expect and how to navigate the decision making process inside the project; we also want to avoid having a sacrificial lamb that takes on all the problems in the world on their shoulders until we burn them down to a cinder and they have to leave. We’re 28 years too late to have a benevolent dictator, self-appointed or otherwise, and we don’t want to have a public consultation every time we want to deal with a systemic feature. What do we do?

Examples

What do other projects have to teach us about governance? We are not the only complex free software project in existence, and it would be an appalling measure of narcissism to believe that we’re special in any way, shape or form.

Python

We should all know what a Python PEP is, but if you are not familiar with the process I strongly recommend going through it. It’s well documented, and pretty much the de facto standard for any complex free and open source project that has achieved escape velocity from a centralised figure in charge of the whole decision making process. The real achievement of the Python community is that it adopted this policy long before their centralised figure called it quits. The interesting thing about the PEP process is that it is used to codify the governance of the project itself; the PEP template is a PEP; teams are defined through PEPs; target platforms are defined through PEPs; deprecations are defined through PEPs; all project-wide processes are defined through PEPs.

Rust

Rust has a similar process for language, tooling, and standard library changes, called RFC. The RFC process is more lightweight on the formalities than Python’s PEPs, but it’s still very well defined. Rust, being a project that came into existence in a Post-PEP world, adopted the same type of process, and used it to codify teams, governance, and any and all project-wide processes.

Fedora

Fedora change proposals exist to discuss and document both self-contained changes (usually fairly uncontroversial, given that they are proposed by the owners of the module being changed) and system-wide changes. The main difference between them is that most of the elements of a system-wide change proposal are required, whereas for self-contained proposals they can be optional; for instance, a system-wide change must have a contingency plan, a way to test it, and the impact on documentation and release notes, whereas a self-contained change does not.

GNOME

Turns out that we once did have “GNOME Enhancement Proposals” (GEP), mainly modelled on Python’s PEP from 2002. If this comes as a surprise, that’s because they lasted for about a year, mainly because it was a reactionary process to try and funnel some of the large controversies of the 2.0 development cycle into a productive outlet that didn’t involve flames and people dramatically quitting the project. GEPs failed once the community fractured, and people started working in silos, either under their own direction or, more likely, under their management’s direction. What’s the point of discussing a project-wide change, when that change was going to be implemented by people already working together?

The GEP process mutated into the lightweight “module proposal” process, where people discussed adding and removing dependencies on the desktop development mailing list—something we also lost over the 2.x cycle, mainly because the amount of discussion over time tended towards zero. The people involved with the change knew what those modules brought to the release, and people unfamiliar with them were either giving out unsolicited advice, or were simply not reached by the desktop development mailing list. The discussions turned into external dependency notifications, which also dried up, because apparently asking to compose an email to notify the release team that a new dependency was needed to build a core module was far too much of a bother for project maintainers.

The creation and failure of GEP and module proposals is both an indication of the need for structure inside GNOME, and how this need collides with the expectation that project maintainers have not just complete control over every aspect of their domain, but that they can also drag out the process until all the energy behind it has dissipated. Being in charge for the long run allows people to just run out the clock on everybody else.

Goals

So, what should be the goal of a proper technical governance model for the GNOME project?

Diffusing responsibilities

This should be goal zero of any attempt at structuring the technical governance of GNOME. We have too few people in too many critical positions. We can call it “efficiency”, we can call it “bus factor”, we can call it “bottleneck”, but the result is the same: the responsibility for anything is too concentrated. This is how you get conflict. This is how you get burnout. This is how you paralyse a whole project. By having too few people in positions of responsibility, we don’t have enough slack in the governance model; it’s an illusion of efficiency.

Responsibility is not something to hoard: it’s something to distribute.

Empowering the community

The community of contributors should be able to know when and how a decision is made; it should be able to know what to do once a decision is made. Right now, the process is opaque because it’s done inside a million different rooms, and, more importantly, it is not recorded for posterity. Random GitLab issues should not be the only place where people can be informed that some decision was taken.

Empowering individuals

Individuals should be able to contribute to a decision without necessarily becoming responsible for a whole project. It’s daunting, and requires a measure of hubris that cannot be allowed to exist in a shared space. In a similar fashion, we should empower people that want to contribute to the project by reducing the amount of fluff coming from people who have zero stakes in it and are interested only in giving out an opinion on their perfectly spherical, frictionless desktop environment.

It is free and open source software, not free and open mic night down at the pub.

Actual decision making process

We say we work by rough consensus, but if a single person is responsible for multiple modules inside the project, we’re just deceiving ourselves. I should not be able to design something on my own, commit it to all projects I maintain, and then go home, regardless of whether what I designed is good or necessary.

Proposed GNOME Changes✝

✝ Name subject to change

PGCs

We have better tools than the ones the GEP process had to rely on. We have better communication venues in 2025; we have better validation; we have better publishing mechanisms.

We can take a lightweight approach, with a well-defined process, and use it not for actual design or decision-making, but for discussion and documentation. If you are trying to design something and you use this process, you are by definition Doing It Wrong™. You should have a design ready, and a series of steps to achieve it, as part of a proposal. You should already know the projects involved, and already have an idea of the effort needed to make something happen.

Once you have a formal proposal, you present it to the various stakeholders, and iterate over it to improve it, clarify it, and amend it, until you have something that has a rough consensus among all the parties involved. Once that’s done, the proposal is now in effect, and people can refer to it during the implementation, and in the future. This way, we don’t have to ask people to remember a decision made six months, two years, ten years ago: it’s already available.

Editorial team

Proposals need to be valid, in order to be presented to the community at large; that validation comes from an editorial team. The editors of the proposals are not there to evaluate its contents: they are there to ensure that the proposal is going through the expected steps, and that discussions related to it remain relevant and constrained within the accepted period and scope. They are there to steer the discussion, and avoid architecture astronauts parachuting into the issue tracker or Discourse to give their unwarranted opinion.

Once the proposal is open, the editorial team is responsible for its inclusion in the public website, and for keeping track of its state.

Steering group

The steering group is the final arbiter of a proposal. They are responsible for accepting it, or rejecting it, depending on the feedback from the various stakeholders. The steering group does not design or direct GNOME as a whole: they are the ones that ensure that communication between the parts happens in a meaningful manner, and that rough consensus is achieved.

The steering group is also, by design, not the release team: it is made of representatives from all the teams related to technical matters.

Is this enough?

Sadly, no.

Reviving a process for proposing changes in GNOME without addressing the shortcomings of its first iteration would inevitably lead to a repeat of its results.

We have better tooling, but the problem is still that we’re demanding that each project maintainer gets on board with a process that has no mechanism to enforce compliance.

Once again, the problem is that we have a bunch of fiefdoms that need to be opened up to ensure that more people can work on them.

Whither maintainers

In what was, in retrospect, possibly one of my least gracious and yet most prophetic moments on the desktop development mailing list, I once said that, if it were possible, I would have already replaced all GNOME maintainers with a shell script. Turns out that we did replace a lot of what maintainers used to do, and we used a large Python service to do that.

Individual maintainers should not exist in a complex project—for both the project’s and the contributors’ sake. They are inefficiency made manifest, a bottleneck, a point of contention in a distributed environment like GNOME. Luckily for us, we almost made them entirely redundant already! Thanks to the release service and CI pipelines, we don’t need a person spinning up a release archive and uploading it into a file server. We just need somebody to tag the source code repository, and anybody with the right permissions could do that.

We need people to review contributions; we need people to write release notes; we need people to triage the issue tracker; we need people to contribute features and bug fixes. None of those tasks require the “maintainer” role.

So, let’s get rid of maintainers once and for all. We can delegate the actual release tagging of core projects and applications to the GNOME release team; they are already releasing GNOME anyway, so what’s the point in having them wait every time for somebody else to do individual releases? All people need to do is to write down what changed in a release, and that should be part of a change itself; we have centralised release notes, and we can easily extract the list of bug fixes from the commit log. If you can ensure that a commit message is correct, you can also get in the habit of updating the NEWS file as part of a merge request.

Additional benefits of having all core releases done by a central authority are that we get people to update the release notes every time something changes; and that we can sign all releases with a GNOME key that downstreams can rely on.

Embracing special interest groups

But it’s still not enough.

Especially when it comes to the application development platform, we already have a bunch of components with an informal scheme of shared responsibility. Why not make that scheme official?

Let’s create the SDK special interest group; take all the developers for the base libraries that are part of GNOME—GLib, Pango, GTK, libadwaita—and formalise the group of people that currently does things like development, review, bug fixing, and documentation writing. Everyone in the group should feel empowered to work on all the projects that belong to that group. We already are, except we end up deferring to somebody that is usually too busy to cover every single module.

Other special interest groups should be formed around the desktop, the core applications, the development tools, the OS integration, the accessibility stack, the local search engine, the system settings.

Adding more people to these groups is not going to be complicated, or introduce instability, because the responsibility is now shared; we would not be taking somebody that is already overworked, or even potentially new to the community, and plopping them into the hot seat, ready for a burnout.

Each special interest group would have a representative in the steering group, alongside teams like documentation, design, and localisation, thus ensuring that each aspect of the project technical direction is included in any discussion. Each special interest group could also have additional sub-groups, like a web services group in the system settings group; or a networking group in the OS integration group.

What happens if I say no?

I get it. You like being in charge. You want to be the one calling the shots. You feel responsible for your project, and you don’t want other people to tell you what to do.

If this is how you feel, then there’s nothing wrong with parting ways with the GNOME project.

GNOME depends on a ton of projects hosted outside GNOME’s own infrastructure, and we communicate with people maintaining those projects every day. It’s 2025, not 1997: there’s no shortage of code hosting services in the world, we don’t need to have them all on GNOME infrastructure.

If you want to play with the other children, if you want to be part of GNOME, you get to play with a shared set of rules; and that means sharing all the toys, and not hoarding them for yourself.

Civil service

What we really want GNOME to be is a group of people working together. We already are, somewhat, but we can be better at it. We don’t want rule and design by committee, but we do need structure, and we need that structure to be based on expertise; to have distinct spheres of competence; to have continuity across time; and to be based on rules. We need something flexible, to take into account the needs of GNOME as a project, and be capable of growing in complexity so that nobody can be singled out, brigaded on, or burnt to a cinder on the sacrificial altar.

Our days of passing out in the middle of the dance floor are long gone. We might not all be old—actually, I’m fairly sure we aren’t—but GNOME has long ceased to be something we can throw together at the last minute just because somebody assumed the mantle of a protean ruler, and managed to involve themselves with every single project until they are the literal embodiment of an autocratic force capable of dragging everybody else towards a goal, until they burn out and have to leave for their own sake.

We can do better than this. We must do better.

To sum up

Stop releasing individual projects, and let the release team do it when needed.

Create teams to manage areas of interest, instead of single projects.

Create a steering group from representatives of those teams.

Every change that affects one or more teams has to be discussed and documented in a public setting among contributors, and then published for future reference.

None of this should be controversial because, outside of the publishing bit, it’s how we are already doing things. This proposal aims at making it official so that people can actually rely on it, instead of having to divine the process out of thin air.


The next steps

We’re close to the GNOME 49 release, now that GUADEC 2025 has ended, so people are busy working on tagging releases, fixing bugs, and the work on the release notes has started. Nevertheless, we can already start planning for an implementation of a new governance model for GNOME for the next cycle.

First of all, we need to create teams and special interest groups. We don’t have a formal process for that, so this is also a great chance to introduce the change proposal process as a mechanism for structuring the community, just like the Python and Rust communities do. Teams will need their own space for discussing issues and sharing the load. The first team I’d like to start is an “introspection and language bindings” group, for all bindings hosted on GNOME infrastructure; it would act as a point of reference for all decisions involving projects that consume the GNOME software development platform through its machine-readable ABI description. Another group I’d like to create is an editorial group for the developer and user documentation; documentation benefits from a consistent editorial voice, while the process of writing documentation should be open to everybody in the community.

A very real issue that was raised during GUADEC is bootstrapping the steering committee: who gets to be on it, what its remit is, and how it works. There are options, but if we want the steering committee to be a representation of the technical expertise of the GNOME community, it also has to be established by the very same community; in this sense, the board of directors, as representatives of the community, could work on defining the powers and composition of this committee.

There are many more issues we are going to face, but I think we can start from these and evaluate our own version of a technical governance model that works for GNOME, and that can grow with the project. In the next couple of weeks I’ll start publishing drafts for team governance and the power/composition/procedure of the steering committee, mainly for iteration and comments.

Tobias Bernard

@tbernard

GUADEC 2025

Last week was this year’s GUADEC, the first ever in Italy! Here are a few impressions.

Local-First

One of my main focus areas this year was local-first, since that’s what we’re working on right now with the Reflection project (see the previous blog post). Together with Julian and Andreas we did two lightning talks (one on local-first generally, and one on Reflection in particular), and two BoF sessions.

Local-First BoF

At the BoFs we did a bit of Reflection testing, and reached a new record of people simultaneously trying the app:

This also uncovered a padding bug in the users popover :)

Andreas also explained the p2panda stack in detail using a new diagram we made a few weeks back, which visualizes how the various components fit together in a real app.

p2panda stack diagram

We also discussed some longer-term plans, particularly around having a system-level sync service. The motivation for this is twofold: We want to make it as easy as possible for app developers to add sync to their app. It’s never going to be “free”, but if we can at least abstract some of this in a useful way that’s a big win for developer experience. More importantly though, from a security/privacy point of view we really don’t want every app to have unrestricted access to network, Bluetooth, etc. which would be required if every app does its own p2p sync.

One option being discussed is taking the networking part of p2panda (including iroh for p2p networking) and making it a portal API which apps can use to talk to other instances of themselves on other devices.

Another idea was a more high-level portal that works more like a file “share” system that can sync arbitrary files, by just attaching the sync context to files as xattrs and having a centralized service handle all the syncing. This would have the advantage of not requiring special UI in apps, just a portal and some integration in Files. Real-time collaboration would of course not be possible without actual app integration, but for many use cases that’s not needed anyway, so perhaps we could have both a high- and low-level API to cover different scenarios?
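As a purely hypothetical illustration of that idea (not an actual p2panda or portal API), attaching a sync context to a file via extended attributes could look roughly like the Python sketch below; the attribute name and the fields stored in it are made up.

# Hypothetical sketch of the "sync context as xattrs" idea from the BoF.
# Not an actual p2panda or portal API; the attribute name is invented.
import json
import os

SYNC_ATTR = "user.sync.context"  # invented name for this sketch

def attach_sync_context(path, document_id, peers):
    """Tag a file with the metadata a system-level sync service would need."""
    context = json.dumps({"document_id": document_id, "peers": peers})
    os.setxattr(path, SYNC_ATTR, context.encode("utf-8"))

def read_sync_context(path):
    """Return the sync context for a file, or None if the file is not synced."""
    try:
        raw = os.getxattr(path, SYNC_ATTR)
    except OSError:
        return None
    return json.loads(raw)

# A centralized service could then watch for files carrying this attribute
# and handle all the networking on the apps' behalf.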

There are still a lot of open questions here, but it’s cool to see how things get a little bit more concrete every time :)

If you’re interested in the details, check out the full notes from both BoF sessions.

Design

Jakub and I gave the traditional design team talk – a bit underprepared and last-minute (thanks Jakub for doing most of the heavy lifting), but it was cool to see in retrospect how much we got done in the past year despite how much energy unfortunately went into unrelated things. The all-new slate of websites is especially cool after so many years of gnome.org et al looking very stale. You can find the slides here.

Jakub giving the design talk

Inspired by the Summer of GNOME OS challenge many of us are doing, we worked on concepts for a new app to make testing sysexts of merge requests (and nightly Flatpaks) easier. The working title is “Test Ride” (a more sustainable version of Apple’s “Test Flight” :P) and we had fun drawing bicycles for the icon.

Test Ride app mockup

Jakub and I also worked on new designs for Georges’ presentation app Spiel (which is being rebranded to “Podium” to avoid the name clash with the a11y project). The idea is to make the app more resilient and data more future-proof, by going with a file-based approach and simple syntax on top of Markdown for (limited) layout customization.

Georges and Jakub discussing Podium designs

Miscellaneous

  • There was a lot of energy and excitement around GNOME OS. It feels like we’ve turned a corner, finally leaving the “science experiment” stage and moving towards “daily-drivable beta”.
  • I was very happy that the community appreciation award went to Alice Mikhaylenko this year. The impact her libadwaita work has had on the growth of the app ecosystem over the past 5 years can not be overstated. Not only did she build dozens of useful and beautiful new adaptive widgets, she also has a great sense for designing APIs in a way that will get people to adopt them, which is no small thing. Kudos, very well deserved!
  • While some of the Foundation conflicts of the past year remain unresolved, I was happy to see that Steven’s and the board’s plans are going in the right direction.

Brescia

The conference was really well-organized (thanks to Pietro and the local team!), and the venue and city of Brescia had a number of advantages that were not always present at previous GUADECs:

  • The city center is small and walkable, and everyone was staying relatively close by
  • The university is 20 min by metro from the city center, so it didn’t feel like a huge ordeal to go back and forth
  • Multiple vegan lunch options within a few minutes walk from the university
  • Lots of tables (with electrical outlets!) for hacking at the venue
  • Lots of nice places for dinner/drinks outdoors in the city center
  • Many dope ice cream places
Piazza della Loggia at sunset

A few (minor) points that could be improved next time:

  • The timetable started veeery early every day, which contributed to a general lack of sleep. Realistically people are not going to sleep before 02:00, so starting the program at 09:00 is just too early. My experience from multi-day events in Berlin is that 12:00 is a good time to start if you want everyone to be awake :)
  • The BoFs could have been spread out a bit more over the two days; there were slots with three parallel ones and times with nothing on the program.
  • The venue closing at 19:00 is not ideal when people are in the zone hacking. Doesn’t have to be all night, but the option to hack until after dinner (e.g. 22:00) would be nice.
  • Since the conference is a week long, accommodation can get a bit expensive, which is not ideal since most people are paying for their own travel and accommodation nowadays. It’d have been great if there was a more affordable option for accommodation, e.g. at student dorms, like at previous GUADECs.
  • A frequent topic was how it’s not ideal to have everyone be traveling and mostly unavailable for reviews a week before feature freeze. It’s also not ideal because any plans you make at GUADEC are not going to make it into the September release, but will have to wait for March. What if the conference was closer to the beginning of the cycle, e.g. in May or June?

A few more random photos:

Matthias showing us fancy new dynamic icon stuff
Dinner on the first night feat. yours truly, Robert, Jordan, Antonio, Maximiliano, Sam, Javier, Julian, Adrian, Markus, Adrien, and Andreas
Adrian and Javier having an ad-hoc Buildstream BoF at the pizzeria
Robert and Maximiliano hacking on Snapshot

Daiki Ueno

@ueno

Optimizing CI resource usage in upstream projects

At GnuTLS, our journey into optimizing GitLab CI began when we faced a significant challenge: we lost our GitLab.com Open Source Program subscription. While we are still hoping that this limitation is temporary, this meant our available CI/CD resources became considerably lower. We took this opportunity to find smarter ways to manage our pipelines and reduce our footprint.

This blog post shares the strategies we employed to optimize our GitLab CI usage, focusing on reducing running time and network resources, which are crucial for any open-source project operating with constrained resources.

CI on every PR: a best practice, but not cheap

While running CI on every commit is considered a best practice for secure software development, our experience setting up a self-hosted GitLab runner on a modest Virtual Private Server (VPS) highlighted its cost implications, especially with limited resources. We provisioned a VPS with 2GB of memory and 3 CPU cores, intending to support our GnuTLS CI pipelines.

The reality, however, was a stark reminder of the resource demands. A single CI pipeline for GnuTLS took an excessively long time to complete, often stretching beyond acceptable durations. Furthermore, the extensive data transfer involved in fetching container images, dependencies, building artifacts, and pushing results quickly led us to reach the bandwidth limits imposed by our VPS provider, resulting in throttled connections and further delays.

This experience underscored the importance of balancing CI best practices with available infrastructure and budget, particularly for resource-intensive projects.

Reducing CI running time

Efficient CI pipeline execution is paramount, especially when resources are scarce. GitLab provides an excellent article on pipeline efficiency, though in practice, project-specific optimization is needed. We focused on three key areas to achieve faster pipelines:

  • Tiering tests
  • Layering container images
  • De-duplicating build artifacts

Tiering tests

Not all tests need to run on every PR. For more exotic or costly tasks, such as extensive fuzzing, generating documentation, or large-scale integration tests, we adopted a tiering approach. These types of tests are resource-intensive and often provide value even when run less frequently. Instead of scheduling them for every PR, they are triggered manually or on a periodic basis (e.g., nightly or weekly builds). This ensures that critical daily development workflows remain fast and efficient, while still providing comprehensive testing coverage for the project without incurring excessive resource usage on every minor change.
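As a rough sketch of the idea (not GnuTLS's actual test harness), a tiny Python gate like the following could decide whether an expensive tier runs, assuming GitLab CI, which sets CI_PIPELINE_SOURCE to "schedule" for nightly or weekly pipelines:

# Rough sketch of test tiering; not GnuTLS's actual harness.
# Assumes GitLab CI, which sets CI_PIPELINE_SOURCE to "schedule" for
# scheduled (nightly/weekly) pipelines.
import os
import sys

EXPENSIVE_TIERS = {"fuzzing", "docs", "large-integration"}

def should_run(tier):
    """Run cheap tiers on every pipeline; expensive tiers only on schedules."""
    if tier not in EXPENSIVE_TIERS:
        return True
    return os.environ.get("CI_PIPELINE_SOURCE") == "schedule"

if __name__ == "__main__":
    tier = sys.argv[1] if len(sys.argv) > 1 else "unit"
    if not should_run(tier):
        print(f"Skipping tier '{tier}' outside scheduled pipelines")
        sys.exit(0)
    # ...invoke the real test runner for this tier here...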

Layering container images

The tiering of tests gives us an idea of which CI images are more commonly used in the pipeline. For those common CI images, we transitioned to using a more minimal base container image, such as fedora-minimal or debian:<flavor>-slim. This reduced the initial download size and the overall footprint of our build environment.

For specialized tasks, such as generating documentation or running cross-compiled tests that require additional tools, we adopted a layering approach. Instead of building a monolithic image with all possible dependencies, we created dedicated, smaller images for these specific purposes and layered them on top of our minimal base image as needed within the CI pipeline. This modular approach ensures that only the necessary tools are present for each job, minimizing unnecessary overhead.

De-duplicating build artifacts

Historically, our CI pipelines involved many “configure && make” steps for various options. One of the major culprits of long build times is repeatedly compiling source code, oftentimes resulting in almost identical results.

We realized that many of these compile-time options could be handled at runtime. By moving configurations that didn’t fundamentally alter the core compilation process to runtime, we simplified our build process and reduced the number of compilation steps required. This approach transforms a lengthy compile-time dependency into a quicker runtime check.

Of course, this approach cuts both ways: while it simplifies the compilation process, it can increase the code size and attack surface. For example, support for legacy protocol features such as SSL 3.0 or SHA-1, which may weaken overall security, should still be possible to switch off at compile time.

Another caveat is that some compilation options are inherently incompatible with each other; for example, the thread sanitizer cannot be enabled together with the address sanitizer. In such cases a separate build artifact is still needed.
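To make the trade-off concrete, here is a toy sketch of what a de-duplicated build matrix could look like; the option names are invented for illustration and are not GnuTLS's real configure flags:

# Toy illustration of de-duplicating build artifacts.
# The option names are invented and are not GnuTLS's real configure flags.
RUNTIME_TOGGLES = {"verbose-logging", "legacy-priority-strings"}  # now runtime checks
COMPILE_TIME_VARIANTS = [
    frozenset(),                       # default build
    frozenset({"address-sanitizer"}),  # ASan build
    frozenset({"thread-sanitizer"}),   # incompatible with ASan: needs its own artifact
    frozenset({"no-legacy-crypto"}),   # e.g. SSL 3.0 / SHA-1 compiled out
]

def build_matrix():
    """Yield one 'configure && make' invocation per remaining variant."""
    for variant in COMPILE_TIME_VARIANTS:
        yield sorted(variant) or ["default"]

if __name__ == "__main__":
    for flags in build_matrix():
        print("build artifact:", "+".join(flags))
    print("handled at runtime instead:", ", ".join(sorted(RUNTIME_TOGGLES)))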

The impact: tangible results

The efforts put into optimizing our GitLab CI configuration yielded significant benefits:

  • The size of the container image used for our standard build jobs is now 2.5GB smaller than before. This substantial reduction in image size translates to faster job startup times and reduced storage consumption on our runners.
  • 9 “configure && make” steps were removed from our standard build jobs. This streamlined the build process and directly contributed to faster execution times.

By implementing these strategies, we not only adapted to our reduced resources but also built a more efficient, cost-effective, and faster CI/CD pipeline for the GnuTLS project. These optimizations highlight that even small changes can lead to substantial improvements, especially in the context of open-source projects with limited resources.

For further information on this, please consult the actual changes.

Next steps

While the current optimizations have significantly improved our CI efficiency, we are continuously exploring further enhancements. Our future plans include:

  • Distributed GitLab runners with external cache: To further scale and improve resource utilization, we are considering running GitLab runners on multiple VPS instances. To coordinate these distributed runners and avoid redundant data transfers, we could set up an external cache, potentially using a solution like MinIO. This would allow shared access to build artifacts, reducing bandwidth consumption and build times.
  • Addressing flaky tests: Flaky tests, which intermittently pass or fail without code changes, are a major bottleneck in any CI pipeline. They not only consume valuable CI resources by requiring entire jobs to be rerun but also erode developer confidence in the test suite. In TLS testing, it is common to write a test script that sets up a server and a client as separate processes, lets the server bind a unique port to which the client connects, and instructs the client to initiate a certain event through a control channel. This kind of test could fail in many ways regardless of the test itself, e.g., the port might already be used by other tests. Therefore, rewriting tests without requiring a complex setup would be a good first step; one common mitigation for the port collisions in particular is sketched below.
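As a minimal sketch of that mitigation (a generic illustration, not GnuTLS's actual test code), letting the kernel pick a free port avoids hard-coded port collisions; the test-server and test-client binaries below are placeholders:

# Minimal sketch: let the kernel pick a free port instead of hard-coding one.
# Generic illustration, not GnuTLS's actual test code; ./test-server and
# ./test-client are placeholder binaries.
import socket
import subprocess

def free_port():
    """Bind to port 0 so the kernel assigns an unused port, then release it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

if __name__ == "__main__":
    port = free_port()
    server = subprocess.Popen(["./test-server", "--port", str(port)])
    try:
        subprocess.run(["./test-client", "--port", str(port)], check=True)
    finally:
        server.terminate()
        server.wait()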

GUADEC 2025: Thoughts and Reflections

Another year, another GUADEC. This was the 25th anniversary of the first GUADEC, and the 25th one I’ve gone to. Although there have been multiple bids for Italy during the past quarter century, this was the first successful one. It was definitely worth the wait, as it was one of the best GUADECs in recent memory.

A birthday cake with sparklers stands next to a sign saying GUADEC 2025
GUADEC’s 25th anniversary cake

This was an extremely smooth conference — way smoother than previous years. The staff and volunteers really came through in a big way and did heroic work! I watched Deepesha, Asmit, Aryan, Maria, Zana, Kristi, Anisa, and especially Pietro all running around making this conference happen. I’m super grateful for their continued hard work in the project. GNOME couldn’t happen without their effort.

A brioche and espresso cup sit on a table
La GNOME vita

Favorite Talks

I commented on some talks as they happened (Day 1, Day 2, Day 3). I could only attend one track at a time so missed a lot of them, but the talks I saw were fabulous. They were much higher quality than usual this year, and I’m really impressed at this community’s creativity and knowledge. As I said, it really was a strong conference.

I did an informal poll of assorted attendees I ran into on the streets of Brescia on the last night, asking what their favorite talks were. Here are the results:

Emmanuele standing in front of a slide that says "Whither Maintainers"
Whither Maintainers
  • Emmanuele’s talk on “Getting Things Done In GNOME”: This talk clearly struck a chord amongst attendees. He proposed a path forward on the technical governance of the project. It also had a “Whither Maintainers” slide that led to a lot of great conversations.
  • George’s talk on streaming: This was very personal, very brave, and extremely inspiring. I left the talk wanting to try my hand at live streaming my coding sessions, and I’m not the only one.
  • The poop talk, by Niels: This was a very entertaining lightning talk with a really important message at the end.
  • Enhancing Screen Reader Functionality in Modern Gnome by Lukas: Unfortunately, I was at the other track when this one happened so I don’t know much about it. I’ll have to go back and watch it! That being said, it was so inspiring to see how many people at GUADEC were working on accessibility, and how much progress has been made across the board. I’m in awe of everyone that works in this space.

Honorable mentions

In reality, there were many amazing talks beyond the ones I listed. I highly recommend you go back and see them. I know I’m planning on it!

Crosswords at GUADEC

Refactoring gnome-crosswords

We didn’t have a Crosswords update talk this cycle. However we still had some appearances worth mentioning:

  • Federico gave a talk about how to use unidirectional programming to add tests to your application, and used Crosswords as his example. This is probably the 5th time one of the two of us has talked about this topic. This was the best one to date, though we keep giving it because we don’t think we’ve gotten the explanation right. It’s a complex architectural change which has a lot of nuance, and is hard for us to explain succinctly. Nevertheless, we keep trying, as we see how this could lead to a big revolution in the quality of GNOME applications. Crosswords is pushing 80KLOC, and this architecture is the only thing allowing us to keep it at a reasonable quality.
  • People keep thinking this is “just” MVC, or a minor variation thereof, but it’s different enough that it requires a new mindset, a disciplined approach, and a good data model. As an added twist, Federico sent an MR to GNOME Calendar to add initial unidirectional support to that application. If crosswords are too obscure a subject for you, then maybe the calendar section of the talk will help you understand it.
  • I gave a lightning talk about some of the awesome work that our GSoC (and prospective GSoC) students are doing.
  • I gave a BOF on Words. It was lightly attended, but led to a good conversation with the always-helpful Martin.
  • Finally, I sat down with Tobias to do a UX review of the crossword game, with an eye to getting it into GNOME Circle. This has been in the works for a long time and I’m really grateful for the chance to do it in person. We identified many papercuts to fix of course, but Tobias was also able to provide a suggestion to improve a long-standing problem with the interface. We sketched out a potential redesign that I’m really happy with. I hope the Circle team is able to continue to do reviews across the GNOME ecosystem as it provides so much value.

Personal

A final comment: I’m reminded again of how much of a family the GNOME community is. For personal reasons, it was a very tough GUADEC for Zana and me. It was objectively a fabulous GUADEC and we really wanted to enjoy it, but couldn’t. We’re humbled at the support and love this community is capable of. Thank you.

Dev Log July 2025

AbiWord

Working on rebasing and finishing an "old" patch from Michael Gorse that implements accessibility in AbiWord. While the patch is a few years old, it's perfectly rebasable.

Pushed a lot of code modernisation on master, as well as fixes for various memory leaks and crashes on stable.

Released 3.0.7 (tag, and flatpak build). 3.0.8 might come soon, as I'm backporting more crash fixes that have already landed on master, at least until I have a 3.1.90 version ready for testing.

libopenraw

Finally I have Fujifilm X-Trans demosaicking, which needs more work as it is still a crude port of the dcraw C code. I also apply the white balance. Added a few new cameras and some benchmarks.

Finally I released 0.4.0-alpha.11.

Also I updated libopenraw-view, which is my GUI for testing libopenraw. It now renders asynchronously.

Supporting cast

Some other various stuff.

glycin

The merge request to update libopenraw was merged. Thanks to Sophie!

gegl-rs

Updated gegl-rs to the new glib-rs.

lrcat-extractor

Released a new version after updating rusqlite.

Niepce

Ported to the latest gtk4-rs, hence the update to gegl-rs. Some small API changes in the object subclassing needed to be handled.

This Week in GNOME

@thisweek

#210 Periodic Updates

Update on what happened across the GNOME project in the week from July 25 to August 01.

GNOME Core Apps and Libraries

Libadwaita

Building blocks for modern GNOME apps using GTK4.

Alice (she/her) 🏳️‍⚧️🏳️‍🌈 announces

last cycle, libadwaita gained a way to query the system document font, even though it was the same as the UI font. This cycle it has been made larger (12pt instead of 11pt) and we have a new .document style class that makes the specified widget use it (as well as increases line height) - intended to be used for app content such as messages in a chat client.

Meanwhile, the formerly useless .body style class also features increased line height now (along with .caption) and can be used to make labels such as UI descriptions more legible compared to default styles. Some examples of where it’s already used:

  • Dialog body text
  • Preferences group description
  • Status page description
  • What’s new, legal and troubleshooting sections in about dialogs
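As a minimal sketch (assuming PyGObject and a libadwaita new enough to ship the .document class described above; not taken from any particular app), applying these style classes from Python could look like this:

# Minimal PyGObject sketch of applying the style classes mentioned above.
# Assumes a libadwaita recent enough to ship the .document class; .body has
# been available for longer. The application id is a placeholder.
import gi

gi.require_version("Gtk", "4.0")
gi.require_version("Adw", "1")
from gi.repository import Adw, Gtk

def build_content():
    box = Gtk.Box(orientation=Gtk.Orientation.VERTICAL, spacing=12,
                  margin_top=12, margin_bottom=12, margin_start=12, margin_end=12)

    message = Gtk.Label(label="A chat message rendered as document content.",
                        wrap=True, xalign=0)
    message.add_css_class("document")   # document font + increased line height

    description = Gtk.Label(label="A longer UI description that benefits from "
                                  "the extra line height.",
                            wrap=True, xalign=0)
    description.add_css_class("body")   # more legible than the default style

    box.append(message)
    box.append(description)
    return box

def on_activate(app):
    window = Adw.ApplicationWindow(application=app, content=build_content())
    window.present()

app = Adw.Application(application_id="org.example.StyleClassDemo")
app.connect("activate", on_activate)
app.run(None)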

GNOME Circle Apps and Libraries

revisto announces

Drum Machine 1.4.0 is out! 🥁

Drum Machine, a GNOME Circle application, now supports more pages (bars) for longer beats, mobile improvements, and translations in 17 languages! You can try it out and get more creative with it.

What’s new:
  • Extended pattern grid for longer, more complex rhythms
  • Mobile-responsive UI that adapts to different screen sizes
  • Global reach with translations including Farsi, Chinese, Russian, Arabic, Hebrew, and more

If you have any suggestions and ideas, you can always contribute and make Drum Machine better, all ideas (even better/more default presets) are welcome!

Available on Flathub: https://flathub.org/apps/io.github.revisto.drum-machine

Happy drumming! 🎶

Third Party Projects

lo reports

After a while of working on and off, I have finally released the first version of Nucleus, a periodic table app for searching and viewing various properties of the chemical elements!

You can get it on Flathub: https://flathub.org/apps/page.codeberg.lo_vely.Nucleus

francescocaracciolo says

Newelle 1.0.0 has been released! Huge release for this AI assistant for Gnome.

  • 📱 Mini Apps support! Extensions can now show custom mini apps on the sidebar
  • 🌐 Added integrated browser Mini App: browse the web directly in Newelle and attach web pages
  • 📁 Improved integrated file manager, supporting multiple file operations
  • 👨‍💻 Integrated file editor: edit files and codeblocks directly in Newelle
  • 🖥 Integrated Terminal mini app: open the terminal directly in Newelle
  • 💬 Programmable prompts: add dynamic content to prompts with conditionals and random strings
  • ✍️ Add ability to manually edit chat name
  • 🪲 Minor bug fixes
  • 🚩 Added support for multiple languages for Kokoro TTS and Whisper.CPP
  • 💻 Run HTML/CSS/JS websites directly in the app
  • ✨ New animation on chat change

Get it on FlatHub

Fractal

Matrix messaging app for GNOME written in Rust.

Kévin Commaille announces

Want to get a head start and try out Fractal 12 before its release? That’s what this Release Candidate is for! New since 12.beta:

  • The upcoming room version 12 is supported, with the special power level of room creators
  • Requesting invites to rooms (aka knocking) is now possible
  • Clicking on the name of the sender of a message adds a mention to them in the composer

As usual, this release includes other improvements, fixes and new translations thanks to all our contributors, and our upstream projects.

It is available to install via Flathub Beta, see the instructions in our README.

As the version implies, it should be mostly stable and we expect to only include minor improvements until the release of Fractal 12.

If you want to join the fun, you can try to fix one of our newcomers issues. We are always looking for new contributors!

GNOME Websites

Guillaume Bernard reports

After months of work and testing, it’s now possible to connect to Damned Lies using third-party providers. You can now use your GNOME SSO account as well as other common providers used by translators: Fedora, Launchpad, GitHub and GitLab.com. The login/password authentication has been disabled and email addresses are validated for security reasons.

On another note, under the hood, Damned Lies has been modernized: we upgraded to Fedora 42, which provides a fresher gettext (0.23), and we moved to Django 5.2 LTS and Python 3.13. Users can expect performance improvements, as we replaced Apache mod_wsgi with gunicorn, which is said to be more CPU- and RAM-efficient.

Next step is working on async git pushes and merge requests support. Help is very welcome on these topics!

Shell Extensions

Just Perfection reports

The GNOME Shell 49 port guide for extensions is ready! We are now accepting GNOME Shell 49 extension packages on EGO. Please join us on the GNOME Extensions Matrix Channel if you have any issues porting your extension. Also, thanks to Florian Müllner, gnome-extensions has added a new upload command for GNOME Shell 49, making it easier to upload your extensions to EGO. You can also use it with CI.

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

Secure boot certificate rollover is real but probably won't hurt you

LWN wrote an article which opens with the assertion "Linux users who have Secure Boot enabled on their systems knowingly or unknowingly rely on a key from Microsoft that is set to expire in September". This is, depending on interpretation, either misleading or just plain wrong, but also there's not a good source of truth here, so.

First, how does secure boot signing work? Every system that supports UEFI secure boot ships with a set of trusted certificates in a database called "db". Any binary signed with a chain of certificates that chains to a root in db is trusted, unless either the binary (via hash) or an intermediate certificate is added to "dbx", a separate database of things whose trust has been revoked[1]. But, in general, the firmware doesn't care about the intermediate or the number of intermediates or whatever - as long as there's a valid chain back to a certificate that's in db, it's going to be happy.
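In code terms, that conceptual rule looks roughly like this toy sketch (not real firmware logic, and the actual signature verification along the chain is elided):

# Toy sketch of the conceptual trust rule described above; not real UEFI code.
# Verifying the signatures along the chain is assumed to have happened already.
def is_trusted(binary_hash, chain, db, dbx):
    if binary_hash in dbx:
        return False                            # the binary itself was revoked
    if any(cert in dbx for cert in chain):
        return False                            # an intermediate (or root) was revoked
    # The firmware doesn't care how many intermediates there are, only that
    # the chain leads back to some certificate present in db.
    return any(cert in db for cert in chain)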

That's the conceptual version. What about the real world one? Most x86 systems that implement UEFI secure boot have at least two root certificates in db - one called "Microsoft Windows Production PCA 2011", and one called "Microsoft Corporation UEFI CA 2011". The former is the root of a chain used to sign the Windows bootloader, and the latter is the root used to sign, well, everything else.

What is "everything else"? For people in the Linux ecosystem, the most obvious thing is the Shim bootloader that's used to bridge between the Microsoft root of trust and a given Linux distribution's root of trust[2]. But that's not the only third party code executed in the UEFI environment. Graphics cards, network cards, RAID and iSCSI cards and so on all tend to have their own unique initialisation process, and need board-specific drivers. Even if you added support for everything on the market to your system firmware, a system built last year wouldn't know how to drive a graphics card released this year. Cards need to provide their own drivers, and these drivers are stored in flash on the card so they can be updated. But since UEFI doesn't have any sandboxing environment, those drivers could do pretty much anything they wanted to. Someone could compromise the UEFI secure boot chain by just plugging in a card with a malicious driver on it, and have that hotpatch the bootloader and introduce a backdoor into your kernel.

This is avoided by enforcing secure boot for these drivers as well. Every plug-in card that carries its own driver has it signed by Microsoft, and up until now that's been a certificate chain going back to the same "Microsoft Corporation UEFI CA 2011" certificate used in signing Shim. This is important for reasons we'll get to.

The "Microsoft Windows Production PCA 2011" certificate expires in October 2026, and the "Microsoft Corporation UEFI CA 2011" one in June 2026. These dates are not that far in the future! Most of you have probably at some point tried to visit a website and got an error message telling you that the site's certificate had expired and that it's no longer trusted, and so it's natural to assume that the outcome of time's arrow marching past those expiry dates would be that systems will stop booting. Thankfully, that's not what's going to happen.

First up: if you grab a copy of the Shim currently shipped in Fedora and extract the certificates from it, you'll learn it's not directly signed with the "Microsoft Corporation UEFI CA 2011" certificate. Instead, it's signed with a "Microsoft Windows UEFI Driver Publisher" certificate that chains to the "Microsoft Corporation UEFI CA 2011" certificate. That's not unusual: intermediates are commonly used and rotated. But if we look more closely at that certificate, we learn that it was issued in 2023 and expired in 2024. Older versions of Shim were signed with older intermediates. A very large number of Linux systems are already booting binaries signed with certificates that have expired, and yet things keep working. Why?

Let's talk about time. In the ways we care about in this discussion, time is a social construct rather than a meaningful reality. There's no way for a computer to observe the state of the universe and know what time it is - it needs to be told. It has no idea whether that time is accurate or an elaborate fiction, and so it can't with any degree of certainty declare that a certificate is valid from an external frame of reference. The failure modes of getting this wrong are also extremely bad! If a system has a GPU that relies on an option ROM, and if you stop trusting the option ROM because either its certificate has genuinely expired or because your clock is wrong, you can't display any graphical output[3] and the user can't fix the clock and, well, crap.

The upshot is that nobody actually enforces these expiry dates - here's the reference code that disables it. In a year's time we'll have gone past the expiration date for "Microsoft Windows UEFI Driver Publisher" and everything will still be working, and a few months later "Microsoft Windows Production PCA 2011" will also expire and systems will keep booting Windows despite being signed with a now-expired certificate. This isn't a Y2K scenario where everything keeps working because people have done a huge amount of work - it's a situation where everything keeps working even if nobody does any work.

So, uh, what's the story here? Why is there any engineering effort going on at all? What's all this talk of new certificates? Why are there sensationalist pieces about how Linux is going to stop working on old computers or new computers or maybe all computers?

Microsoft will shortly start signing things with a new certificate that chains to a new root, and most systems don't trust that new root. System vendors are supplying updates[4] to their systems to add the new root to the set of trusted keys, and Microsoft has supplied a fallback that can be applied to all systems even without vendor support[5]. If something is signed purely with the new certificate then it won't boot on something that only trusts the old certificate (which shouldn't be a realistic scenario due to the above), but if something is signed purely with the old certificate then it won't boot on something that only trusts the new certificate.

How meaningful a risk is this? We don't have an explicit statement from Microsoft as yet as to what's going to happen here, but we expect that there'll be at least a period of time where Microsoft signs binaries with both the old and the new certificate, and in that case those objects should work just fine on both old and new computers. The problem arises if Microsoft stops signing things with the old certificate, at which point new releases will stop booting on systems that don't trust the new key (which, again, shouldn't happen). But even if that does turn out to be a problem, nothing is going to force Linux distributions to stop using existing Shims signed with the old certificate, and having a Shim signed with an old certificate does nothing to stop distributions signing new versions of grub and kernels. In an ideal world we have no reason to ever update Shim[6] and so we just keep on shipping one signed with two certs.

If there's a point in the future where Microsoft only signs with the new key, and if we were to somehow end up in a world where systems only trust the old key and not the new key[7], then those systems wouldn't boot with new graphics cards, wouldn't be able to run new versions of Windows, wouldn't be able to run any Linux distros that ship with a Shim signed only with the new certificate. That would be bad, but we have a mechanism to avoid it. On the other hand, systems that only trust the new certificate and not the old one would refuse to boot older Linux, wouldn't support old graphics cards, and also wouldn't boot old versions of Windows. Nobody wants that, and for the foreseeable future we're going to see new systems continue trusting the old certificate and old systems have updates that add the new certificate, and everything will just continue working exactly as it does now.

Conclusion: Outside some corner cases, the worst case is you might need to boot an old Linux to update your trusted keys to be able to install a new Linux, and no computer currently running Linux will break in any way whatsoever.

[1] (there's also a separate revocation mechanism called SBAT which I wrote about here, but it's not relevant in this scenario)

[2] Microsoft won't sign GPLed code for reasons I think are unreasonable, so having them sign grub was a non-starter, but also the point of Shim was to allow distributions to have something that doesn't change often and be able to sign their own bootloaders and kernels and so on without having to have Microsoft involved, which means grub and the kernel can be updated without having to ask Microsoft to sign anything and updates can be pushed without any additional delays

[3] It's been a long time since graphics cards booted directly into a state that provided any well-defined programming interface. Even back in the 90s, cards didn't present VGA-compatible registers until card-specific code had been executed (hence DEC Alphas having an x86 emulator in their firmware to run the driver on the card). No driver? No video output.

[4] There's a UEFI-defined mechanism for updating the keys that doesn't require a full firmware update, and it'll work on all devices that use the same keys rather than being per-device

[5] Using the generic update without a vendor-specific update means it wouldn't be possible to issue further updates for the next key rollover, or any additional revocation updates, but I'm hoping to be retired by then and I hope all these computers will also be retired by then

[6] I said this in 2012 and it turned out to be wrong then so it's probably wrong now sorry, but at least SBAT means we can revoke vulnerable grubs without having to revoke Shim

[7] Which shouldn't happen! There's an update to add the new key that should work on all PCs, but there's always the chance of firmware bugs


Alley Chaggar

@AlleyChaggar

Challenges

Debugging and My Challenges

For the past two weeks, I’ve been debugging the json module. I hooked up the JSON module into the codebase hierarchy by modifying valagsignalmodule.vala to extend the JSON module; it previously extended the GObject module. Running the test case called json.vala crashes the program.

In the beginning, I was having quite the difficulty trying to use gdb and coredumpctl to investigate the crash. I kept doing:

./autogen.sh --enable-debug
make 

or

./configure --enable-debug
make

Then I’d run commands like:

coredumpctl gdb
coredumpctl info

It simply wasn’t working when I built it this way and ran the above coredumpctl commands. It wasn’t showing the debug symbols that I needed to be able to see the functions that were causing the program to crash. When I built Vala using GNOME Builder’s build button, it also didn’t work.

Lorenz, my mentor, helped me a lot with this issue. The way we fixed it was that, first, I needed to build Vala by doing

./configure --enable-debug
make

Then I needed to run the test in the ‘build terminal’ in GNOME Builder: compiler/valac --pkg json-glib-1.0 tests/annotations/json.vala

Then, in a regular terminal, I ran:

gdb compiler/.libs/lt-valac
(gdb) run --pkg json-glib-1.0 tests/annotations/json.vala
(gdb) bt

Once I ran these commands, I was finally able to see the functions causing the crash to happen.

#6  0x00007ffff7a1ef37 in vala_ccode_constant_construct_string (object_type=Python Exception <class 'gdb.error'>: value has been optimized out, _name=0x5555563ef1c0 "anything") at /home/alley/Desktop/vala/ccode/valaccodeconstant.vala:41
#7  0x00007ffff7a1f9f7 in vala_ccode_constant_new_string (_name=0x5555563ef1c0 "anything") at /home/alley/Desktop/vala/ccode/valaccodeconstant.vala:40
#8  0x00007ffff7a10918 in vala_json_module_json_builder (self=0x55555558c810) at /home/alley/Desktop/vala/codegen/valajsonmodule.vala:292
#9  0x00007ffff7a0f07d in vala_json_module_generate_class_to_json (self=0x55555558c810, cl=0x555557199120) at /home/alley/Desktop/vala/codegen/valajsonmodule.vala:191
#10 0x00007ffff7a127f4 in vala_json_module_real_generate_class_init (base=0x55555558c810, cl=0x555557199120) at /home/alley/Desktop/vala/codegen/valajsonmodule.vala:410

This snippet of the backtrace shows that the function vala_json_module_json_builder () on line 292 was the actual crash culprit.

After I fixed the debug symbols, my git push decided not to work properly for a few days. So I was manually editing my changes on GitLab. My theory for git push not working is that KWalletManager had a problem, and so the credentials stopped working, which hung the git push. Either way, I fixed it by switching my repo to SSH. I’ll investigate why the HTTP side of git stopped working, and I’ll fix it.

GUADEC

This was the first GUADEC I’ve ever watched. I watched it online, and I particularly found the lightning talks to be my favourite parts. They were all short, sweet, and to the point. It was also kind of comedic how fast people talked to fit everything they wanted to say into a short time span.

Some talks I particularly found interesting are:

  1. The open source game called Threadbare by Endless Access. As someone who is a game programming graduate, it instantly caught my eye and had my attention. I’ll definitely be checking it out and trying to contribute to it.

  2. Carlos Garnacho’s talk about GNOME on our TVs. The idea of GNOME expanding onto smart TVs opens up a whole new area of usability and user experience. It got me thinking about the specs of a regular TV set and how GNOME can adapt to and enhance that experience. The possibilities are exciting, and I’m curious to see how far this concept goes.

Overall, GUADEC made me feel more connected to the GNOME community even though I joined remotely. I’d love to have GUADEC hosted in Toronto :)

Christian Schaller

@cschalle

Artificial Intelligence and the Linux Community

I have wanted to write this blog post for quite some time, but been unsure about the exact angle of it. I think I found that angle now where I will root the post in a very tangible concrete example.

So the reason I wanted to write this was because I do feel there is a palpable skepticism and negativity towards AI in the Linux community, and I understand that there are societal implications that worry us all, like how deep fakes have the potential to upend a lot of things, from news disbursement to court proceedings. Or how malign forces can use AI to drive narratives in social media etc., as if social media wasn’t toxic enough as it is. But for open source developers like us in the Linux community there are also, I think, deep concerns about tooling that cuts deep into something close to the heart of our community: writing code and being skilled at writing code. I hear and share all those concerns, but at the same time, having spent time the last weeks using Claude.ai, I do feel it is not something we can afford not to engage with. So I know people have probably used a lot of different AI tools in the last year, some being more cute than useful, others being somewhat useful, and others being interesting improvements to your Google search, for instance. I think I shared a lot of those impressions, but using Claude this last week has opened my eyes to what AI engines are going to be capable of going forward.

So my initial test was writing a python application for internal use at Red Hat, basically connecting to a variety of sources and pulling data and putting together reports, typical management fare. How simple it was impressed me though. I think most of us having to deal with pulling data from a new source know how painful it can be, with issues ranging from missing, outdated or hard to parse API documentation. I think a lot of us also then spend a lot of time experimenting to figure out the right API calls to make in order to pull the data we need. Well, Claude was able to give me python scripts that pulled that data right away; I still had to spend some time with it to fine tune the data being pulled and to ensure we pulled the right data, but I did it in a fraction of the time I would have spent figuring that stuff out on my own. The one data source Claude struggled with was Fedora’s Bodhi; once I pointed it to the URL with the latest documentation for it, it figured out that it would be better to use the Bodhi client library to pull data, and once it had that figured out it was clear sailing.

So coming off pretty impressed by that experience, I wanted to understand if Claude would be able to put together something programmatically more complex, like a GTK+ application using Vulkan. [Note: I should have checked the code better, but thanks to the people who pointed this out. I told the AI to use Vulkan, which it did, but not in the way I expected: I expected it to render the globe using Vulkan, but it instead decided to ensure GTK used its Vulkan backend. An important lesson in both prompt engineering and checking the code afterwards.] So I thought about what would be a good example of such an application, and I also figured it would be fun if I found something really old and asked Claude to help me bring it into the current age. So I suddenly remembered xtraceroute, which is an old application originally written with GTK1 and OpenGL that shows your traceroute on a 3D globe.

Screenshot of original xtraceroute

Screenshot of the original Xtraceroute application

I went looking for it and found that while it had been updated to GTK2 since I last looked at it, it had not been touched in 20 years. So I thought: this is a great test case. I grabbed the code and fed it into Claude, asking it to give me a modern GTK4 version of this application using Vulkan. So how did it go? Well, it ended up being an iterative effort, with a lot of back and forth between myself and Claude. One nice feature Claude has is that you can upload screenshots of your application and Claude will use them to help you debug. Thanks to that I have a long list of screenshots showing how this application evolved over the course of the day I spent on it.

First output of Claude

This screenshot shows Claude’s first attempt at transforming the 20-year-old xtraceroute application into a modern one using GTK4 and Vulkan, also adding a Meson build system. My prompt was simply feeding in the old code and asking Claude to come up with a GTK4 and Vulkan equivalent. As you can see, the GTK4 UI is very simple, but okay as it is. The rendered globe leaves something to be desired though. I assume the old code had some 2D fallback code, so Claude latched onto that and focused on trying to use the Cairo API to recreate this application, despite me telling it I wanted a Vulkan application. What we ended up with was a 2D circle that I could spin around like a wheel of fortune. The code did have some Vulkan stuff, but defaulted to the Cairo code.

Second attempt image

Second attempt at updating this application. Anyway, I fed the screenshot of my first version back into Claude and said that the image was not a globe, it was missing the texture, and the interaction model was more like a wheel of fortune. As you can see, the second attempt did not fare any better; in fact we went from circle to square. This was also the point where I realized that I hadn’t uploaded the textures into Claude, so I had to tell it to load earth.png from the local file repository.

Third attempt by Claude

Third attempt from Claude. Ok, so I fed my second screenshot back into Claude and pointed out that it was no globe, in fact it wasn’t even a circle, and the texture was still missing. With me pointing out that it needed to load the earth.png file from disk, it came back with the texture loading. Well, I really wanted it to be a globe, so I said thank you for loading the texture, now do it on a globe.

This is the output of the 4th attempt. As you can see, it did bring back a circle, but the texture was gone again. At this point I also decided I didn’t want Claude to waste any more time on the Cairo code; this was meant to be a proper 3D application. So I told Claude to drop all the Cairo code and instead focus on making a Vulkan application.

Fifth attempt

So now we finally had something that started looking like something, although it was still a circle, not a globe, and it had that weird division-into-four thing on the globe. Anyway, I could see it was using Vulkan now and it was loading the texture, so I felt we were making some decent forward progress. I wrote a longer prompt describing the globe I wanted and how I wanted to interact with it, and this time Claude did come back with Vulkan code that rendered this as a globe, so unfortunately I didn’t end up screenshotting it.

So with the working globe now in place, I wanted to bring in the day/night cycle from the original application. So I asked Claude to load the night texture and use it as an overlay to get that day/night effect. I also asked it to calculate the position of the sun relative to the earth at the current time, so that it could overlay the texture in the right location. As you can see, Claude did a decent job of it, although the colors were broken.
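
The sun-position step is simple enough to sketch. The following is a rough Python approximation (not the code Claude generated) of computing the subsolar point, the latitude and longitude where the sun is directly overhead, from the current UTC time, ignoring the equation of time; that point is enough to place a day/night terminator overlay.

import math
from datetime import datetime, timezone

def subsolar_point(now=None):
    # Approximate the point on Earth where the sun is directly overhead.
    now = now or datetime.now(timezone.utc)
    day_of_year = now.timetuple().tm_yday
    # Solar declination in degrees (rough seasonal approximation).
    latitude = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # The sun sits over longitude 0 around 12:00 UTC and moves 15 degrees per hour westward.
    utc_hours = now.hour + now.minute / 60.0 + now.second / 3600.0
    longitude = 15.0 * (12.0 - utc_hours)
    return latitude, longitude

print("Subsolar point: %.1f, %.1f" % subsolar_point())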

7th attempt

So I kept fighting with the color for a bit. Claude could see it was rendering brown, but could not initially figure out why. I could tell the code was doing things mostly right, so I also asked it to look at some other things; for instance, I realized that when I tried to spin the globe it just twisted the texture. We got that fixed, and I also got Claude to create some test scripts that helped us figure out that the color issue was an RGB vs BGR issue; as soon as we understood that, Claude was able to fix the code to render colors correctly. I also had a few iterations trying to get the scaling and mouse interaction behaving correctly.

10th attempt

So at this point I had probably worked on this for 4-5 hours. The globe was rendering nicely and I could interact with it using the mouse. The next step was adding the traceroute lines back. By default, Claude had just put in code to render some small dots on the hop points, not draw the lines. The old method for getting the geocoordinates was also gone, so I asked Claude to help me find some current services, which it did, and once I picked one it gave me, on the first try, code that was able to request the geolocation of the IP addresses it got back. To polish it up, I also asked Claude to make sure we drew the lines following the globe’s curvature instead of just drawing straight lines.
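
The geolocation lookup is easy to illustrate. This is a minimal Python sketch of the idea, not the code from the post (which doesn’t name the service chosen); ip-api.com is used here purely as an example of a free IP geolocation endpoint.

import json
import urllib.request

def locate(ip):
    # Ask a public geolocation service for the coordinates of an IP address.
    with urllib.request.urlopen("http://ip-api.com/json/%s" % ip, timeout=5) as resp:
        data = json.load(resp)
    if data.get("status") != "success":
        return None
    return data["lat"], data["lon"]

for hop in ("8.8.8.8", "1.1.1.1"):
    print(hop, locate(hop))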

Final version

Final version of the updated Xtraceroute application. It mostly works now, but I did realize why I always thought this was a fun idea that is less interesting in practice: you often don’t get very good traceroutes back, probably due to websites being cached or hosted globally. But I felt I had proven that, with a day’s work, Claude was able to help me bring this old GTK application into the modern world.

Conclusions

So I am not going to argue that Xtraceroute is an important application that deserved to be saved. In fact, while I feel the current version works and proves my point, I also lost motivation to polish it up due to the limitations of tracerouting, but the code is available for anyone who finds it worthwhile.

But this wasn’t really about Xtraceroute. What I wanted to show here is how someone lacking C and Vulkan development skills can actually use a tool like Claude to put together a working application, even one using more advanced stuff like Vulkan, which I know many besides me find daunting. I also found Claude really good at producing documentation and architecture documents for your application. It was also able to give me a working Meson build system and create all the desktop integration files for me, like the .desktop file, the metainfo file and so on. For the icons I ended up using Gemini, as Claude does not do image generation at this point, although it was able to take a PNG file and create an SVG version of it (although not a perfect likeness to the original PNG).

Another thing I want to say is that, the way I think about this, it does not make coding skills less valuable. AIs can do amazing things, but you need to keep a close eye on them to ensure the code they create actually does what you want, and that it does it in a sensible manner. For instance, in my reporting application I wanted to embed a PDF file, and Claude’s initial thought was to bring in WebKit to do the rendering. That would have worked, but it would have added a very big and complex dependency to my application, so I had to tell it that it could just use libpoppler instead, something Claude agreed was a much better solution. The bigger the codebase, the harder it also becomes for the AI to deal with it, but I think in those circumstances what you can do is use the AI to give you sample code for the functionality you want, in the programming language you want, and then work on incorporating that into your big application yourself.

The other part here, of course, in terms of open source, is how should contributors and projects deal with this? I know there are projects where AI-generated CVEs or patches are drowning them, and that helps nobody. But I think if we see AI as a developer’s tool, and accept that the developer using the tool is responsible for the code generated, then that mindset can help us navigate this. So if you used an AI tool to create a patch for your favourite project, it is your responsibility to verify that patch before sending it in, and by that I don’t mean just verifying the functionality it provides, but that the code is clean and readable and follows the coding standards of said upstream project. Maintainers, on the other hand, can use AI to help them review and evaluate patches quicker, so this can be helpful on both sides of the equation. I also found Claude and other AI tools like Gemini pretty good at generating test cases for the code they make, so this is another area where open source patch contributions can improve, by improving test coverage for the code.

I do also believe there are many areas where projects can greatly benefit from AI. For instance, in the GNOME project a constant challenge for extension developers has been keeping their extensions up-to-date; I do believe a tool like Claude or Gemini should be able to update GNOME Shell extensions quite easily. So maybe a service which tries to provide a patch each time there is a GNOME Shell update might be a great help there. At the same time, having an AI take a look at updated extensions and give a first review of the update might help reduce the load on the people doing code reviews on extensions and help flag problematic extensions.

I know that in a lot of cases and situations uploading your code to a web service like Claude, Gemini or Copilot is not something you want to or can do. I know privacy is a big concern for many people in the community. My team at Red Hat has been working on a code assistant tool using the IBM Granite model, called Granite.code. What makes Granite different is that it relies on having the model run locally on your own system, so you don’t send your code or data off somewhere else. This of course has great advantages in terms of privacy and security, but it has challenges too. The top-end AI models out there at the moment, of which Claude is probably the best at the time of writing this blog post, are running on hardware with vast resources in terms of computing power and memory. Most of us do not have those kinds of capabilities available at home, so the model size and performance will be significantly lower. So at the moment, if you are looking for a great open source tool to use with VS Code to do things like code completion, I recommend giving Granite.code a look. If you on the other hand want to do something like I have described here, you need to use something like Claude, Gemini or ChatGPT. I do recommend Claude, not just because I believe them to be the best at it at the moment, but because they are also a company trying to hold themselves to high ethical standards. Over time we hope to work with IBM and others in the community to improve local models, and I am also sure local hardware will keep improving, so the gap between a local model on your laptop and the big cloud-hosted models should at least shrink compared to what it is today.

There is also the middle-of-the-road option that will become increasingly viable, where you have a powerful server in your home or at your workplace that can at least host a midsize model, and you connect to that over your LAN. I know IBM is looking at that model for the next iteration of Granite models, where you can choose from a wide variety of sizes: some small enough to run on a laptop, others of a size where a strong workstation or small server can run them, and of course the biggest models for people able to invest in top-of-the-line hardware to run their AI.

Also, the AI space is moving blazingly fast; if you are reading this 6 months from now, I am sure the capabilities of online and local models will have changed drastically already.

So to all my friends in the Linux community, I ask you to take a look at AI and what it can do, and then let’s work together on improving it, not just in terms of capabilities, but also on figuring out things like the societal challenges around it and the sustainability concerns I know a lot of us have.

What’s next for this code

As I mentioned, while I felt I got it to a point where I proved to myself it worked, I am not planning on working any more on it. But I did make a cute little application for internal use that shows a spinning globe with all global Red Hat offices showing up as little red lights, and which pulls Red Hat news at the bottom. Not super useful either, but I was able to use Claude to refactor the globe rendering code from xtraceroute into this in just a few hours.

Red Hat Globe

Red Hat Offices Globe and news.

Christian Hergert

@hergertme

Week 30 Status

My approach to engineering involves an engineer’s notebook and pen at my side almost all the time. My ADHD is so bad that without writing things down I would very much not remember what I did.

Working at large companies can have a silencing effect on engineers in the community because all our communication energy is burnt on weekly status reports. You see this all the time, and it was famously expected behavior when FOSS people joined Google.

But it is not unique to Google and I certainly suffer from it myself. So I’m going to try to experiment for a while dropping my status reports here too, at least for the things that aren’t extremely specific to my employer.

Open Questions

  • What is the state-of-the-art right now for “I want to provide a completely artisan file-system to a container”? For example, say I wanted to provide a FUSE file-system that the build pipeline or other tooling accessed.

    At least when it comes to project sources. Everything else should be read-only anyway.

    It would be nice to allow tooling some read/write access but gate the writes so they are limited to the tools running and not persistent when the tool returns.

Foundry

  • A bit more testing of Foundry’s replacement for Jsonrpc-GLib, which is a new libdex-based FoundryJsonrpcDriver. It knows how to talk a few different framings (HTTP-style, \0 or \n delimited, etc.).

    The LSP backend has been ported to this now, along with all the JSON node creation helpers, so try to put those through their paces.

  • Add pre-load/post-load to FoundryTextDocumentAddin so that we can add hooks for addins early in the loading process. We actually want this more for avoiding things during buffer loading.

  • Found a nasty issue where creating addins was causing long-running leaks due to the GParameter arrays getting saved for future addin creation. Need to be a bit more clever about initial property setup so that we don’t create this reference cycle.

  • New word-completion plugin for Foundry that takes a different approach from what we did in GtkSourceView. Instead, this runs on demand with a copy of the document buffer on a fiber on a thread. This allows using regex for word boundaries (\w) with JIT, no synchronization with GTK, and is just generally _a lot_ faster (see the sketch after this list). It also allowed for following referenced files from #include-style headers in C/C++/Obj-C, which is something VIM does (at least with plugins) that I very much wanted.

    It is nice knowing when a symbol comes from the local file vs an included file as well (again, VIM does this) so I implemented that as well for completeness.

    Make sure it does word de-duplication while I’m at it.

  • Preserve completion activation (user initiated, automatic, etc.) to propagate to the completion providers.

  • Live diagnostics tracking is much easier now. You can just create a FoundryOnTypeDiagnostics(document) and it will manage updating things as you go. It is also smart enough to do this with GWeakRef so that we don’t keep underlying buffers/documents/etc alive past the point they should be unloaded (as the worker runs on a fiber).

    You can share a single instance of the live diagnostics using foundry_text_document_watch_diagnostics() to avoid extra work.

  • Add a Git-specific clone API in FoundryGitClone which handles all the annoying things like SSH authentication/etc via the use of our prompt abstraction (TTY, app dialog, etc). This also means there is a new foundry clone ... CLI command to test that infrastructure outside of the IDE. Should help for tracking down weird integration issues.

  • To make the Git cloner API work well I had to remove the context requirement from FoundryAuthPrompt. You’ll never have a loaded context when you want to clone (as there is not yet a project) so that requirement was nonsensical.

  • Add new foundry_vcs_list_commits_with_file() API to get the commit history on a single file. This gives you a list model of FoundryCommit which should make it very easy for applications to browse through file history. One call, bridge the model to a listview and wire up some labels.

  • Add FoundryVcsTree, FoundryVcsDiff, FoundryVcsDelta types and git implementations of them. Like the rest of the new Git abstractions, this all runs threaded using libdex and futures which complete when the thread returns. Still need to iterate on this a bit before the 1.0 API is finalized.

  • New API to generate diffs from trees or find trees by identifiers.

  • Found out that libgit2 does not support the bitmap index of the command line git command. That means that you have to do a lot of diffing to determine what commits contain a specific file. Maybe that will change in the future though. We could always shell out to the git command for this specific operation if it ends up too slow.

  • New CTags parser that allows for read-only memory. Instead of the optimization done in the past (inserting \0 and using strings in place), the new index keeps a string offset/run for a few important parts.

    Then the open-coded binary search to find the nearest partial match (walking backward afterwards to get the first potential match) can keep that in mind for memcmp().

    We can also send all this work off to the thread pools easily now with libdex/futures.

    Some work still remains if we want to use CTags for symbol resolution but I highly doubt we do.

    Anyway, having CTags is really more just about having an easy test case for the completion engine than “people will actually use this”.

  • Also write a new CTags miner which can build CTags files using whatever ctags engine is installed (universal-ctags, etc). The goal here is, again, to test the infrastructure in a super easy way rather than have people actually use this.

  • A new FoundrySymbolProvider and FoundrySymbol API which allows for some nice ergonomics when bridging to tooling like LSPs.

    It also makes it a lot easier to implement features like pathbars since you can foundry_symbol_list_to_root() and get a future-populated GListModel of the symbol hierarchy. Attach that to a pathbar widget and you’re done.
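
As promised in the word-completion item above, here is a rough sketch of that approach in plain Python rather than Foundry’s actual C/libdex code: take a snapshot of the buffer text, scan it for word candidates off the main loop using a regex word pattern, and de-duplicate while preserving order.

import re
from concurrent.futures import ThreadPoolExecutor

WORD_RE = re.compile(r"\w[\w-]*")

def extract_words(snapshot, prefix=""):
    # Collect unique word candidates from a copy of the buffer text.
    seen = {}
    for match in WORD_RE.finditer(snapshot):
        word = match.group(0)
        if word.startswith(prefix) and word not in seen:
            seen[word] = None  # dicts preserve insertion order
    return list(seen)

with ThreadPoolExecutor() as pool:
    future = pool.submit(extract_words, "foundry_init foundry_run foundry_init main", "foundry_")
    print(future.result())  # ['foundry_init', 'foundry_run']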

Foundry-GTK

  • Make FoundrySourceView final so that we can be a lot more careful about life-cycle tracking of related documents, buffers, and addins.

  • Use FoundryTextDocumentAddin to implement spellchecking with libspelling as it vastly improves life-cycle tracking. We no longer rely on UB in GLib weak reference notifications to do cleanup in the right order.

  • Improve the completion bridge from FoundryCompletionProvider to GtkSourceCompletionProvider. Particularly start on after/comment fields. We still need to get before fields setup for return types.

    Still extremely annoyed at how LSP works in this regard. I mean really, my rage that LSP is what we have has no bounds. It’s terrible in almost every way imaginable.

Builder

  • Make my Builder rewrite use new FoundrySourceView

  • Rewrite search dialog to use FoundrySearchEngine so that we can use the much faster VCS-backed file-listing + fuzzy search.

GtkSourceView

  • Got a nice patch for porting space drawing to GskPath, merged it.

  • Make Ctrl+n/Ctrl+p work in VIM emulation mode.

Sysprof

  • Add support for building introspection/docs. Don’t care about the introspection too much, because I doubt anyone would even use it. But it is nice to have documentation for potential contributors to look at how the APIs work from a higher level.

GUADEC

  • Couldn’t attend GUADEC this year, so wrote up a talk on Foundry to share with those that are interested in where things are going. Given the number of downloads of the PDF, decided that maybe sharing my weekly status round-up is useful.

  • Watched a number of videos streamed from GUADEC. While watching Felipe demo his new boxes work, I fired up the repository with foundry and things seem to work on aarch64 (Asahi Fedora here).

    That was the first time ever I’ve had an easy experience running a virtual machine on aarch64 Linux. Really pleasing!

foundry clone https://gitlab.gnome.org/felipeborges/boxes/
cd boxes/
foundry init
foundry run

LibMKS

  • While testing Boxes on aarch64 I noticed it is using the Cairo framebuffer fallback paintable. That would be fine except I’m running on 150% here and when I wrote that code we didn’t even have real fractional scaling in the Wayland protocol defined.

    That means there are stitch marks showing up for this non-accelerated path. We probably want to choose a tile-size based on the scale-factor and be done with it.

    The accelerated path shouldn’t have this problem since it uses one DMABUF paintable and sets the damage regions for the GSK renderer to do proper damage calculation.

Bastien Nocera

@hadess

Digitising CDs (aka using your phone as an image scanner)

I recently found, in the rain next to a book swap box, a pile of 90's “software magazines”, which I spent my evenings cleaning, drying, and sorting in the days afterwards.

Magazine cover CDs with nary a magazine 

Those magazines are a peculiar thing in France, using the mechanism of “Commission paritaire des publications et des agences de presse” or “Commission paritaire” for short. This structure exists to assess whether a magazine can benefit from state subsidies for the written press (whether on paper at the time, and also the internet nowadays), which include a reduced VAT charge (2.1% instead of 20%), reduced postal rates, and tax exemptions.

In the 90s, this was used by Diamond Editions[1] (a publisher related to tech shop Pearl, which French and German computer enthusiasts probably know) to publish magazines with just enough original text to qualify for those subsidies, bundled with the really interesting part, a piece of software on CD.

If you were to visit a French newsagent nowadays, you would be able to find other examples of this: magazines bundled with music CDs, DVDs or Blu-rays, or even toys or collectibles. Some publishers (including the infamous and now shuttered Éditions Atlas) will even get you a cheap kickstart to a new collection, with the first few issues (and collectibles) available at very interesting prices of a couple of euros, before making that “magazine” subscription-only, with each issue being increasingly more expensive (article from a consumer protection association).


Other publishers have followed suit.

I guess you can only imagine how much your scale model would end up costing with that business model (50 eurocent for the first part, 4.99€ for the second), although I would expect them to have given up the idea of being categorised as “written press”.

To go back to Diamond Editions, this meant the eventual birth of 3 magazines: Presqu'Offert, BestSellerGames and StratéJ. I remember me or my dad buying a few of those: an older but legit and complete version of ClarisWorks or CorelDraw, or a talkie version of a LucasArts point'n'click, was certainly a more interesting proposition than a cut-down warez version full of viruses when the budget was tight.

3 of the magazines I managed to rescue from the rain

You might also be interested in the UK “covertape wars”.

Don't stress the technique 

This brings us back to today and while the magazines are still waiting for scanning, I tried to get a wee bit organised and digitising the CDs.

Some of them have printing that covers the whole of the CD, but a fair few use the foil/aluminium backing of the CD as a blank surface, which will give you pretty bad results when scanning them with a flatbed scanner: the light source keeps moving with the sensor, and what you'll be scanning is the sensor's reflection on the CD.

My workaround for this is to use a digital camera (my phone's 24MP camera), with a white foam board behind it, so the blank parts appear more light grey. Of course, this means that you need to take the picture from an angle, and that the CD will appear as an oval instead of perfectly circular.

I tried for a while to use GIMP's perspective tools, and “Multimedia” Mike Melanson's MobyCAIRO rotation and cropping tool. In the end, I settled on Darktable, which allowed me to do 4-point perspective deskewing; I just had to have those reference points.

So I came up with a simple "deskew" template, which you can print yourself, although you could probably achieve similar results with grid paper.

My janky setup

The resulting picture 

After opening your photo with Darktable and selecting the “darkroom” tab, go to the “rotate and perspective” tool, select the “manually defined rectangle” structure, and adjust the rectangle to match the centers of the 4 deskewing targets. Then click on “horizontal/vertical fit”. This will give you a squished CD; don't worry, just select the “specific” lens model and voilà.

Tools at the ready

Targets acquired


 Straightened but squished

You can now export the processed image (I usually use PNG to avoid data loss at each step), open things up in GIMP and use the ellipse selection tool to remove the background (don't forget the center hole), the rotate tool to make the writing straight, and the crop tool to crop it to size.

And we're done!


 The result of this example is available on Archive.org, with the rest of my uploads being made available on Archive.org and Abandonware-Magazines for those 90s magazines and their accompanying CDs.

[1]: Full disclosure, I wrote a couple of articles for Linux Pratique and Linux Magazine France in the early 2000s, that were edited by that same company.

Thoughts during GUADEC 2025

Greetings readers of the future from my favourite open technology event of the year. I am hanging out with the people who develop the GNOME platform talking about interesting stuff.

Being realistic, I won’t have time to make a readable writeup of the event. So I’m going to set myself a challenge: how much can I write up of the event so far, in 15 minutes?

Let’s go!

Conversations and knowledge

Conferences involve a series of talks, usually monologues on different topics, with slides and demos. A good talk leads to multi-way conversations.

One thing I love about open source is: it encourages you to understand how things work. Big tech companies want you to understand nothing about your devices beyond how to put in your credit card details and send them money. Sharing knowledge is cool, though. If you know how things work then you can fix it yourself.

Structures

Last year, I also attended the conference and was left with a big question for the GNOME project: “What is our story?” (Inspired by an excellent keynote from Ryan Sipes about the Thunderbird email app, and how it’s supported by donations).

We didn’t answer that directly, but I have some new thoughts.

Open source desktops are more popular than ever. Apparently we have like 5% of the desktop market share now. Big tech firms are nowadays run as huge piles of cash, whose story is that they need to make more cash, in order to give it to shareholders, so that one day you can, allegedly, have a pension. Their main goal isn’t to make computers do interesting things. The modern for-profit corporation is a super complex institution, with great power, which is often abused.

Open communities like GNOME are an antidote to that. With way fewer people, they nevertheless manage to produce better software in many cases, but in a way that’s demanding, fun, chaotic, mostly leaderless and which frequently burns out volunteers who contribute.

Is the GNOME project’s goal to make computers do interesting things? For me, the most interesting part of the conference so far was the focus on project structure. I think we learned some things about how independent, non-profit communities can work, and how they can fail, and how we can make things better.

In a world where political structures are being heavily tested and, in many cases, are crumbling, we would do well to talk more about structures, and to introspect a bit more on what works and what doesn’t. And to highlight the amazing work that the GNOME Foundation’s many volunteer directors have achieved over the last 30 years to create an institution that still functions today, and in many ways functions a lot better than organizations with significantly more resources.

Relevant talks

  • Steven Deobald’s keynote
  • Emmanuele’s talk on teams

Teams

Emmanuele Bassi tried, in a friendly way, to set fire to long-standing structures around how the GNOME community agrees (and disagrees) on changes to the platform, based on ideas from other successful projects driven by independent, non-profit communities, such as the Rust and Python programming languages.

Part of this idea is to create well-defined teams of people who collaborate on different parts of the GNOME platform.

I’ve been contributing to GNOME in different ways for a loooong time, partly due to my day job, where I sometimes work with the technology stack, and partly because it’s a great group of people; we get to meet around the world once a year and make software that’s a little more independent from the excesses and the exploitation of modern capitalism, or technofeudalism.

And I think it’s going to be really helpful to organize my contributions according to a team structure with a defined form.

Search

I really hope we’ll have a search team.

I don’t have much news about search. GNOME’s indexer (localsearch) might start indexing the whole home directory soon. Carlos Garnacho continues to heroically make it work really well.

QA / Testing / Developer Experience

I did a talk at the conference (and half of another one with Martín Abente Lahaye) about end-to-end testing using openQA.

The talks were pretty successful; they led to some interesting conversations with new people. I hope we’ll continue to grow the Linux QA call and keep these conversations going, sharing knowledge and creating better structures so that paid QA engineers who are testing products built with GNOME can collaborate on testing upstream.

Freeform notes

I’m 8 minutes over time already so the rest of this will be freeform notes from my notepad.

Live-coding streams aren’t something I watch or create. It’s an interesting way to share knowledge with the new generation of people who have grown up with internet videos as a primary knowledge source. I don’t have age stats for this blog, but I’m curious how many readers under 30 have read this far down. (Leave a comment if you read this and prove me wrong! :-)

systemd-sysexts for development are going to catch on.

There should be karaoke every year.

Fedora Silverblue isn’t actively developed at the moment. bootc is something to keep an eye on.

GNOME Shell Extensions are really popular and are a good “gateway drug” to get newcomers involved. Nobody figured out a good automated testing story for these yet. I wonder if there’s a QA project there? I wonder if there’s a low-cost way to allow extension developers to test extensions?

Legacy code is “code without tests”. I’m not sure I agree with that.

“Toolkits are transient, apps are forever”. That’s spot-on.

There is a spectrum between being a user and a developer. It’s not a black-and-white distinction.

BuildStream is still difficult to learn and the documentation isn’t a helpful getting started guide for newcomers.

We need more live demos of accessibility tools. I still don’t know how you use the screen reader. I’d like to have the computer read to me.

That’s it for now. It took 34 minutes to empty my brain into my blog, more than planned, but a necessary step. Hope some of it was interesting. See you soon!

Nick Richards

@nedrichards

Octopus Agile Prices For Linux

I’m on the Octopus Agile electricity tariff, where the price changes every half hour based on wholesale costs. This is great for saving money and using less carbon intensive energy, provided you can shift your heavy usage to cheaper times. With a family that insists on eating at a normal hour, that mostly means scheduling the dishwasher and washing machine.

The snag was not having an easy way to see upcoming prices on my Linux laptop. To scratch that itch, I built a small GTK app: Octopus Agile Energy. You can use it yourself if you’re in the UK and have this electricity tariff. Either install it directly from Flathub or download the source code and ‘press play’ in GNOME Builder. The app is heavily inspired by the excellent Octopus Compare for mobile, but I stripped the concept back to a single job: what’s the price now and for the next 24 hours? This felt right for a simple desktop utility and was achievable with a bit of JSON parsing and some hand waving.
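
For a sense of the JSON parsing involved, here is a minimal Python sketch (not the app’s actual code) that fetches upcoming half-hourly unit rates from the public Octopus Energy API; the product and tariff codes below are examples only, since they vary by Agile tariff version and region.

import json
import urllib.request

PRODUCT = "AGILE-FLEX-22-11-25"        # example Agile product code
TARIFF = "E-1R-AGILE-FLEX-22-11-25-C"  # example tariff code for one region
URL = ("https://api.octopus.energy/v1/products/%s/electricity-tariffs/%s/standard-unit-rates/"
       % (PRODUCT, TARIFF))

def upcoming_rates():
    # Returns (valid_from, price in p/kWh including VAT) pairs.
    with urllib.request.urlopen(URL, timeout=10) as resp:
        data = json.load(resp)
    return [(r["valid_from"], r["value_inc_vat"]) for r in data["results"]]

for valid_from, price in upcoming_rates()[:8]:
    print(valid_from, price)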

Screenshot of the Octopus Agile Energy app showing the current electricity price and a graph of future prices

I wrote a good chunk of the Python for it with the gemini-cli, which was a pleasant surprise. My workflow was running Gemini in a Toolbx container, building on my Silverblue desktop with GNOME Builder, and manually feeding back any errors. I kept myself in the loop, taking my own screenshots of visual issues rather than letting the model run completely free and using integrations like gnome-mcp-server to inspect itself.

It’s genuinely fun to make apps with GTK 4, libadwaita, and Python. The modern stack has a much lower barrier to entry than the GTK-based frameworks I’ve worked on in the past. And while I have my reservations about cloud-hosted AI, using this kind of technology feels like a step towards giving users more control over their computing, not less. Of course, the 25 years of experience I have in software development helped bridge the gap between a semi-working prototype that only served one specific pricing configuration, didn’t cache anything and was constantly re-rendering; and an actual app. The AI isn’t quite there yet at all, but the potential is there and a locally hosted system by and for the free software ecosystem would be super handy.

I hope the app is useful. Whilst I may well make some tweaks or changes, this does exactly what I want, and I’d encourage anyone interested to fork the code and build something that makes them happy.

Nancy Nyambura

@nwnyambura

Outreachy Update: Understanding and Improving def-extractor.py

Introduction

Over the past couple of weeks, I have been working on understanding and improving def-extractor.py, a Python script that processes dictionary data from Wiktionary to generate word lists and definitions in structured formats. My main task has been to refactor the script to use configuration files instead of hardcoded values, making it more flexible and maintainable.

In this blog post, I’ll explain:

  1. What the script does
  2. How it works under the hood
  3. The changes I made to improve it
  4. Why these changes matter

What Does the Script Do?

At a high level, this script processes huge JSONL (JSON Lines) dictionary dumps, like the ones from Kaikki.org, and filters them down into clean, usable formats.

The def-extractor.py script takes raw dictionary data (from Wiktionary) and processes it into structured formats like:

  • Filtered word lists (JSONL)
  • GVariant binary files (for efficient storage)
  • Enum tables (for parts of speech & word tags)

It was originally designed to work with specific word lists (Wordnik, Broda, and a test list), but my goal is to make it configurable so it can support any word list with a simple config file.

How It Works (Step by Step)

1. Loading the Word List

The script starts by loading a word list (e.g., Wordnik’s list of common English words). It filters out invalid words (too short, contain numbers, etc.) and stores them in a hash table for quick lookup.

2. Filtering Raw Wiktionary Data

Next, it processes a massive raw-wiktextract-data.jsonl file (the Wiktionary dump) and keeps only entries that:

  • Match words from the loaded word list
  • Are in the correct language (e.g., English)

3. Generating Structured Outputs

After filtering, the script creates:

  • Enum tables (JSON files listing parts of speech & word tags)
  • GVariant files (binary files for efficient storage and fast lookup)

What Changes have I Made?

1. Added Configuration Support

Originally, the script used hardcoded paths and settings. I modified it to read from config files, allowing users to define:

  • Source word list file
  • Output directory
  • Word validation rules (min/max length, allowed characters)

Before (Hardcoded):

WORDNIK_LIST = "wordlist-20210729.txt"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

After (Configurable):

[Word List]
Source = my-wordlist.txt
MinLength = 2
MaxLength = 20

2. Improved File Path Handling

Instead of hardcoding paths, the script now constructs them dynamically:

output_path = os.path.join(config.word_lists_dir, f"{config.id}-filtered.jsonl")
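
As a rough illustration of the direction (the Config class and the OutputDir key here are hypothetical and simply mirror the example above, not the script’s actual API), loading these settings with Python’s standard configparser could look something like this:

import configparser
import os

class Config:
    def __init__(self, path):
        parser = configparser.ConfigParser()
        parser.read(path)
        section = parser["Word List"]
        self.id = os.path.splitext(os.path.basename(path))[0]
        self.source = section["Source"]
        self.min_length = section.getint("MinLength", fallback=2)
        self.max_length = section.getint("MaxLength", fallback=20)
        self.word_lists_dir = section.get("OutputDir", fallback="word-lists")

    def is_valid(self, word):
        # Validation rules come from the config instead of hardcoded constants.
        return word.isalpha() and self.min_length <= len(word) <= self.max_length

config = Config("my-wordlist.conf")
output_path = os.path.join(config.word_lists_dir, config.id + "-filtered.jsonl")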

Why Do These Changes Matter?

Flexibility – Now supports any word list via config files.
Maintainability – No more editing code to change paths or rules.
Scalability – Easier to add new word lists or languages.
Consistency – All settings are in config files.

Next Steps?

1. Better Error Handling

I am working on adding checks for:

  • Missing config fields
  • Invalid word list files
  • Incorrectly formatted data

2. Unified Word Loading Logic

There are separate functions (load_wordnik() and load_broda()).

I want to merge them into one load_words(config) that works for any word list.

3. Refactor Legacy Code for Better Structure

Try It Yourself

  1. Download the script: [wordlist-Gitlab]
  2. Create a .conf config file
  3. Run: python3 def-extractor.py --config my-wordlist.conf filtered-list

Happy coding!

Hari Rana

@theevilskeleton

GNOME Calendar: A New Era of Accessibility Achieved in 90 Days

Note

Please consider supporting my effort in making GNOME apps accessible for everybody. Thanks!

Introduction

There is no calendaring app that I love more than GNOME Calendar. The design is slick, it works extremely well, it is touchpad friendly, and best of all, the community around it is just full of wonderful developers, designers, and contributors worth collaborating with, especially with the recent community growth and engagement over the past few years. Georges Stavracas and Jeff Fortin Tam are some of the best maintainers I have ever worked with. I cannot express how thankful I am for Jeff’s underappreciated, superhuman ability to voluntarily coordinate huge initiatives and issue trackers.

One of Jeff’s many initiatives is gnome-calendar#1036: the accessibility initiative, which is a big and detailed list of issues related to accessibility. In my opinion, GNOME Calendar’s biggest problem was the lack of accessibility support, which made the app completely unusable for people exclusively using a keyboard, or people relying on assistive technologies.

This article will explain in details about the fundamental issues that held back accessibility in GNOME Calendar since the very beginning of its existence (12 years at a minimum), the progress we have made with accessibility as well as our thought process in achieving it, and the now and future of accessibility in GNOME Calendar.

Calendaring Complications

On a desktop or tablet form factor, GNOME Calendar has a month view and a week view, both of which are a grid made up of cells representing a time frame. In the month view, each row is a week and each cell is a day. In the week view, the time frame within cells varies with the zoom level.

There are mainly two reasons that made GNOME Calendar inaccessible: firstly, GTK’s accessibility tree does not cover the logically and structurally complicated workflow and design of a typical calendaring app; and secondly, the significant negative implications for accessibility of reducing as much overhead as possible.

Accessibility Trees Are Insufficient for Calendaring Apps

GTK’s accessibility tree, or rather any accessibility tree, is rendered insufficient for calendaring apps, mainly because events are extremely versatile. Tailoring the entire interface and experience around that versatility pushes us to explore alternate and custom structures.

Events are highly flexible, because they are time-based. An event can last a couple of minutes, but it can as well last for hours, days, weeks, or even months. It can start in the middle of a day and end on the upcoming day; it can start by the end of a week and end at the beginning of the upcoming week. Essentially, events are limitless, just like time itself.

Since events can last more than a day, cell widgets cannot hold a meaningful link with event widgets, because otherwise event widgets would not be capable of spanning across cells. As such, event widgets are overlaid on top of cell widgets and positioned based on the coordinates, width, and height of each widget.

As a consequence, the visual representation of GNOME Calendar is fundamentally incompatible with accessibility trees. GNOME Calendar’s month and week views are visually 2.5-dimensional: a grid layout by itself is structurally two-dimensional, but overlaying event widgets that are capable of spanning across cells adds an additional layer. Conversely, accessibility trees are fundamentally two-dimensional, so GNOME Calendar’s visual representation cannot be sufficiently adapted into a two-dimensional logical tree.

In summary, accessibility trees are insufficient for calendaring apps because the versatility and high requirements of events prevent us from linking cell widgets with event widgets; event widgets are instead overlaid on top, making the visual representation 2.5-dimensional, and that additional layer makes it fundamentally impossible to adapt to a two-dimensional accessibility tree.

Negative Implications of Accessibility due to Maximizing Performance

Unlike the majority of apps, GNOME Calendar’s layout and widgetry consist of custom widgets and complex calculations according to several factors, such as:

  • the size of the window;
  • the height and width of each cell widget to figure out if one or more event widgets can perceptibly fit inside a cell;
  • the position of each event widget to figure out where to position the event widget, and where to reposition all the event widgets around it if necessary;
  • what went wrong in my life to work on a calendaring app written in C.

Due to these complex calculations, along with the fact that it is also possible to have tens, hundreds, or even thousands of events, nearly every calendar app relies on maximizing performance as much as possible, while being at the mercy of the framework or toolkit. Furthermore, GNOME Calendar supports smooth scrolling and kinetic scrolling, so each event and cell widget’s position needs to be recalculated for every pixel when the user scrolls or swipes with a mouse or touchpad.

One way to minimize that problem is by creating custom widgets that are minimal and only fulfill the purpose we absolutely need. However, this comes at the cost of needing to reimplement most functionality, including most, if not all accessibility features and semantics, such as keyboard focus, which severely impacted accessibility in GNOME Calendar.

While GTK’s widgets are great for general purpose use-cases and do not have any performance impact with limited instances of them, performance starts to deteriorate on weaker systems when there are hundreds, if not thousands of instances in the view, because they contain a lot of functionality that event widgets may not need.

In the case of the GtkButton widget, it has a custom multiplexer, it applies different styles for different child types, it implements the GtkActionable interface for custom actions, and more technical characteristics. Other functionality-based widgets will have more capabilities that might impact performance with hundreds of instances.

To summarize, GNOME Calendar reduces overhead by creating minimal custom widgets that fulfill a specific purpose. This unfortunately severely impacted accessibility throughout the app and made it unusable with a keyboard, as some core functionalities, accessibility features and semantics were never (re)implemented.

Improving the Existing Experience

Despite being inaccessible as an app altogether, not every aspect was inaccessible in GNOME Calendar. Most areas throughout the app worked with a keyboard and/or assistive technologies, but they needed some changes to improve the experience. For this reason, this section is reserved specifically for mentioning the aspects that underwent a lot of improvements.

Improving Focus Rings

The first major step was to improve the focus ring situation throughout GNOME Calendar. Since the majority of widgets are custom widgets, many of them require manually applied focus rings. !563 addresses that by declaring custom CSS properties to use as a base for focus rings. !399 tweaks the style of the reminders popover in the event editor dialog, with the addition of a focus ring.

We changed the behavior of the event notes box under the “Notes” section in the event editor dialog. Every time the user focuses on the event notes box, the focus ring appears and outlines the entire box until the user leaves focus. This was accomplished by subclassing AdwPreferencesRow to inherit its style, then applying the .focused class whenever the user focuses on the notes.

Improving the Calendar Grid

The calendar grid on the sidebar suffered from several issues when it came to keyboard navigation, namely:

  • pressing Tab would focus the next cell in the grid, up until the last cell;
  • when out of bounds, there would be no auditory feedback;
  • on the last row, pressing the Down arrow key would focus a blank element; and
  • pressing the Right arrow key in left-to-right languages, or the Left arrow key in right-to-left languages, on the last column would move focus to a completely different widget.

While the calendar grid could be interacted with using a keyboard, the keyboard experience was far from desired. !608 addresses these issues by overriding the Gtk.Widget.focus () virtual method. Pressing Tab or Shift+Tab now skips the entire grid, and the grid is wrapped to allow moving focus between the first and last columns with the Left and Right arrow keys, while notifying the user when out of bounds.

Improving the Calendar List Box

Note

The calendar list box holds a list of available calendars, all of which can be displayed or hidden from the week view and month view. Each row is a GtkListBoxRow that holds a GtkCheckButton.

The calendar list box had several problems in regards to keyboard navigation and the information each row provided to assistive technologies.

The user was required to press Tab a second time to get to the next row in the list. To elaborate: pressing Tab once focused the row; pressing it another time moved focus to the check button within the row (bad); and finally, pressing it a third time focused the next row.

Row widgets had no actual purpose besides toggling the check button upon activation. Similarly, the only use for a check button widget inside each row was to display the “check mark” icon if the calendar was displayed. This meant that the check button widget held all the desired semantics, such as the “checkbox” role and the “checked” state; but worst of all, it was getting focus. Essentially, the check button widget was handling responsibilities that should have been handled by the row.

Both inconveniences were addressed by !588. The check button widget was replaced with a check mark icon using GtkImage, a widget that does not grab focus. The accessible role of the row widget was changed to “checkbox”, and the code was adapted to handle the “checked” state.
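
A rough PyGObject sketch of that structure (the actual change in !588 is C code inside GNOME Calendar; this only illustrates the roles involved): the row itself carries the “checkbox” role, and the check mark is a plain GtkImage, which never grabs keyboard focus.

import gi
gi.require_version("Gtk", "4.0")
from gi.repository import Gtk

def make_calendar_row(calendar_name):
    # The row, not an inner check button, exposes the checkbox semantics.
    row = Gtk.ListBoxRow(accessible_role=Gtk.AccessibleRole.CHECKBOX)
    box = Gtk.Box(spacing=6)
    box.append(Gtk.Label(label=calendar_name, hexpand=True, xalign=0))
    # A non-focusable image replaces the old GtkCheckButton.
    box.append(Gtk.Image.new_from_icon_name("object-select-symbolic"))
    row.set_child(box)
    # The real code also updates the row's "checked" accessible state
    # whenever the calendar is shown or hidden.
    return row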

Implementing Accessibility Functionality

Accessibility is often absolute: there is no ‘in-between’ state; either the user can access functionality, or they cannot, which can potentially make the app completely unusable. This section goes in depth with the widgets that were not only entirely inaccessible but also rendered GNOME Calendar completely unusable with a keyboard and assistive technology.

Making the Event Widget Accessible

Note

GcalEventWidget, the name of the event widget within GNOME Calendar, is a colored rectangular toggle button containing the summary of an event.

Activating it displays a popover with additional details for that event.

GcalEventWidget subclasses GtkWidget.

The biggest problem in GNOME Calendar, which also made it completely impossible to use the app with a keyboard, was the lack of a way to focus and activate event widgets with a keyboard. Essentially, one would be able to create events, but there would be no way to access them in GNOME Calendar.

Quite literally, this entire saga began all thanks to a dream I had, which was to make GcalEventWidget subclass GtkButton instead of GtkWidget directly. The thought process was: GtkButton already implements focus and activation with a keyboard, so inheriting it should therefore inherit focus and activation behavior.

In merge request !559, the initial implementation indeed subclassed GtkButton. However, that implementation did not go through, due to the reason outlined in § Negative Implications of Accessibility due to Maximizing Performance.

Despite that, the initial implementation significantly helped us figure out exactly what was missing in GcalEventWidget: specifically, setting the Gtk.Widget:receives-default and Gtk.Widget:focusable properties to “True”. Gtk.Widget:receives-default makes it so the widget can be activated however desired, and Gtk.Widget:focusable allows it to become focusable with a keyboard. So, instead of subclassing GtkButton, we reimplemented GtkButton’s functionality in order to maintain performance.
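
As a rough PyGObject sketch of just those two properties (GNOME Calendar’s real widget is written in C and reimplements much more on top of this):

import gi
gi.require_version("Gtk", "4.0")
from gi.repository import Gtk

class EventWidgetSketch(Gtk.Widget):
    def __init__(self, summary):
        # "focusable" lets the widget take keyboard focus;
        # "receives-default" lets it be activated like a button.
        super().__init__(focusable=True, receives_default=True)
        self.summary = summary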

While preliminary support for keyboard navigation was added to GcalEventWidget, accessible semantics for assistive technologies like screen readers were severely lacking. This was addressed by !587, which sets the role to “toggle-button” to convey that GcalEventWidget is a toggle button. The merge request also indicates that the widget has a popup for the event popover, and adds the means to update the “pressed” state of the widget.

In summary, we first made GcalEventWidget accessible with a keyboard by reimplementing some of GtkButton’s functionality. Then, we later added the means to appropriately convey information to assistive technologies. This was the worst offender, and was the primary reason why GNOME Calendar was unusable with a keyboard, but we finally managed to solve it!

Making the Month and Year Spin Buttons Accessible

Note

GcalMultiChoice is the name of the custom spin button widget used for displaying and cycling through months and/or years.

It comprises a “decrement” button at the start, a flat toggle button in the middle containing a label that displays the value, and an “increment” button at the end. Only the button in the middle can gain keyboard focus throughout GcalMultiChoice.

In some circumstances, GcalMultiChoice can display a popover for increased granularity.

GcalMultiChoice was not interactable with a keyboard, because:

  1. it did not react to the Up and Down arrow keys; and
  2. the “decrement” and “increment” buttons were not focusable.

For a spin button widget, the “decrement” and “increment” buttons should generally remain unfocusable, because the Up and Down arrow keys already accomplish that behavior. Furthermore, GtkSpinButton’s “increment” (+) and “decrement” (-) buttons are not focusable either, and the Date Picker Spin Button example in the ARIA Authoring Practices Guide (APG) avoids that functionality as well.

However, since GcalMultiChoice did not react to the Up and Down arrow keys, having the “decrement” and “increment” buttons be focusable would have been a somewhat acceptable workaround. Unfortunately, since those buttons were not focusable and the Up and Down arrow keys were not supported, it was impossible to increment or decrement values in GcalMultiChoice with a keyboard without resorting to workarounds.

Additionally, GcalMultiChoice lacked the semantics to communicate with assistive technologies. So, for example, a screen reader would never say anything meaningful.

All of the above problems remained problems until merge request !603. For starters, it implements GtkAccessible and GtkAccessibleRange, and then implements keyboard navigation.

Implementing GtkAccessible and GtkAccessibleRange

The merge request implements the GtkAccessible interface to retrieve information from the flat toggle button.

Fundamentally, since the toggle button was the only widget capable of gaining keyboard focus throughout GcalMultiChoice, this caused two distinct problems.

The first issue was that assistive technologies only retrieved semantic information from the flat toggle button, such as the type of widget (accessible role), its label, and its description. However, the toggle button was semantically just a toggle button; since it contained semantics and provided information to assistive technologies, the information it provided was actually misleading, because it only provided information as a toggle button, not a spin button!

So, the solution is to strip the semantics from the flat toggle button. Setting its accessible role to “none” makes assistive technologies ignore its information. Then, setting the accessible role of the top-level (GcalMultiChoice) to “spin-button” gives it semantic meaning, which allows the widget to appropriately convey this information when focused.
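In GTK4 terms, that boils down to something like the following sketch; the names are illustrative and this is not the actual GcalMultiChoice code.

#include <gtk/gtk.h>

/* Called from the composite widget's class_init (hypothetical helper):
 * advertise the whole widget as a spin button. */
static void
advertise_spin_button_role (GtkWidgetClass *widget_class)
{
  gtk_widget_class_set_accessible_role (widget_class,
                                        GTK_ACCESSIBLE_ROLE_SPIN_BUTTON);
}

/* Create the inner flat toggle button with its semantics stripped, so
 * assistive technologies ignore it. */
static GtkWidget *
create_flat_toggle_button (void)
{
  return g_object_new (GTK_TYPE_TOGGLE_BUTTON,
                       "accessible-role", GTK_ACCESSIBLE_ROLE_NONE,
                       NULL);
}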

This led to the second issue: Assistive technologies only retrieved information from the flat toggle button, not from the top-level. Generally, assistive technologies retrieve information from the focused widget. Since the toggle button was the only widget capable of gaining focus, it was also the only widget providing information to them; however, since its semantics were stripped, it had no information to share, and thus assistive technologies would retrieve absolutely nothing.

The solution to this is to override the Gtk.Accessible.get_platform_state () virtual method, which lets us bridge the states of the child widgets and the top-level widget. In this case, GcalMultiChoice and the flat toggle button share the state: if the flat toggle button is focused, then GcalMultiChoice is considered focused, and assistive technologies can then retrieve its information and state.
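A minimal sketch of such an override might look like this. It is illustrative only, not the actual GcalMultiChoice code, and the child lookup simply pretends the toggle button is the first child; the real GtkSpinButton override is shown further below.

#include <gtk/gtk.h>

/* Hypothetical accessor: in the real widget this would return the internal
 * flat toggle button; here we pretend it is simply the first child. */
static GtkWidget *
get_flat_toggle_button (GtkWidget *multi_choice)
{
  return gtk_widget_get_first_child (multi_choice);
}

/* Sketch: report the child's focus state as the top-level's platform state,
 * so assistive technologies treat the whole widget as focused. */
static gboolean
multi_choice_accessible_get_platform_state (GtkAccessible              *self,
                                            GtkAccessiblePlatformState  state)
{
  GtkWidget *toggle = get_flat_toggle_button (GTK_WIDGET (self));

  switch (state)
    {
    case GTK_ACCESSIBLE_PLATFORM_STATE_FOCUSABLE:
      return gtk_widget_get_focusable (toggle);
    case GTK_ACCESSIBLE_PLATFORM_STATE_FOCUSED:
      return gtk_widget_has_focus (toggle);
    case GTK_ACCESSIBLE_PLATFORM_STATE_ACTIVE:
    default:
      return FALSE;
    }
}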

The last issue that needed to be addressed was that GcalMultiChoice was still not providing its values to assistive technologies. The solution is straightforward: implement the GtkAccessibleRange interface, which makes it necessary to set values for the following accessible properties: “value-max”, “value-min”, “value-now”, and “value-text”.
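Setting those properties can then be as simple as the following sketch, again with illustrative names rather than the actual code from !603.

#include <gtk/gtk.h>

/* Sketch: publish the range and current value of a spin-button-like widget
 * to assistive technologies. */
static void
multi_choice_update_accessible_value (GtkAccessible *accessible,
                                      double         min,
                                      double         max,
                                      double         now,
                                      const char    *text)
{
  gtk_accessible_update_property (accessible,
                                  GTK_ACCESSIBLE_PROPERTY_VALUE_MIN, min,
                                  GTK_ACCESSIBLE_PROPERTY_VALUE_MAX, max,
                                  GTK_ACCESSIBLE_PROPERTY_VALUE_NOW, now,
                                  GTK_ACCESSIBLE_PROPERTY_VALUE_TEXT, text,
                                  -1);
}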

After all this effort, GcalMultiChoice now provides correct semantics to assistive technologies. It appropriately reports its role, the current textual value, and whether it contains a popover.

To summarize:

  • The flat toggle button was the only widget conveying information to assistive technologies, as it was the only widget capable of gaining focus and providing semantic information. To solve this, its semantics were stripped away.
  • The top-level, GcalMultiChoice, was assigned the “spin-button” role to provide semantics; however, it was still incapable of providing information to assistive technologies, because it never gained focus. To solve this, the state of the toggle button, including the focused state, was carried over to the top-level so that assistive technologies could retrieve information from it.
  • GcalMultiChoice still did not provide its values to assistive technologies. This is solved by implementing the GtkAccessibleRange interface.

Providing Top-Level Semantics to a Child Widget As Opposed to the Top-Level Widget Is Discouraged

As you read through the previous section, you may have asked yourself: “Why go through all of those obstacles and complications when you could have just re-assigned the flat toggle button as ‘spin-button’ and not worried about the top-level’s role and focus state?”

Semantics should be provided by the top-level, because they are represented by the top-level. What makes GcalMultiChoice a spin button is not just the flat toggle button, but the combination of all the child widgets/objects, event handlers (touch, key presses, and other inputs), accessibility attributes (role, states, relationships), widget properties, signals, and other characteristics. As such, we want to maintain that consistency for practically everything, including the state. The only exception is widgets whose sole purpose is to contain one or more elements, such as GtkBox.

This is especially important when the widget needs to communicate with other widgets and APIs, such as the Gtk.Widget::state-flags-changed signal, the Gtk.Widget.is_focus () method, and other APIs where the top-level must represent data accurately and behave predictably. In the case of GcalMultiChoice, we set accessible labels at the top-level. If we were to re-assign the flat toggle button’s role as “spin-button” and set the accessible label on the top-level, assistive technologies would only retrieve information from the toggle button while ignoring the labels defined at the top-level.

For the record, GtkSpinButton also overrides Gtk.Accessible.get_platform_state ():

static gboolean
gtk_spin_button_accessible_get_platform_state (GtkAccessible              *self,
                                               GtkAccessiblePlatformState  state)
{
  return gtk_editable_delegate_get_accessible_platform_state (GTK_EDITABLE (self), state);
}

static void
gtk_spin_button_accessible_init (GtkAccessibleInterface *iface)
{
  /* … */

  iface->get_platform_state = gtk_spin_button_accessible_get_platform_state;
}

To be fair, assigning the “spin-button” role to the flat toggle button is unlikely to cause major issues, especially for an app. Re-assigning the flat toggle button was my first instinct, and the initial implementation did just that; I was completely unaware of the Gtk.Accessible.get_platform_state () virtual method before finalizing the merge request, so I initially thought that was the correct way to do it. Even if the toggle button had the “spin-button” role instead of the top-level, it would not have stopped us from implementing workarounds, such as a getter method that retrieves the flat toggle button so we can manipulate it directly.

In summary, we want to provide semantics at the top-level, because they are structurally part of it. This comes with the benefit of making the widget easier to work with, because APIs can directly communicate with it, instead of resorting to workarounds.

The Now and Future of Accessibility in GNOME Calendar

All these accessibility improvements will be available in GNOME 49, but you can already download and install the pre-release from the “Nightly GNOME Apps” DLC Flatpak remote at nightly.gnome.org.

In the foreseeable future, I want to continue working on !564, to make the month view itself accessible with a keyboard, as seen in the following:

A screen recording demoing keyboard navigation within the month view. Focus rings appear and disappear as the user moves focus between cells. Going out of bounds in the vertical axis scrolls the view to the direction, and going out of bounds in the horizontal axis moves focus to the logical sibling.

However, it is already adding 640 lines of code, and I can only see that number increasing over time. We also want to make cells in the week view accessible, but that will be another monstrous merge request, just like the one above.

Most importantly, we want (and need) to collaborate and connect with people who rely on assistive technologies to use their computer, especially since nobody currently working on GNOME Calendar relies on assistive technologies themselves.

Conclusion

I am overwhelmingly satisfied with the progress we have made on accessibility in GNOME Calendar over the past six months. Just a year ago, if I had been asked what needed to be done to incorporate accessibility features into GNOME Calendar, I would have shamefully said “dude, I don’t know where to even begin”; but as of today, we have somehow managed to turn GNOME Calendar into an actual, usable calendaring app for people who rely on assistive technologies and/or a keyboard.

Since this is still Disability Pride Month, and GNOME 49 is not out yet, I encourage you to get the alpha release of GNOME Calendar on the “Nightly GNOME Apps” Flatpak remote at nightly.gnome.org. The alpha release is in a state where the gays with disabilities can organize and do crimes using GNOME Calendar 😎 /j

Philip Withnall

@pwithnall

A brief parental controls update

Over the past few weeks, Ignacy and I have made good progress on the next phase of features for parental controls in GNOME: a refresh of the parental controls UI, support for screen time limits for child accounts, and basic web filtering support are all in progress. I’ve been working on the backend stuff, while Ignacy has been speedily implementing everything needed in the frontend.

Ignacy is at GUADEC, so please say hi to him! The next phase of parental controls work will involve changes to gnome-control-center and gnome-shell, so he’ll be popping up all over the stack.

I’ll try and blog more soon about the upcoming features and how they’re implemented, because there are necessarily quite a few moving parts to them.

Sjoerd Stendahl

@sstendahl

A Brief History of Graphs; My Journey Into Application Development

It’s been a while since I originally created this page. I’ve been planning for over a year to write an article like this, but kept putting it off for one reason or another. With GUADEC going on while I write this, and with some interesting talks to listen to on YouTube, I figured this is as good a time as ever to actually submit my first post on this page. In this article, I’ll simply lay out the history of Graphs: how it came to be, how it evolved into an actually useful program for some, and what is on the horizon. Be aware that any opinions expressed are my own, and do not necessarily reflect those of my employer, any contributors, or the GNOME Foundation itself.

I would also like to acknowledge that while I founded Graphs, the application I’m mainly talking about here, I’m not the only one working on the project. We’ve had a lot of input from the community, and I maintain the project together with Christoph, its co-maintainer. Any credit towards this particular program is shared credit.

Motivations

As with many open source projects, I originally developed Graphs because I had a personal itch to scratch. At the time I was working on my PhD, and I regularly had to plot data to prepare for presentations, as well as do some simple manipulations: things like cutting away the first few degrees from an X-ray reflectivity measurement, normalizing data, or shifting data to show multiple measurements on the same graph.

At the time, I had a license for OriginLabs, which was an issue for multiple reasons. Pragmatically, it only works on Windows, and even if it had a Linux client, we had a single license coupled to my work PC in my office, which I didn’t tend to use a lot. Furthermore, the software itself is an interface nightmare, and doing simple operations like cutting data or normalization is not exactly intuitive.

My final issue was more philosophical: I have fundamental problems with using proprietary software in scientific work. It is bluntly absurd that we have rigorous and harsh rules about showing your work and making your research replicable in scientific articles (which is fair), but as soon as software is involved it’s suddenly good enough when a private entity tells us “just trust me bro”. Let it be clear that I have no doubt that a proprietary application actually implements the algorithms its manual says it does. But you are still replacing a good chunk of your article with a black box, which in my view is fundamentally unscientific. There could be bugs, and subtleties could be missed. Let alone the fact that replicability is completely thrown out of the window if you delegate all your data processing to a magic black box. This is an issue where a lot of people I talk to tend to agree with me in principle, yet very few people actually care enough to move away from proprietary solutions. Whenever people use open-source software for their research, I’ve found it’s typically a coincidence based on the merit that it was free, rather than a philosophical or ideological choice.

Either way, philosophically, I wanted to do my data reduction with complete transparency. And pragmatically, I simply needed something that just plots my data and allows me to do basic transformations. For years I had asked myself questions like “why can’t I just visually select part of the data, and then press a ‘cut’ button?” and “why do all these applications insist on over-complicating this?” Whilst I still haven’t found an answer to the second question, I had picked up programming as a hobby at that stage, so I decided to answer my first question with a “fine, I’ll do it myself”. But first, let’s start with what drove me to work on applications like these in the first place.

Getting into application development

Whilst I had developed a lot in MatLab during my master’s (as well as TI-Basic in high school), my endeavors in application development started mostly during my PhD, with some very simple applications: a calculator tool for growth rate in magnetron sputtering based on calibration measurements, and a tool that simply plotted the logs we got from our magnetron sputtering machine. Fun fact: my logging software also kept track of how the software running our magnetron sputtering chambers slowed down over time. Basically, our machine was steered using LabView, and after about 1000 instructions or so it started to slow down a bit; if we told it to do something for 24 seconds, it would start to take 24.1 seconds, for instance. At one point we had a reviewer comment from someone who didn’t believe we could get such a delay with modern computers, so it was nice to have the receipts to back this up. Still, the conclusion here should be that LabView is not exactly great for steering hardware directly, but it’s not me who’s calling the shots.

My first “bigger” project was something in between a database (all stored in a CSV file) and a plotting program. Basically, for every sample I created in the lab, I added an item where I stored all relevant information, including links to the measurements I did on the sample (like sane people would do in an Excel sheet). Then, using a simple list of all samples, I could quickly plot the data for the measurements I wanted. I also had some functionality like cutting away the start of the data, normalizing the data, or calculating sample thickness based on the measurement. In a sense, this was Graphs 0.1. The code is still online, if someone wants to laugh at a physicist’s code written without any real developer experience.

The second “big” tool that I created during that time was GIScan. This laid the foundation for my very favourite article that I wrote during my PhD. Essentially, we got 24 hours to measure as many samples as we could at a synchrotron facility. So that’s exactly what we did, almost blindly. Then we came home with thousands of measurements on a few hundred samples, and it was time to analyze. At the very first stage, I did some basic analysis using Python. All filenames were tagged somewhat strategically, so I could use regex to isolate the measurement series and quite quickly find the needle in the haystack: the 20 or so measurements that were interesting for us, and where to look further. The only problem was that the work we were doing was extremely niche, and the available data reduction software, which would do things like background subtraction and coordinate conversion for us (from pixels to actual physical coordinates), was barely functional and not made for our type of measurements. So I wrote my own data reduction software, GIScan. Explaining what makes it actually incredibly useful would require an article series about the physics behind it, but my entire analysis hinged on this software. GIScan is also available as GPLv3-licensed software, but here too I will use my right to remain silent on any questions about the crimes committed in the code quality itself. Fun fact: all graphs in the mentioned article were made using Graphs and Inkscape. Most of the figures themselves are available under a CC-BY license as part of my PhD. I asked about using a CC-BY-SA license, but the university strongly recommended against it, as they felt it could make the figures more difficult for others to reuse in publications, which matters if I care about sharing my work; basically, journals are the biggest parasites in academia.

Then we get to Graphs. This was the last and biggest program that I wrote during that time in my career. At the very beginning, I actually started in Qt. Not because I preferred it as a toolkit (I didn’t, and still don’t), but because it’s easier to port to Windows and spread to my peers. Quite quickly I got to the state where I could easily import two-column data and do simple manipulations like normalizing it. It was barebones, but really useful for my workflow. However, as this quickly turned into a passion project, I decided to do the selfish thing and rewrite the entire thing in the toolkit that I personally preferred, GTK with libadwaita. It looked beautiful (well, in the same way a newborn baby is beautiful to their parents), and it integrated very nicely into my own desktop. In fact, I was so pleased with it that I felt like sharing it online; the original Reddit post can still be found here. This marked the very first release of Graphs 1.0, which can be seen in its full glory below, and this is essentially where the fun began.

The very first version of Graphs
Graphs 1.0

The power of community

When I originally developed this for my personal use, I simply called it “Data Manipulator”, which I had shortened to DatMan. Quite early in the process, even before I shared the project to Reddit, Hari Rana (aka TheEvilSkeleton) filed an issue asking me to consider naming the project in accordance with the GNOME HIG. After some small discussions there, we settled on Graphs. This was my first experience with feedback or contributions from the community, and something I am still grateful for. It’s a much nicer name that fits in with GNOME applications. They also helped me with a few other design patterns like modal windows and capitalization. Shoutout to Skelly here, for the early help to a brand new project. It did push me to look more into the HIG, and thus helped a lot in getting that ball rolling. I don’t take any donations, but feel free to help them out with the work on several projects that are significantly more high-stress than Graphs. There’s a donation page on their webpage.

After sharing the initial release to Reddit, I continued development and slowly started tweaking things and polishing existing features. I added support for multiple axes, added some more transformations, and added basic options like import settings. It was also around this time that Tobias Bernard from the GNOME Design team dropped by with some help, at first with the generous offer to design a logo for the project, which is still the logo of Graphs today. The old logo, followed by the newly designed logo, can be found below:

The original Graphs logo, a simple red curve on a graph, roughly looking like x*sin(x)
The original Graphs logo
The redesigned Graphs logo. Showing a curve on a plot, on a notebook
The redesigned Graphs logo

Yet again, I was very pleasantly surprised by complete strangers just dropping by and offering help. Of course, they’re not just helping me personally, but rather helping out the community and ecosystem as a whole. But this collaborative feeling that we’re all working on a system that we collectively own together is something that really attracted me to GNOME and FOSS in general.

It was also around these early days that Christoph, who now maintains Graphs with me, came by with some pull requests. This went on to the point that he pretty naturally ended up in the role of maintainer. I can confidently say that him joining this endeavor is the best thing that ever happened to Graphs, both in terms of a general elevation of the code quality and in terms of motivation and decision making. Not only did the introduction of a second maintainer mean that new code actually got reviewed, but someone else contributing is a really strong motivator and super contagious for me. In these somewhat early days things were moving fast, and we saw strong improvements both in the quality of the code and in the general user experience of the app.

In terms of UX, I’d really like to thank Tobias again. Even before we had GNOME Circle on our radar as a goal, he helped us a lot with general feedback about the UX, highlighting papercuts and coming up with design patterns that made more sense. Here we really saw a lot of improvements, and I learned a lot at the time about having design at the core of the development process. The way the application works is not just a means to get something done; the design is the program. Not that I’d classify myself as an expert UI designer these days, but a lot of lessons have been learned thanks to the involvement of people with more expertise than me. The GNOME Design team in general has been very helpful with suggestions and feedback during development. Whenever I got in touch with the GNOME developer community, it’s been nothing but helpfulness and honest advice. The internet stereotype about GNOME developers being difficult to work with simply does not hold up, not in my experience. It’s been a fantastic journey. Note that nobody is obliged to fix your problems for you, but you will find that if you ask nicely and listen to feedback from others, people are more than willing to help you out!

I couldn’t talk about the history of Graphs without at least mentioning the process of getting into GNOME Circle. It’s there, for me, that Graphs really went from a neat hobbyist tool to a properly useful application. When we initially applied, I was pretty happy with the state we were at, but we’ve actually undergone quite a transformation since Graphs got accepted. If someone feels inclined to follow along, the entire process is still available on the GitLab page. I won’t go into joining GNOME Circle too much here; there’s a nice talk scheduled at GUADEC2025 from the developer of Drum Machine about that. But here’s what Graphs looked like before and after the GNOME Circle application:

A screenshot showing Graphs before the GNOME Circle application. In general with a more cluttered interface
The state of Graphs when we just applied to GNOME Circle
A screenshot showing Graphs just after the GNOME Circle application.
Graphs just after the GNOME Circle application. This also coincided with the new sidebar in libadwaita.

Two particular changes that stuck with me were the introduction of touchpad gesture support and the change in the way we handle settings. Starting with touchpad gestures, I had always considered this to be out of our control. We use Matplotlib to render the plots themselves, which by itself doesn’t support touch gestures. It’s mostly thanks to Tobias naming it as part of the GNOME Circle review that I actually went ahead and tried to implement it myself. After a week or so of digging into documentation and testing different calculations for the different axes, I actually got it working. It’s a moment that stuck with me, partly because of the dopamine hit when things finally worked, but also because it again showed the value of starting with the intended user experience and then working backwards to fit the technology to that, rather than starting with the technology and creating a user experience from it.

The change in settings is something I wanted to highlight because this is such a common theme in discussions about GNOME in general. Over time, I’ve been leaning more and more towards the idea that preferences are, in many cases, simply an excuse to avoid making difficult choices. Before submitting, we had settings for basically everything. We had a setting for the default plotting style in dark mode and light mode, we had a setting for the clipboard size, we had a setting for the default equation when creating a new equation, and I could go on for a bit. Most of these settings could simply be replaced by making the last choice persistent between sessions. The default equation now is simply the last used equation, and the same goes for import settings, where we just added a button to reset them to defaults. For styling, we don’t have separate dark and light styles that can be set; instead you just set one style in total, and one of the options is simply “System”, which essentially resembles Adwaita and Adwaita-dark in light and dark mode respectively. This really, really streamlined the entire user experience. Things got much easier to use, and options got much easier to find. I would strongly recommend anyone who develops applications (within the GNOME ecosystem or elsewhere) to read the “Choosing our preferences” article; it’s a real eye-opener.

Where we are now, and where we’re going

These days Graphs is relatively mature and works pretty well. Since being accepted into GNOME Circle, we haven’t had an overhaul as major as the one presented here. We’ve had some performance upgrades under the hood, fixed quite a few bugs, and made some improvements to the layout system. We’ve since also added full support for touchscreen devices (thanks to me getting a Steam Deck, allowing me to test on touch), improved the rubberband on the canvas, and improved equation parsing a bit.

Despite the somewhat slower pace, there is a major release brewing with some exciting features already in the main branch. Some of the features you can expect in the next stable release:

Full equation support on an infinite canvas

At the moment, you cannot really add an “equation” to Graphs. Instead, what you do is generate data based on an equation. In the next release, we actually support equations. These span the entire canvas and can be changed after being added. Operations you perform on the equation (such as taking a derivative) affect the equation itself accordingly.

Screenshot of Graphs showing support for equations
We now fully support equations on an infinite canvas

Generated data can now be changed afterwards

You can still generate data from an equation like you could previously (so it doesn’t have to be an infinite equation). But generated data can now also be changed afterwards by changing the input equation.

A screenshot showing how generated data can be regenerated afterwards
Generated data can now be regenerated afterwards

A fully revamped style editor

In the upcoming release, you can actually open .mplstyle files using Graphs, which opens the style editor itself instead of the main application. Furthermore, you can now import styles from the GUI, and open Graphs styles in another application (like your text editor) to do some advanced changes in the style that are not supported by our GUI. Likewise, you can now export your Graphs style-file so you can share it with others. (Maybe even with us, as a merge request, if it’s really nice 😉 )

Another really nice touch is that you now get a live preview of the actual style you’re working on, so you don’t need to go back and forth every time when you make incremental changes.

A screenshot showing Graph's new style editor
Graph’s new style editor

Drag and drop support

You can now import data by simply dragging and dropping it into the main application.

A screenshot showing drag and drop support
You can now drag and drop data in Graphs

Multiple sessions

You can now finally have multiple sessions of Graphs open at the same time. Allowing you to view and work on data side-by-side.

You can now have multiple sessions open in Graphs

Support for sqlite databases

We now added support for sqlite databases, so you can import data from your .db files.

Graphs now supports databases as input, on import you can choose your database table, and your columns based on that.

And more

As usual, there are more features that I probably forgot. But the next release is bound to be a banger. I won’t dare to pin a release date here, but all the mentioned changes are already working (sqlite support is still in an MR) and can be tested from the main branch. There’s still work to do, though, with regard to a planned rework of the way we import data, and the way we access the style editor, which is currently a bit buried in the stable release.

Conclusion

This post got a bit longer than I anticipated, but I hope it gives people some insight into what it’s like for a newcomer to get into application development. I really encourage people to test the waters. It shows that you really can get involved, even if it involves learning along the way. These days I no longer work in academia, and I am willing to bet that I probably wouldn’t have my current position working with software if it wasn’t for these adventures.

Again, I would really like to thank the GNOME Community as a whole. The adventure so far has been great, and I promise that it’s far from over 🙂

Jussi Pakkanen

@jpakkane

Comparing a red-black tree to a B-tree

In an earlier blog post we found that optimizing the memory layout of a red-black tree does not seem to work. A different way of implementing an ordered container is to use a B-tree. It was originally designed to be used for on-disk data. The design principle was that memory access is "instant" while disk access is slow. Nowadays this applies to memory access as well, as cache hits are "instant" and uncached memory is slow.

I implemented a B-tree in Pystd. Here is how all the various containers compare. For test data we used numbers from zero to one million in a random order.


As we can see, an unordered map is massively faster than any ordered container. If your data does not need to be ordered, that is the one you should use. For ordered data, the B-tree is clearly faster than either red-black tree implementation.

Tuning the B-tree

B-trees have one main tunable parameter, namely the spread factor of the nodes. In the test above it was five, but for on-disk purposes the recommended value is "in the thousands". Here's how altering the value affects performance.


The sweet spot seems to be in the 256-512 range, where the operations are 60% faster than standard set. As the spread factor grows towards infinity, the B-tree reduces to just storing all data in a single sorted array. Insertion into that is an O(N^2) algorithm as can be seen here.
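To make the tunable concrete, here is a minimal sketch of a node layout parameterized by the spread factor. This is illustrative only and not the Pystd implementation, which may define the parameter differently.

/* Illustrative only; not the Pystd implementation. Each node stores up to
 * SPREAD_FACTOR keys in a sorted array, plus one more child pointer than
 * keys when it is an internal node. */
#define SPREAD_FACTOR 5  /* the tunable discussed above */

typedef struct BTreeNode BTreeNode;

struct BTreeNode {
  int        num_keys;                    /* keys currently stored in this node */
  int        keys[SPREAD_FACTOR];         /* kept sorted within the node */
  BTreeNode *children[SPREAD_FACTOR + 1]; /* all NULL in leaf nodes */
};

A larger spread factor means fewer, wider nodes and more work inside each node; a smaller one means more pointer chasing, which is the trade-off the measurements above explore.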

Getting weird

The B-tree implementation has many assert calls to verify the internal structures. We can compile the code with -DNDEBUG to make all those asserts disappear. Removing redundant code should make things faster, so let's try it.

There are 13 measurements in total, and disabling asserts (i.e. enabling NDEBUG) makes the code run slower in 8 of those cases. Let's state that again, since it is very unexpected: in this particular measurement, code with assertions enabled runs faster than the same code without them. This should not be happening. What could be causing it?

I don't know for sure, so here is some speculation instead.

First of all, a result of 8/13 is probably not statistically significant enough to say that enabling assertions makes things faster. OTOH it does mean that enabling them does not make the code run noticeably slower. So I guess we can say that both ways of building the code are approximately as fast.

As to why that is, things get trickier. Maybe GCC's optimizer is just really good at removing unnecessary checks. It might even be that the assertions give the compiler more information so it can skip generating code for things that can never happen. I'm not a compiler engineer, so I'll refrain from speculating further, it would probably be wrong in any case.

Michael Catanzaro

@mcatanzaro

Fedora Must (Carefully) Embrace Flathub

Motivation

Opportunity is upon us! For the past few years, the desktop Linux user base has been growing at a historically high rate. StatCounter currently has us at 4.14% desktop OS market share for Q2 2025. For comparison, when Fedora Workstation was first released in Q4 2014, desktop Linux was at 1.38%. Now, StatCounter measures HTTP requests, not computers, but it’s safe to say the trend is highly encouraging. Don’t trust StatCounter? Cloudflare reports 2.9% for Q2 2025. One of the world’s most popular websites reports 5.1%. And although I was unable to figure out how to make a permanent link to the results, analytics.usa.gov is currently reporting a robust 6.2% for the past 90 days, and increasing. The Linux user base is already much larger than I previously suspected would ever be possible, and it seems to be increasing quickly. I wonder if we are perhaps nearing an inflection point where our user base may soon increase even more considerably. The End of 10 and enthusiastic YouTubers are certainly not hurting.

Compared to its peers, Fedora is doing particularly well. It’s pretty safe to say that Fedora is now one of the 2 or 3 most popular and successful desktop Linux operating systems, a far cry from its status 10 years ago, when Fedora suffered from an unfortunate longstanding reputation that it was an unstable “test bed” OS only suitable for experienced technical users. Those days are long gone; nowadays, Fedora has an army of social media users eager to promote it as a reliable, newcomer-friendly choice.

But we cannot stop here. If we become complacent and content ourselves with the status quo, then we will fail to take maximum advantage of the current opportunity.

Although Fedora Workstation works well for most users, and although quality and reliability have improved considerably over the past decade, it is still far too easy for inexperienced users to break the operating system. Today’s Fedora Workstation is fundamentally just a nicer version of the same thing we already had 10 years ago. The original plan called for major changes that we have thus far failed to deliver, like “Robust Upgrades,” “Better upgrade/rollback control,” and “Container based application install.” These critical goals are notably all already achieved by Fedora Silverblue, the experimental image-based alternative to Fedora Workstation, but few Fedora users benefit because only the most experienced and adventurous users are willing to install Silverblue. I had long assumed that Silverblue would eventually become the next Fedora Workstation, and that the Silverblue code name would eventually be retired. This is now an explicit project goal of Fedora’s Strategy 2028, and it is critical for Fedora’s success. The Fedora Workstation of the future must be:

  • Safe and image-based by default: an atomic operating system composed of RPMs built on bootc. Most users should stick with image-based mode because it’s much harder to break the OS, and easier to troubleshoot when something does go wrong.
  • Flexible if you so choose: converting the image-based OS into the traditional package-based OS managed by RPM and dnf must be allowed, for users who prefer or require it. Or alternatively, if converting is not possible, then installing a traditional non-atomic Fedora must remain possible. Either way, we must not force users to use image-based desktops if they do not want to, so no need to panic. But image-based must eventually become the new default.

Silverblue is not ready yet, but Fedora has a large community of developers and should be able to eventually resolve the remaining problems.

But wait, wasn’t this supposed to be a blog post about Flathub? Well, consider that with an image-based OS, you cannot easily install traditional RPM packages. Instead, in Fedora Silverblue, desktop applications are installed only via Flatpak. (This is also true of Fedora Kinoite and Fedora’s other atomic desktop variants.) So Fedora must have a source of Flatpaks, and that source must be enabled by default, or there won’t be any apps available.

(Don’t like Flatpak? This blog post is long enough already, so I’ll ask you to just make a leap of faith and accept that Flatpak is cool. Notably, Flatpak applications that keep their bundled dependencies updated and do not subvert the sandbox are much safer to use than traditional distro-packaged applications.)

In practice, there are currently only two interesting sources of Flatpaks to choose from: Fedora Flatpaks and Flathub. Flathub is the much better choice, and enabling it by default should be our end goal. Fedora is already discussing whether to do this. But Flathub also has several disadvantages, some of which ought to be blockers.

Why Flathub?

There are important technical differences between Fedora’s Flatpaks, built from Fedora RPMs, vs. Flathub’s Flatpaks, which are usually built on top of freedesktop-sdk. But I will not discuss those, because the social differences are more important than the technical differences.

Users Like Flathub

Feedback from Fedora’s user base has been clear: among users who like Flatpaks, Flathub is extremely popular. When installing a Flatpak application, users generally expect it to come from Flathub. In contrast, many users of Fedora Flatpaks do not install them intentionally, but rather by accident, only because they are the preferred software source in GNOME Software. Users are often frustrated to discover that Fedora Flatpaks are not supported by upstream software developers and have a different set of bugs than upstream Flatpaks do. It is also common for users and even Fedora developers to entirely remove the Fedora Flatpak application source.

Not so many users prefer to use Fedora Flatpaks. Generally, these users cite some of Flathub’s questionable packaging practices as justification for avoiding use of Flathub. These concerns are valid; Flathub has some serious problems, which I will discuss in more detail below. But improving Flathub and fixing these problems would surely be much easier than creating thousands of Fedora Flatpak packages and attempting to compete with Flathub, a competition that Fedora would be quite unlikely to win.

Flathub is drastically more popular than Fedora Flatpaks even among the most hardcore Fedora community members who participate in change proposal debate on Fedora Discussion. (At time of writing, nearly 80% of discussion participants favor filtering out Fedora Flatpaks.)

This is the most important point. Flathub has already won.

Cut Out the Middleman

In general, upstream software developers understand their software much better than downstream packagers. Bugs reported to downstream issue trackers are much less likely to be satisfactorily resolved. There are a variety of ways that downstream packagers could accidentally mess up a package, whether by failing to enable a feature flag, or upgrading a dependency before the application is compatible with the new version. Downstream support is almost never as good as upstream support.

Adding a middleman between upstream and users really only makes sense if the middleman is adding significant value. Traditional distro-packaged applications used to provide considerable value by making it easy to install the software. Nowadays, since upstreams can distribute software directly to users via Flathub, that value is far more limited.

Bus Factor is Critical

Most Flatpak application developers prefer to contribute to Flathub. Accordingly, there are very few developers working on Fedora Flatpaks. Almost all of the Fedora Flatpaks are actually owned by one single developer who has packaged many hundreds of applications. This is surely not a healthy situation.

Bugs in Fedora Flatpaks are reported on the Fedora Flatpak SIG issue tracker. This SIG notably does not have a list of active members, but rather a years-old list of people who are interested in joining the SIG, who are encouraged to attend the first meeting. Needless to say the SIG does not seem to be in a very good state.

I suspect this situation is permanent, reflecting a general lack of interest in Fedora Flatpak development, not just a temporary shortfall of contributors. Quality is naturally going to be higher where there are more contributors. The quality of Fedora Flatpak applications is often lower than Flathub applications, sometimes significantly so. Fedora Flatpaks also receive significantly less testing than Flathub Flatpaks. Upstream developers do not test the Fedora Flatpaks, and downstream developers are spread too thin to have plausible hope of testing them adequately.

Focus on What Really Matters

Fedora’s main competency and primary value is the core operating system, not miscellaneous applications that ship on top of it for historical reasons.

When people complain that “distros are obsolete,” they don’t mean that Linux operating systems are not needed anymore. Of course you need an OS on which to run applications. The anti-distro people notably all use distros.

But it’s no longer necessary for a Linux distribution to attempt to package every open source desktop application. That used to be a requirement for a Linux operating system to be successful, but nowadays it is an optional activity that we perform primarily for historical reasons, because it is what we have always done rather than because it is still truly strategic or essential. It is a time-consuming, resource-intensive side quest that no longer makes sense and does not add meaningful value.

The Status Quo

Let’s review how things work currently:

  • By default, Fedora Workstation allows users to install open source software from the following sources: Fedora Flatpaks, Fedora RPMs, and Cisco’s OpenH264 RPM.
  • The post-install initial setup workflow, gnome-initial-setup, suggests enabling third-party repositories. If the user does not click the Enable button, then GNOME Software will make the same suggestion the first time it is run. Clicking this button enables all of Flathub, plus a few other RPM repositories.
Image displaying the Third-Party Repositories page in Fedora's gnome-initial-setup.

Fedora will probably never enable software sources that contain proprietary software by default, but it’s easy to enable searching for proprietary software if desired.

(Technically, Fedora actually has a filter in place to allow hiding any Flathub applications we don’t want users to see. But since Fedora 38, this filter is empty, so no apps are hidden in practice. The downstream filter was quite unpopular with users, and the mechanism still exists only as a safety hatch in case there is some unanticipated future emergency.)

The Future

Here are my proposed requirements for Fedora Workstation to become a successful image-based OS.

This proposal applies only to Fedora Workstation (Fedora’s GNOME edition). These proposals could just as well apply to other Fedora editions and spins, like Fedora KDE Plasma Desktop, but different Fedora variants have different needs, so each should be handled separately.

Flathub is Enabled by Default

Since Flathub includes proprietary software, we cannot include all of Flathub by default. But Flathub already supports subsets. Fedora can safely enable the floss subset by default, and replace the “Enable Third-Party Repositories” button with an “Enable Proprietary Software Sources” button that would allow users to switch from the floss subset to the full Flathub if they so choose.

This goal can be implemented today, but we should wait because Flathub has some problems that we ought to fix first. More on that below.

All Default Applications are Fedora Flatpak Applications

All applications installed by default in Fedora Workstation should be Fedora Flatpaks. (Or almost all. Certain exceptions, like gnome-control-center, would make more sense as part of the OS image rather than as a Flatpak.)

Notice that I said Fedora Flatpaks, not Flathub. Fedora surely does need to control the handful of applications that are shipped by default. We don’t want to be at the mercy of Flathub to provide the core user experience.

There has been recent progress towards this goal, although it’s not ready yet.

All Other Applications are Flathub Flatpaks

With the exception of the default Fedora Flatpak applications, Flathub should be the only source of applications in GNOME Software.

It will soon be time to turn off GNOME Software’s support for installing RPM applications, making it a Flatpak-only software center by default. (Because GNOME Software uses a plugin architecture, users of traditional package-based Fedora who want to use GNOME Software to install RPM applications would still be able to do so by installing a subpackage providing a plugin, if desired.)

This requirement is an end goal. It can be done today, but it doesn’t necessarily need to be an immediate next step.

Flathub Must Improve

Flathub has a few serious problems, and needs to make some policy changes before Fedora enables it by default. I’ll discuss this in more detail next.

Fedora Must Help

We should not make demands of Flathub without helping to implement them. Fedora has a large developer community and significant resources. We must not barge in and attempt to take over the Flathub project; instead, let’s increase our activity in the Flathub community somewhat, and lend a hand where requested.

The Case for Fedora Flatpaks

Earlier this year, Yaakov presented The Case for Fedora Flatpaks. This is the strongest argument I’ve seen in favor of Fedora Flatpaks. It complains about five problems with Flathub:

  • Lack of source and build system provenance: on this point, Yaakov is completely right. This is a serious problem, and it would be unacceptable for Fedora to embrace Flathub before it is fixed. More on this below.
  • Lack of separation between FOSS, legally encumbered, and proprietary software: this is not a real problem. Flathub already has a floss subset to separate open source vs. proprietary software; it may not be a separate repository, but that hardly matters because subsets allow us to achieve an equivalent user experience. Then there is indeed no separate subset for legally-encumbered software, but this also does not matter. Desktop users invariably wish to install encumbered software; I have yet to meet a user who does not want multimedia playback to work, after all. Fedora cannot offer encumbered multimedia codecs, but Flathub can, and that’s a major advantage for Flathub. Users and operating systems can block the multimedia extensions if truly desired. Lastly, some of the plainly-unlicensed proprietary software currently available on Flathub does admittedly seem pretty clearly outrageous, but if this is a concern for you, simply stick to the floss subset.
  • Lack of systemic upgrading of applications to the latest runtime: again, Yaakov is correct. This is a serious problem, and it would be unacceptable for Fedora to embrace Flathub before it is fixed. More on this below.
  • Lack of coordination of changes to non-runtime dependencies: this is a difference from Fedora, but it’s not necessarily a problem. In fact, allowing applications to have different versions of dependencies can be quite convenient, since upgrading dependencies can sometimes break applications. It does become a problem when bundled dependencies become significantly outdated, though, as this creates security risk. More on this below.
  • Lack of systemic community engagement: it’s silly to claim that Flathub has no community. Unresponsive Flathub maintainers are a real problem, but Fedora has an unresponsive maintainer problem too, so this can hardly count as a point against Flathub. That said, yes, Flathub needs a better way to flag unresponsive maintainers.

So now we have some good reasons to create Fedora Flatpaks. But maintaining Flatpaks is a tremendous effort. Is it really worth doing if we can improve Flathub instead?

Flathub Must Improve

I propose the following improvements:

  • Open source software must be built from source on trusted infrastructure.
  • Applications must not depend on end-of-life runtimes.
  • Applications must use flatpak-external-data-checker to monitor bundled dependencies wherever possible.
  • Sandbox holes must be phased out, except where this is fundamentally technically infeasible.

Let’s discuss each point in more detail.

Build Open Source from Source

Open source software can contain all manner of vulnerabilities. Although unlikely, it might even contain malicious backdoors. Building from source does nothing to guarantee that the software is in any way safe to use (and if it’s written in C or C++, then it’s definitely not safe). But it sets an essential baseline: you can at least be confident that the binary you install on your computer actually corresponds to the provided source code, assuming the build infrastructure is trusted and not compromised. And if the package supports reproducible builds, then you can reliably detect malicious infrastructure, too!

In contrast, when shipping a prebuilt binary, whoever built the binary can easily insert an undetectable backdoor; there is no need to resort to stealthy obfuscation tactics. With proprietary software, this risk is inherent and unavoidable: users just have to accept the risk and trust that whoever built the software is not malicious. Fine. But users generally do not expect this risk to extend to open source software, because all Linux operating systems fortunately require open source software to be built from source. Open source software not built from source is unusual and is invariably treated as a serious bug.

Flathub is different. On Flathub, shipping prebuilt binaries of open source software is, sadly, a common accepted practice. Here are several examples. Flathub itself admits that around 6% of its software is not built from source, so this problem is pervasive, not an isolated issue. (Although that percentage unfortunately considers proprietary software in addition to open source software, overstating the badness of the problem, because building proprietary software from source is impossible and not doing so is not a problem.) Update: I’ve been advised that I misunderstood the purpose of extra-data. Most apps that ship prebuilt binaries do not use extra-data. I’m not sure how many apps are shipping prebuilt binaries, but the problem is pervasive.

Security is not the only problem. In practice, Flathub applications that are not built from source sometimes package binaries only for x86_64, leaving aarch64 users entirely out of luck, even though Flathub normally supports aarch64, an architecture that is important for Fedora. This is frequently cited by Flathub’s opponents as a major disadvantage relative to Fedora Flatpaks.

A plan to fix this should exist before Fedora enables Flathub by default. I can think of a few possible solutions:

  • Create a new subset for open source software not built from source, so Fedora can filter out this subset. Users can enable the subset at their own risk. This is hardly ideal, but it would allow Fedora to enable Flathub without exposing users to prebuilt open source software.
  • Declare that any software not built from source should be treated equivalent to proprietary software, and moved out of the floss subset. This is not quite right, because it is open source, but it has the same security and trust characteristics of proprietary software, so it’s not unreasonable either.
  • Set a flag date by which any open source software not built from source must be delisted from Flathub. I’ll arbitrarily propose July 1, 2027, which should be a generous amount of time to fix apps. This is my preferred solution. It can also be combined with either of the above.

Some of the apps not currently built from source are Electron packages. Electron takes a long time to build, and I wonder if building every Electron app from source might overwhelm Flathub’s existing build infrastructure. We will need some sort of solution to this. I wonder if it would be possible to build Electron runtimes to provide a few common versions of Electron. Alternatively, Flathub might just need more infrastructure funding.

Tangent time: a few applications on Flathub are built on non-Flathub infrastructure, notably Firefox and OBS Studio. It would be better to build everything on Flathub’s infrastructure to reduce risk of infrastructure compromise, but as long as this practice is limited to only a few well-known applications using trusted infrastructure, then the risk is lower and it’s not necessarily a serious problem. The third-party infrastructure should be designed thoughtfully, and only the infrastructure should be able to upload binaries; it should not be possible for a human to manually upload a build. It’s unfortunately not always easy to assess whether an application complies with these guidelines or not. Let’s consider OBS Studio. I appreciate that it almost follows my guidelines, because the binaries are normally built by GitHub Actions and will therefore correspond with the project’s source code, but I think a malicious maintainer could bypass that by uploading a malicious GitHub binary release? This is not ideal, but fortunately custom infrastructure is an unusual edge case, rather than a pervasive problem.

Penalize End-of-life Runtimes

When a Flatpak runtime reaches end-of-life (EOL), it stops receiving all updates, including security updates. How pervasive are EOL runtimes on Flathub? Using the Runtime Distribution section of Flathub Statistics and some knowledge of which runtimes are still supported, I determined that 994 out of 3,438 apps are currently using an EOL runtime. Ouch. (Note that the statistics page says there are 3,063 total desktop apps, but for whatever reason, the number of apps presented in the Runtime Distribution graph is higher. Could there really be 375 command line apps on Flathub?)

Using an EOL runtime is dangerous and irresponsible, and developers who claim otherwise are not good at risk assessment. Some developers will say that security does not matter because their app is not security-critical. It’s true that most security vulnerabilities are not actually terribly important or worth panicking over, but this does not mean it’s acceptable to stop fixing vulnerabilities altogether. In fact, security matters for most apps. A few exceptions would be apps that do not open files and also do not use the network, but that’s really probably not many apps.

I recently saw a developer use the example of a music player application to argue that EOL runtimes are not actually a serious problem. This developer picked a terrible example. Our hypothetical music player application can notably open audio files. Applications that parse files are inherently high risk because users love to open untrusted files. If you give me a file, the first thing I’m going to do is open it to see what it is. Who wouldn’t? Curiosity is human nature. And a music player probably uses GStreamer, which puts it at the very highest tier of security risk (alongside your PDF reader, email client, and web browser). I know of exactly one case of a GNOME user being exploited in the wild: it happened when the user opened a booby-trapped video using Totem, GNOME’s GStreamer-based video player. At least your web browser is guaranteed to be heavily sandboxed; your music player might very well not be.

The Flatpak sandbox certainly helps to mitigate the impact of vulnerabilities, but sandboxes are intended to be a defense-in-depth measure. They should not be treated as a primary security mechanism or as an excuse to not fix security bugs. Also, too many Flatpak apps subvert the sandbox entirely.

Of course, each app has a different risk level. The risk of you being attacked via GNOME Calculator is pretty low. It does not open files, and the only untrusted input it parses is currency conversion data provided by the International Monetary Fund. Life goes on if your calculator is unmaintained. Any number of other applications are probably generally safe. But it would be entirely impractical to assess 3000 different apps individually to determine whether they are a significant security risk or not. And independent of security considerations, use of an EOL runtime is a good baseline to determine whether the application is adequately maintained, so that abandoned apps can be eventually delisted. It would not be useful to make exceptions.

The solution here is simple enough:

  • It should not be possible to build an application that depends on an EOL runtime, to motivate active maintainers to update to a newer runtime. Flathub already implemented this rule in the past, but it got dropped at some point.
  • An application that depends on an EOL runtime for too long should eventually be delisted. Perhaps 6 months or 1 year would be good deadlines.
  • A monitoring dashboard would make it easier to see which apps are using maintained runtimes and which need to be fixed.

Monitor Bundled Dependencies

Flatpak apps have to bundle any dependencies not present in their runtime. This creates considerable security risk if the maintainer of the Flathub packaging does not regularly update the dependencies. The negative consequences are identical to using an EOL runtime.

Fortunately, Flathub already has a tool to deal with this problem: flatpak-external-data-checker. This tool automatically opens pull requests to update bundled dependencies when a new version is available. However, not all applications use flatpak-external-data-checker, and not all applications that do use it do so for all dependencies, and none of this matters if the app’s packaging is no longer maintained.

I don’t know of any easy ways to monitor Flathub for outdated bundled dependencies, but given the number of apps using EOL runtimes, I assume the status quo is pretty bad. The next step here is to build better monitoring tools so we can better understand the scope of this problem.
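To give a rough idea of what such a monitoring tool could look like, here is a small Python sketch, under the assumption of JSON-format Flatpak manifests: it simply flags downloadable sources that carry no x-checker-data annotation for flatpak-external-data-checker. The heuristics and file handling are invented for illustration, not an existing Flathub tool.

import json
import sys
from pathlib import Path

DOWNLOADED_TYPES = {"archive", "file", "extra-data"}

def unchecked_sources(manifest_path):
    """Yield (module name, source url) pairs that have no x-checker-data annotation."""
    manifest = json.loads(Path(manifest_path).read_text())
    for module in manifest.get("modules", []):
        if not isinstance(module, dict):
            continue  # references to external module files are skipped in this sketch
        for source in module.get("sources", []):
            if (isinstance(source, dict)
                    and source.get("type") in DOWNLOADED_TYPES
                    and "x-checker-data" not in source):
                yield module.get("name"), source.get("url")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for name, url in unchecked_sources(path):
            print(f"{path}: module {name!r} has no x-checker-data for {url}")

A real tool would also need to handle YAML manifests and modules split across included files, but even a crude pass like this would give a first estimate of how many bundled dependencies nobody is watching.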

Phase Out Most Sandbox Holes (Eventually)

Applications that parse data are full of security vulnerabilities, like buffer overflows and use-after-frees. Skilled attackers can turn these vulnerabilities into exploits, using carefully-crafted malicious data to gain total control of your user account on your computer. They can then install malware, read all the files in your home directory, use your computer in a botnet, and do whatever else they want with it. But if the application is sandboxed, then a second type of exploit, called a sandbox escape, is needed before the app can harm your host operating system and access your personal data, so the attacker now has to exploit two vulnerabilities instead of just one. And while app vulnerabilities are extremely common, sandbox escapes are, in theory, rare.

In theory, Flatpak apps are drastically safer than distro-packaged apps because Flatpak provides a strong sandbox by default. The security benefit of the sandbox cannot be overstated: it is amazing technology and greatly improves security relative to distro-packaged apps. But in practice, Flathub applications routinely subvert the sandbox by using expansive static permissions to open sandbox holes. Flathub claims that it carefully reviews apps’ use of static permissions and allows only the narrowest permissions possible for the app to function properly. This claim is dubious because, in practice, the permissions of actual apps on Flathub are extremely broad, as often as not making a total mockery of the sandbox.

While some applications use sandbox holes out of laziness, in many cases it’s currently outright impossible to sandbox the application without breaking key functionality. For example, Sophie has documented many problems that necessitate sandbox holes in GNOME’s image viewer, Loupe. These problems are fixable, but they require significant development work that has not happened yet. Should we punish the application by requiring it to break itself to conform to the requirements of the sandbox? The Flathub community has decided that the answer is no: application developers can, in practice, use whatever permissions they need to make the app work, even if this entirely subverts the sandbox.

This was originally a good idea. By allowing flexibility with sandbox permissions, Flathub made it very easy to package apps, became extremely popular, and allowed Flatpak itself to become successful. But the original understanding of the Flatpak community was that this laxity would be temporary: eventually, the rules would be tightened and apps would be held to progressively higher standards, until sandbox holes would eventually become rare. Unfortunately, this is taking too long. Flatpak has been around for a decade now, but this goal is not within reach.

Tightening sandbox holes does not need to be a blocker for adopting Flathub in Fedora because it’s not a problem relative to the status quo in Fedora. Fedora Flatpaks have the exact same problem, and Fedora’s distro-packaged apps are not sandboxed at all (with only a few exceptions, like your web browser). But it’s long past time to at least make a plan for how to eventually phase out sandbox holes wherever possible. (In some cases, it won’t ever be possible; e.g. sandboxing a file manager or disk usage analyzer does not make any sense.) It’s currently too soon to use sticks to punish applications for having too many sandbox holes, but sticks will be necessary eventually, hopefully within the next 5 years. In the meantime, we can immediately begin to use carrots to reward app developers for eliminating holes. We will need to discuss specifics.

We also need more developers to help improve xdg-desktop-portal, the component that allows sandboxed apps to safely access resources on the host system without using sandbox holes. This is too much work for any individual; it will require many developers working together.

Software Source Prioritization

So, let’s say we successfully engage with the Flathub project and make some good progress on solving the above problems. What should happen next?

Fedora is a community of doers. We cannot tell Fedora contributors to stop doing work they wish to do. Accordingly, it’s unlikely that anybody will propose to shut down the Fedora Flatpak project so long as developers are still working on it. Don’t expect that to happen.

However, this doesn’t mean Fedora contributors have a divine right for their packaged applications to be presented to users by default. Each Fedora edition (or spin) should be allowed to decide for itself what should be presented to the user in its software center. It’s time for the Fedora Engineering Steering Committee (FESCo) to allow Fedora editions to prefer third-party content over content from Fedora itself.

We have a few options as to how exactly this should work:

  • We could choose to unconditionally prioritize all Flathub Flatpaks over Fedora Flatpaks, as I proposed earlier this year (Workstation ticket, LWN coverage). The precedence in GNOME Software would be Flathub > Fedora Flatpaks.
  • Alternatively, we could leave Fedora Flatpaks with highest priority, and instead apply a filter such that only Fedora Flatpaks that are installed by default are visible in GNOME Software. This is my preferred solution; there is already an active change proposal for Fedora 43 (proposal, discussion), and it has received considerable support from the Fedora community. Although the proposal only targets atomic editions like Silverblue and Kinoite for now, it makes sense to extend it to Fedora Workstation as well. The precedence would be Filtered Fedora Flatpaks > Flathub.

When considering our desired end state, we can stop there; those are the only two options because of my “All Other Applications are Flathub Flatpaks” requirement: in an atomic OS, it’s no longer possible to install RPM-packaged applications, after all. But in the meantime, as a transitional measure, we still need to consider where RPMs fit in until such time as Fedora Workstation is ready to remove RPM applications from GNOME Software.

We have several possible precedence options. The most obvious option, consistent with my proposals above, is: Flathub > Fedora RPMs > Fedora Flatpaks. And that would be fine, certainly a huge improvement over the status quo, which is Fedora Flatpaks > Fedora RPMs > Flathub.

But we could also conditionally prioritize Flathub Flatpaks over Fedora Flatpaks or Fedora RPMs, such that the Flathub Flatpak is preferred only if it meets certain criteria. This makes sense if we want to nudge Flathub maintainers towards adopting certain best practices we might wish to encourage. Several Fedora users have proposed that we prefer Flathub only if the app has Verified status, indicating that the Flathub maintainer is the same as the upstream maintainer. But I do not care very much whether the app is verified or not; it’s perfectly acceptable for a third-party developer to maintain the Flathub packaging if the upstream developers do not wish to do so, and I don’t see any need to discourage this. Instead, I would rather consider whether the app receives a Probably Safe safety rating in GNOME Software. This would be a nice carrot to encourage app developers to tighten sandbox permissions. (Of course, this would be a transitional measure only, because eventually the goal is for Flathub to be the only software source.)

There are many possible outcomes here, but here are my favorite options, in order:

  1. My favorite option: Filtered Fedora Flatpaks > Probably Safe Flathub > Fedora RPMs > Potentially Unsafe Flathub. Fedora Flatpaks take priority, but this won’t hurt anything because only applications shipped by default will be available, and those will be the ones that receive the most testing. This is not a desirable end state because it is complicated and it will be confusing to explain to users why a certain software source was preferred. But in the long run, when Fedora RPMs are eventually removed, it will simplify to Filtered Fedora Flatpaks > Flathub, which is elegant.
  2. A simple option, the same thing but without the conditional prioritization: Filtered Fedora Flatpaks > Flathub > Fedora RPMs.
  3. Alternative option: Probably Safe Flathub > Fedora RPMs > Potentially Unsafe Flathub > Unfiltered Fedora Flatpaks. When Fedora RPMs are eventually removed, this will simplify to Flathub > Unfiltered Fedora Flatpaks. This alternative option behaves almost the same as the above, except it allows users to manually select the Fedora Flatpak if they wish, rather than filtering them out. But there is a significant disadvantage: if you uninstall an application that is installed by default, then reinstall the application, it would come from Flathub rather than Fedora Flatpaks, which is unexpected. So we’ll probably want to hardcode exceptions for default apps to prefer Fedora Flatpaks.
  4. The corresponding simple option without conditional prioritization: Flathub > Fedora RPMs > Unfiltered Fedora Flatpaks.

Any of these options would be fine.
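For illustration only, here is a tiny Python sketch of how the precedence in my favorite option could be expressed as a ranking. The source names and the safety distinction are invented stand-ins, not GNOME Software’s real metadata or behaviour.

# Option 1's precedence, expressed as a lookup table of made-up source names.
PRIORITY = {
    "fedora-flatpak-default": 0,      # Filtered Fedora Flatpaks (shipped by default)
    "flathub-probably-safe": 1,       # Flathub apps rated Probably Safe
    "fedora-rpm": 2,
    "flathub-potentially-unsafe": 3,
}

def pick_source(candidates):
    """Return the highest-priority source offering the app."""
    return min(candidates, key=lambda s: PRIORITY.get(s, len(PRIORITY)))

print(pick_source(["flathub-potentially-unsafe", "fedora-rpm"]))  # -> fedora-rpm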

Conclusion

Flathub is, frankly, not safe enough to be enabled by default in Fedora Workstation today. But these problems are fixable. Helping Flathub become more trustworthy will be far easier than competing against it by maintaining thousands of Fedora Flatpaks. Enabling Flathub by default should be a strategic priority for Fedora Workstation.

I anticipate a lively debate on social media, on Matrix, and in the comments. And I am especially eager to see whether the Fedora and Flathub communities accept my arguments as persuasive. FESCo will be considering the Filter Fedora Flatpaks for Atomic Desktops proposal imminently, so the first test is soon.

Alley Chaggar

@AlleyChaggar

YAML Research

Intro

Hi everyone, sorry for the late post. Midterms are this week for GSoC, which means I’m halfway through GSoC. It’s been an incredible experience so far, and I know it’s going to continue to be great.

API vs. ABI

What is the difference between an application programming interface (API) and an application binary interface (ABI)? In the beginning, this question tripped me out and confused me, because I wasn’t familiar with ABIs. Understanding what an ABI is has helped me decide which libraries I should consider using in the codegen phase. Vala, in particular, is designed to use the C ABI. First, let’s understand what an API and an ABI are separately, and then compare them.

API

Personally, I think APIs are more widely known and understood than ABIs. At a high level, an API is usually defined as a set of definitions and protocols that two software components or computers use to communicate with each other. I always thought this definition was pretty vague and expansive. When dealing with code-level APIs, I like to think of an API as the entities that already exist in your source code: functions, constants, structures, and so on. In other words, when you write code, you access libraries through an API. For example, when you write print('hello world') in Python, print() is part of Python’s standard library API.

ABI

An ABI, on the other hand, is similar, but instead of applying at compile time, it comes into play at runtime. Runtime is when your program is done compiling (after the lexical, syntax, and semantic analysis, etc.) and the machine is actually running your executable. An ABI is very low-level: it specifies how compiled code should interact, particularly in the context of operating systems and libraries. It covers protocols and standards for how the OS handles your program (storage, memory, hardware) and for how your compiled binary works with other compiled components.
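As a small, hedged illustration of what “having a C ABI” buys you in practice, here is a Python sketch that loads libyaml at runtime through ctypes and calls its yaml_get_version() function, relying only on the compiled library’s binary interface (symbol name, calling convention, argument types). It assumes libyaml is installed on the system and is not part of any Vala or GNOME code.

import ctypes
import ctypes.util

path = ctypes.util.find_library("yaml")
if path is None:
    raise RuntimeError("libyaml not found on this system")
libyaml = ctypes.CDLL(path)

# yaml_get_version(int *major, int *minor, int *patch) is part of libyaml's C API.
major = ctypes.c_int()
minor = ctypes.c_int()
patch = ctypes.c_int()
libyaml.yaml_get_version(ctypes.byref(major), ctypes.byref(minor), ctypes.byref(patch))
print(f"libyaml {major.value}.{minor.value}.{patch.value}")

No libyaml source code or headers are involved here; the call works purely because the binary interface (the ABI) is stable and known.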

YAML Libraries

I’ve started to look into YAML and XML (mainly YAML). I’ve looked into many different libraries dealing with YAML, such as pluie-yaml, libyaml-glib, glib-yaml, and libyaml. From my research, there are no well-maintained YAML libraries that integrate GObject or GLib. The goal is to find a well-maintained library that I can use in the codegen.

Pluie yaml

I mentioned pluie-yaml, but this library isn’t a C library like json-glib; it’s a shared Vala library. The good thing is that the codegen can use pure Vala libraries, because Vala libraries have a C ABI. The bad part is that this library is not well-maintained: the last activity was 7 years ago.

Libyaml glib

Libyaml-glib is a GLib binding of libyaml, plus a GObject builder that understands YAML. Just like pluie-yaml, it’s not a C library; it’s written in Vala. And just like pluie-yaml, it’s not well-maintained, with the last activity stretching back even further, to 9 years ago.

Glib yaml

Glib-yaml is a GLib-based YAML parser written in C. Just like the other libraries, it doesn’t pass the maintenance check: there have been no updates or commits in the repo for 13 years. It’s also only a parser; it doesn’t serialize or emit YAML, so even if it were well-maintained, I’d still need to emit YAML manually or find another library that does so.

Libyaml

In conclusion, libyaml is the C library that I will be using for parsing and emitting YAML. It has a C ABI, and it’s the best-maintained of all the libraries I looked at. Vala already has a VAPI file binding it, yaml-0.1.vapi. Unlike json-glib, it has no GObject or GLib integration, but that should be fine.

Status update, 15/07/2025

This month has involved very little programming and a huge amount of writing.

I am accidentally writing a long-form novel about openQA testing. It’s up to 1000 words already and we’re still on the basics.

The idea was to prepare for my talk at the GNOME conference this year called “Let’s build an openQA testsuite, from scratch”, by writing a tutorial that everyone can follow along at home. My goal for the talk is to share everything I’ve learned about automated GUI testing in the 4 years since we started the GNOME openqa-tests project. There’s a lot to share.

I don’t have any time to work on the tests myself — nobody seems interested in giving me paid time to work on them, it’s not exactly a fun weekend project, and my weekends are busy anyway — so my hope is that sharing knowledge will keep at least some momentum around automated GUI testing. Since we don’t yet seem to have mastered writing apps without bugs :-)

I did a few talks about openQA over the years, always at a high level. “This is how it looks in a web browser”, and so on. Check out “The best testing tools we’ve ever had: an introduction to OpenQA for GNOME” from GUADEC 2023, for example. I told you why openQA is interesting but I didn’t have time to talk about how to use it.

Me trying to convince you to use openQA in 2023

So this time I will be taking the opposite approach. I’m not going to spend time discussing whether you might use it or not. We’re just going to jump straight in with a minimal Linux system and start testing the hell out of it. Hopefully we’ll have time to jump from there to GNOME OS and write a test for Nautilus as well. I’m just going to live demo everything, and everyone in the talk can follow along with their laptops in real time.

Anyway, I’ve done enough talks to know that this can’t possibly go completely according to plan. So the tutorial is the backup plan, which you can follow along before or after or during the talk. You can even watch Emmanuele’s talk “Getting Things Done In GNOME” instead, and still learn everything I have to teach, in your own time.

Tutorials need to make a comeback! As a youth in the 90s, trying to make my own videogames because I didn’t have any, I loved tutorials like Denthor’s Tutorial on VGA Programming and Pete’s QBasic Site and so on. Way back in those dark ages, I even wrote a tutorial about fonts in QBasic. (Don’t judge me… wait, you judged me a while back already, didn’t you).

Anyway, what I forgot, since those days, is that writing a tutorial takes fucking ages!

Victor Ma

@victorma

My first design doc

In the last two weeks, I investigated some bugs, tested some fonts, and started working on a design doc.

Bugs

I found two more UI-related bugs (1, 2). These are in addition to the ones I mentioned in my last blog post—and they’re all related. They have to do with GTK and sidebars and resizing.

I looked into them briefly, but in the end, my mentor decided that the bugs are complicated enough that he should handle them himself. His fix was to replace all the .ui files with Blueprint files, and then make changes from there to squash all the bugs. The port to Blueprint also makes it much easier to edit the UI in the future.

Font testing

Currently, GNOME Crosswords uses the default GNOME font, Cantarell. But we’ve never really explored the possibility of using other fonts. For example, what would Crosswords look like with a monospace font? Or with a handwriting font? This is what I set out to discover.

To change the font, I used GTK Inspector, combined with this CSS selector, which targets the grid and word suggestions list:

edit-grid, wordlist {
  font-family: FONT;
}

This let me dynamically change the font, without having to recompile each time. I created a document with all the fonts that I tried.

Here’s what Source Code Pro, a monospace font, looks like. It gives a more rigid look—especially for the word suggestion list, where all the letters line up vertically.

Monospace font

And here’s what Annie Use Your Telescope, a handwriting font, looks like. It gives a fun, charming look to the crossword grid—like it’s been filled out by hand. It’s a bit too unconventional to use as the default font, but it would definitely be cool to add as an option that the user can enable.

Handwriting font

Design doc

My current task is to improve the word suggestion algorithm for the Crosswords Editor. Last week, I started working on a design doc that explains my intended change. Here’s a short snippet from the doc, which highlights the problem with our current word suggestion algorithm:

Consider the following grid:

+---+---+---+---+
|   |   |   | Z |
+---+---+---+---+
|   |   |   | E |
+---+---+---+---+
|   |   |   | R |
+---+---+---+---+
| W | O | R |   | < current slot
+---+---+---+---+

The 4-Down slot begins with ZER, so the only word it can be is ZERO. This means that the cell in the bottom-right corner must be the letter O.

But 4-Across starts with WOR. And WORO is not a word. So the bottom-right corner cannot actually be the letter O. This means that the slot is unfillable.

If the cursor is on the bottom right cell, then our word suggestion algorithm correctly recognizes that the slot is unfillable and returns an empty list.

But suppose the cursor is on one of the other cells in 4-Across. Then, the algorithm has no idea about 4-Down and the constraint it imposes. So, the algorithm returns all words that match the filter WOR?, like WORD and WORM—even though they do not actually fit the slot.
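Here is a small, hypothetical Python sketch of the kind of cross-checking an improved algorithm needs to do; the word list and slot layout are made up for illustration and this is not the Crosswords code itself.

# A made-up word list, just to illustrate the cross-check from the example above.
WORDS = {"WORD", "WORM", "WORK", "ZERO"}

def fits(pattern, word):
    """True if word matches a pattern like 'WOR?' ('?' matches any letter)."""
    return len(word) == len(pattern) and all(
        p in ("?", c) for p, c in zip(pattern, word))

def suggestions(across_pattern, down_pattern, cross_index):
    """Suggest words for the across slot whose crossing letter still leaves the
    down slot fillable (the crossing cell is the last cell of both slots here)."""
    result = []
    for word in sorted(WORDS):
        if not fits(across_pattern, word):
            continue
        down = down_pattern[:-1] + word[cross_index]
        if any(fits(down, w) for w in WORDS):
            result.append(word)
    return result

print(suggestions("WOR?", "ZER?", 3))  # -> [] because 'WORO' is not a word

With only the naive filter, WORD and WORM would have been suggested even though neither actually fits the grid.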

CSPs

In the process of writing the doc, I came across the concept of a constraint satisfaction problem (CSP), and the related AC-3 algorithm. A CSP is a formalization of a problem that…well…involves satisfying a constraint. And the AC-3 algorithm is an algorithm that’s sometimes used when solving CSPs.

The problem of filling a crossword grid can be formulated as a CSP. And we can use the AC-3 algorithm to generate perfect word suggestion lists for every cell.
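To give a flavour of the idea, here is a minimal, generic AC-3 sketch in Python, applied to the 4-Across/4-Down example from above. It generalizes the cross-check in the earlier sketch; it is an illustration of the textbook algorithm, not the approach or code used in Crosswords.

from collections import deque

def ac3(domains, constraints):
    """domains: slot -> set of candidate words.
    constraints: (x, y) -> predicate(word_x, word_y) for compatible crossings."""
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        compatible = constraints[(x, y)]
        removed = {wx for wx in domains[x]
                   if not any(compatible(wx, wy) for wy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False  # some slot became unfillable
            # re-examine arcs that point at x
            queue.extend((z, w) for (z, w) in constraints if w == x and z != y)
    return True

# 4-Across and 4-Down from the example, crossing at their last cells.
domains = {"4A": {"WORD", "WORM", "WORK"}, "4D": {"ZERO"}}
crossing = lambda a, d: a[3] == d[3]
constraints = {("4A", "4D"): crossing, ("4D", "4A"): lambda d, a: crossing(a, d)}
print(ac3(domains, constraints), domains)  # -> False {'4A': set(), '4D': {'ZERO'}}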

This isn’t the approach I will be taking. However, we may decide to implement it in the future. So, I documented the AC-3 approach in my design doc.

Toluwaleke Ogundipe

@toluwalekeog

Profiling Crosswords’ Rendering Pipeline

For the sake of formality, if you’re yet to read the [brief] introduction of my GSoC project, here you go.

Rendering Puzzles in GNOME Crosswords

GNOME Crosswords currently renders puzzles in two layers. The first is a grid of what we call the layout items: the grid cells, borders between cells and around the grid, and the intersections between borders. This is as illustrated below:

Layout items

The game and editor implement this item grid using a set of custom Gtk widgets: PlayCell for the cells and PlayBorder for the borders and intersections, all contained and aligned in a grid layout within a PlayGrid widget. For instance, the simple.ipuz puzzle, with just its item grid, looks like this:

Rendered puzzle with layout item grid only

Then it renders another layer, of what we call the layout overlays, above the grid. Overlays are elements of a puzzle which do not exactly fit into the grid, such as barred borders, enumerations (for word breaks, hyphens, etc), cell dividers and arrows (for arrowword puzzles), amongst others. The fact that overlays do not fit into the grid layout makes it practically impossible to render them using widgets. Hence, the need for another layer, and the term “overlay”.

Overlays are currently implemented in the game and editor by generating an SVG and rendering it onto a GtkSnapshot of the PlayGrid using librsvg. The overlays for the same puzzle, the item grid of which is shown above, look like this:

Rendered layout overlays

When laid over the item grid, they together look like this:

Rendered puzzle with layout item grid and overlays

All these elements (items and overlays) and their various properties and styles are stored in a GridLayout instance, which encapsulates the appearance of a puzzle in a given state. Instances of this class can then be rendered into widgets, SVG, or any other kind of output.

The project’s main source includes svg.c, a source file containing code to generate an SVG string from a GridLayout. It provides a function to render overlays only, another for the entire layout, and a function to create an RsvgHandle from the generated SVG string.

Crosswords, the game, uses the SVG code to display thumbnails of puzzles (though currently only for the Cats and Dogs puzzle set), and Crossword Editor displays thumbnails of puzzle templates in its greeter for users to create new puzzles from.

Crosswords’ puzzle picker grid
Crossword Editor’s greeter

Other than the game and editor, puzzles are also rendered by the crosswords-thumbnailer utility. This renders an entire puzzle layout by generating an SVG string containing both the layout items and overlays, and rendering/writing it to a PNG or SVG file. There is also my GSoC project, which ultimately aims to add support for printing puzzles in the game and editor.

The Problem

Whenever a sizeable, yet practical number of puzzles are to be displayed at the same time, or in quick succession, there is a noticeable lag in the user interface resulting from the rendering of thumbnails. In other words, thumbnails take too long to render!

There are also ongoing efforts to add thumbnails to the list view in the game, for every puzzle, but the current rendering facility just can’t cut it. As for printing, this doesn’t really affect it since it’s not particularly a performance-critical operation.

The Task

My task was to profile the puzzle rendering pipeline to determine what the bottleneck(s) was/were. We would later use this information to determine the way forward, whether it be optimising the slow stages of the pipeline, replacing them or eliminating them altogether.

The Rendering Pipeline

The following is an outline of the main/top-level stages involved in rendering a puzzle from an IPUZ file to an image (say, an in-memory image buffer):

  1. ipuz_puzzle_from_file(): parses a file in the IPUZ format and returns an IpuzPuzzle object representing the puzzle.
  2. grid_state_new(): creates a GridState instance which represents the state of a puzzle grid at a particular instant, and contains a reference to the IpuzPuzzle object. This class is at the core of Crosswords and is pretty quick to instantiate.
  3. grid_layout_new(): creates a GridLayout instance (as earlier described) from the GridState.
  4. svg_from_layout(): generates an SVG string from the GridLayout.
  5. svg_handle_from_string(): creates an RsvgHandle (from librsvg) from the generated SVG string and sets a stylesheet on the handle, defining the colours of layout items and overlays.
  6. rsvg_handle_render_document(): renders the generated SVG onto a Cairo surface (say, an image surface, which is essentially an image buffer), via a Cairo context.

Profiling

To profile the rendering pipeline specifically, I wrote up a little program to fit my purposes, which can be found here (puzzle-render-profiler.c).

The Attempt with Sysprof

Initially, I used Sysprof, executing the profiler program under it. Unfortunately, because Sysprof is a sampling profiler, and probably also due to its system-wide nature, the results weren’t satisfactory. Also, the functions of interest aren’t long-running functions, and each runs only once per execution. So the results weren’t accurate enough and were somewhat incomplete (they missed many nested calls).

Don’t get me wrong, Sysprof has its strengths and stands strong amongst profilers of its kind. I tried a couple of others, and Sysprof is the only one even worthy of mention here. Most importantly, I’m glad I got to use and explore Sysprof. The next time I use it won’t be my first!

Using Callgrind

Callgrind + QCachegrind is sincerely such a godsend!

Callgrind is a profiling tool that records the call history among functions in a program’s run as a call-graph. By default, the collected data consists of the number of instructions executed, their relationship to source lines, the caller/callee relationship between functions, and the numbers of such calls.

QCachegrind is a GUI to visualise profiling data. It’s mainly used as a visualisation frontend for data measured by Cachegrind/Callgrind tools from the Valgrind package.

After a short while of reading through Callgrind’s manual, I pieced together the combination of options (not much at all) that I needed for the task, and the rest is a story of exploration, learning and excitement. With the following command line, I was all set.

valgrind --tool=callgrind --toggle-collect=render_puzzle ./profiler puzzles/puzzle-sets/cats-and-dogs/doghouse.ipuz

where:

  • valgrind is the Valgrind executable.
  • --tool=callgrind selects the Callgrind tool to be run.
  • --toggle-collect=render_puzzle sets render_puzzle, the core function in the profiler program, as the target for Callgrind’s data collection.
  • ./profiler is a symlink to the executable of the profiler program.
  • puzzles/puzzle-sets/cats-and-dogs/doghouse.ipuz is a considerably large puzzle.

I visualised Callgrind’s output in QCachegrind, which is pretty intuitive to use. My focus is the Call Graph feature, which helps to graphically and interactively analyse the profile data. The graph can also be exported as an image or DOT (a graph description format in the Graphviz language) file. The following is a top-level view of the result of the profile run.

Top-level profile of the rendering pipeline

Note that Callgrind measures the number of instructions executed, not exactly execution time, but typically the former translates proportionally to the latter. The percentages shown in the graph are instruction ratios, i.e. the ratio of instructions executed within each function (and its callees) to the total number of instructions executed (within the portions of the program where data was collected).

This graph shows that loading the generated SVG (svg_handle_from_string) takes up the highest percentage of time, followed by rendering the SVG (rsvg_handle_render_document). Note that the SVG is simply being rendered to an image buffer, so no encoding, compression, or IO is taking place. The call shown with a hex number instead of a name simply calls g_object_unref; under it, dropping the rsvg::document::Document (owned by the RsvgHandle) takes the highest percentage. Probing further into svg_handle_from_string and rsvg_handle_render_document:

Profile of svg_handle_from_string
Profile of rsvg_handle_render_document

The performance of rsvg_handle_render_document improves very slightly after its first call within a program due to some one-time initialisation that occurs in the first call, but it is almost insignificant. From these results, it can be seen that the root of the problem is beyond the scope of Crosswords, as it is something to fix in librsvg. My mentors and I thought it would be quicker and easier to replace or eliminate these stages, but before making a final decision, we needed to see real timings first. For anyone who cares to see a colourful, humongous call graph, be my guest:

Full (sort of) profile of the rendering pipeline

Getting Real Timings

The next step was to get real timings for puzzles of various kinds and sizes to make sense of Callgrind’s percentages, determine the relationship between puzzle sizes and render times, and know the maximum amount of time we can save by replacing or eliminating the SVG stages of the pipeline.

To achieve this, I ran the profiler program over a considerably large collection of puzzles available to the GNOME Crosswords project and piped the output (it’s in CSV format) to a file, the sorted version of which can be found here (puzzle_render_profile-sorted_by_path.csv). Then, I wrote a Python script (visualize_results.py), which can also be found at the same URL, to plot a couple of graphs from the results.
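For a rough idea of what such a plotting script can look like, here is a short Python sketch; the real visualize_results.py is linked above, and the CSV column names used below are assumptions rather than the project’s actual ones.

import csv
import matplotlib.pyplot as plt

sizes, times = [], []
with open("puzzle_render_profile-sorted_by_path.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Column names here are assumptions for the sake of the sketch.
        sizes.append(int(row["width"]) * int(row["height"]))
        times.append(float(row["total_time_ms"]))

order = sorted(range(len(sizes)), key=sizes.__getitem__)
plt.plot([sizes[i] for i in order], [times[i] for i in order], marker=".")
plt.xlabel("Puzzle size (width × height)")
plt.ylabel("Total render time (ms)")
plt.title("Render time vs puzzle size")
plt.show()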

Time vs Puzzle size (width * height) (lesser is better)
Time vs Number of layout elements (items + overlays) (lesser is better)

Both show a very similar profile from which I made the following deductions:

  • Total render times increase almost linearly with puzzle size, and so do the component stages of the rendering pipeline. The exceptions (those steep valleys in the graphs) are puzzles with significant amounts of NULL cells, the corresponding layout items of which are omitted during SVG generation since NULL cells are not to be rendered.
  • The stages of loading the generated SVG and rendering it take most of the time, significantly more than the other stages, just as the Callgrind graphs above show.

Below is a stacked bar chart showing the cumulative render times for all puzzles. Note that these timings are only valid relative to one another; comparison with timings from another machine or even the same machine under different conditions would be invalid.

Cumulative render times for all puzzles (lesser is better)

The Way Forward

Thankfully, the maintainer of librsvg, Federico Mena Quintero, happens to be one of my mentors. He looked into the profiling results and ultimately recommended that we cut out the SVG stages (svg_from_layout, svg_handle_from_string and rsvg_handle_render_document) entirely and render using Cairo directly. For context, librsvg renders SVGs using Cairo. He also pointed out some specific sources of the librsvg bottlenecks and intends to fix them. The initial work in librsvg is in !1178 and the MR linked from there.

This is actually no small feat, but is already in the works (actually more complete than the SVG rendering pipeline is, at the point of writing this) and is showing great promise! I’ll be writing about this soon, but here’s a little sneak peek to keep your taste buds wet.

Comparison of the new and old render pipeline timings with puzzles of varying sizes (lesser is better)

I’ll leave you to figure it out in the meantime.

Conclusion

Every aspect of the task was such a great learning experience. This happened to be my first time profiling C code and using tools like Valgrind and Sysprof; the majority of C code I had written in the past was for bare-metal embedded systems. Now, I’ve got these under my belt, and nothing is taking them away.

That said, I will greatly appreciate your comments, tips, questions, and what have you; be it about profiling, or anything else discussed herein, or even blog writing (this is only my second time ever, so I need all the help I can get).

Finally, but by far not the least important, a wise man once told me:

The good thing about profiling is that it can overturn your expectations :-)

H.P. Jansson

Thanks

Very big thank yous to my mentors, Jonathan Blandford and Federico Mena Quintero, for their guidance all through the accomplishment of this task and my internship in general. I’ve learnt a great deal from them so far.

Also, thank you (yes, you reading) so much for your time. Till next time… Anticipate!!!

Mid-July News

Misc news about the gedit text editor, mid-July edition! (Some sections are a bit technical).

gedit 49 in preparation

About version numbers: if all goes well, gedit 49 will match GNOME 49, and both are expected in September! So gedit will be back on track to follow the GNOME version numbers. This comes after some attempts to not follow them, to release gedit versions when ready and more often, but at the end of the day this doesn't change much.

Gedit Development Guidelines

I've written the Gedit Development Guidelines, a series of small Markdown documents. They gather the information in one place. They also document things that previously had to be explained in commit messages; to avoid repetition and to allow more complete explanations, commit messages can now refer to the guideline documents.

Be warned that some of the gedit guidelines don't follow the more modern GNOME development practices. The rationales are clearly explained. In my humble opinion, what is deemed "modern" doesn't automatically make something better. (If you read this and are a developer of one of the modern things not used by gedit, take it as user feedback, and remember that what truly matters is that the GTK development platform is still appreciated!).

Improved preferences

gedit screenshot - reset all preferences

gedit screenshot - spell-checker preferences

There is now a "Reset All..." button in the Preferences dialog. And it is now possible to configure the default language used by the spell-checker.

Python plugins

Most Linux distributions ship some older versions of some GNOME modules to keep Python plugins in gedit (and other apps) working. There have been some disturbances in the air, but my hope is that everything will be back to normal for GNOME 49.

File loading and saving

There are some known issues with the file loading implementation, and I'm working on it.

About the old iconv() function: I've created a utility class called GtkSourceIconv to (try to) tame the beast, except that the beast cannot be fully tamed, by design! Some aspects of its API are broken, and what I recommend for new code is to look at libICU to see if it's better. (gtksourceiconv.c has a comment at the top with more details).

But instead of making ground-breaking changes in the file loading and saving code (to switch to libICU), I prefer a more incremental approach, and this means still using iconv() (for the time being, at least). The advantage is that incremental changes are directly useful and used; progress is made on gedit!

A while back I attempted a new implementation in libgedit-tepl (TeplEncoding, TeplFile, etc.). The problem is that I created a different API, so switching the whole of gedit to it was more difficult. The new implementation also lacked several important features used by gedit.

Now I'm moving some of my experiments done in libgedit-tepl back to libgedit-gtksourceview, because it's the latter (GtkSourceEncoding, GtkSourceFile, etc) that is used by gedit's file loading and saving machinery. libgedit-gtksourceview doesn't guarantee a stable API, and I consider it part of gedit, so it's more malleable code :-)

Blender HDR and the reference white issue

The latest alpha of the upcoming Blender 5.0 release comes with High Dynamic Range (HDR) support for Linux on Wayland which will, if everything works out, make it into the final Blender 5.0 release on October 1, 2025. The post on the developer forum comes with instructions on how to enable the experimental support and how to test it.

If you are using Fedora Workstation 42, which ships GNOME version 48, everything is already included to run Blender with HDR. All that is required is an HDR compatible display and graphics driver, and turning on HDR in the Display Settings.

It has taken a lot of personal blood, sweat and tears, paid for by Red Hat, across the Linux graphics stack over the last few years to enable applications like Blender to add HDR support. This ranged from kernel work, like helping to get the HDR mode working on Intel laptops and improving the Colorspace and HDR_OUTPUT_METADATA KMS properties, to creating a new library for EDID and DisplayID parsing, and helping with wiring things up in Vulkan.

I designed the active color management paradigm for Wayland compositors, figured out how to properly support HDR, created two wayland protocols to let clients and compositors communicate the necessary information for active color management, and created documentation around all things color in FOSS graphics. This would have also been impossible without Pekka Paalanen from Collabora and all the other people I can’t possibly list exhaustively.

For GNOME I implemented the new API design in mutter (the GNOME Shell compositor), and helped my colleagues to support HDR in GTK.

Now that everything is shipping, applications are starting to make use of the new functionality. To see why Blender targeted Linux on Wayland, we will dig a bit into some of the details of HDR!

HDR, Vulkan and the reference white level

Blender’s HDR implementation relies on Vulkan’s VkColorSpaceKHR, which allows applications to specify the color space of their swap chain, enabling proper HDR rendering pipeline integration. The key color space in question is VK_COLOR_SPACE_HDR10_ST2084_EXT, which corresponds to the HDR10 standard using the ST.2084 (PQ) transfer function.

However, there’s a critical challenge with this Vulkan color space definition: it has an undefined reference white level.

Reference white indicates the luminance or signal level at which a diffuse white object (such as a sheet of paper, or the white parts of a UI) appears in an image. If images with different reference white levels end up at different signal levels in a composited image, the result is that “white” in one of the images is still perceived as white, while the “white” from the other image is now perceived as gray. If you have ever scrolled through Instagram on an iPhone or played an HDR game on Windows, you will probably have noticed this effect.

The solution to this issue is called anchoring. The reference white levels of all images need to be normalized so that “white” ends up at the same signal level in the composited image.

Another issue with the reference white level, specific to PQ, is the prevalent myth that the absolute luminance of a PQ signal must be replicated on the actual display the user is viewing the content on. PQ is a bit of a weird transfer characteristic because any given signal level corresponds to an absolute luminance with the unit cd/m² (also known as nit). However, the absolute luminance is only meaningful for the reference viewing environment! If an image is being viewed in the reference viewing environment of ITU-R BT.2100 (essentially a dark room) and the image signal of 203 nits is being shown at 203 nits on the display, it makes the image appear as the artist intended. The same is not true when the same image is being viewed on a phone with the summer sun blasting on the screen from behind.
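To make the luminance-to-signal mapping concrete, here is a small Python sketch of the ST.2084 (PQ) inverse EOTF. The constants come from the standard; the script itself is only an illustration, not anything Blender, Vulkan, or the compositor ships.

# SMPTE ST.2084 (PQ) inverse EOTF: absolute luminance in cd/m² -> signal in [0, 1].
m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_encode(luminance):
    """Encode an absolute luminance (cd/m²) as a PQ signal level in [0, 1]."""
    y = max(luminance, 0.0) / 10000.0
    return ((c1 + c2 * y ** m1) / (1 + c3 * y ** m1)) ** m2

# The ITU-R BT.2408 reference white of 203 cd/m² sits at roughly 58% signal:
print(round(pq_encode(203), 3))    # ~0.58
print(round(pq_encode(10000), 3))  # 1.0, the PQ peak of 10,000 cd/m²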

PQ is no different from other transfer characteristics in that the reference white level needs to be anchored, and that the anchoring point does not have to correspond to the luminance values that the image encodes.

Coming back to the definition of the Vulkan color space VK_COLOR_SPACE_HDR10_ST2084_EXT: “HDR10 (BT2020) color space, encoded according to SMPTE ST2084 Perceptual Quantizer (PQ) specification”. Neither ITU-R BT.2020 (primary chromaticities) nor ST.2084 (transfer characteristics), nor the closely related ITU-R BT.2100, defines the reference white level. In practice, the reference level of 203 cd/m² from ITU-R BT.2408 (“Suggested guidance for operational practices in high dynamic range television production”) is used. Notably, however, this is not specified in the Vulkan definition of VK_COLOR_SPACE_HDR10_ST2084_EXT.

The consequences of this? On almost all platforms, VK_COLOR_SPACE_HDR10_ST2084_EXT implicitly means that the image the application submits to the presentation engine (what we call the compositor in the Linux world) is assumed to have a reference white level of 203 cd/m², and the presentation engine adjusts the signal in such a way that the reference white level of the composited image ends up at a signal value that is appropriate for the actual viewing environment of the user. On GNOME, the way to control this is currently the “HDR Brightness” slider in the Display Settings, but it will become the regular screen brightness slider in the Quick Settings menu.

On Windows, the misunderstanding that a PQ signal value must be replicated one-to-one on the actual display has been immortalized in the APIs. It was only when support for HDR was added to laptops that this decision was revisited, but changing the previous APIs was already impossible at that point. Their solution was to expose the reference white level in the Win32 API and task applications with continuously querying the level and adjusting the image to match it. Few applications actually do this, with most games providing a built-in slider instead.

The reference white level of VK_COLOR_SPACE_HDR10_ST2084_EXT on Windows is essentially a continuously changing value that needs to be queried from Windows APIs outside of Vulkan.

This has two implications:

  • It is impossible to write a cross-platform HDR application in Vulkan (if Windows is one of the targets)
  • On Wayland, a “standard” HDR signal can just be passed on to the compositor, while on Windows, more work is required

While the cross-platform issue is solvable, and something we’re working on, the way Windows works also means that the cross-platform API might become harder to use because we cannot change the underlying Windows mechanisms.

No Windows Support

The result is that Blender currently does not support HDR on Windows.

Jeroen-Bakker explaining the lack of Windows support

The design of the Wayland color-management protocol, and the resulting active color-management paradigm of Wayland compositors, was a good choice, making it easy for developers to do the right thing, while also giving them more control if they so choose.

Looking forward

We have managed to transition the compositor model from a dumb blitter to a component which takes an active part in color management, we have image viewers and video players with HDR support and now we have tools for producing HDR content! While it is extremely exciting for me that we have managed to do this properly, we also have a lot of work ahead of us, some of which I will hopefully tell you about in a future blog post!

Philip Withnall

@pwithnall

GUADEC handbook

I was reminded today that I put together some notes last year with people’s feedback about what worked well at the last GUADEC. The idea was that this could be built on, and eventually become another part of the GNOME handbook, so that we have a good checklist to organise events from each year.

I’m not organising GUADEC, so this is about as far as I can push this proto-handbook, but if anyone wants to fork it and build on it then please feel free.

Most of the notes so far relate to A/V things, remote participation, and some climate considerations. Obviously a lot more about on-the-ground organisation would have to be added to make it a full handbook, but it’s a start.

Digital Wellbeing Contract

This month I have been accepted as a contractor to work on the Parental Controls frontend and integration as part of the Digital Wellbeing project. I’m very happy to take part in this cool endeavour, and very grateful to the GNOME Foundation for giving me this opportunity – special thanks to Steven Deobald and Allan Day for interviewing me and helping me connect with the team, despite our timezone compatibility 😉

The idea is to redesign the Parental Controls app UI to bring it on par with modern GNOME apps, and to integrate parental controls into the GNOME Shell lock screen, in collaboration with gnome-control-center. There are also new features to be added, such as Screen Time monitoring and limits, Bedtime Schedule, and Web Filtering support. The project has been going for quite some time, and a lot of great work has been put into both the designs by Sam Hewitt and the backend by Philip Withnall, who’s been really helpful teaching me about the project’s code practices and reviewing my MR. See the designs in the app-mockups ticket and the os-mockups ticket.

We started implementing the design mockup MVP for Parental Controls, which you can find in the app-mockups ticket. We’re trying to meet the GNOME 49 release deadlines, but as always that’s a goal rather than a certain milestone. So far we have finished the redesign of the current Parental Controls app without adding any new features: we refreshed the UI for the unlock page, reworked the user selector to be a list rather than a carousel, and changed the navigation to use pages. This will be followed by adding pages for Screen Time and Web Filtering.

Refreshed unlock page
Reworked user selector
Navigation using pages, Screen Time and Web Filtering to be added

I want to thank the team for helping me get on board and being generally just awesome to work with 😀 Until next update!

Andy Wingo

@wingo

guile lab notebook: on the move!

Hey, a quick update, then a little story. The big news is that I got Guile wired to a moving garbage collector!

Specifically, this is the mostly-moving collector with conservative stack scanning. Most collections will be marked in place. When the collector wants to compact, it will scan ambiguous roots in the beginning of the collection cycle, marking objects referenced by such roots in place. Then the collector will select some blocks for evacuation, and when visiting an object in those blocks, it will try to copy the object to one of the evacuation target blocks that are held in reserve. If the collector runs out of space in the evacuation reserve, it falls back to marking in place.

Given that the collector has to cope with failed evacuations, it is easy to give the it the ability to pin any object in place. This proved useful when making the needed modifications to Guile: for example, when we copy a stack slice containing ambiguous references to a heap-allocated continuation, we eagerly traverse that stack to pin the referents of those ambiguous edges. Also, whenever the address of an object is taken and exposed to Scheme, we pin that object. This happens frequently for identity hashes (hashq).

Anyway, the bulk of the work here was a pile of refactors to Guile to allow a centralized scm_trace_object function to be written, exposing some object representation details to the internal object-tracing function definition while not exposing them to the user in the form of API or ABI.

bugs

I found quite a few bugs. Not many of them were in Whippet, but some were, and a few are still there; Guile exercises a GC more than my test workbench is able to. Today I’d like to write about a funny one that I haven’t fixed yet.

So, small objects in this garbage collector are managed by a Nofl space. During a collection, each pointer-containing reachable object is traced by a global user-supplied tracing procedure. That tracing procedure should call a collector-supplied inline function on each of the object’s fields. Obviously the procedure needs a way to distinguish between different kinds of objects, to trace them appropriately; in Guile, we use the low bits of the initial word of heap objects for this purpose.

Object marks are stored in a side table in associated 4-MB aligned slabs, with one mark byte per granule (16 bytes). 4 MB is 0x400000, so for an object at address A, its slab base is at A & ~0x3fffff, and the mark byte is offset by (A & 0x3fffff) >> 4. When the tracer sees an edge into a block scheduled for evacuation, it first checks the mark byte to see if it’s already marked in place; in that case there’s nothing to do. Otherwise it will try to evacuate the object, which proceeds as follows...
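(Before getting to the evacuation itself, here is a quick illustration of that mark-byte arithmetic as a Python sketch; the real collector does this in C, and the example address is made up.)

# 4 MB aligned slabs, one mark byte per 16-byte granule.
SLAB_SIZE = 0x400000   # 4 MB
GRANULE = 16

def mark_byte_location(addr):
    slab_base = addr & ~(SLAB_SIZE - 1)           # A & ~0x3fffff
    offset = (addr & (SLAB_SIZE - 1)) // GRANULE  # (A & 0x3fffff) >> 4
    return slab_base, offset

base, off = mark_byte_location(0x7f32440987a0)
print(hex(base), hex(off))  # the slab holding the object, and its mark byte index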

But before you read, consider that there are a number of threads which all try to make progress on the worklist of outstanding objects needing tracing (the grey objects). The mutator threads are paused; though we will probably add concurrent tracing at some point, we are unlikely to implement concurrent evacuation. But it could be that two GC threads try to process two different edges to the same evacuatable object at the same time, and we need to do so correctly!

With that caveat out of the way, the implementation is here. The user has to supply an annoyingly-large state machine to manage the storage for the forwarding word; Guile’s is here. Basically, a thread will try to claim the object by swapping in a busy value (-1) for the initial word. If the claim succeeded, it will try to allocate space for the object. If that allocation failed, it first marks the object in place, then restores the first word. Otherwise it installs a forwarding pointer in the first word of the object’s old location, which has a specific tag in its low 3 bits allowing forwarded objects to be distinguished from other kinds of object.

I don’t know how to prove this kind of operation correct, and probably I should learn how to do so. I think it’s right, though, in the sense that either the object gets marked in place or evacuated, all edges get updated to the tospace locations, and the thread that shades the object grey (and no other thread) will enqueue the object for further tracing (via its new location if it was evacuated).

But there is an invisible bug, and one that is the reason for me writing these words :) Whichever thread manages to shade the object from white to grey will enqueue it on its grey worklist. Let’s say the object is on a block to be evacuated, but evacuation fails, and the object gets marked in place. But concurrently, another thread goes to do the same; it turns out there is a timeline in which thread A has marked the object and published it to a worklist for tracing, but thread B has briefly swapped out the object’s first word with the busy value before realizing the object was marked. The object might then be traced with its initial word stompled, which is totally invalid.

What’s the fix? I do not know. Probably I need to manage the state machine within the side array of mark bytes, and not split between the two places (mark byte and in-object). Anyway, I thought that readers of this web log might enjoy a look in the window of this clown car.

next?

The obvious question is, how does it perform? Basically I don’t know yet; I haven’t done enough testing, and some of the heuristics need tweaking. As it is, it appears to be a net improvement over the non-moving configuration and a marginal improvement over BDW, though it currently has more variance. I am deliberately imprecise here because I have been more focused on correctness than performance; measuring properly takes time, and as you can see from the story above, there are still a couple of correctness issues. I will be sure to let folks know when I have something. Until then, happy hacking!

Nancy Nyambura

@nwnyambura

Outreachy Update: Two Weeks of Configs, Word Lists, and GResource Scripting

It has been a busy two weeks of learning as I continued to develop the GNOME Crosswords project. I have been mainly engaged in improving how word lists are managed and included using configuration files.

I started by writing documentation for how to add a new word list to the project by using .conf files. The configuration files define properties like display name, language, and origin of the word list so that contributors can simply add new vocabulary datasets. Each word list can optionally pull in definitions from Wiktionary and parse them, converting them into resource files for use by the game.

As an addition to this, I also scripted a program that takes config file contents and turns them into GResource XML files. This isn’t the bulk of the project, but it’s a useful tool that automates part of the setup and ensures consistency between different word list entries. It takes in a .conf file and outputs a corresponding .gresource.xml.in file, mapping the necessary resources to suitable aliases. This was a good chance for me to learn more about Python’s argparse and configparser modules.
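As a rough illustration of that kind of helper, here is a Python sketch; the section and key names and the resource prefix are assumptions for the example, not the project’s real ones.

import argparse
import configparser
from xml.sax.saxutils import escape

def conf_to_gresource(conf_path):
    conf = configparser.ConfigParser()
    conf.read(conf_path)
    section = conf["Word List"]                  # assumed section name
    alias = section.get("DisplayName", "wordlist")
    files = section.get("Files", "").split()     # assumed key listing data files
    entries = "\n".join(
        f'    <file alias="{escape(alias)}/{escape(name)}">{escape(name)}</file>'
        for name in files)
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            "<gresources>\n"
            '  <gresource prefix="/org/gnome/Crosswords/wordlists">\n'  # assumed prefix
            f"{entries}\n"
            "  </gresource>\n"
            "</gresources>\n")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Turn a word-list .conf into a .gresource.xml.in")
    parser.add_argument("conf", help="path to the .conf file")
    args = parser.parse_args()
    print(conf_to_gresource(args.conf))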

Beyond scripting, I’ve been in regular communication with my mentor, seeking feedback and guidance to improve both my technical and collaborative skills. One key takeaway has been the importance of sharing smaller, incremental commits rather than submitting a large block of work all at once, a practice that not only helps with clarity but also encourages consistent progress tracking. I was also advised to avoid relying on AI-generated code and instead focus on writing clear, simple, and understandable solutions, which I’ve consciously applied to both my code and documentation.

Next, I’ll be looking into how definitions are extracted and how importer modules work. Lots more to discover, especially about the innards of the Wiktionary extractor tool.

Looking forward to sharing more updates as I get deeper into the project

Copyleft-next Relaunched!

I am excited that Richard Fontana and I have announced the relaunch of copyleft-next.

The copyleft-next project seeks to create a copyleft license for the next generation that is designed in public, by the community, using standard processes for FOSS development.

If this interests you, please join the mailing list and follow the project on the fediverse (on its Mastodon instance).

I also wanted to note that as part of this launch, I moved my personal fediverse presence from floss.social to bkuhn@copyleft.org.