That was something I wanted to do early so that I could rapidly get away from three versions of the documentation engine and Flatpak SDK management code. Currently it exists in Manuals, Builder, and Foundry, and soon we’ll get that down to just Foundry.
That also makes Manuals the first application to use libfoundry, which resulted in lots of tree-shaking, and things are looking bright! I very much look forward to getting Builder rebased on libfoundry.
While I was at it, I ticked off a number of requested design issues as part of the Incubator submission. This afternoon it also got support for narrow views, meaning you can run it on a GNOME-enabled phone.
Probably not how most people will use it, but hey, maybe you have actually good public transportation where you are and some free time.
This post is a response to what Tobias posted yesterday on his blog. I would really prefer not to be writing it. There are many other things I would rather be doing, and I do not enjoy engaging in public disagreements. I honestly find all of this very stressful and unpleasant, but here we are.
For context, I joined the board in July last year, having previously been on the board from 2015 to 2021. This means that I wasn’t on the board during some of the events and decisions described in Tobias’s post. I am assuming that I am not one of the unnamed individuals he is calling on to resign, though I would be significantly impacted if that were to happen.
The post here is a personal view, based on my close involvement with the issues described in Tobias’s post. As such, it is not an official board position, and other directors may disagree with me on some points. It’s possible that the board may produce its own official statement in the future, but boards are inherently slow-moving beasts, and I wanted to get something posted sooner rather than later.
I want to start by saying that it is difficult to respond to Tobias’s post. The Foundation has a policy that we don’t comment on specific code of conduct cases, in order to protect the privacy of those involved. And, when you get down to it, this is mostly about the particulars of one specific case. Without being able to discuss those particulars, it is hard to say very much at all. That, in my opinion, is the elephant in the room.
The other reason it is difficult to respond is that there are just so many broad-brush accusations in the blog post. It alleges power conflicts, financial mismanagement, reckless behaviour, and so on. It’s impossible to address every point. Instead, what I will do is provide a fairly high-level view of each of the two main themes in the post, while calling out what I consider to be the main inaccuracies. The first of those themes is the code of conduct decision, and the second relates to the performance of the Foundation.
The big elephant
In the blog post, Tobias throws around a lot of accusations and suggestions about the code of conduct decision to suspend Sonny Piers from the GNOME project. His description of the chain of events is both misleading and a misrepresentation of what happened. Then there’s an accusation of recklessness, as well as an accusation that the code of conduct decision was somehow politically motivated. All of this is clearly intended to question and undermine the code of conduct decision, and to present a picture of mismanagement at the foundation.
My view is that, despite the various twists and turns involved in the decision making process for this case, and all the questions and complexities involved, it basically boils down to one simple question: was the decision to suspend Sonny the correct one? My view, as someone who has spent a significant amount of time looking at the evidence, talking to the people involved, and considering it from different perspectives, is that it was. And this is not just my personal view. The board has looked at this issue over and over, and we have had other parties come in to look at it, and we have always come to the conclusion that some kind of suspension was appropriate. Our code of conduct committee came to this conclusion. Multiple boards came to this conclusion. At least one third party who looked at the case came to this conclusion.
I understand why people have concerns and questions about the decision. I’m sympathetic to the experiences of those individuals, and I understand why they have doubts. I understand that some of them have been badly affected. However, ultimately, the board needs to stand up for the code of conduct. The code of conduct is what provides safety for our community. We do not get to set it aside when it becomes inconvenient.
The argument that the code of conduct decision was somehow politically motivated is false. We even had an external reviewer come in and look at the case, who confirmed this. Their report was provided to Tobias already. He continues to make this accusation despite it standing in opposition to the information that we have provided him with.
Tobias seems to think that Sonny’s importance to the GNOME project should have been taken into account in our decision for the code of conduct case. To me, this would imply that project leaders would operate according to a different, less stringent, set of conduct rules from other contributors. I believe that this would be wrong. The code of conduct has to apply to everyone equally. We need to protect our community from leaders just as much as we need to protect them from anyone else.
No one is suggesting that the management of the code of conduct decision was good. Communication and management should have been better. Community members were significantly impacted. We have sincerely apologised to those involved, and are more than willing to admit our failings. We’ve also been working to ensure that these problems don’t happen again, and that’s something that I personally continue to spend time and energy on.
However, to understand those failings, you also have to look back at the situation we faced last year: we had just lost an ED, board members were burned out, and our processes were being tested in a way that they never had been before. We still had all the usual board and foundation work that needed taking care of. In the middle of it all, elections happened and the board membership changed. It was a complex, shifting, and demanding situation, which looks rather different in retrospect to how it was experienced at the time. We learned a lot of lessons, that’s for sure.
The other elephant
The other part of Tobias’s post addresses the performance of the Foundation.
He points out various problems and challenges, some of which are real. Unfortunately, the theory that all of these challenges are the result of the incompetence of a few individuals is, like most convenient answers, incorrect. The reality is more complex.
One of the major factors for the Foundation’s current situation is our recent history with Executive Directors. Neil left as ED in November 2022. It took us about a year to hire Holly, who was ED for seven months, during which time she had to take a non-trivial amount of time off. And the Foundation is a small organisation – there aren’t lots of people around to pick up the slack when someone leaves. Given these circumstances, it’s unsurprising that the Foundation’s plans have changed, or that they didn’t happen in the way we’d hoped.
This is why the current board has been focusing on and expending considerable effort in recruiting a new executive director, who will be joining us very soon. Hurrah!
Tobias’s proposition that anyone who tries to change the Foundation gets burned out or banned is not true. I am living proof of this. I have changed the Foundation in the past, and continue to change it as part of my role as director. The Foundation today is radically different from the one I first joined in 2015, and continues to evolve and change. A lot of this is due to the interventions of previous and current directors over time.
Amid all this, it’s also important not to forget all the things that the Foundation has been successfully doing in recent years! I went into some of this in my recent blog post, which provides more details than I can here. It is worth stressing that the ongoing successes of the Foundation are mostly thanks to the dedication of its staff. We’ve run successful conferences. We’ve supported Flathub during which time it has experienced extraordinary growth. We’ve supported development programs. And the organisation has kept running, sailing through our taxes and registrations and all those other bureaucratic requirements.
On resignations
From the outside the GNOME Foundation can seem a little opaque. Part of the reason for that is that, as a board, we have to deal with sensitive and confidential matters, so much of the work we do happens behind closed doors. However, when you are on the board you quickly learn that it is really much like any other community-based open source team: there’s usually more work to do than we have capacity for, and the majority of the work gets done by a small minority of contributors.
Speaking as part of that minority, I don’t think that it would be beneficial for members of the board to resign. It would just mean fewer people being available to do the work, and we are already stretched for resources. I’m also of the view that no one should be asked to resign in response to upholding the code of conduct. Conduct work is difficult and important. It requires tough decisions. As a community we need to support the people doing it.
And if people think we should have different directors, well, that’s what the elections are for.
Closing
Readers might wonder why the Foundation has not spoken publicly about this topic before. The main reasons were confidentiality and legal concerns. We have also tried very hard to respect the wishes of those who have been involved and affected. Now with Tobias’s post it is harder to avoid saying things in public. I’m personally skeptical of how useful this is: with opaque and complex issues like these, public discussions tend to generate more questions than they do answers. Contributor relationships are unfortunately likely going to get damaged. But again, here we are.
It should be said that while the foundation hasn’t spoken publicly about these issues, we have expended significant effort engaging with community members behind the scenes. We’ve had meetings where we’ve explained as much of what has happened as we can. We even went so far as to commission an external report which we made available to those individuals. We continue to work on improving our processes in response to the, ahem, feedback we’ve received. I personally remain committed to this. I know that progress in some areas has been slow, but the work continues and is meaningful.
Finally: I am sure that there are contributors who will disagree with what I’ve written here. If you are one of those people, I’m sorry that you feel that way. I still appreciate you, and I understand how difficult it is. It is difficult for all of us.
The desktop team at Red Hat has another open position. We’re looking for someone who enjoys working on infrastructure to work on Flatpak automation. Although the job description states 2+ years of experience, the role is suitable for juniors: formal experience can be replaced by relevant open source contributions. Being onsite in Brno, Czech Republic is preferred, but not required. We’re open to hiring good candidates elsewhere, too.
If you’d like to know more about the job before formally applying, don’t hesitate to contact me on Mastodon, Signal, Matrix (@eischmann at fedora.im), or email.
Today, some more words on memory management, on the practicalities of a
system with conservatively-traced references.
The context is that I have finally started banging
Whippet into
Guile, initially in a configuration that
continues to use the conservative Boehm-Demers-Weiser (BDW) collector
behind the scenes. In that way I can incrementally migrate all of
the uses of the BDW API in Guile over to the Whippet API, and then if
all goes well, I should be able to switch Whippet to use another GC
algorithm, probably the mostly-marking collector
(MMC).
MMC scales better than BDW for multithreaded mutators, and it can
eliminate fragmentation via Immix-inspired optimistic evacuation.
problem statement: how to manage ambiguous edges
A garbage-collected heap consists of memory, which is a set of
addressable locations. An object is a disjoint part of a heap, and is
the unit of allocation. A field is memory within an object that may
refer to another object by address. Objects are nodes in a directed graph in
which each edge is a field containing an object reference. A root is an
edge into the heap from outside. Garbage collection reclaims memory from objects that are not reachable from the graph
that starts from a set of roots. Reclaimed memory is available for new
allocations.
In the course of its work, a collector may want to relocate an object,
moving it to a different part of the heap. The collector can do so if
it can update all edges that refer to the object to instead refer to its
new location. Usually a collector arranges things so all edges have the
same representation, for example an aligned word in memory; updating an
edge means replacing the word’s value with the new address. Relocating
objects can improve locality and reduce fragmentation, so it is a good
technique to have available. (Sometimes we say evacuate, move, or compact
instead of relocate; it’s all the same.)
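To make the "update all edges" step concrete, here is a minimal sketch in C, assuming a hypothetical object layout in which a relocated object's header holds a forwarding pointer to its new copy (the names here are illustrative, not Whippet's or Guile's actual API):

```c
#include <assert.h>
#include <stddef.h>

struct obj {
  struct obj *forwarded; /* NULL until the collector relocates this object */
  struct obj *field;     /* a single precisely-known edge */
  long data;
};

/* Because every edge has the same representation (a word holding an
   object address), updating an edge is just replacing that word with
   the referent's new address. */
static void update_edge(struct obj **edge) {
  if (*edge && (*edge)->forwarded)
    *edge = (*edge)->forwarded;
}
```

A collector would apply something like `update_edge` to every field and root that refers to a moved object; the whole relocation scheme rests on being able to enumerate those edges exactly.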
Some collectors allow ambiguous edges: words in memory whose value
may be the address of an object, or might just be scalar data.
Ambiguous edges usually come about if a compiler doesn’t precisely
record which stack locations or registers contain GC-managed objects.
Such ambiguous edges must be traced conservatively: the collector adds
the object to its idea of the set of live objects, as if the edge were a
real reference. This tracing mode isn’t supported by all collectors.
Any object that might be the target of an ambiguous edge cannot be
relocated by the collector; a collector that allows conservative edges
cannot rely on relocation as part of its reclamation strategy.
Still, if the collector can know that a given object will not be the referent
of an ambiguous edge, relocating it is possible.
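The conservative filter applied to each ambiguous word can be sketched as follows, assuming for illustration a single contiguous heap of 8-byte-aligned objects (the globals and the function are made up for this sketch, not part of any real collector's API):

```c
#include <assert.h>
#include <stdint.h>

static uintptr_t heap_base  = 0x100000;
static uintptr_t heap_limit = 0x200000;

/* Could this word be the address of a heap object?  If not, it is
   definitely scalar data and can be ignored; if yes, the collector
   must treat the possible referent as live, and must not move it. */
static int possibly_object_address(uintptr_t word) {
  if (word < heap_base || word >= heap_limit)
    return 0;              /* outside the heap: not a reference */
  if (word & 7)
    return 0;              /* misaligned: cannot be an object address */
  return 1;                /* ambiguous: trace conservatively */
}
```

Real collectors refine this filter (per-block metadata, interior-pointer handling, and so on), but the essential asymmetry is the same: a "no" answer is definite, while a "yes" answer only means "maybe", which is exactly why the referent cannot be relocated.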
How can one know that an object is not the target of an ambiguous edge?
We have to partition the heap somehow into
possibly-conservatively-referenced and
definitely-not-conservatively-referenced. The two ways that I know to
do this are spatially and temporally.
Spatial partitioning means that regardless of the set of root and
intra-heap edges, there are some objects that will never be
conservatively referenced. This might be the case for a type of object
that is “internal” to a language implementation; third-party users that
may lack the discipline to precisely track roots might not be exposed to
objects of a given kind. Still, link-time optimization tends to weather
these boundaries, so I don’t see it as being too reliable over time.
Temporal partitioning is more robust: if all ambiguous references come
from roots, then if one traces roots before intra-heap edges, then any
object not referenced after the roots-tracing phase is available for
relocation.
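A toy model of that temporal partitioning, with illustrative names (not Whippet's API): if ambiguous references appear only in roots, objects reached in the root phase get pinned, while objects reached only through precise intra-heap edges stay movable.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct obj { struct obj *field; bool marked, pinned; };

/* Phase 1: roots may be ambiguous, so anything they might refer to is
   marked live and pinned in place. */
static void trace_roots(struct obj **roots, size_t n) {
  for (size_t i = 0; i < n; i++)
    if (roots[i]) {
      roots[i]->marked = true;
      roots[i]->pinned = true;  /* possibly ambiguously referenced */
    }
}

/* Phase 2: intra-heap edges are precise; objects reached only here
   were never pinned and remain candidates for relocation. */
static void trace_heap(struct obj *o) {
  while (o && o->field && !o->field->marked) {
    o->field->marked = true;
    o = o->field;
  }
}
```

After both phases, any marked-but-unpinned object can be evacuated, because every edge that reaches it is a precise edge the collector knows how to update.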
kinds of ambiguous edges in guile
So let’s talk about Guile! Guile uses BDW currently, which considers
edges to be ambiguous by default. However, given that objects carry
type tags, Guile can, with relatively little effort, switch to precisely
tracing most edges. “Most”, however, is not sufficient; to allow for
relocation, we need to eliminate intra-heap ambiguous edges, to
confine conservative tracing to the roots-tracing phase.
Conservatively tracing references from C stacks or even from static data
sections is not a problem: these are roots, so, fine.
Guile currently traces Scheme stacks almost precisely: its compiler
uses liveness analysis to emit stack maps for every call site, marking
only those slots holding Scheme values that will be used in the
continuation. However it’s possible that any given frame is marked
conservatively. The most common case is when using the BDW collector
and a thread is pre-empted by a signal; then its most recent stack frame
is likely not at a safepoint and indeed is likely undefined in terms of
Guile’s VM. It can also happen if there is a call site within a VM
operation, for example to a builtin procedure, if it throws an exception
and recurses, or causes GC itself. Also, when per-instruction
traps
are enabled, we can run Scheme between any two Guile VM operations.
So, Guile could change to trace Scheme stacks fully precisely, but this
is a lot of work; in the short term we will probably just trace Scheme
stacks as roots instead of during the main trace.
However, there is one more significant source of ambiguous roots, and
that is reified continuation objects. Unlike active stacks, these have
to be discovered during a trace and cannot be partitioned out to the
root phase. For delimited continuations, these consist of a slice of
the Scheme stack. Traversing a stack slice precisely is less
problematic than for active stacks, because it isn’t in motion, and it
is captured at a known point; but we will have to deal with stack frames
that are pre-empted in unexpected locations due to exceptions within
builtins. If a stack map is missing, probably the solution there is to
reconstruct one using local flow analysis over the bytecode of the stack
frame’s function; time-consuming, but it should be robust as we do it
elsewhere.
Undelimited continuations (those captured by call/cc) contain a slice
of the C stack also, for historical reasons, and there we can’t trace it
precisely at all. Therefore either we disable relocation if there are
any live undelimited continuation objects, or we eagerly pin any object
referred to by a freshly captured stack slice.
fin
If you want to follow along with the Whippet-in-Guile work, see the
wip-whippet
branch in Git. I’ve bumped its version to 4.0 because, well, why the
hell not; if it works, it will certainly be worth it. Until next time,
happy hacking!
More progress on our quest to move away from GdkPixbuf. Glycin now provides a thumbnailer that can be used to create thumbnails for all image formats for which glycin loaders are installed. This already results in more supported image formats, correct support for color profiles, better support for images with a bit depth higher than 8-bit, better support for Exif orientations, and a memory-safe implementation for most formats. You can see a comparison for some images, before (left) and after with the glycin thumbnailer (right), in the screenshot below.
This is not properly implemented for GNOME OS yet, but we are on it.
I have released a new version of ASHPD Demo, the app for testing portals. The release adds support for the USB and Global Shortcuts portals, contributed with STF support.
I am happy to announce a new Cambalache release!
Version 0.96.0 – GResource Release!
• Add GResource support
• Add internal children support
• New project format
• Save directly to .ui files
• Show directory structure in navigation
• Add Notification system (version, messages and polls)
• Unified import dialog for all file types
• Update widget catalogs to SDK 48
Read more about it at https://blogs.gnome.org/xjuan/2025/04/20/cambalache-0-96-released/
After a long period of inactivity, Stockpile 0.5.0 has been released. This release brings the application to the GNOME 48 runtime, as well as improving user experience with a new start screen and the ability to recover corrupted data. See more information about this release on Flathub!
Pipeline versions 2.2.0 through 2.2.2 were released this week. Starting with these versions, Pipeline will query the feed using all Piped instances configured in the settings in parallel. This leads to a massive speedup when you have multiple instances configured (for my subscription list, it was a 7x speedup). Along the same lines, the Piped instance list is now managed by Pipeline automatically, which downloads a list of working instances on every startup. This should lead to a more reliable experience and removes the need to manually find working instances. A bug was also fixed where different videos replaced each other in the watch-later list when they were uploaded at approximately the same time.
After more than a year, Boatswain 5.0 is finally out. It took me a long time to push it over the finish line, but I’m relatively happy with how it turned out, and it brings some nice features.
Let’s take a quick look at what’s new in this release!
New Devices
Boatswain 5.0 comes with support for two new device models from Elgato: the Stream Deck Plus and the Stream Deck Neo.
As for the Elgato Stream Deck Neo, I tentatively introduced support for it without actually having a device to test, so if anyone out there can test it, that would be greatly appreciated.
Support for Stream Deck Plus was probably the main reason it took so long to release Boatswain 5.0. The entirety of the app was originally written under the assumption that all devices were simply a grid of buttons. Introducing a touchscreen, and dials that act as buttons, required basically rewriting most of the app.
I used this opportunity to make Boatswain able to handle any kind of device, with any kind of layout. Everything is represented as regions in a grid layout. Simple Stream Deck devices just contain a button grid; Stream Deck Plus contains a button grid, a touchscreen, and a dial grid.
Keyboard Shortcuts
The new Keyboard Shortcut action allows executing any keyboard shortcut – or any keyboard event in general – on the desktop. This seems to work better than I could have anticipated!
Under the hood, this action uses the Remote Desktop portal to inject input on the desktop. Locally controlling the desktop was probably not one of the original goals of the portal, but it fits this use case perfectly!
Paired with folders, Keyboard Shortcuts are very powerful, especially for large and complex software with a large number of shortcuts.
Next Steps
This release might be a little disappointing given how long it took without coming packed with new features. Even so, this was the largest release of Boatswain, perhaps even larger than the initial release.
I’ve reached a point where I’m mostly satisfied with how the internals work now. So much so that, right after the Boatswain 5.0 release, I was able to split the core logic of the app into an internal library, and hide device-specific details from the rest of the app. This paved the way for adding a testing framework using umockdev, and also will allow adding support for devices from other brands such as Loupedeck. If you have any Stream Deck-like device and wish to see it supported in Boatswain, now’s your chance!
For Boatswain 6, I personally want to focus on 2 major features:
Make Boatswain use the new USB portal. One of my goals with Boatswain is to make it a reference app, using the most modern platform features available – and adding missing features if necessary. The USB portal is an obvious choice!
Remove X11 support. This might come as a controversial decision, but I don’t personally use X11 anymore, do not support it, and will not work on fixing bugs that only exist there. As such, I think it’s fair to just remove X11 support from the apps that I maintain. Practically speaking, this just means removing --socket=fallback-x11, and users can add back this permission using Flatseal; but please do not expect any kind of support anymore.
Some features that would be lovely to have, but we currently don’t have either because we lack platform support (i.e. portals), or simply because nobody sat down and wrote it:
Tracking the current desktop state, such as the focused app, the session idle state, etc. This will be useful for contextual actions.
Clipboard integration. In theory people can simulate this using the Keyboard Shortcuts action, but proper clipboard integration will work better and in more cases.
Picking and launching apps from the host system. This needs to happen through portals which currently don’t exist.
A fancy visual icon editor so that people can create their pretty icons in the app! If any UI designer is reading, please consider yourself assigned to this little project.
Support for custom backgrounds in the touchscreen. I did not have time to finish it before the 5.0 release, but it shouldn’t be too hard to add it.
A proper testing framework!
Finally, I’d like to thank my Ko-Fi and YouTube supporters for all the patience and for enabling me to do this work. The fundraiser campaign last year was a massive success, and I’m happy to see all this progress! You all are awesome and I truly appreciate the support.
Keep an eye on this space as there may be more good news in the near future!
Over the last two years I’ve worked a bit in my spare time on the user documentation of GIMP, a Free & Open Source Image Editor. While I personally still consider it pretty bad user documentation regarding style inconsistency, duplication of topics, “organic growth”, and lack of task orientation, I managed to put some lipstick on that pig across nearly 900 commits. I was sometimes rather ruthless pushing my changes (plus I am only a simple contributor and not a GIMP developer) so I’d like to thank Jacob Boerema for their patience and lenience.
In particular that led to
pretty consistent and up-to-date user interface dialog screenshots using the default theme (a fun game in a repository with about 2000 images and no filename scheme whatsoever),
less trivial screenshots; people know what menus look like, and all their entries are covered by the accompanying text anyway (which had created more work for translators),
all application icons updated to the default monochrome theme, in SVG format, located in one single directory within the docs repository, using the same filenames as in the GIMP code repository (so there’s theoretically a chance of maintenance),
adding some icons to text because “click the fifth icon” isn’t particularly user-friendly (especially for RTL folks),
slightly less quadruplication of string variants expressing the same meaning (which created more work for translators).
An interesting remaining issue is whether to remove outdated ancient localized screenshots and where to draw the line. Does having localized strings in the screenshot (not everybody prefers English) outweigh an outdated user interface in the screenshot (wrong numbers of buttons, or dropdowns instead of radio buttons)? Your mileage may vary.
Obviously there is much more to do, for example maybe rewriting everything from scratch or splitting screenshot files of translatable UI dialogs and non-translatable example images mashed into one single image file into two separate files because, again, translators and lower maintenance costs.
If you enjoy dealing with Docbook and all its quirks, see the open GIMP help issues or even write merge requests.
If my phone is making me miserable by constantly nagging me for attention, surely the solution must be to ditch it and take a dumb phone that can only place calls and send texts?
Except calls are particularly intrusive interruptions, and the only texts I receive are from couriers. My family and friends use iMessage or Signal. And what about the pictures I can snap in a few seconds by pulling my phone from my pocket? What about GPS navigation? What about those parking lots where you can only pay with an app? What if I need to order a cab in a country I don't speak the language of? What about using my phone app as a 2FA from the bank as the PSD2 practically requires?
I thought about using a dumbed-down smartphone like the Light Phone III or one of the other dumbed-down Android phones. In practice it doesn't work for me, because most of them either don't let me use the apps I need or let me break out of the dumb mode to use them. At that point, why use a dumbed-down smartphone at all?
I don't need a dumb(ed down smart)phone. I need the convenience of my smartphone, but I need to use it intentionally instead of being dragged to it. My phone was already in silent mode all the time because I can't stand being loudly interrupted by it unless it's an actual emergency. But whenever a notification pops up, the screen brightens and drags my attention to it, and even if I don't want to interact with it, it stays in the back of my head until I finally unlock the phone and look at what it's all about.
What I was missing was a sense of priority for notifications. To stay focused on what I cared about, I needed my phone to hold back the unimportant notifications and tell me when something important happened. I could already deny some apps the authorization to display notifications, but that's a double-edged sword: if I do so with messaging apps, I end up compulsively checking them to see if I missed anything important.
What solved it for me was the Focus feature of the iPhone. Instead of using the Do Not Disturb mode, I've configured the Personal Focus profile so that:
No notifications at all appear on my screen.
If my favorite contacts call me I will be notified.
If someone calls me twice within three minutes it will bypass the protection.
If an app has a Time Sensitive notification to send me, it will bypass the protection.
All the rest is filtered out. As a result I have a phone that doesn't actively nag me for attention. Because it notifies me when something truly important happens, I don't have to check it regularly out of Fear Of Missing Out. This tweak is part of a broader effort to reclaim my attention capacity.
In a few weeks it’ll be one year since a board member of the GNOME Foundation was removed from the project entirely (including Gitlab, Matrix, Discourse, etc.) under very suspicious circumstances. Very little has been said or done about this in public since then. The intentions behind keeping everything internal were good — the hope was to get to a resolution without unnecessary conflict. However, because there’s no non-public way to have discussions across the entire community, and with this dragging on longer and longer, what I’ve seen is partial facts and misunderstandings spreading across different sub-groups, making it harder to talk to each other. I’m now convinced it’s better to break the silence and start having these conversations across the entire project, rather than letting the conflict continue to fester in the background.
That’s not to say nothing has been happening. Over the past year, a number of people from the community (including myself), and some members of the board have tried to resolve this behind the scenes. On a personal level, I’d like to thank all the board members involved for their efforts, even if we didn’t always see eye to eye. Medium-term I’m hopeful that some positive policy changes can be made as a result of this process.
One important thing to note is that nobody involved is against Codes of Conduct. The problem here is the Foundation’s structural dysfunction, bad leadership, and the way the CoC was used in this case as a result. I’m aware of the charged nature of the subject, and the potential for feeding right wing narratives, but I think it’s also important to not let that deter us from discussing these very real issues. But just to be extra clear: Fuck Nazis, GNOME is Antifa.
Sonny has been out since last summer, and as far as I know he’s currently not interested in investing more energy into this. That’s understandable given how he’s been treated, but the rest of us are still here, and we still need to work together. For that we need, if not justice, at least closure, and a way forward.
In this case someone was disappeared entirely from the community with no recourse, no attempts to de-escalate, no process for re-integration, and no communication to the rest of the project (not until months later anyway). Having talked to members of other projects’ CoC structures this is highly unusual, especially completely out of the blue against a well-integrated member of the project.
While the details of this case are unknown to most of the community due to the (understandable) need to keep CoC reports confidential, what is known (semi-) publicly paints a pretty damning picture all by itself:
A first-time CoC complaint was met with an immediate ban
A week later the ban was suspended, and a mediation promised
The 2024 board elections were held a few weeks later without informing the electorate about the ban
2 months later the ban was reinstated unilaterally, against the wishes of the new board
There was an (unsuccessful) vote to remove the chairman of the CoC committee from the board and CoC committee
Given the above, I think it’s fair to say that the people who were on the board and CoC committee when the ban happened have, at the very least, acted with incredible recklessness. The haphazard and opaque way in which they handled this case, the incoherence of their subsequent actions, and the lack of communication have caused great damage to the project. Among other things, in Sonny Piers they have cost us one of our most prolific developers and organizers, and in the STF development team our most successful program in recent history.
What perhaps not everyone is aware of is that we were working on making this team permanent and tried to apply for follow-up funding from different sources. None of this materialized due to the ban and the events surrounding it, and the team has since been disbanded. In an alternate world where this had not happened, we could be looking at a very different Foundation today, with an arm that strategically funds ongoing development and can employ community members.
More importantly however, we’re now in a situation where large parts of the community do not trust our CoC structure because they feel it can be weaponized as part of internal power struggles.
This ban was only the latest in a long series of failures, missteps, and broken promises by the Foundation. In addition to the recent financial problems and operational failures (e.g. not handling internships, repeatedly messing up invoicing, not responding to time-sensitive communication), all strategic initiatives over the past 5+ years have either stalled or just never happened (e.g. Flathub payments (2019), local-first (2021), development initiative (2024)).
I think there are structural reasons at the root of many of these issues, but it’s also true that much of the Foundation leadership and staff has not changed throughout this period, and that new people joining the board and trying to improve things tend to get frustrated and give up quickly (or in Sonny’s case, get banned). This is a problem we need to confront.
To those on the board and CoC committee who were involved in Sonny’s ban: I’m asking you to step down from your positions, and take some time off from Foundation politics. I know you know you fucked up, and I know you care about this project, like we all do. Whatever your reasons at the time, consider what’s best for the project now. Nobody can undo what happened, but in the medium-term I’m actually hopeful that reconciliation is possible and that both the Foundation and community can come out of this stronger than before. I don’t think that can happen under the current leadership though.
To everyone else: My hope in talking openly about this is that we can get closer to a common understanding of the situation. It seems unlikely that we can resolve this quickly, but I’d at least like to make a start. Eventually I hope we can have some mediated sessions for people from across the community to process the many feelings about this, and see how we can heal these wounds. Perhaps we could organize something at GUADEC this summer — if anyone would be interested in that, let me know.
Be cautious with unexpected private message invites. Do not accept private message invites from users you do not recognize. If you want to talk to somebody who has rejected your invite, contact them in a public room first.
In my previous post, when I introduced the switch to Skia for 2D rendering, I explained that we replaced Cairo with Skia while keeping mostly the same architecture. This alone was an important performance improvement, but the graphics implementation was still designed around Cairo and CPU rendering. Once we considered the switch to Skia stable, we started working on changes to take more advantage of Skia and GPU rendering to improve performance even further. In this post I’m going to present some of those improvements, along with others not directly related to Skia and GPU rendering.
Explicit fence support
This is related to the DMA-BUF renderer used by the GTK port and by WPE when using the new API. The composited buffer is shared as a DMA-BUF between the web and UI processes. Once the web process finished the composition, we created a fence and waited for it, to make sure that when the UI process was notified that the composition was done, the buffer was actually ready. This approach was safe, but slow. In 281640@main we introduced support for explicit fencing in the WPE port. When possible, an exportable fence is created, so that instead of waiting for it immediately, we export it as a file descriptor that is sent to the UI process as part of the message that notifies that a new frame has been composited. This unblocks the web process as soon as composition is done. When supported by the platform, for example in WPE under Wayland when the zwp_linux_explicit_synchronization_v1 protocol is available, the fence file descriptor is passed to the platform implementation. Otherwise, the UI process asynchronously waits for the fence by polling the file descriptor before passing the buffer to the platform. This is what we always do in the GTK port since 281744@main. This change improved the score of all MotionMark tests; see for example multiply.
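The asynchronous wait in the UI process boils down to polling the fence file descriptor until it signals readiness. Here is a minimal sketch of that idea in Python, using a pipe as a stand-in for a real sync-file fence fd (the actual WebKit code is C++ and operates on the exported fence directly):

```python
import os
import select

def wait_for_fence(fd, timeout_ms=1000):
    """Wait until the fence fd signals (becomes readable), like the
    UI process does before handing the buffer to the platform."""
    poller = select.poll()
    poller.register(fd, select.POLLIN)
    return bool(poller.poll(timeout_ms))

# Simulate a fence with a pipe: writing to it plays the role of the
# GPU signaling that composition has finished.
read_fd, write_fd = os.pipe()
os.write(write_fd, b"\x01")     # "fence signaled"
assert wait_for_fence(read_fd)  # buffer is now safe to present
```

The key point is that the wait happens in the UI process, off the web process's critical path, instead of blocking the web process right after composition.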
Enable MSAA when available
In 282223@main we enabled support for MSAA when possible in the WPE port only, because this is more important for embedded devices, where we use 4 samples to provide good enough quality with better performance. This change improved the MotionMark tests that use 2D canvas, like canvas arcs, paths and canvas lines. You can see here the change in paths when run on a Raspberry Pi 4 with 64-bit WPE.
Avoid texture copies in accelerated 2D canvas
As I also explained in the previous post, when the 2D canvas is accelerated we now use a dedicated layer that renders into a texture, which was then copied to be passed to the compositor. In 283460@main we changed the implementation to use a CoordinatedPlatformLayerBufferNativeImage to handle the canvas texture and avoid the copy, directly passing the texture to the compositor. This improved the MotionMark tests that use 2D canvas. See canvas arcs, for example.
Introduce threaded GPU painting mode
In the initial implementation of the GPU rendering mode, layers were painted in the main thread. In 287060@main we moved the rendering task to a dedicated thread when using the GPU, with the same threaded rendering architecture we have always used for CPU rendering, but limited to 1 worker thread. This improved the performance of several MotionMark tests like images, suits and multiply. See images.
Update default GPU thread settings
Parallelization is not as important for GPU rendering as it is for CPU rendering, but we still realized that we got better results by slightly increasing the number of worker threads when doing GPU rendering. In 290781@main we increased the limit of GPU worker threads to 2 for systems with at least 4 CPU cores. This mainly improved images and suits in MotionMark. See suits.
Hybrid threaded CPU+GPU rendering mode
We had either GPU or CPU worker threads for layer rendering. In systems with 4 CPU cores or more we now have 2 GPU worker threads. When those 2 threads are busy rendering, why not use the CPU to render other pending tiles? The same applies when doing CPU rendering: when all workers are busy, could we use the GPU to render other pending tasks? We tried it, and it turned out to be a good idea, especially on embedded devices. In 291106@main we introduced the hybrid mode, giving priority to GPU or CPU workers depending on the default rendering mode, and also taking into account special cases like HiDPI, where we are always scaling and therefore always prefer the GPU. This improved multiply, images and suits. See images.
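The scheduling decision can be sketched as a tiny selection function: try the preferred pool first and spill over to the other one when it is saturated. The names and limits below are illustrative, not WebKit's actual code:

```python
def choose_backend(gpu_busy, gpu_limit, cpu_busy, cpu_limit, prefer_gpu=True):
    """Pick the worker pool for the next tile: the preferred backend
    first, the other one as fallback, None when everything is busy."""
    pools = [("gpu", gpu_busy, gpu_limit), ("cpu", cpu_busy, cpu_limit)]
    if not prefer_gpu:
        pools.reverse()
    for name, busy, limit in pools:
        if busy < limit:
            return name
    return None

# With 2 GPU workers and 4 CPU workers, GPU is preferred until saturated:
assert choose_backend(0, 2, 0, 4) == "gpu"
assert choose_backend(2, 2, 1, 4) == "cpu"   # GPU pool full, spill to CPU
assert choose_backend(2, 2, 4, 4) is None    # everything busy
```

Flipping `prefer_gpu` models the CPU-default rendering mode, where the GPU is the fallback instead.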
Use Skia API for display list implementation
When rendering with Cairo and threaded rendering enabled, we use our own implementation of display lists specific to Cairo. When switching to Skia we thought it was a good idea to use the WebCore display list implementation instead, since it’s a cross-platform implementation shared with other ports. But we realized this implementation is not yet ready to support multiple threads, because it holds references to WebCore objects that are not thread-safe: the main thread might change those objects before they have been processed by the painting threads. So we decided to try the Skia API (SkPicture), which supports recording in the main thread and replaying from worker threads. In 292639@main we replaced the WebCore display list usage with SkPicture. This was expected to be a neutral change in terms of performance, but it surprisingly improved several MotionMark tests like leaves, multiply and suits. See leaves.
Use Damage to track the dirty region of GraphicsLayer
Every time there’s a change in a GraphicsLayer and it needs to be repainted, it’s notified with the area that changed, so that we only render the parts of the layer that changed. That’s what we call the layer dirty region. When there are many small updates in a layer, we can end up with lots of dirty regions on every layer flush. We used to have a limit of 32 dirty regions per layer, so that when more than 32 were added we just united them into the first dirty area. This limit was removed because we always unite the dirty areas for the same tiles when processing the updates to prepare the rendering tasks. However, we also tried to avoid handling the same dirty region twice, so every time a new dirty region was added we iterated the existing regions to check if it was already present. Without the 32-region limit, that meant we ended up iterating a potentially very long list on every dirty region addition. The damage propagation feature uses a Damage class to efficiently handle dirty regions, so we thought we could reuse it to track the layer dirty region, bringing back the limit but uniting in a more efficient way than always using the first dirty area of the list. It also allowed us to remove the check for duplicated areas in the list. This change was added in 292747@main and improved the performance of the MotionMark leaves and multiply tests. See leaves.
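The core idea, capping the number of tracked rectangles and uniting overflow instead of scanning the whole list for duplicates on every addition, can be sketched like this (a toy version for illustration; WebKit's Damage class is C++ and considerably more sophisticated):

```python
MAX_DIRTY_RECTS = 32  # the limit mentioned above

def union(a, b):
    """Bounding box of two (x0, y0, x1, y1) rectangles."""
    return (min(a[0], b[0]), min(a[1], b[1]),
            max(a[2], b[2]), max(a[3], b[3]))

class DirtyRegion:
    def __init__(self):
        self.rects = []

    def add(self, rect):
        # O(1) append up to the limit; beyond it, unite into an
        # existing rect instead of iterating the list looking for
        # duplicates on every addition.
        if len(self.rects) < MAX_DIRTY_RECTS:
            self.rects.append(rect)
        else:
            self.rects[-1] = union(self.rects[-1], rect)

region = DirtyRegion()
for i in range(40):
    region.add((i, i, i + 1, i + 1))
assert len(region.rects) == MAX_DIRTY_RECTS
```

The trade-off is some over-painting when distant rects get united, in exchange for constant-time additions.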
Record all dirty tiles of a layer once
After the switch to SkPicture for the display list implementation, we realized that this API would also allow us to record the graphics layer once, using the bounding box of the dirty region, and then replay it multiple times on worker threads for every dirty tile. Recording can be a very heavy operation, especially when there are shadows or filters, and it was always done for every tile due to the limitations of the previous display list implementation. In 292929@main we introduced this change, with improvements in the MotionMark leaves and multiply tests. See multiply.
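The record-once/replay-per-tile pattern can be illustrated with a toy "picture" (the real code records an SkPicture over the dirty bounding box; the data structures below are made up for illustration):

```python
def intersects(a, b):
    """Whether two (x0, y0, x1, y1) rects overlap."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def record(commands):
    # Record once on the main thread: an immutable snapshot of the
    # drawing commands covering the dirty bounding box.
    return tuple(commands)

def replay(picture, tile):
    # Replay on a worker thread, keeping only commands that touch
    # this tile. Safe to call concurrently: the picture is immutable.
    return [cmd for cmd in picture if intersects(cmd["rect"], tile)]

picture = record([
    {"op": "fill", "rect": (0, 0, 100, 100)},
    {"op": "fill", "rect": (200, 200, 300, 300)},
])
assert len(replay(picture, (0, 0, 128, 128))) == 1
assert len(replay(picture, (150, 150, 400, 400))) == 1
```

The expensive part (recording, including shadows and filters) now happens once per layer instead of once per tile; only the cheap, clipped replay is repeated.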
MotionMark results
I’ve shown here the improvements of these changes in some of the MotionMark tests. I have to say that some of those changes also introduced small regressions in other tests, but the global improvement is still noticeable. Here is a table with the scores of all tests before these improvements and on current main, run by WPE MiniBrowser on a Raspberry Pi 4 (64-bit).
| Test | Score July 2024 | Score April 2025 |
|---|---|---|
| Multiply | 501.17 | 684.23 |
| Canvas arcs | 140.24 | 828.05 |
| Canvas lines | 1613.93 | 3086.60 |
| Paths | 375.52 | 4255.65 |
| Leaves | 319.31 | 470.78 |
| Images | 162.69 | 267.78 |
| Suits | 232.91 | 445.80 |
| Design | 33.79 | 64.06 |
What’s next?
There’s still quite a lot of room for improvement, so we are already working on other features and exploring ideas to continue improving the performance. Some of those are:
Damage tracking: this feature is already present, but disabled by default because it’s still a work in progress. We currently use the damage information to only paint the areas of every layer that changed. But then we always compose a whole frame inside WebKit that is passed to the UI process to be presented on screen. It’s possible to use the damage information to improve both the composition inside WebKit and the presentation of the composited frame on the screen. For more details about this feature, read Pawel’s awesome blog post about it.
Use DMA-BUF for tile textures to improve pixel transfer operations: We currently use DMA-BUF buffers to share the composited frame between the web and UI processes. We are now exploring the idea of also using DMA-BUF for the textures used by the WebKit compositor to generate the frame. This would allow us to improve the performance of pixel transfer operations; for example, when doing CPU rendering we need to upload the dirty regions from main memory to a compositor texture on every composition. With DMA-BUF-backed textures we can map the buffer into main memory and paint with the CPU directly into the mapped buffer.
Compositor synchronization: We plan to try to improve the synchronization of the WebKit compositor with the system vblank and the different sources of composition (painted layers, video layers, CSS animations, WebGL, etc.)
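The "map the buffer and paint directly" idea from the DMA-BUF item above can be illustrated with mmap on a plain file standing in for a DMA-BUF (real DMA-BUF mapping goes through the GPU driver and ioctls; this only shows the concept of writing pixels without a separate upload copy):

```python
import mmap
import os
import tempfile

WIDTH, HEIGHT, BPP = 64, 64, 4  # a 64x64 RGBA tile

# A temporary file stands in for the DMA-BUF; real code would mmap
# the buffer fd exported by the GPU driver.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, WIDTH * HEIGHT * BPP)
buf = mmap.mmap(fd, WIDTH * HEIGHT * BPP)

# The CPU paints straight into the mapped buffer: no separate
# glTexSubImage-style copy from main memory into a texture.
buf[0:4] = b"\xff\x00\x00\xff"  # one opaque red pixel at (0, 0)
assert buf[0:4] == b"\xff\x00\x00\xff"
```

With a real DMA-BUF the compositor can then sample the same memory directly as a texture, eliminating the per-composition upload of dirty regions.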
Hello, I am pleased to announce a new Cambalache stable release.
Version 0.96.0 – GResource Release!
Add GResource support
Add internal children support
New project format
Save directly to .ui files
Show directory structure in navigation
Add Notification system (version, messages and polls)
Unified import dialog for all file types
Update widget catalogs to SDK 48
New project format
So far, the Cambalache project file contained all the data in one file, which meant you had to export UI files to XML in order to use them in your build system.
This constraint was added to discourage XML editing by hand which would have introduced incompatibilities since Cambalache’s GtkBuilder feature support was limited.
Now that GtkBuilder support has improved, I decided it was the right time to simplify things for developers and save UI data directly in XML format. No more manual exporting or integrating with the build system.
The project file will store a relative path to the GtkBuilder file and a hash of its contents, currently all it does is print a warning if you edit the file by hand.
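The hash check can be sketched as follows (the hash algorithm and the warning text are assumptions for illustration, not Cambalache internals):

```python
import hashlib
import tempfile
from pathlib import Path

def file_hash(path):
    """Content hash stored in the project file next to the relative path."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def check_ui_file(path, stored_hash):
    """Warn when the .ui file was edited outside the tool."""
    if file_hash(path) != stored_hash:
        print(f"warning: {path} was modified by hand")
        return False
    return True

ui = Path(tempfile.mkdtemp()) / "window.ui"
ui.write_text("<interface/>")
stored = file_hash(ui)
assert check_ui_file(ui, stored)          # untouched: OK
ui.write_text("<interface></interface>")  # simulate a hand edit
assert not check_ui_file(ui, stored)      # hash mismatch: warn
```

Storing the path relative to the project file keeps the project relocatable, while the hash detects out-of-band edits.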
With the project format change it makes sense to show all UI files in the navigation pane as they are in the filesystem.
Unsaved/unnamed files will be stored inline in the project file which comes in handy for WIP UI or as a quick way to define a custom type that does not have a template.
GResource support
Basic GResource support was added to be able to create or edit gresource.xml files. This opens the possibility for Cambalache to support loading assets from a resource path in the workspace, but unfortunately that is not yet implemented.
Internal children support
Even though this is not commonly used anymore, internal children are still used in some classes like GtkDialog. Cambalache will show any internal children in the hierarchy, and only export them in the XML file if you change one of their properties or add any children inside.
Notification System
Last but not least I added a simple notification system to inform about new versions and send messages or polls directly to users.
Notifications are polled once a day and only one notification is shown per day. This is what a message notification looks like, and it will be used sporadically to inform users about talks or workshops.
New version notifications will show the release notes and include a link to the blogpost and to flathub.
Polls will let you vote and change your vote until the poll’s close date. Results are shown after you vote, and a final notification will be sent after the poll closes.
Rearranged account settings, with a new Safety tab
New setting to toggle media preview visibility
Sessions can be renamed
Support for login using the OAuth 2.0 API (as used by matrix.org, which recently made the switch to Matrix Authentication Service)
Contiguous state events are grouped behind a single item
But what does RC stand for? Really Cool? Reasonably Complete? Rose Colored¹? Release Candidate, of course! That means it should be mostly stable and we expect to only include minor improvements until the release of Fractal 11.
As usual, this release includes other improvements, fixes and new translations thanks to all our contributors, and our upstream projects.
I don’t normally blog about particular CVEs, but Yelp CVE-2025-3155 is noteworthy because it is quite severe, public for several weeks now, and not yet fixed upstream. In short, help files can read your filesystem and execute arbitrary JavaScript code, allowing an attacker to exfiltrate any files your Unix user has access to. Thank you to parrot409 for responsibly disclosing this issue and going above and beyond to provide patches.
By default, all major browsers allow websites to download files automatically, without user interaction, so installing a malicious help file into your Downloads directory is simple. (If you ever find an unexpected file in your Downloads directory, be careful and maybe don’t open it. Cautious users may wish to configure their browsers to prompt before saving a download.)
The malicious website would next attempt to open the special URL ghelp:///proc/self/cwd/Downloads. This relies on the assumption that the web browser runs with your home directory as current working directory, which in practice will generally be true when launched from your desktop environment.
Chrome and Firefox prompt the user for permission before launching Yelp. If you grant permission, then Yelp launches and you lose. Don’t grant permission. Beware: both browsers have an “always allow” checkbox, and you won’t be prompted for permission if you’ve ever checked it when opening a ghelp URL in the past.
Epiphany does not prompt the user for permission before opening the URL. Minimal user interaction is required for the attacker to win. If you use Epiphany or any other browser that opens links in external programs without user confirmation, you should immediately uninstall Yelp, or at least change your Downloads directory to something nonstandard.
February 24: The reporter proposes these patches to fix the issue.
March 26: The 90 day disclosure deadline is reached, so I make the issue report public even though it is not yet fixed. At this point, due to insufficient creativity, I incorrectly assume the issue is likely to be used only in targeted attacks, because it seems to require the attacker to know the path to your downloads directory, which will normally include your Unix username.
April 5: The bug reporter posts a detailed write-up including a nice GIF to demonstrate the attack exfiltrating ~/.ssh/id_rsa in Chrome. This attack uses /proc/self/cwd/Downloads, bypassing the requirement to know your Unix username.
April 13: GNOME Security is notified of the write-up.
If you are a Linux operating system vendor, please consider applying the provided patches even though they have not yet been accepted upstream. They’re probably not worse than the status quo!
We are excited about Fedora Workstation 42, released today, having worked on some great features for it.
Fedora Workstation 42 HDR edition
I would say that the main feature that landed was HDR, or High Dynamic Range. It is a feature we spent years on, with many team members involved and a lot of collaboration with various members of the wider community.
GNOME Settings menu showing HDR settings
The fact that we got this over the finish line was especially due to all the work Sebastian Wick put into it in collaboration with Pekka Paalanen around HDR Wayland specification and implementations.
Another important aspect was tooling like libdisplay-info, which was co-created with Simon Ser, with others providing more feedback and assistance in the final stretch of the effort.
HDR setup in Ori and Will of the Wisps
That said, a lot of other people at Red Hat and in the community deserve shout-outs for this too. Like Xaver Hugl, whose work on HDR in KWin was a very valuable effort that helped us move the GNOME support forward too. Matthias Clasen and Benjamin Otte for their work on HDR support in GTK, Martin Stransky for his work on HDR support in Firefox, Jonas Aadahl and Olivier Fourdan for their protocol and patch reviews, and Jose Exposito for packaging up the Mesa Vulkan support for Fedora 42.
One area that should benefit from HDR support is games. In the screenshot above you see the game Ori and the Will of the Wisps, which is known for great HDR support. Valve will need to update Proton to a Wine version that supports Wayland natively before this just works; at the moment you can get it working using gamescope, but hopefully soon it will just work under both Mutter and KWin.
Also a special shoutout to the MPV community for quickly jumping on this and releasing a HDR capable video player recently.
MPV video player playing HDR content
Of course, getting Fedora Workstation 42 out with these features is just the beginning. With the baseline support in place, now is really the time when application maintainers have a real chance to start making use of these features, so I would expect various content creation applications, for instance, to start adding support over the next year.
For the desktop itself there are also open questions we need to decide on like:
Format to use for HDR screenshots
Better backlight and brightness handling
Better offloading
HDR screen recording video format
How to handle HDR webcams (it seems a lot of them are not really capable of producing HDR output)
Getting a binary NVIDIA driver release that supports the VK_EXT_hdr_metadata and VK_COLOR_SPACE_HDR10_ST2084_EXT Vulkan extensions on Linux
A million smaller issues we will need to iron out
Accessibility
Our accessibility team, with Lukas Tyrychtr and Bohdan Milar, has been hard at work together with others to ensure that Fedora Workstation 42 has the best accessibility support you can get on Linux. One major effort that landed was the new keyboard monitoring interface, which is critical for making Orca work well under Wayland. This was a collaboration between Lukas Tyrychtr, Matthias Clasen and Carlos Garnacho on our team. If you are interested in accessibility, as a user or a developer or both, make sure to join in by reaching out to the Accessibility Working Group.
PipeWire
PipeWire also keeps going strong with continuous improvements and bugfixes. Thanks to the great work by Jan Grulich the support for PipeWire in Firefox and Chrome is now working great, including for camera handling. It is an area where we want to do an even better job though, so Wim Taymans is currently looking at improving video handling to ensure we are using the best possible video stream the camera can provide and handle conversion between formats transparently. He is currently testing it out using a ffmpeg software backend, but the end goal is to have it all hardware accelerated through directly using Vulkan.
Another feature Wim Taymans added recently is MIDI 2.0 support. This is the next generation of MIDI, with only a limited set of hardware currently supporting it, but on the other hand it feels good that we are now able to be ahead of the curve instead of years behind, thanks to the solid foundation we built with PipeWire.
Wayland
For a long time the team was focused on making sure Wayland had all the critical pieces and was functionally on the same level as X11. For instance, we spent a lot of time and effort on ensuring proper remote desktop support. That work all landed in the previous Fedora release, which means that over the last six months the team has had more time to look at things like various proposed Wayland protocols and get them supported in GNOME. Thanks to that, we helped ensure the Cursor Shape and Toplevel Drag protocols landed in time for this release. We are already looking at what to help land for the next release, so expect continued acceleration in Wayland protocol adoption going forward.
First steps into AI
An effort we have been plugging away at recently is starting to bring AI tooling to open source desktop applications. Our first effort in this regard is Granite.code, an extension for Visual Studio Code that sets up a local AI engine on your system to help with various tasks inside Visual Studio Code, including code generation and chat. What is special about this effort is that it relies on downloading and running a copy of the open source Granite LLM model on your system, instead of relying on it running in a cloud instance somewhere. That means you can use Granite.code without having to share your data and work with someone else. Granite.code is still at a very early stage, and it requires an NVIDIA or AMD GPU with over 8 GB of video RAM to use under Linux. (It also runs under Windows and macOS.) It is still in a pre-release stage; we are waiting for the Granite 3.3 model update to enable some major features before we make the first formal release, but for those willing to help us test, you can search for Granite in the Visual Studio Code extension marketplace and install it.
We are hoping, though, that this will be just the starting point, where our work can get picked up and used by other IDEs out there too, and we are also thinking about how we can offer AI features in other parts of the desktop.
I’ve always liked the concept of small five-minute games to fill some time. Puzzle games that start instantly and keep your mind sharp, without unnecessary ads, distractions and microtransactions. Classics like Minesweeper and Solitaire come to mind, once preinstalled on every Windows PC. It was great fun during moments without an internet connection.
Unsurprisingly, GNOME has provided a collection of similar games since its initial release, preinstalled on several Linux distributions. Although GNOME no longer ships an official game collection, its games live on as separate modules on GNOME GitLab, and I’ve continued playing some of them to this day.
O maintainer, where art thou?
Unfortunately, several games have become unmaintained in recent years. While the games more or less work as expected, users still send occasional feature requests and bug reports that remain unanswered, and the UIs drift further away from modern standards (GTK 4 + libadwaita) each year.
One game stuck in an unfortunate state was Mahjongg (a Mahjong Solitaire clone), suffering from issues such as high CPU usage and freezes when playing the game. While fixing the issues was easy enough, distributing the fixes proved more difficult, with nobody left to include them in a new release.
One year later
After unsuccessfully hunting for poor souls willing to make a new release, my journey as Mahjongg’s new maintainer began a year ago. While my initial plan was to make a single release fixing critical bugs, modernizing the UI and fixing other long-standing issues turned out to be quite fun in the end. Here are some of the highlights since then:
All old issues/feature requests addressed and closed (some dating back over a decade)
Several improvements contributed by users (sequential/random layout rotation, remembering game state between sessions)
Fixes for various bugs and memory/resource leaks
Performance improvements, avoiding several seconds of delay when starting the game and changing layouts
Modernized Scores dialog and other UI/UX improvements, following the latest GNOME Human Interface Guidelines
Improved tile reshuffling that avoids unsolvable tile arrangements when possible
Tile drawing ported from Cairo (CPU-based) to GtkSnapshot (GPU-based), for more efficient drawing and less work porting to GTK 5 in the (far) future
Applying for GNOME Circle
It’s perhaps no secret that the old GNOME games are stuck in an awkward place, with some still using legacy GNOME branding despite no longer shipping with GNOME. In search of a better future for Mahjongg, I applied for its inclusion in GNOME Circle, a collection of high-quality apps and libraries that extend the GNOME ecosystem. After good initial impressions, thanks to recent modernization efforts, Mahjongg is on track for inclusion.
Since GNOME Circle currently lacks other games, I would love to see more small games added in the future, whether it be one of the old GNOME games or a completely new one. While it’s up to each maintainer whether or not they want to go through the effort, high-quality games deserve more exposure. :)
Closing words
Thanks to both the Release Team and the Infrastructure Team for helping me get started, as well as everyone who has contributed to Mahjongg so far. Thanks to everyone who helped write the GNOME Project Handbook, making the lives of contributors easier.
A few GNOME games are still unmaintained and use GTK 3:
I've struggled with focus earlier this year. I felt pulled in all directions, overwhelmed by the world, and generally miserable. I decided to abstain from using social media for a week to see if anything would change.
The Joy of Missing Out was so strong that I ended up staying off social media for 3 whole weeks. I realized that engaging with social media harmed my mental health, and I could develop strategies to improve my relationship with it.
The social media I use
Text-based social media
I used Facebook in my youth but deleted my account about 10 years ago. Since then, I've been using text-based social media. I primarily browse Mastodon and Bluesky to know what people in my circles think about and to follow the news.
I tried actively using LinkedIn for a while but couldn't endure it. The feed is full of inauthentic posts, sales pitches, and outrageous takes to get engagement. LinkedIn is primarily a DM inbox for me now.
I abandoned the rest
I used to browse Reddit via the Apollo third-party client. In June 2023, Reddit decided to charge for its API, effectively making Apollo unusable since the developer couldn't afford the absurd amount of money Reddit charged for it. Given the time and attention sink it had become for me, I decided to use Apollo’s decommissioning as an opportunity to quit Reddit.
I tried Instagram, but it just didn't stick. I've also explored Pixelfed to find inspiration from fellow photographers, but the behavior of its single maintainer didn't inspire confidence, so I left quickly.
TikTok, YouTube Shorts, and other short video platforms are the opposite of what I want. I need calm and room for nuance. I occasionally watch videos on YouTube but never follow the recommendations.
The impact social media has on me
I knew social media could influence people, but I thought I would notice if it dramatically changed how I feel, think, and behave. It turns out I was wrong. Social media changed me in several ways.
(In)tolerance to boredom
At the beginning of the experiment, I still had social media apps on my phone. The first thing I noticed was how often I grabbed my phone with my thumb hovering over the Mastodon app icon out of pure habit.
Forcing myself to stay off social media made me realize that the only moment I was left alone with my thoughts was in the shower. Even in bed, I frequently grabbed my phone to check something or see what was happening while I couldn't sleep. The anxiety-inducing nature of social media made it even more difficult to find sleep.
Sense of overwhelm / FOMO
When I grabbed my phone at night, when I browsed social media after a meeting, when I checked my feed after being focused on something else, I saw new posts.
I tried to curate my feed, but whatever I did, new content kept appearing. Always more posts crafted to get my attention. Always new things to care about. The world never stopped, and it constantly seemed to be going in the wrong direction. It felt overwhelming.
Speed of thought
The influx of information in my feed was too massive for me to ingest on top of my family and work duties. I ended up skimming the posts and the articles they linked to instead of taking the time to read and understand them properly.
Skimming content didn't just make me lose information. It also made me mentally switch to a "high-speed mode," where I didn't take the time to think and do things properly. Once in this mode, I felt restless and rushed things. Focusing on anything was painful.
Big Bad World
I am not part of many minorities, but I care about making the world a better place for as many fellow humans as possible. I need to hear about other people's problems and consider them when working out solutions to my own issues. In other words, I care about the intersectionality of struggles.
To that effect, I subscribed to accounts reporting what their minority is struggling with, effectively building a depressing feed. Awareness of what others struggle with is essential, but being completely burned out by a constant flux of bad news is draining.
Punchline thinking
Mastodon's developers try not to make it a dopamine-driven social network. But the concept of short posts that people can boost and like is inherently dopamine-inducing. I had already noticed that I am prone to addictive behaviors, so I pay extra attention to that.
However, I hadn't noticed that whenever I wanted to talk publicly about a problem, I tried to find a punchline for it. I tried to find concise, impactful sentences to catch people's attention and craft a post that would make the rounds.
Writing longer-form posts on my blog forced me to consider the nuances, but I don't write a blog post for every single opinion I have. Thinking in punchlines made my thoughts more polarized, less nuanced, and, truth be told, more inflammatory.
What I changed
I embraced not knowing
I acknowledged that I don't need to know about things the moment they happen. I also realized that sometimes people will make an issue appear bigger than it is for the sake of engagement (even on the Fediverse).
My solution is to get my news from outlets I trust. These outlets will not only tell me about what happened but also about the consequences and what I can do about it. It helps combat the feeling of powerlessness in an unjust world.
I also subscribed to news via RSS. I am using miniflux as a minimal, cheap, and privacy-respecting RSS service, and the ReadKit apps on macOS and iOS.
I added friction
Social media can take a significant toll on me, but it's not all negative. It has helped me meet excellent people, discover fantastic projects, and spread some of my ideas. I have not vanished from social media, and I likely never will.
But I added friction to make it more difficult for me to browse them compulsively. I removed their apps from my phone and logged out of their websites on my computer. If I want to browse social media, I must be in front of a computer and log in. This has to be intentional now, not just compulsive.
I monitor my screen time
When I wanted to lose weight, a very effective strategy was to count calories. Knowing how many calories I burned when exercising, and how many I absorbed when eating a cookie, made the latter less appealing to me.
The same applies to screen time. Knowing how much time I spend in front of a website or app helps me realize that I need to give it less attention. Apple's Screen Time feature has helped me monitor my usage.
With all these changes, I feel much happier. I can focus on my work, read more books, and happily spend an hour or so every night reading documentation and working on pet projects.
I’m currently serving as a member of the GNOME Foundation Board of Directors, and am also a member of the Foundation’s Executive Committee. The last major GNOME Foundation update was back in October 2024, when we announced our budget for the current financial year, along with associated staffing changes. There have been some communications since then, particularly around events strategy and board membership changes, but it’s been a while since we provided a more complete general update.
This update is intended to fill that gap, with a summary of the GNOME Foundation’s activities over the past six months or so. You will hopefully see that, while the Foundation is currently operating under some challenging circumstances, we have been active in some significant areas, as well as keeping on top of essential tasks.
Board of Directors
The Board of Directors has been busy with its regular duties over the past six months. We continue to have regular monthly meetings, and have been dealing with high-level topics including ED hiring, finances, committee memberships, and more.
There have been a few membership changes on the board. We had an empty seat at the beginning of the board year, which we appointed Philip Chimento to fill. Philip is a previous board member with a lot of experience, and so was able to easily pick up the reins. We are very grateful to him for helping out.
In January, Michael Downey resigned from the board, and recently we filled his empty seat by appointing Cassidy Blaede. Members of the community will already be familiar with Cassidy’s contributions, and I think we can all agree that he will be a fantastic director.
Both of these seats are due for re-election in the summer, so the appointments are relatively short-term.
Michael was previously serving as treasurer, a position which we have been unable to fill from the existing pool of directors. We are currently in the process of speaking to a couple of candidates who have expressed an interest in taking on the position.
Executive Director Hiring
Most readers will know that we lost our previous Executive Director, Holly Million, back in July 2024. We were extremely fortunate to be able to appoint Richard Littauer as interim ED shortly afterwards, and he did an incredible amount for the Foundation on a part-time basis last year. Richard continues to serve as our official ED and has been extremely generous in continuing to provide assistance on a voluntary basis. However, since his availability is limited, finding a new permanent ED has been a major focus for us since Holly’s resignation. We advertised for candidates back in September 2024, and since then the ED search committee has been busy reviewing and interviewing candidates. Thanks to this work, we hope to be able to announce a new Executive Director very shortly.
We are immensely grateful to the members of the ED search committee for their contributions: Deb Nicholson, Jonathan Blandford, Julian Sparber, Julian Hofer, Rob McQueen, and Rosanna Yuen. We also owe a huge debt of thanks to Richard.
Programs
“Programs” is the term that gets used for the impactful activities undertaken by non-profits (contrasted with activities like fundraising, which are intended to support those programs). The GNOME Foundation has a number of these programs, some of which are established responsibilities, while others are fixed-term projects.
Sovereign Tech Fund
The Foundation has been hosting the Sovereign Tech Fund-ed development project, which has been running since 2023. The management of this work has been handled by the GNOME STF team, which in recent times has been led by Tobias Bernard and Adrian Vovk. You can read their incredible report on this work, which was published just last week.
The Foundation’s role for this project is primarily as a fiscal host, which means that we are responsible for processing invoices and associated administration. Thibault Martin was working for us as a contractor to do much of this work. However, with STF ramping down, Thibault has passed his responsibilities on to other staff members. Many thanks for your efforts, Thibault!
While most of the STF funded work has now concluded, there is a small amount of remaining funding that is being used to keep one or two developers working.
Alongside the existing STF-funded program, we have also been working on a hosting agreement for a new STF proposal, which is being worked on by Adrian Vovk. This agreement is almost complete and we hope to be able to provide more details soon.
GIMP
The GNOME Foundation is the fiscal host for the GIMP project and this entails regular work for us, mostly around finances and payments. Recently we have been helping out with a grant program that the GIMP project has set up, allowing the GIMP project to make better use of the funds that the Foundation holds for them.
Digital Wellbeing
We are currently about three-quarters of the way through a two year development project focused on digital wellbeing and parental controls. This program has been funded by Endless and is being led by Philip Withnall. We have also been lucky to have assistance on the design side from Sam Hewitt. The new digital wellbeing features that arrived in GNOME 48 were a significant milestone for this project.
The Exec Committee has recently been doing some development planning with Philip for the final phase of this work, which we hope to include in GNOME 49.
Flathub
Flathub continues to be a significant area of interest for the GNOME Foundation. We are currently contracting Bart Piotrowski as the main Flathub sysadmin, thanks to ongoing generous support from Endless. Bart continues to enhance Flathub’s infrastructure, as well as providing ongoing support for this hugely successful platform.
General support for the GNOME project is a core part of the Foundation’s role, and is something which occupies a lot of the Foundation’s time. The activities in each of these areas deserve blog posts of their own, but here’s a quick summary:
Infrastructure. We continue to support GNOME’s development infrastructure, primarily by paying for Bart’s work in this area. Plenty has been happening behind the scenes to keep our development systems working well. We are grateful for the past and ongoing support of Red Hat, including Andrea Veri’s time and server hosting, as well as significant new support from AWS allowing us to move to a cloud-based infrastructure.
Travel. Unfortunately the budget for community travel has been limited this year due to the Foundation’s overall financial situation, but we continue to provide some funding, and GNOME Foundation staff have been working with the travel committee as we approach GUADEC.
Events. Foundation staff continue to support our events. In December we had a successful GNOME.Asia in Bengaluru, India. Linux App Summit is happening next week in Tirana, Albania, and preparations for GUADEC 2025 are ongoing. We additionally held a short community consultation around our events strategy back in October, and this is something that the board has discussed subsequently.
Communications. Finally, despite reduced headcount, we continue to devote some staff time to operating GNOME’s social media accounts.
In addition to these ongoing areas of support, there have been additional one off support tasks which the Foundation has taken care of over the past six months. For example, we recently paid for the Google API keys used by Evolution Data Server to be certified.
Administration
Outside of programs, we have been busy with the usual background tasks that are necessary to keep the Foundation operating. That includes maintaining our books, filling in legal paperwork when it’s needed, keeping the board updated about the organisation’s finances, and talking to donors.
Conclusion
So much has been happening in the GNOME Foundation over the past six months that it has been challenging to fit it all into a single post, and there are many items which I did not have space to cover. Nevertheless, I hope that this summary provides a useful overview, and goes some way to showing how much has been going on behind the scenes. With no full-time ED and a reduced staff, it has been a challenging period for the Foundation. Nevertheless, I think we’ve managed to keep on top of our existing responsibilities and programs, and hopefully we will have more capacity with the addition of a new full-time Executive Director very soon.
It should be said that, since Richard reduced his hours at the end of 2024, much of the Foundation’s “executive” work above has fallen to a combination of existing staff and the Executive Committee. It is a large burden for a small team, and I think that it’s fair to say that the current setup is not easy to sustain, nor is it 100% reliable.
We are hopeful that appointing a new ED will help ease our resource pressures. However, we are also very interested in welcoming any additional volunteers who are willing to help. So, if participating in the kinds of activities that I’ve described appeals to you, please contact me. We can easily create new positions for those who think they might be able to have a role in the organisation, and would love to talk about what skills you might be able to bring.
Once again I have received a grant from the WMF to attend the annual Wikimedia Hackathon, which this year is in Istanbul. I’m very grateful to them.
Since 2024 I’ve been very interested in the Wikibase platform, since we are using it at LaOficina and it is a main topic for the DHwiki WG. I won’t go into details, but from the very beginning my thoughts about involvement in the hackathon have been related to Wikibase, especially the need for «productization» and reducing the entry barriers to Wikibase adoption, at least in my personal experience. Lately I’ve been thinking about some very specific goals that I believe could be tackled at the hackathon:
T391815 Wikibase Request for Comment: essential minimalist ontology
T391821 Wikibase Request for Comment: an inventory of Wikibase related artifacts
T391826 Wikibase Request for Comment: Wikibase Suite full multimedia proof of concept configuration
T391828 Wikibase Request for Comment: a method for portable wikibase gadgets
The point is, I can’t do this alone. I have been working on most of these things for months, but they still aren’t finished. Many different skills are needed, I lack experience with some of them, etc.
So, the goal of this post is a call to action for other attendees at the hackathon to join me in working on these tasks. The most relevant required skills (judging from my own gaps) are MediaWiki integration, configuration, and programming. For T391828, the most important thing is familiarity with MediaWiki gadgets, and for T391815, some practical experience setting up ontologies in Wikibase.
All the practical results will be offered to the Wikibase developers for their consideration.
If you are interested, please reach me on Telegram or through whatever channel you prefer. I would also love to set up a Wikibase zone in the hacking space for people working with Wikibase, on these or other tasks.
Hello, chat! I’m Revisto, and I want to share my journey to GNOME Circle and how I became a GNOME Foundation member. I’ll discuss my experiences and the development path of Drum Machine. This is the first part of the “Journey to GNOME Circle” series.
I love Free and Open Source communities, especially GNOME and GNOME Circle. I find contributing to open source communities far more rewarding than contributing to projects maintained by a single individual. If you find the right community, there are many experienced, generous, and humble people you can learn from. You can explore various projects maintained by the community, experience consistent quality, be surrounded by an amazing community, and even enjoy some perks!
I found the GNOME community to be one of the best in the FOSS industry. Why?
There are lots of apps and projects you can contribute to, from GTK to Terminal to GNOME Shell itself.
It has a welcoming community full of experienced people.
GNOME looks fantastic, thanks to Jakub Steiner. The GNOME design is stunning.
It has great documentation and handbooks for beginners, making it super beginner-friendly.
There are different ways to contribute: you can help with documentation, programming, design, translation, creating new apps, and more.
Membership perks.
GNOME Foundation Membership?!
The GNOME Foundation offers membership to its active contributors. Whether you’re an active translator, help with documentation, enhance GNOME’s appearance, or generally MAKE GNOME BETTER, you can apply for membership. Additionally, if your app gets into GNOME Circle, you qualify for membership.
What are the perks?
Here are some of the perks in summary. You can find complete information here.
Email Alias (nickname@gnome.org): gnome.org email addresses are provided for all Foundation members. This address is an alias which can be used as a relay for sending and receiving emails.
Your own blog at blogs.gnome.org: Foundation members are eligible to host their blog on blogs.gnome.org.
Travel sponsorship for events: Foundation members are eligible for travel sponsorship to GNOME conferences and events.
Nextcloud (cloud.gnome.org): GNOME hosts a Nextcloud instance at cloud.gnome.org. This provides a range of services, including file hosting, calendaring, and contact management.
These are useful and beneficial for your reputation and branding. I use my email alias for GNOME-related work at AlirezaSh@gnome.org, and have my blog at alirezash.gnome.org, and sync my Obsidian notes with Nextcloud on GNOME infrastructure. Unfortunately, I couldn’t get my travel sponsorship as a speaker at events because I’m from Iran, and due to OFAC regulations, which is so unfair.
What’s GNOME Circle?
I’ve always had the idea of creating beautiful, useful apps for Linux. There were many apps I needed but couldn’t find a good version for Linux, and some apps I wished had better GUIs.
GNOME Circle is a collection of applications and libraries that extend the GNOME ecosystem.
“GNOME Circle champions the great software that is available for the GNOME platform. Not only do we showcase the best apps and libraries for GNOME, but we also support independent developers who are using GNOME technologies.”
— GNOME Circle
In GNOME, we have core apps like Terminal, GNOME Shell, Text Editor, etc., and we have GNOME Circle apps. These are apps that independent developers have created using GNOME technologies (GTK and Libadwaita), following the GNOME Human Interface Guidelines, and meeting the app criteria. Once accepted, these apps become part of GNOME Circle.
GNOME Circle has lots of really cool apps that you should check out. It includes Curtail, an application to compress your images; Ear Tag, an audio file tag editor; and Chess Clock, which provides time control for over-the-board chess games.
GNOME Circle is really cool, full of beautiful apps and creative developers.
[Image: fun little doodles that look like ideas]
App Idea?
If GNOME Circle sounds interesting to you, or you like GNOME Foundation membership perks, or you appreciate the open-source community, or you want to create an app that fulfills your own needs, you should have an idea. What app do you want to develop? I believe we all have ideas. Personally, I really want a good VPN client for Linux (because of censorship in Iran, it’s vital), or a good-looking, user-friendly download manager, among other apps.
I highly recommend you check out other applications on GNOME Circle. There are lots of creative projects there that can inspire you. Some of my favorites:
I think it’s a good idea to check if your idea has already been implemented. You can check the apps in GNOME Circle and also check the apps that are being reviewed by the GNOME Circle Committee to become part of the circle soon: GNOME Circle Issues.
Although you can submit a new app with a similar idea to an existing app, I believe it would be better to bring new ideas to the circle or even contribute to existing circle apps that align with your idea.
On a side note, I really enjoy reading other people’s app requests and discussions here. I’ve been reading them to familiarize myself with the application acceptance process and understand the possible reasons an app might get rejected.
[Image: an online drum machine]
Since I’m a music producer (listen to my work here), I really like the idea of making music production on Linux easier. I had music-related ideas for my first app in the Circle: synthesizers, drum machines, and eventually a DAW (Digital Audio Workstation). I started simple and went with Drum Machine. I looked at different online drum machines, such as drumbit.app and onemotion.com/drum-machine, then I started thinking about what I wanted my own drum machine to look like, and I drew this (I know it doesn’t look good; I’m bad at drawing >-<).
Now I had motivation and an idea, and I wanted to actually start building.
I’ll detail the development process and evolution of Drum Machine in the next post, so stay tuned!
The 2023/2024 GNOME STF project is mostly wrapped up now, so it’s a good moment to look back at what was done as part of the project, and what’s next for the projects we worked on.
As a brief refresher, STF (Sovereign Tech Fund, recently renamed to Sovereign Tech Agency) is a program by the German Government to support critical public interest software infrastructure. Sonny Piers and I applied with a proposal to improve important, underfunded areas of GNOME and the free desktop and got an investment of 1 Million Euro for 2023/2024.
While we’ve reported individual parts of what we were able to achieve thanks to this investment elsewhere, it felt important to have a somewhat comprehensive post with all of it in one place. Everyone on the team contributed summaries of their work to help put this together, with final editing by Adrian Vovk and myself.
Accessibility is an incredibly important part of the GNOME project, community, and platform, but unfortunately it has historically been underfunded and undermaintained. This is why we chose to make accessibility one of the primary focus areas for the STF project.
Newton
The Assistive Technology Service Provider Interface (AT-SPI) is the current accessibility API for the Linux desktop. It was designed and developed in the early 2000s, under the leadership of Sun Microsystems. Twenty years later, we are feeling its limitations. It’s slow, requiring an IPC round trip for each query a screen reader may want to make about the contents of an app. It predates our modern desktop security technologies, like Wayland and Flatpak, so it’s unaware of and sometimes incompatible with sandboxing. In short: it’s a product of its time.
The STF project was a good opportunity to start work on a replacement, so we contracted Matt Campbell to make a prototype. The result was Newton, an experimental replacement for the Linux desktop accessibility stack. Newton uses a fundamentally different architecture from AT-SPI, where apps push their accessibility information to the screen reader. This makes Newton significantly more efficient than AT-SPI, and also makes it fully compatible with Wayland and the Flatpak sandbox.
The prototype required work all across the stack, including GTK, Mutter, Orca, and all the plumbing connecting these components. Apps use a new Wayland protocol to send accessibility info to Mutter, which ensures that the accessibility state an app reports is always synchronized with the app’s current visual state. Meanwhile, the prototype has Orca communicate with Mutter via a new D-Bus Protocol.
This D-Bus protocol also includes a solution for one of the major blockers for accessibility on Wayland. Due to Wayland’s anti-keylogging design, Orca is unable to intercept certain keys used to control the screen reader, like Insert or Caps Lock. The protocol gives this intercept functionality to screen readers on Wayland. Recently, Red Hat’s Lukáš Tyrychtr adapted this part of Matt’s work into a standalone patch, which landed in GNOME 48.
As part of this work, Matt added AccessKit support to GTK. This library acts as an abstraction layer over various OS-specific accessibility APIs, and Matt’s experimental fork included support for the Newton Wayland protocol. As a side effect, GTK accessibility now works on Windows and macOS! Matt’s original patch was rebased and merged by Matthias Clasen, and recently it was released in GTK 4.18.
Finally, to test and demonstrate this new accessibility stack, Matt integrated all his changes into a fork of GNOME OS and the GNOME Flatpak Platform.
For more details about Newton’s design and implementation, including a demo video of Newton in action, you can read Matt’s announcement blog post, and his subsequent update.
Orca
The STF project allowed Igalia’s Joanmarie Diggs to rewrite and modernize much of Orca, our screen reader. Between November 2023 and December 2024 there were over 800 commits, with 33711 insertions and 34806 deletions. The changes include significant refactoring to make Orca more reliable and performant as well as easier to maintain and debug. Orca is also used on other desktop environments, like KDE, so this work benefits accessibility on the Linux desktop as a whole.
Orca now no longer depends on the deprecated pyatspi library, and has switched to using AT-SPI directly via GObject Introspection. As part of this replacement, a layer of abstraction was added to centralize any low-level accessibility-API calls. This will make it significantly easier to port Orca to new platform accessibility APIs (like Newton) when the time comes.
Over the years, Orca has added many workarounds for bugs in apps or toolkits, to ensure that users are able to access the apps they need. However, enough of these workarounds accumulated to impact Orca’s performance and reliability. The STF project allowed the majority of these workarounds to be investigated and, where possible, removed. In cases where workarounds were still necessary, bugs were filed against the app or toolkit, and the workaround was documented in Orca’s code for eventual removal.
There is arguably no single “correct” order or number of accessibility events, but both order and number can impact Orca’s presentation and performance. Therefore, Orca’s event scheduling was reworked to ensure that events are received in a consistent order regardless of the source. Orca’s event-flood detection was also completely reworked, so that apps can no longer freeze Orca by flooding it with events.
A lot of work went into increasing Orca’s code quality. A couple of tightly-entangled systems were disentangled, making Orca a lot more modular. Some overly complicated systems were refactored to simplify them. Utility code that was unnecessarily grouped together got split up. Linter warnings were addressed and the code style was modernized. Overall, Orca’s sources are now a lot easier to read through and reason about, debug, analyze, and maintain.
Finally, building apps that are compatible with screen readers is occasionally challenging. Screen readers have complicated rules about what they present and when they present it, so sometimes app developers are unsure of what they need to do to make Orca present their app correctly. To improve the developer experience around building accessible apps, there’s now a new guide with tips and techniques to use. This guide is very much a work in progress, and additional content is planned.
WebKitGTK
WebKitGTK is GNOME’s default web rendering engine. GTK4 significantly reworked the accessibility API for GTK widgets, so when WebKitGTK was first ported to GTK4, a major missing feature was the accessibility of web pages. The screen reader was simply unable to see web content visible on screen. As part of the STF project, Igalia’s Georges Basile Stavracas Neto added support for GTK4’s new accessibility APIs to WebKitGTK. This landed in WebKitGTK 2.44, the first stable release with GTK4 support.
Around the same time, Joanmarie removed Orca’s custom WebKitGTK handling in favor of the generic “web” support, which aligns WebKitGTK’s user experience with Firefox and Chromium. This gives Orca users an additional choice when it comes to web browsing. Please note that there are still a couple of accessibility bugs that must be fixed before Orca users can enjoy the full benefits of this change.
The last hurdle to fully functional accessibility in WebKitGTK was Flatpak. Web browsers are generally hard to make work in Flatpak, due to the interaction between Flatpak’s sandbox and the browser’s own sandboxing features, which are usually either turned off, weakened, or replaced downstream. WebKitGTK, however, has strong support for sandboxing in Flatpak, and it actually uses Flatpak’s native subsandboxing support directly. Unfortunately, the way the sandboxes interacted prevented WebKitGTK from exporting its accessibility information to the system. Georges takes a deep dive into the specifics in his GUADEC 2024 talk.
Since that talk, Georges added features to Flatpak (and a follow-up) that made WebKitGTK work with the screen reader. This makes GNOME Web the first web browser that is both fully accessible and fully Flatpak sandboxed!
Spiel
Text-to-speech (TTS) on Linux is currently handled by a service called SpeechDispatcher. SpeechDispatcher was primarily built for use in screen readers, like Orca. Thus, TTS on Linux has generally been limited to accessibility use cases. SpeechDispatcher is modular, and allows the user to replace the speech synthesizer (which defaults to the robotic-sounding eSpeak) with something that sounds more natural. However, this configuration is done via text files, and can thus be nontrivial to get right, especially if the user wishes to integrate a proprietary synthesizer they might have paid money for.
Eitan Isaacson ran up against these limitations when he was implementing the Web Speech API into Firefox. So, he created Spiel, a new TTS framework for the Linux desktop. Spiel is, at its core, a D-Bus protocol that apps and speech synthesizers can use to communicate. Spiel also has a client library that closely emulates the Web Speech API, which makes it easy for apps to make use of TTS. Finally, Spiel is a distribution system for voices, based on Flatpak. This part of Spiel is still in the early stages. You can learn more about Spiel via Eitan’s GUADEC 2024 Talk.
As part of the STF project, Andy Holmes and Eitan built an initial working implementation of Spiel support in Orca, demonstrating its viability for screen readers. This helped stabilize Spiel and encouraged engagement with the project. The Spiel client and server libraries were also hardened with sanitizer and static-analysis testing.
Platform
The GNOME Platform consists of the libraries, system and session services, and standards provided by GNOME and Freedesktop. In short, this is the overarching API surface that we expose to app developers so that they can write GNOME apps. Clearly, that’s very important and so we focused much of the STF’s funding there. In no particular order, here’s some of the work that the STF made possible.
Libadwaita
Starting with GTK4, we’ve decoupled GTK from GNOME’s design guidelines. This means that GTK4 no longer includes GNOME’s style sheet, or GNOME-specific widgets. This has many benefits: first and foremost, it makes GTK4 a much more generic UI toolkit, and thus more suitable for use in other desktop environments. Second, it gives GNOME the flexibility to iterate on our design and UI without interfering with other projects, and on a faster timescale. This leads to “platform libraries”, which extend GTK4’s basic widgetry with desktop-specific functionality and styles. Of course GNOME has a platform library, but so do other platforms like elementary OS.
Adwaita is GNOME’s design language, and so GNOME’s platform library is called libadwaita. Libadwaita provides GNOME’s style sheet, as well as widgets that implement many parts of the GNOME Human Interface Guidelines, including the machinery necessary to build adaptive GNOME apps that can work on mobile phones.
The STF project allowed libadwaita’s maintainer, Alice Mikhaylenko, to close a few long-standing gaps as well as finish a number of stalled projects.
Bottom Sheets
Libadwaita now provides a new bottom sheet widget: a sheet that slides up from below and can be swiped back down off screen. Optionally, bottom sheets can have a bottom bar that is visible while the sheet is collapsed, and which morphs into the sheet when the user activates it. This pattern is common in apps that want to show a compact status on the main page, with a detailed view available on demand. For example, music player apps tend to use this kind of pattern for their “now playing” screens.
This shipped in libadwaita 1.6, and apps like Shortwave (shown above) are already using it.
Adaptive Dialogs
Traditionally, dialogs in GNOME were separate child windows of the app’s main window. This sometimes made it difficult to create dialogs and popups that behave correctly in small windows and on mobile devices. Now, libadwaita handles dialogs completely within the app’s main window, which lets them adapt between floating centered pop-ups on desktop and bottom sheets on mobile. Libadwaita’s new dialogs also correctly manage the appearance of their own close button, so that users can exit dialogs even on mobile devices, where windows don’t normally have a close button.
This shipped in libadwaita 1.5 and many apps across GNOME have already been updated to use the new dialogs.
Multi-Layout View
Libadwaita already provides a system of breakpoints, where widget properties are automatically updated depending on the size of the app’s window. However, it was non-trivial to use breakpoints to swap between different layouts, such as a sidebar on desktop and a bottom bar on mobile. The new multi-layout view allows you to define the different layouts an app can use, and control the active layout using breakpoints.
Work on multi-layout views started before the STF project, but it was stalled. The STF project allowed it to be completed, and the feature has shipped in libadwaita 1.6.
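As an illustrative sketch of how this fits together in a GtkBuilder UI file (the widget and property names here reflect my reading of the libadwaita 1.6 API and should be checked against the AdwMultiLayoutView documentation; the content widgets are placeholders), a view might define a sidebar layout and a bottom-bar layout, with slots that are filled with the same children in either layout:

```xml
<!-- Hypothetical sketch: two layouts sharing the same "nav" and "main" children -->
<object class="AdwMultiLayoutView" id="view">
  <child>
    <object class="AdwLayout">
      <property name="name">sidebar</property>
      <property name="content">
        <object class="AdwOverlaySplitView">
          <property name="sidebar">
            <!-- A slot is a placeholder; the real child is set once on the view -->
            <object class="AdwLayoutSlot">
              <property name="id">nav</property>
            </object>
          </property>
          <property name="content">
            <object class="AdwLayoutSlot">
              <property name="id">main</property>
            </object>
          </property>
        </object>
      </property>
    </object>
  </child>
  <child>
    <object class="AdwLayout">
      <property name="name">bottom-bar</property>
      <property name="content">
        <object class="AdwToolbarView">
          <property name="content">
            <object class="AdwLayoutSlot">
              <property name="id">main</property>
            </object>
          </property>
          <!-- the "nav" slot would live in a bottom bar here -->
        </object>
      </property>
    </object>
  </child>
</object>
```

A breakpoint can then switch layouts by setting the view’s layout-name property (e.g. a setter like `<setter object="view" property="layout-name" value="bottom-bar"/>` in an AdwBreakpoint), so the app only defines its children once.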
Wrap Box
Libadwaita now provides a new wrap box widget, which wraps its children similarly to how lines are wrapped in a text paragraph. This allows us to implement various layouts that we’ve wanted for a while, like the list of search filters in this mockup, or toolbars that wrap onto multiple lines when there’s not enough room.
Like the multi-layout view, this work was stalled until the STF project. The feature shipped in the recent libadwaita 1.7 release.
Toggle Groups
Libadwaita also now provides a new toggle group widget: a group of buttons where only one can be selected at a time. This pattern is pretty common in GNOME apps, and was usually implemented manually, which was awkward and didn’t look great. The new widget is a big improvement.
Toggle groups were originally implemented by Maximiliano Sandoval, but the work was stalled. The STF project allowed Alice to bring this work over the finish line. The feature is part of libadwaita 1.7.
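To give a flavor of the widget, here is a hypothetical GtkBuilder sketch (property names such as active-name, name, and tooltip follow my understanding of the libadwaita 1.7 AdwToggleGroup/AdwToggle API and should be verified against its documentation; the icon names are illustrative):

```xml
<!-- Hypothetical sketch: a two-way view switcher -->
<object class="AdwToggleGroup">
  <property name="active-name">list</property>
  <child>
    <object class="AdwToggle">
      <property name="name">list</property>
      <property name="icon-name">view-list-symbolic</property>
      <property name="tooltip">List View</property>
    </object>
  </child>
  <child>
    <object class="AdwToggle">
      <property name="name">grid</property>
      <property name="icon-name">view-grid-symbolic</property>
      <property name="tooltip">Grid View</property>
    </object>
  </child>
</object>
```

The app then only needs to watch the group’s active toggle, rather than wiring up mutually exclusive buttons by hand.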
GTK CSS
GTK uses a custom version of CSS to style its widgets, with extensions for defining and transforming colors. These extensions were limited in various ways: for instance, defined colors were global to the entire stylesheet, whereas it would be much more convenient to define them per-widget. The color functions also only worked in sRGB, which isn’t the optimal color space for some kinds of calculations.
Thanks to work by Alice Mikhaylenko and the rest of the GTK team, GTK now supports standard CSS variables, color mixing, and relative colors, with a variety of color spaces. The old extensions have been deprecated. This work has already shipped in GTK 4.16, and many apps and libraries (including libadwaita as of 1.6) are making extensive use of it.
This work gets us one step closer to our long-term goal of dropping SCSS in the future, which will simplify the development and distribution process for the GNOME stylesheet.
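A short sketch of what the new syntax enables (the selectors and variable names here are illustrative, not from any real stylesheet; the features themselves follow standard CSS, which is what GTK 4.16 implements):

```css
/* Define a variable on one widget subtree instead of globally */
.example-card {
  --card-color: #3584e4;
  background-color: var(--card-color);
}

/* Mix two colors in a perceptually uniform color space */
.example-card label {
  color: color-mix(in oklab, var(--card-color), white 85%);
}

/* Relative colors: derive a darker shade from the same variable */
.example-card:hover {
  background-color: oklab(from var(--card-color) calc(l - 0.1) a b);
}
```

Because these are standard CSS features, per-widget theming tricks that previously required SCSS preprocessing can increasingly be expressed directly in the shipped stylesheet.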
Notification API
Notifications are a critical component of any modern computing experience; they’re essential for keeping users informed and ensuring that they can react quickly to messages or events.
The original Freedesktop Notification Standard used on the Linux desktop saw almost no significant changes in the past decade, so it was missing many modern features that users have grown to know and expect from other platforms. There were thus various DE-specific extensions and workarounds, which made it difficult for app developers to expect a consistent feature set and behavior. Even within GNOME, there were technically three different supported notification APIs that apps could use, each of which had a different subset of features. Thanks to STF funding, Julian Sparber was able to spend the time necessary to finally untangle some of the difficult issues in this area.
After evaluating different directions, a path forward was identified. The two main criteria were to avoid breaking existing apps and to reuse one of the existing APIs. We decided to extend Flatpak’s notification portal API. The new additions include some of the most essential and highly visible features that had been missing, like playing a notification sound, markup styling, and more granular control over notification visibility.
The visibility control is especially impactful because it allows apps to send less intrusive notifications, and it improves user privacy. One feature that is mostly invisible to users, on the other hand, was the inclusion of the XDG Activation protocol in the new spec, which allows apps to grab focus after a user interacts with a notification. The updated protocol is already released and documented. You can find the list of changes in the pull request that introduced the v2 notifications portal.
While there is still some technical debt remaining in this area, the STF funding allowed us to get to a more sustainable place and lay the groundwork for future extensions to the standard. There is already a list of planned features for a version 3 of the notification portal.
You can read more about this initiative in Julian’s blog post on the GNOME Shell blog.
Notifications in GNOME Shell
GNOME Shell provides the core user interface for the GNOME desktop, which includes notification banners, the notification list, and the lock screen. As part of the STF project, Julian Sparber worked on refactoring and improving this part of the GNOME Shell code base, in order to make it more feasible to extend it and support new features. Specifically, this allows us to implement the UI for the v2 notifications API:
Tracking notifications per-app: we now show which app sent each notification, and this lays the technical groundwork for grouping notifications by app
Allowing notifications to be expanded to show the full content and buttons
Keeping all notifications until you dismiss them, rather than only the 3 most recent ones
Julian was also able to clean up a bunch of legacy code, like GNOME Shell’s integration with Telepathy.
Most of these changes landed in GNOME 46. Grouping landed in GNOME 48. For more detail, see Julian’s blog post.
Global Shortcuts
The global shortcuts portal allows apps to request permission to receive certain key bindings, regardless of whether the app is currently focused. Without this portal, use cases like push-to-talk in voice chat apps are not possible due to Wayland’s anti-keylogging design.
Red Hat’s Allan Day created designs for this portal a while back, which we aimed to implement as part of the STF project.
Dorota Czaplejewicz spearheaded the effort to implement the global shortcuts portal across GNOME. She started this work in various components all over the stack: integration into the Settings UI, the compositor backend API, the GNOME portal, and the various portal client libraries (libportal and ashpd). This work has since been picked up and finalized by Carlos Garnacho and others, and landed in GNOME 48.
XDG Desktop Portals
Portals are cross-desktop system APIs that give sandboxed apps a way to securely access system resources such as files, devices like cameras, and more.
The STF project allowed Georges Stavracas to create a new dedicated documentation website for portals, which will make it easier for app developers to understand and adopt these APIs. This documentation also makes it easier for desktop environment developers to implement the backend of these APIs, so that apps have complete functionality on these desktops.
Georges and Hubert Figuiere added a new portal for USB devices, and many parts of the platform are being updated to support it. This portal allows apps to list USB devices, and then request access without opening security holes.
The document portal saw some fixes for various issues, and the file transfer portal now supports directories. The settings portal was extended to advertise a new cross-desktop high contrast setting.
Hubert also worked to improve libportal, the convenience library that wraps the portal API for apps to easily consume. It now supports the settings portal, so apps can conveniently receive light/dark mode, system accent color, and high contrast mode settings. He also fixed various bugs and memory leaks.
WebKitGTK is GNOME’s default web engine, for rendering web content in apps. It supports modern web standards, and is used in GNOME Web, our default browser. Georges adjusted WebKitGTK to make use of portals for printing and location services. New features were added to all parts of the stack to enable this. This makes WebKitGTK and every app that uses it more secure.
Flatpak and Sandboxing
Flatpak is the standard cross-distribution app packaging format, and it also provides security through sandboxing. It’s split into a few smaller sub-projects: the Flatpak core which implements the majority of Flatpak’s functionality, the xdg-dbus-proxy which filters D-Bus traffic to enforce which services sandboxed apps can talk to, and flatpak-builder which app developers can use to build Flatpak packages conveniently.
As part of the STF project, Hubert worked on improving the maintenance situation for the Flatpak core, and fixed various bugs and memory leaks. Hubert and Georges also implemented the necessary groundwork in Flatpak’s sandboxing for the new USB portal to function.
In flatpak-builder, Hubert implemented the long-awaited feature to rename MIME files and icons, which simplifies packaging of desktop applications. He also performed some general maintenance, including various minor bug fixes.
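For context, flatpak-builder has long had top-level manifest keys like rename-desktop-file and rename-icon, which rename exported files to match the app ID; the new work extends this pattern to MIME type definitions and their icons. A sketch of what a manifest might look like (the exact new key names, rename-mime-file and rename-mime-icons, are written from memory and should be checked against the flatpak-builder documentation; everything else follows the standard manifest format):

```json
{
  "id": "org.example.Editor",
  "runtime": "org.gnome.Platform",
  "runtime-version": "48",
  "sdk": "org.gnome.Sdk",
  "command": "editor",
  "rename-desktop-file": "editor.desktop",
  "rename-icon": "editor",
  "rename-mime-file": "editor.xml",
  "rename-mime-icons": ["text-x-editor-document"],
  "modules": [
    {
      "name": "editor",
      "buildsystem": "meson",
      "sources": [{ "type": "dir", "path": "." }]
    }
  ]
}
```

Previously, upstream projects that installed MIME definitions under a non-app-ID name had to patch their build systems to be exportable from a Flatpak sandbox; with renaming handled by flatpak-builder, no patching is needed.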
The XDG D-Bus Proxy previously relied on some very specific implementation details of the various D-Bus client libraries, such as GLib’s gdbus and zbus. This broke when zbus changed its implementation. Thanks to work by Sophie Herold, xdg-dbus-proxy was updated to stop relying on this undefined behavior, which means that all client libraries should now work without problems.
Nautilus File Chooser Portal
The file chooser portal is used by apps to bring up a sandbox-aware file picker provided by the system. It acts as an invisible permission check for Flatpak apps. This portal powers the “Open” or “Save As” actions in apps.
Previously, GNOME’s implementation of xdg-desktop-portal used GTK’s built-in file chooser dialog widget. This, however, caused some problems. Since the dialog’s implementation lived in GTK, it couldn’t use any of libadwaita’s functionality without creating a circular dependency, because libadwaita itself depends on GTK. This meant that the dialog couldn’t be made to work on mobile, and didn’t look in line with modern GNOME apps. The behavior of the file chooser was similar to Nautilus, our file manager app, but not identical, which caused confusion among users. It took lots of work to keep both projects at least somewhat in line, and even then it wasn’t perfect. The file chooser also couldn’t benefit from recent performance improvements in Nautilus, and it was missing some of Nautilus’s features: for example, it couldn’t generate thumbnails and didn’t support multiple zoom levels.
Nautilus-based file open and save dialogs
António Fernandes extended Nautilus with a new implementation of the file chooser portal (with the help of Corey Berla and Khalid Abu Shawarib doing reviews and fixups). Nautilus can now behave like an open or save file picker dialog, handling all the edge cases this entails. This required a surprising amount of work. For example, Mutter needed improvements to handle attaching Nautilus as a modal dialog, Nautilus itself needed several refactors to support different views (saving files, opening files, normal file browser), the initial portal implementation needed to be reworked to avoid breaking dark mode, and there were several iterations on the design to deal with UX edge cases.
All of this work landed in GNOME 47.
GNOME Online Accounts
GNOME Online Accounts (GOA) is GNOME’s single sign-on framework, providing a way for users to set up online accounts to be used across the desktop and preinstalled apps. Since there was no fixed maintainer in recent years, the project fell behind in maintenance, relied on old libraries, used old tooling for tests, and was missing support for open protocols like WebDAV (including CalDAV and CardDAV). Andy Holmes took over maintenance thanks to the STF project, and put it on more stable footing.
GOA used to only have limited WebDAV support as part of its Nextcloud integration. Andy separated the WebDAV support into a standalone integration, which allows users to integrate with more open-source-friendly providers, like Fastmail. This integration was also tested with well-known self-hosted servers.
GOA previously relied on its own webview for OAuth2 login, for providers like Google. Andy replaced this with a secure exchange using the default web browser. This also allowed Andy to upgrade GOA to GTK4 (with reviews by Philip Withnall) and remove the last GTK3 dependency from GNOME Settings. As part of this rework, the backend API was refactored to be fully asynchronous.
Finally, Andy updated GOA’s test infrastructure, to use modern static analyzers and better CI tests.
Language Bindings
GLib is GNOME’s foundational C library, and includes many common-sense utilities that the C standard library lacks. It also provides a layer of platform-agnostic functionality, which means that C programs targeting GLib are easier to port to other operating systems like Windows. For instance, GLib.DateTime is a set of utilities for getting the current time (which is OS-specific), doing complex math with time, and formatting timestamps for human-readable display.
GObject introspection is GNOME’s language binding infrastructure. It allows libraries that are written in C (and, lately, Rust) to be used from other languages, including Rust, Python, JavaScript, Swift, and more! It consists of a set of coding style conventions, annotations (that appear as code comments on functions), an object-oriented type system called GObject, a tool that extracts all of this information into .gir files, a library to parse these files, and per-language infrastructure to consume this parsed data into the language’s type system. This infrastructure enables language bindings to be relatively easy to make and maintain, which in turn enables GNOME’s large ecosystem of apps written in a diverse set of languages.
GLib and GObject introspection are tightly coupled projects. GLib defines the type system (including GObject, and the lower-level concepts underneath it), and GObject introspection heavily relies on these types. Conversely, GLib itself is accessible from language bindings, which means that GLib depends on GObject introspection. This complicated dependency situation makes it rather difficult to iterate on our language bindings, and was quite messy to maintain.
As part of the STF project, Philip Withnall started work on merging GObject introspection into GLib. Having them in the same repository means that developing them together is easier, because it can avoid dependency cycles. So far, he was able to move libgirepository, which is a large part of GObject introspection. In practice, this has allowed us to generate the .gir files for GLib as part of its build process, rather than generating them externally.
Building on this work, Evan Welsh was able to start making improvements to our language bindings. Evan added support for annotating async functions (based on work by Veena Nager), so that GObject introspection doesn’t need to use heuristics to guess which APIs are async. This allows language bindings to better integrate GNOME’s async APIs with the language’s native async/await syntax.
Evan’s work on async function calls required work across the entire language binding stack, including some more merging of GObject introspection into GLib. Most notably, these new features required new test cases, which meant that GLib’s CI needed to use the bleeding-edge version of GObject introspection, which was rather difficult due to the entangled dependencies between the two projects. Evan made these necessary changes, so now it is more feasible to extend the functionality of our language bindings.
Evan then went on to integrate this work across the rest of the stack. In Meson, there’s a pending pull request to transition from the old GObject introspection tools to the new GLib tools. In GNOME’s JavaScript bindings, Philip started integrating the GLib version of libgirepository, and Evan has since continued this work.
Evan also did some work in GTK to fix an issue that previously skipped some accessibility APIs when generating language bindings. This made it possible for apps written in languages other than C to better communicate their structure to the screen reader, improving the accessibility of those apps.
Finally, Evan worked on advancing GNOME’s TypeScript bindings by merging gi.ts and ts-for-gir into a single toolchain which can fully cover GNOME’s stack and have accurate enough types to work with existing JavaScript codebases. This was possible thanks to help by Pascal Garber, the maintainer of ts-for-gir. This will enable GNOME’s many apps implemented in JavaScript to be ported to TypeScript, allowing for static analysis and increasing code quality. For instance, GNOME Weather was recently ported to TypeScript.
GLib
After merging libgirepository into GLib, Philip was able to port GLib away from gtk-doc and to gi-docgen, GNOME’s modern API documentation generator. This brought a much faster build time for documentation, and makes the docs much more useful for users of language bindings. The transition requires going through the documentation API by API and porting it to the new gi-docgen syntax. As part of the STF project, Philip was able to port all section introductions and some API documentation comments, but there’s a huge number of APIs, so more work is still required. As the documentation is ported, various improvements can be made to its quality.
The STF project also allowed Philip to focus on various ongoing maintenance tasks for GLib, with prominent examples including:
Reviewed and landed integration with the CHERI processor, which is a new architecture with increased memory security compared to traditional x86/ARM architectures. Having GLib running on it is an important step in bootstrapping an OS. This is the kind of work which wouldn’t get reviewed without maintenance funding for GLib, yet is important for the wider ecosystem.
GNOME OS
Many issues in our development process come from the fact that there’s not enough end-to-end testing with the entire stack. This was the initial motivation for the GNOME Continuous project, which eventually became GNOME OS as we know it today. GNOME OS powers our automated QA process, and allows some limited direct testing of new GNOME features in virtual machines.
However, GNOME OS has a lot more potential beyond that as a QA and development tool. It’s 90% of the way there to being a usable OS for GNOME developers to daily drive and dogfood the latest GNOME features. This is sometimes the only way to catch bugs, especially those relating to hardware, input, and similar situations that a VM can’t emulate. Also, since it’s a platform we control, we saw the opportunity to integrate some quality-of-life improvements for GNOME developers deep into the OS.
Transition to Sysupdate
Switching GNOME OS away from ostree and to systemd-sysupdate opened the doors to more complete integration with all of systemd’s existing and future development tools, like systemd-sysext. It also enabled us to build security features into GNOME OS, like disk encryption and UEFI secure boot, which made it suitable for daily-driver use by our developers.
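For readers unfamiliar with sysupdate: it is configured declaratively, with each updatable part of the OS described by a transfer definition in /usr/lib/sysupdate.d/. A minimal sketch for an A/B-updated /usr partition, following the documented sysupdate.d format (the file name, URL, and match patterns are illustrative, not GNOME OS’s real configuration):

```ini
# /usr/lib/sysupdate.d/50-usr.conf — one transfer definition per component
[Transfer]
# Never delete the version we are currently booted from
ProtectVersion=%A

[Source]
Type=url-file
Path=https://updates.example.org/
MatchPattern=gnomeos_@v.usr.raw.xz

[Target]
Type=partition
Path=auto
MatchPattern=usr-@v
MatchPartitionType=usr
ReadOnly=yes
```

sysupdate downloads the newest version matching the source pattern and writes it into the inactive partition slot, which is what makes atomic updates and rollback possible, and what the new D-Bus service exposes to GNOME.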
This work started before the STF’s investment. Valentin David and the rest of the GNOME OS team had already created an experimental build of GNOME OS that replaced ostree with systemd-sysupdate. It coexisted with the official recommended ostree edition. At roughly the same time, Adrian Vovk was making a similar transition in his own carbonOS, when he discovered that systemd-sysupdate doesn’t have an easy way to integrate with GNOME. So, he made a patch for systemd that introduces a D-Bus service that GNOME can use to control systemd-sysupdate.
As part of the STF project, these transitions were completed. Codethink’s Tom Coldrick (with help from Jerry Wu and Abderrahim Kitouni) rebased Adrian’s D-Bus service patch, and it got merged into systemd. Jerry Wu and Adrien Plazas also integrated this new service into GNOME Software.
GNOME Software showing sysupdate updates on GNOME OS Nightly
Adrian continued improving sysupdate itself: he added support for “optional features”, which allow parts of the OS to be enabled or disabled by the system administrator. This is most useful for optionally distributing debugging or development tools, or extra drivers like the proprietary NVIDIA graphics driver in GNOME OS.
GNOME OS also needed the ability to push updates to different branches simultaneously. For instance, we’d like to have a stable GNOME 48 branch that receives security updates, while our GNOME Nightly branch contains new unfinished GNOME features. To achieve this, Adrian started implementing “update streams” in systemd-sysupdate, which are currently pending review upstream.
Codethink wrote about the sysupdate transition in a blog post.
Immutable OS Tooling
Thanks to GNOME OS’s deep integration with the systemd stack, we were able to leverage new technologies like systemd-sysext to improve the developer workflow for low-level system components.
As part of his work for Codethink, Martín Abente Lahaye built sysext-utils, a new tool that lets you locally build your own version of various components, and then temporarily apply them over your immutable system for testing. In situations where some change you’re testing substantially compromises system stability, you can quickly return to a known-good state by simply rebooting. This work is generic enough that the basics work on any systemd-powered distribution, but it also has direct integration with GNOME OS’s build tooling, making the workflow faster and easier than on other distributions. Martín went into lots more detail on the Codethink blog.
A natural next step was to leverage sysext-utils on GNOME’s CI infrastructure. Flatpak apps enjoy the benefits of CI-produced bundles which developers, testers, and users alike can download and try on their own system. This makes it very natural and quick to test experimental changes, or confirm that a bug fix works. Martín and Sam Thursfield (with the help of Jordan Petridis and Abderrahim) worked to package up sysext-utils into a CI template that GNOME projects can use. This template creates systemd-sysext bundles that can be downloaded and applied onto GNOME OS for testing, similar to Flatpak’s workflow. To prove this concept, this template was integrated with the CI for mutter and gnome-shell. Martín wrote another blog post about this work.
Security Tracking
To become suitable for daily-driver use, GNOME OS needs to keep track of the potential vulnerabilities in the software it distributes, including various low-level libraries. Since GNOME OS is based on our GNOME Flatpak runtime, improving its vulnerability tracking makes our entire ecosystem more robust against CVEs.
To that end, Codethink’s Neill Whillans (with Abderrahim’s help) upgraded the GNOME Flatpak runtime’s CVE scanning to use modern versions of the freedesktop-sdk tooling. Then, Neill expanded the coverage to scan GNOME OS as well. Now we have reports of CVEs that potentially affect GNOME OS in addition to the GNOME Flatpak runtime. These reports show the packages the CVEs come from, and a description of each vulnerability.
GNOME OS Installer
To make GNOME OS more appealing to our developer community, we needed to rework the way we install it. The existing installer is very old and limited: it’s incompatible with dual-boot, and the current setup flow has no support for the advanced security features GNOME OS now offers (like TPM-backed disk encryption).
Adrian started working on a replacement installer for GNOME OS, built around systemd’s low-level tooling. This integration allows the new installer to handle GNOME OS’s new security features, as well as provide a better UX for installing and setting up GNOME OS. Most crucially, the new architecture makes dual-boot possible, which is probably one of the most requested GNOME OS features from our developers at the moment.
Sam Hewitt made comprehensive designs and mockups for the new installer’s functionality, based on which Adrian has mostly implemented the frontend for the new installer. On the backend, we ran into some unexpected difficulties and limitations of systemd’s tools, which Adrian was unable to resolve within the scope of this project. The remaining work is mostly in systemd, but it also requires improvements to various UAPI Group Specifications and better integration with low-level boot components like the UEFI Secure Boot shim. Adrian gave an All Systems Go! talk on the subject, which goes into more details about the current status of this work, and the current blockers.
BuildStream
BuildStream is the tool used to build GNOME OS, as well as the GNOME SDK and runtime for Flathub, which form the base of all GNOME apps distributed there.
Previously, it was not possible to use dependencies originating from git repositories when working with the Rust programming language. That made it impossible to test and integrate unreleased fixes or new features from other projects during the development cycle. Thanks to work by Sophie Herold, git dependencies can now be used without any manual work.
Additionally, BuildStream is also used by the Freedesktop.org project for its SDK and runtime, on which most other runtimes, including the GNOME runtime, are based. With the newly added support for git sources, it has become possible to add new components of the GStreamer multimedia framework that are written in Rust and were previously missing from the runtime. This includes the module that makes it possible to use GStreamer to show videos in GNOME apps. These functions are already used by apps like Camera, Showtime, and Warp.
OpenQA
Quality assurance (QA) testing on GNOME OS is important because it allows us to catch a whole class of issues before they can reach our downstream distributors. We use openQA to automate this process, so that we’re continuously running QA tests on the platform. Twice a day, we generate GNOME OS images containing all the latest git commits for all GNOME components. This image is then uploaded into openQA, which boots it in a VM and runs various test cases. These tests send fake mouse and keyboard input, and then compare screenshots of the resulting states against a set of known-good screenshots.
Codethink’s Neill Whillans created a script that cleans up the repository of known-good screenshots by deleting old and unused ones. He also fixed many of our failing tests. For instance, Neill diagnosed a problem on system startup that caused QA tests to sometimes fail, and traced it to an issue in GNOME Shell’s layout code.
Building on the sysext-utils work mentioned above, Martín made a prototype QA setup where GNOME’s QA test suite can run as part of an individual project’s CI pipeline. This will make QA testing happen even earlier, before the broken changes are merged into the upstream project. You can see the working prototype for GNOME Shell here, and read more about it in this blog post.
Security
GNOME and its libraries are used in many security-critical contexts. GNOME libraries underpin much of the operating system, and GNOME itself is used by governments, corporations, journalists, activists, and others with high security needs around the world. In recent years, the free desktop has not seen as much investment in this area as the proprietary platforms, which has led to a gap in some areas (for instance, home directory encryption). This is why it was important for us to focus on security as part of the STF project.
Home Directory Encryption
systemd-homed is a technology that allows for per-user encrypted home directories. Uniquely, it has a mechanism to remove the user’s encryption keys from memory whenever their device is asleep but still powered on. Full disk encryption alone doesn’t protect data while the machine is powered on, because the encryption key remains in RAM and can be extracted via various techniques.
systemd-homed has existed for a couple of years now, but nobody is using it yet because it requires integrations with the desktop environment. The largest change required is that homed needs any “unlock” UI to run from outside of the user session, which is not how desktop environments work today. STF funding enabled Adrian Vovk to work on resolving the remaining blockers, developing the following integrations:
Adding plumbing to systemd-logind that notifies GDM when it’s time to show the out-of-session unlock UI.
Integrating systemd-homed with AccountsService, which currently acts as GNOME’s database of users on the system. Previously, homed users didn’t appear anywhere in GNOME’s UI.
In addition, Adrian triaged and fixed a lot of bugs across the stack, including many blockers in systemd-homed itself.
This work was completed, and a build of GNOME OS with functional homed integration was produced. However, not all of this has been merged upstream yet. Also, due to filesystem limitations in the kernel, we don’t have a good way for multiple homed-backed users to share space on a single system at the moment. This is why we disabled multi-user functionality for now.
GNOME Keyring
The infrastructure to securely store secrets (i.e. passwords and session tokens) for apps on Linux is the FreeDesktop Secret Service API. On GNOME, this API is provided by the GNOME Keyring service. Unfortunately, GNOME Keyring is outdated, overly complex, and cannot meet the latest security requirements. Historically, it has also provided other security-adjacent services, like authentication agents for SSH and GPG. There have been numerous efforts to gradually reduce the scope of GNOME Keyring and modernize its implementation, the most recent of which was Fedora’s Modular GNOME Keyring proposal. Unfortunately, this work was stalled for years.
As part of the STF project, Dhanuka Warusadura took over the remaining parts of the proposal. He disabled the ssh-agent implementation in GNOME Keyring, which prompted all major distributions to switch to gcr-ssh-agent, the modern replacement. He also ported the existing gnome-keyring PAM module with reworked tests, following the modern PAM module testing best practices. With this work completed, GNOME Keyring has been reduced to just a Secret Service API provider, which makes it possible for us to replace it completely.
As the replacement for this remaining part of GNOME Keyring, Dhanuka extended the oo7 Secret Service client library to also act as a provider for the API. oo7 was chosen because it is implemented in Rust, and memory safety is critical for a service that manages sensitive user secrets. This new oo7-daemon is almost ready as a drop-in replacement for GNOME Keyring, except that it cannot yet automatically unlock the default keyring at login.
As part of this project, Dhanuka also took care of general maintenance and improvements to the credential handling components in GNOME. These include gnome-keyring, libsecret, gcr and oo7-client.
Key Rack
Key Rack is an app that allows viewing, creating and editing the secrets stored by apps, such as passwords or tokens. Key Rack is based on oo7-client, and is currently the only app that allows access to the secrets of sandboxed Flatpak apps.
Key Rack was previously limited to displaying the secrets of Flatpak apps, so as part of the STF Project Felix Häcker and Sophie Herold worked on expanding its feature set. It now integrates with the Secret Service, and makes management of secrets across the system easier. With this addition, Key Rack now supports most of the features of the old Seahorse (“Passwords and Keys”) app.
Glycin
Glycin is a component to load and edit images. In contrast to other solutions, it sandboxes the loading operations to provide an extra layer of security. Apps like Camera, Fractal, Identity, Image Viewer, Fotema, and Shortwave rely on glycin for image loading and editing.
Previously, it was only possible to load images in apps and components that were written in the Rust programming language. Thanks to work by Sophie Herold, it is now possible to use glycin from all programming languages that support GNOME’s language binding infrastructure. The new feature has also been designed to allow glycin to be used outside of apps, with the goal of using it throughout the GNOME platform and desktop. Most notably, there are plans to replace GdkPixbuf with glycin.
Bluetooth
Jonas Dreßler worked on some critical, security-relevant issues in the Linux Bluetooth stack, including work in the kernel, BlueZ, and GNOME’s Bluetooth tooling (1, 2, 3).
Bug Bounty Program
In addition to the primary STF fund, STA offers other kinds of support for public interest software projects. This includes their Bug Resilience Program, which gives projects complimentary access to the YesWeHack bug bounty platform. This is a place for security researchers to submit the vulnerabilities they’ve discovered, in exchange for a bounty that depends on the issue’s severity. Once the bug is fixed, the project is also paid a bounty, which makes it sustainable to deal with security reports promptly. YesWeHack also helps triage the reported vulnerabilities (e.g. by confirming that they’re reproducible), which helps further reduce the burden on maintainers.
Sonny and I did the initial setup of this program, and then handed it over to the GNOME security team. We decided to start with only a few of the highest-impact modules, so currently only GLib, glib-networking, and libsoup are participating in the program. Even with this limited scope, at the time of writing we’ve already received about 50 reports, with about 20 bounty payments so far, totaling tens of thousands of euros.
For up to date information about reporting security vulnerabilities in GNOME, including the current status of the bug bounty program, check the GNOME Security website.
Hardware Support
GNOME runs on a large variety of hardware, including desktops, laptops, and phones. There’s always room for improvement, especially on smaller, less performant, or more power efficient devices. Hardware enablement is difficult and sometimes expensive, due to the large variety of devices in use. For this project we wanted to focus specifically on devices that developers don’t often have, and thus don’t get as much attention as they should.
Mutter and GNOME Shell
Jonas Dreßler worked on improving hardware support in Mutter and GNOME Shell, our compositor and system UI. As part of this, he improved (and is still improving) input and gesture support in Mutter, introducing new methods for recognizing touchscreen gestures to make touch, touchpad, and mouse interactions smoother and more reliable.
Thanks to Jonas’ work we were also finally able to enable hardware encoding for screencasts in GNOME Shell, significantly reducing resource usage when recording the screen.
GNOME Shell at a common small laptop resolution (1366×768) with the new, better dash sizing
Thanks to work on XWayland fractional scaling in Mutter (1, 2), the support for modern high-resolution (HiDPI) monitors got more mature and works with all kinds of applications now, making GNOME adapt better to modern hardware.
Variable Refresh Rate
Variable Refresh Rate (VRR) is a technology that allows monitors to dynamically change how often the image is updated. This is useful in two different ways. First, in the context of video games, it allows the monitor to match the graphics card’s frame rate to alleviate some microstutters without introducing tearing. Second, on devices which have support for very low minimum refresh rates (such as phones), VRR can save power by only refreshing the screen when necessary.
Dor Askayo had been working on adding VRR support to Mutter in their free time for several years, but due to the fast pace of development they were never quite able to get it rebased and landed in time. The STF project allowed them to work on it full-time for a few months, which made it possible to land it in GNOME 46. The feature is currently still marked as experimental due to minor issues in some rare edge cases.
GNOME Shell Performance
GNOME Shell, through its dependency on Mutter, is the compositor and window manager underpinning the GNOME desktop. Mutter does all core input, output, and window processing. When using GNOME, you’re interacting with all applications and UI through GNOME Shell.
Thus, it’s critical that GNOME Shell remains fast and responsive because any sluggishness in Shell affects the entire desktop. As part of the STF project, Ivan Molodetskikh did an in-depth performance investigation of GNOME Shell and Mutter. Thanks to this, 12 actionable performance problems were identified, 7 of which are already fixed (e.g. 1, 2, 3), making GNOME smoother and more pleasing to use. One of the fixes made monitor screencasting eight times faster on some laptops, bringing it from unusable to working fine.
Ivan also conducted a round of hardware input latency testing for GNOME’s VTE library, which underpins GNOME’s terminal emulators. He worked with Red Hat’s Christian Hergert to address the discovered performance bottlenecks, then retested the library to confirm the vast performance improvement. This work landed in GNOME 46. For more details, see Ivan’s blog post.
Design Support
Close collaboration between developers and designers is an important value of the GNOME project. Even though the bulk of the work we did as part of this project was low-level technical work, many of our initiatives also had a user-facing component. For these, we had veteran GNOME designer Sam Hewitt (and myself to some degree) help developers with design across all the various projects.
This included improving accessibility across the desktop shell and apps, new and updated designs for portals (e.g. global shortcuts, file chooser), designs for security features such as systemd-homed (e.g. recovery key setup) and the new installer (e.g. disk selection), as well as general input on the work of STF contributors to make sure it fits into GNOME’s overall user experience.
Planning, Coordination & Reporting
Sonny Piers and I put together the initial STF application after consulting various groups inside the community, with the goal of addressing as many critical issues in underfunded areas as possible.
Once we got the approval we needed a fiscal host to sign the actual contract, which ended up being the GNOME Foundation. I won’t go into why this was a bad choice here (see my previous blog post for more), except to say that the Foundation was not a reliable partner for us, and we’re still waiting for the people responsible for these failures to take accountability.
However, while we were stretched thin on the administrative side due to Foundation issues, we somehow made it work. Sonny’s organizing talent and experience were a major factor in this. He was instrumental in finding and hiring contractors, reaching out to new partners from outside our usual bubble (e.g. around accessibility), managing cashflow, and negotiating very friendly terms for our contracts with Igalia and Codethink. Most importantly, he helped mediate difficult socio-technical discussions, allowing us to move forward in areas that had previously been stuck for years (e.g. notifications).
On the reporting side we collected updates from contractors and published summaries of what happened to This Week in GNOME and Mastodon. We also managed all of the invoicing for the project, including monthly reports for STA and time sheets organized by project area.
What’s Next
While we got a huge amount of work done over the course of the project, some things are not quite ready yet or need follow-up work. In some cases this is because we explicitly planned the work as a prototype (e.g. the Newton accessibility architecture), in others we realized during the project that the scope was significantly larger than anticipated due to external factors (e.g. systemd’s improved TPM integration changed our plans for how the oo7-daemon service will unlock the keyring), and in others still getting reviews was more challenging or took longer than expected.
The following describes some of the relevant follow-up work from the STF project.
Wayland-Native Accessibility (Newton)
Matt Campbell’s work on Newton, our new Wayland-native accessibility stack, was successful beyond our expectations. We intended it as only a prototype, but we were already able to land parts of Matt’s work. For instance, Matt worked to integrate GTK with AccessKit, which will be at the core of the Newton architecture. This work has since been picked up, updated, and merged into GTK.
However, in some ways Newton is still a prototype. It intends to be a cross-desktop standard, but has not yet seen any cross-desktop discussions. Its Wayland protocol also isn’t yet rigorously defined, which is a prerequisite for it to become a new standard. The D-Bus protocol that’s used to communicate with the screen reader is ad-hoc, and currently exists only to communicate between Orca and GNOME Shell. All of these protocols will need to be standardized before apps and desktop environments can start using it.
Even once Newton is ready and standardized, it’ll need to be integrated across the stack. GTK will get support for Newton relatively easily, since Newton is built around AccessKit. However, GNOME Shell uses its own bespoke widget toolkit and this needs to be integrated with Newton. Other toolkits and Wayland compositors will also need to add support for it.
Platform
Julian Sparber’s work on the v2 notification API has landed in part, but other parts of this are still in review (e.g. GLib, portal backend). Additionally, there’s more GUI work to be done, to adjust to some of the features in Notifications v2. GNOME Shell still needs to make better use of the notification visibility settings for the lock screen, to increase user privacy. There’s also the potential to implement special UI for some types of notifications, such as incoming calls or ringing alarms. Finally, we already did some initial work towards additional features that we want to add in a v3 of the specification, such as grouping message notifications by thread or showing progress bars in notifications.
Spiel, the new text-to-speech API, is currently blocked on figuring out how to distribute speech synthesizers and their voices. At the moment there’s a prototype-quality implementation built around Flatpak, but unfortunately there are still a couple of limitations in Flatpak that prevent this from working seamlessly. Once we figure out how to distribute voices, Spiel will be ready to be shipped in distros. After that, we can use Spiel in a new portal API, so that apps can easily create features that use text-to-speech.
The work done on language bindings as part of this STF project focused on the low-level introspection in GLib. This is the part that generates language-agnostic metadata for the various languages to consume. However, for this work to be useful each language’s bindings need to start using this new metadata. Some languages, like Python, have done this already. Others, like JavaScript, still need to be ported. Additionally, build systems like Meson still need some minor tweaks to start using the new introspection infrastructure when available.
We’d like to finalize and deploy the prototype that runs openQA test cases directly in the CI for each GNOME component. This infrastructure would allow us to increase the QA test coverage of GNOME as a whole.
Encrypting Home Directories
The work to integrate systemd-homed into GNOME is mostly complete and functional, but parts of it have not landed yet (see this tracking issue and all the merge requests it links to).
Due to filesystem limitations in the kernel, we don’t have a good way for multiple homed-backed users to share space on a single system. For now, we simply disabled that functionality. Follow-up work would include fixing this kernel limitation, and re-enabling multi-user functionality.
Once these things are resolved, distributions can start moving forward with their adoption plans for systemd-homed.
Long-term, we’d like to deprecate the current AccountsService daemon, which provides a centralized database for users that exist on the system. We’d like to replace it with systemd-userdb, which is a more modern and more flexible alternative.
Keyring
Before the oo7-daemon can replace the GNOME Keyring service, it still needs support for unlocking the default keyring at login. An implementation that partially copies GNOME Keyring’s solution has been merged into libsecret, but it’s still missing integration with oo7-daemon. Once this is solved, oo7-daemon will become drop-in compatible with GNOME Keyring, and distributions will be able to start transitioning.
Longer term we would like to redo the architecture to make use of systemd’s TPM functionality, which will increase the security of the user’s secrets and make it compatible with systemd-homed.
Thanks
The 2023/2024 GNOME STF project was a major success thanks to the many, many people who helped to make this possible, in particular:
The Sovereign Tech Agency, for making all of this possible through their generous investment
Tara Tarakiyee and the rest of the STA team, for making the bureaucratic side of this very manageable for us
All of our contractors, for doing such wonderful work
The wider community for their reviews, input, support, and enthusiasm for this project
Igalia and Codethink for generously donating so much of their employee time
Red Hat and the systemd project for helping with reviews
Sonny Piers for taking the lead on applying to the STF, and running the project from a technical point of view
Adrian Vovk for splitting the gargantuan task of editing this blog post with me
Welcome to another month of rambling status reports. Not much in terms of technology this month, my work at Codethink is still focused on proprietary corporate infrastructure, and the weather is too nice to spend more time at a computer than necessary. Somehow I keep reading things and thinking about stuff though, and so you can read some of these thoughts and links below.
Is progress going backwards?
I’ve been listening to The Blindboy Podcast from the very beginning. You could call this a “cult podcast” since there isn’t a clear theme; the only constant is life, narrated by an eccentric Irish celebrity. I’m up to the episode “Julias Gulag” from January 2019, where Blindboy mentions a Gillette advert of that era which came out against toxic masculinity, very much a progressive video in which there wasn’t a single razor blade to speak of. And he said, roughly, “I like the message, and the production is excellent, but I always feel uneasy when this type of “woke” video is made by a huge brand because I don’t think the board of directors of Procter & Gamble actually give a shit about social justice.”
This made me think of an excellent Guardian article I read last week, by Eugene Healey entitled “Marketing’s ‘woke’ rebrand has ultimately helped the far right”, in which he makes largely the same point, with six years worth of extra hindsight. Here are a few quotes but the whole thing is worth reading:
Social progress once came hand-in-hand with economic progress. Now, instead, social progress has been offered as a substitute for economic progress.
Through the rear window it’s easy to see that the backlash was inevitable: if progressive values could so easily be commodified as a tool for selling mayonnaise, why shouldn’t those values be treated with the same fickleness as condiment preferences?
The responsibility we bear now is undoing the lesson we inadvertently taught consumers over this era. Structural reform can’t be achieved through consumption choices – unfortunately, we’re all going to have to get dirt under our fingernails.
We are living through a lot of history at the moment and it can feel like our once progressive society is now going backwards. A lot of the progress we saw was an illusion anyway. The people who really hold power in the world weren’t really about to give anything up in the name of equality, and they still aren’t. World leaders were still taking private jets to conferences to talk about the climate crisis, and so on. The 1960s USA seemed like a place of progress, and then they went to war in Vietnam.
As Eugene Healey says towards the end of his piece, one positive change is that it’s now obvious who the bad guys are again. Dinald Tromp appears on TV every time I look at a TV, and he dresses like an actual supervillain. Mark Zuckerberg is trying to make his AI be more right-wing. Gillette is back to making adverts which are short videos of people shaving, because Gillette is a brand that manufactures razors and wants you to buy them. It is not a social justice movement!
The world goes in cycles, not straight lines. Each new generation has to ignore most of what it learns from teachers and parents, and figure everything out for itself the hard way, right?
For technologists, it’s been frustrating to spend the last decade telling people to be wary of Apple, Amazon, Google, Meta and Microsoft, and being roundly ignored. They are experts in making convenient, zero-cost products, and they are everywhere. Unless you’re an expert in technology or economics, it wasn’t obvious what they have been working towards, which is the same thing it always was, the same thing that drove everything Microsoft did through the 1990s: accumulating more and more money and power.
You don’t get very far if you tell this story to some poor soul who just needs to make slides for a presentation, especially if your suggestion is that they try LibreOffice Impress instead.
When 2025 kicked off, CEOs of all those Big Tech companies attended the inauguration of Dinald Tromp and donated him millions of dollars, live on international news media. In the long run I suspect this moment will have pushed more people towards ethical technology than 20 years of campaigning about nonfree JavaScript.
Art, Artificial Intelligence and Idea Bankruptcy
Writing great code can be a form of artistic expression. Not all code is art, of course, just as an art gallery is not the only place you will find paint. But if you’re wondering why some people release groundbreaking software for free online, it might help to view it as an artistic pursuit. Anything remotely creative can be art.
I went into semi-retirement from volunteer open source contributions back in October of last year, having got to a point where it was more project management than artistic endeavour. In an ideal world I’d have some time to investigate new ideas, for example in desktop search or automated GUI testing, and publish cool stuff online. But there are two blockers. One is that I don’t have the time. The other is that the open web is now completely overrun with data scrapers, which somehow ruins the artistic side of publishing interesting new software for free.
We know that reckless data scraping by Amazon, Anthropic, Meta and Microsoft/OpenAI (those US tech billionaires again), plus their various equivalents in China, is causing huge problems for open source projects and other non-profits. It has led The Wikimedia Foundation to declare this month that “Our content is free, our infrastructure is not”. And Ars Technica also published a good summary of the situation.
Besides the bandwidth costs, there’s something uncomfortable about everything we publish online being immediately slurped into the next generation of large language model. If permissive software licenses lead to extractive behaviour, then AI crawlers are that on steroids. LLMs are incredibly effective for certain use cases, and one such use case is “copyright laundering machines”.
Software licensing was a key part of the discussion around ethical technology when I first discovered Linux in the late 1990s. There was a sense that if you wrote innovative code and published it under the GNU GPL, you were helping to fight the evils of Big Tech, as the big software firms wouldn’t legally be able to incorporate your innovation into their products without releasing their source code under the same license. That story is spelled out word-for-word in Richard Stallman’s article “Copyleft: Pragmatic Idealism”. I was never exactly a disciple of Richard Stallman, but I did like to release cool stuff under the GPL in the past, hoping that in a small way it’d work towards some sort of brighter future.
I was never blind to the limitations of the GPL. It requires an actual threat of enforcement to be effective, and historically only a few groups like the Software Freedom Conservancy actually do that difficult legal work. Another weakness in the overall story was this: if you have a big pile of cash, you can simply rewrite any innovative GPL code. (This is how we got Apple to pay for LLVM).
Long ago I read the book “Free as in Freedom”. It’s a surprisingly solid book which narrates Richard Stallman’s efforts to form a rebel alliance and fight what we know today as Big Tech, during which he founds the GNU Project and invents the GPL. It is only improved in version 2.0 where Stallman himself inserts pedantic corrections into Sam Williams’s original text such as “This cannot be a direct quote because I do not use fucking as an adverb”. (The book and the corrections predate him famously being cancelled in 2019). He later becomes frustrated at having spent a decade developing an innovative, freely available operating system, only for the media and the general public to give credit to Linus Torvalds.
Right now the AI industry is trying to destroy copyright law as we know it. This will have some interesting effects. The GPL depends on copyright law to be effective, so I can only see this as the end of the story for software licensing as a way to ensure that the inventors of cool things get some credit and earn money. But let’s face it, the game was already up on that front.
Sustainable open source projects — meaning those where people actually get paid to do all the work that is needed for the project to succeed — can exist and do exist. We need independent, open computing platforms like GNOME and KDE more than ever. I’m particularly inspired by KDE’s growing base of “supporting members” and successful fundraisers. So while this post might seem negative, I don’t see this as a moment of failure, only a moment of inflection and of change.
This rant probably needs a deeper message, so I’m going to paraphrase Eugene Healey: “Structural reform can’t be achieved just by publishing code online”. The hard, meaningful work is not writing the code but building a community that supports what you’re doing.
More to the point, my feeling about the new AI-infested web is that it spoils the artistic aspect of publishing your new project as open source right away. There’s something completely artless about training an AI on other people’s ideas and regenerating them in infinite variations. Perhaps this is why most AI companies have logos that look like buttholes.
Visual artists and animators have seen DALL-E and Stable Diffusion take their work and regurgitate it, devoid of meaning. Most recently it was the legendary Studio Ghibli who had their work shat on by Sam Altman. “I strongly feel that this is an insult to life itself”, say the artists. At least Studio Ghibli is well-known enough to get some credit, unlike many artists whose work was co-opted by OpenAI without permission.
Do you think the next generation of talented visual artists will publish their best work online, within reach of Microsoft/OpenAI’s crawlers?
And when the next Fabrice Bellard comes up with something revolutionary, like FFMPEG or QEMU were when they came out, will they decide to publish the source code for free?
Actually, Fabrice Bellard himself has done plenty of research around large language models, and you will notice that his recent projects do not come with source code…
With that in mind, I’m declaring bankruptcy on my collection of unfinished ideas and neat projects. My next blog post will be a dump of the things I never got time to implement and probably never will. Throw enough LLMs at the problem and we should have everything finished in no time. If you make the thing I want, and you’re not a complete bastard, then I will happily pay a subscription fee to use it.
I’m interested in what you, one of the dozen readers of my blog, think about the future of “coding as art”. Is it still fun when there’s a machine learning from your code instead of a fellow programmer?
And if you don’t believe me that the world goes in cycles and not straight lines: take some time to go back to the origin story of Richard Stallman and the GPL itself. The story begins at the Massachusetts Institute of Technology, in a computing lab that in the 1970s and 80s was at the cutting edge of research into… Artificial Intelligence.
We are running a Fedora 42 GNOME 48 Desktop and Core Apps Test Week! This helps us find last-minute bugs and integration issues before Fedora 42 is ready for a stable release.
Back in December (before I caught the flu working at a farmers market, then Covid two weeks later, then two months of long-Covid) I mentioned that we’d discuss the various subsystems needed in libfoundry to build an IDE as a library.
I used the little bit of energy I had to work on some core abstractions. In an effort to live up to my word, let’s talk a bit about what went into libfoundry last night.
There is now a DocumentationManager sub-system which handles documentation installed on the host system, chroots, and Flatpak SDKs. It’s a bit tricky to make this all work without blurring the lines of abstraction, so let’s cover how that works.
Generally speaking, we try to avoid plugins depending on other plugins. Sometimes it happens, but usually it is an opportunity to make a better abstraction in libfoundry. Let’s look at what is needed around documentation.
We have many SDKs and they all might have documentation available at different locations.
We primarily have one format we need to support in GNOME, which is the venerable Devhelp2 XML format serving as an index.
SDKs might contain the same documentation but at different versions (Nightly vs GNOME 48 vs jhbuild, for example).
There may be more formats that matter in the future especially as we look at pulling in support for new languages.
Adding new search capabilities shouldn’t break the API.
Querying needs to be fast enough to update as you type.
So let’s dive into the abstractions.
DocumentationManager
This is the core abstraction you start interfacing with. It is a service of the FoundryContext and therefore can be accessed with Foundry.Context:documentation-manager property.
The documentation manager manages the Foundry.DocumentationProvider plug-in abstractions. Plug-ins that wish to contribute to the documentation pipeline must subclass this in their plug-in.
To query documentation, use Foundry.DocumentationManager.query(). As I noted earlier, I don’t want new capabilities to break the API so a Foundry.DocumentationQuery object is used rather than a sequence of parameters which would need to be modified.
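The design rationale behind a query object can be sketched in a few lines of Python. This is purely illustrative (libfoundry is a C library, and every name below except the DocumentationQuery concept is made up for the example): because the query is an object with defaulted fields, new search capabilities can be added later without breaking existing callers, which a positional parameter list could not do.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DocumentationQuery:
    """Illustrative query object: adding a new field with a default
    later (e.g. sdk_filter) does not break existing callers."""
    keyword: str = ""
    prefix_match: bool = True
    sdk_filter: Optional[str] = None  # hypothetical future capability

def query(docs: List[str], q: DocumentationQuery) -> List[str]:
    """Return documentation entries matching the query."""
    if q.prefix_match:
        return [d for d in docs if d.startswith(q.keyword)]
    return [d for d in docs if q.keyword in d]

docs = ["GtkWidget", "GtkWindow", "GListModel"]
print(query(docs, DocumentationQuery(keyword="Gtk")))  # ['GtkWidget', 'GtkWindow']
```

Older call sites that only set `keyword` keep working even after `sdk_filter` or further fields are introduced, which is the API-stability property described above.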
Avoiding Formats in the API
Since we want to be able to support other documentation formats in the future, it is important that we do not force anything about devhelp2 XML into the core abstraction.
The core result object from queries is a simple Foundry.Documentation object. Like above, we want to avoid breaking API/ABI when new capabilities are added so this object serves as our abstraction to do so. Navigating a tree structure will live here and can be implemented by plug-ins through subclassing.
Additionally, a “devhelp” plug-in provides support for crawling the devhelp2-style directories on disk. But this plug-in knows nothing about where to find documentation as that is relevant only to the SDKs.
This is where the Foundry.DocumentationRoot object becomes useful. SDK plug-ins can implement DocumentationProvider in their plug-in to expose documentation roots. The host-sdk, jhbuild, and Flatpak plug-ins all do this to expose the location of their documentation.
Now the devhelp plug-in can be provided the information it needs for crawling without any knowledge of SDKs.
Fast Querying
The old adage is that the only way to go faster on a computer is to do less work. This is particularly important in search systems where doing an entire query of a database means a lot of wasted CPU, memory, and storage I/O.
To make querying fast, the devhelp plug-in indexes information about SDKs in SQLite. Way back in Builder, we avoided this and just made an optimized fuzzy search index, mmapped it, and searched it. But nowadays we’ve gone from one set of documentation to multiple sets of documentation across SDK versions. The problem domain explodes quite a bit. SQLite seemed like a nice way to handle this while also allowing us to be lazy in our searching.
By lazy, I mean that while we start your query right away, we only retrieve the first few results from the cursor. The rest are fetched lazily as the GListModel is scanned by scrolling. Since scrolling is a much less common operation than typing, you naturally throw away a lot of work while still sitting behind the comfortable GListModel interface.
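The lazy-fetch idea is easy to demonstrate with Python’s built-in sqlite3 module. This is a standalone sketch of the concept, not libfoundry’s actual code: the full query is issued against SQLite, but rows are only materialized in small pages as the consumer asks for them, the way a list view would while scrolling.

```python
import sqlite3

# Build a small in-memory index standing in for the devhelp SDK index.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE symbols (name TEXT)")
conn.executemany("INSERT INTO symbols VALUES (?)",
                 [(f"gtk_widget_{i}",) for i in range(1000)])

# Start the query immediately...
cursor = conn.execute(
    "SELECT name FROM symbols WHERE name LIKE ? ORDER BY name",
    ("gtk_widget_%",))

# ...but only materialize the first page of results eagerly.
first_page = cursor.fetchmany(5)
print(len(first_page))  # 5

# Later pages are pulled from the cursor on demand, e.g. as the
# user scrolls; untouched rows cost nothing.
next_page = cursor.fetchmany(5)
print(len(next_page))  # 5
```

If the user types another character before scrolling, the cursor can simply be dropped and a new query started, which is exactly the "throw away a lot of work naturally" behavior described above.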
What now?
Since libfoundry already supports SDK management (including Flatpak) you could probably re-implement Manuals in a weekend. Hopefully this also breaks down a bit of the knowledge used to build such an application and the deceptive complexity behind doing it well.
This should also, hopefully soon, allow us to share a documentation implementation across Builder, Manuals, and an upcoming project I have which will benefit from easy access to documentation of object properties.
Spring is in the air, the snow is finally melting away here in the cold north, and Keypunch is getting an update! Let’s walk through all the new features and improvements together.
Realistic Results
Up to now, Keypunch’s measurements of typing performance have been rather primitive. For speed, it has just compared the total number of typed characters, both correct and incorrect, to the test duration. Likewise, the “correctness” rate is nothing more than the share of correctly typed characters at the time of calculation. If you make a mistake and then correct it, it’s not taken into account at all.
These calculations are easy to understand and interpret, but also flawed and potentially misleading. The one for speed in particular has caused some pretty ridiculous result screens because of its uncritical counting. Needless to say, this is not ideal.
I’ve gone a little back and forth with myself on how to move forward, and ended up overhauling both of the calculations. For speed, Keypunch now counts how many correct characters there are at the end of the test, while the correctness rate has been replaced with real accuracy, based on all operations that have changed the typed text rather than just the final result.
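In rough terms (my own reading of the description above, not Keypunch’s actual source), the two new metrics could be computed like this, using the common convention that one “word” is five characters:

```python
def words_per_minute(correct_chars_at_end: int, seconds: float) -> float:
    """Speed: only characters that are still correct when the test ends count."""
    words = correct_chars_at_end / 5  # conventional 5 characters per word
    return words / (seconds / 60)

def accuracy(correct_ops: int, total_ops: int) -> float:
    """Accuracy: judge every text-changing operation, not just the final text."""
    return correct_ops / total_ops if total_ops else 1.0

# 300 correct characters at the end of a 60-second test
print(words_per_minute(300, 60))  # → 60.0
# 95 of 100 text-changing operations were correct
print(accuracy(95, 100))          # → 0.95
```

Under the old scheme, mashing 300 wrong characters would have counted the same as 300 right ones; here they count for nothing.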
An overview of the new result calculations
The new calculations come with their own trade-offs, such as the incentive to correct mistakes being slightly reduced. In general, however, I view them as a change for the better.
Frustration Relief
Learning to type is awfully hard. At least it was for me; sometimes it felt like I wasn’t even in control of my own fingers. This made me furious, and my number-one coping mechanism was to go berserk with my keyboard and mash random keys in frustration. As one might guess, this did not help me progress, and I probably should just have gone for a walk or something instead.
To safeguard the poor souls who come after me, I’m introducing something I call frustration relief. The concept is simple: If Keypunch detects that you’re randomly mashing your keyboard, it will cancel the test and provide a helpful piece of life advice.
Frustration relief in action
I can’t overstate how much I wish I had something like this a couple of years ago.
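One way such detection could work (a hypothetical heuristic for illustration, not necessarily what Keypunch does) is to watch the error rate over a sliding window of recent keystrokes and bail out when almost everything is wrong:

```python
from collections import deque

class MashDetector:
    """Suggest cancelling the test if nearly all recent keystrokes are wrong."""

    def __init__(self, window=20, threshold=0.9):
        self.recent = deque(maxlen=window)  # True = wrong keystroke
        self.threshold = threshold

    def keystroke(self, wrong: bool) -> bool:
        """Record one keystroke; return True if mashing is detected."""
        self.recent.append(wrong)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data yet
        return sum(self.recent) / len(self.recent) >= self.threshold

detector = MashDetector()
# Twenty wrong keystrokes in a row trips the detector.
print(any(detector.keystroke(wrong=True) for _ in range(20)))  # → True
```

The window keeps an occasional typo from triggering it; only sustained, overwhelmingly wrong input looks like frustration.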
Input Improvements
Being a text-centric app with multi-language support, Keypunch inevitably has to work with the many intricacies of digital text input. This includes the fact that the Unicode standard contains more than a dozen different space characters. For a while, Keypunch has supported entering regular spaces in the place of non-breaking ones, and now the same is possible the other way around too. Notably, this is a significant improvement for users of the francophone BÉPO keyboard layout.
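Conceptually, the matching boils down to treating any Unicode space as acceptable wherever any other space is expected. A sketch of that behavior (assumed for illustration, not the actual Keypunch code):

```python
# A few of the many Unicode space characters:
# regular space, no-break space, narrow no-break space, thin space
SPACES = {"\u0020", "\u00a0", "\u202f", "\u2009"}

def chars_match(typed: str, expected: str) -> bool:
    """Accept any space character wherever any other space is expected."""
    if typed in SPACES and expected in SPACES:
        return True
    return typed == expected

# A BÉPO user typing a no-break space where a regular space is expected:
print(chars_match("\u00a0", " "))  # → True
# ...and the other way around, for non-breaking spaces in French text:
print(chars_match(" ", "\u00a0"))  # → True
```

This keeps the test fair regardless of which space variant a given keyboard layout produces.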
New Languages
Keypunch’s international community has been hard at work lately, and I’m happy to report a solid upturn in language support. For text generation, these languages have been added:
Catalan
Dutch
Estonian
Greek
Indonesian
Slovak
Persian
This brings the total language count up to 38! Does Keypunch support your language yet? If not, feel free to open a language request.
A preview of the extended language support
On the interface translation side, Keypunch has enrolled in GNOME’s common translation system, Damned Lies, allowing it to benefit from the coordinated and high-quality work of GNOME’s translation teams. Since the last update, Keypunch has been translated into these languages:
Catalan
British English
Persian
Finnish
Indonesian
Kabyle
Slovak
Slovenian
Chinese
Thanks to everyone who is helping make Keypunch speak their language!
Platform Progression
This Keypunch release is based on GNOME 48, which brings a bunch of external platform goodness to the app:
The latest Adwaita styling
Better adherence to the system font settings
Improved performance
An “Other Apps” section in the About dialog
The new “Other Apps” section in the About dialog
While not directly part of the runtime, Keypunch will also benefit a lot from the new Adwaita Fonts. It’s exciting to build on such a rapidly improving platform.
Additional Artwork
Apparently, some people are keeping Keypunch in their game libraries. If you’re one of them, I’ve made a couple of assets to make Keypunch integrate better visually with the rest of your collection. Enjoy!
Circle Inclusion
Keypunch is now part of GNOME Circle! I’m happy and grateful to have another app of mine accepted into the program. For full transparency, I’m part of the Circle Committee myself, but Keypunch has been independently reviewed by two other committee members, namely Tobias and Gregor. Thanks!
Final Thoughts
That’s it for this update. Initially, I was planning on just doing a platform/translation bump now and holding off the headline features for an even bigger update later on, but I decided that it’s better to ship what I have at the moment and let the rest wait for later. There’s still more on the roadmap, but I don’t want to spoil anything!
If you have questions or feedback, feel free to mention me on Mastodon or message me on Matrix.
Oh, and if you’d like to support my work, feel free to make a donation! I’d really appreciate that.
A long overdue dev log. The last one was for September 2024. That's half a
year.
libopenraw
Released 0.4.0-alpha9 of the Rust crate. Added a bunch of
cameras. Fixed the Maker Note handling for some Fujifilm cameras and a few
others, and also fixed some thumbnailing issues.
The main API is now fallible, returning Result<>. This should
reduce the amount of panics (it shouldn't panic).
Added support for user crops in Fujifilm files as I added support for
the GFX 100RF (sight unseen).
Niepce
Changed the catalog format. By changed, I mean it now has an
.npcat extension and is standalone instead of being a
folder. The thumbnail cache will live next to it in the same folder.
Now we can open a different catalog. Also renamed some bits internally
to be consistent with the naming.
Removed some UI CSS hacks now that there is an API for
Gtk.TreeExpander.set_hide_expander() in Gtk 4.10. Fixed a bug
with the treeview not being updated. Removed Gtk.ColorButton
(deprecated). Fixed some selection issues with the Gtk.ListView.
Moved to Rust 2024.
Added video thumbnailing. Code was inspired from Totem's.
Fixed some bugs with importing hierarchies of folders, and fixed
deleting a folder that contains subfolders.
Still working on the import feature I mentioned previously. It is
getting there. My biggest issue is that one can't select a Gtk.ListView
item by item, only by index, which is complicated on a tree view. On
the other hand, several of the fixes mentioned above came from this
work, as I cherry-picked the patches to the main branch.
i18n-format
Fixed my
i18n-format crate, as a minor
version of gettext removed the essential feature I was relying
on. Yes, this is a semver breakage. I ended up having to split the
crate to have a separate non-macro crate. From a usage standpoint it works the
same.
The long term is to have this crate be unnecessary.
Other
Other stuff I contributed to.
Glycin
Submitted
support for the rotation of camera raw files, and the Loupe
counterpart.
This is a followup to the camera raw file support in glycin.
The GNOME Foundation is thrilled to share that registration for GUADEC 2025 is now open!
GUADEC is the largest annual gathering of GNOME developers, contributors, and community members. This year we welcome everyone to join us in the beautiful city of Brescia, Italy from July 24th to 29th or online! For those who cannot join us in person, we will live-stream the event so you can attend or present remotely.
To register, visit guadec.org and select whether you will attend in person or remotely. In-person attendees will notice a slight change on their registration form. This year we’ve added a section for “Registration Type” and provided 4 options for ticket fees. These costs go directly towards supporting the conference and helping us build a better GUADEC experience. We ask that in-person attendees select the option they are most comfortable with. If you have any questions, please don’t hesitate to reach out to us at guadec@gnome.org.
The Call for Participation is ongoing, but once talks are selected you will find speaker details and a full schedule on guadec.org. We will also be adding more information about social events, accommodations, and activities throughout Brescia soon!
We are still looking for conference sponsors. If you or your company would like to become a GUADEC 2025 sponsor, please take a look at our sponsorship brochure and reach out to us at guadec@gnome.org.
To stay up-to-date on conference news, be sure to follow us on Mastodon @gnome@floss.social.
We look forward to seeing you in Brescia and online!
After a long period of low maintenance for GNOME Calculator (as in me being out of the picture and doing mostly releases and some trivial or not-so-trivial-but-quick fixes here and there), it's time to reveal what's happening behind the scenes.
Long story short, pretty late in the 48 cycle two contributors popped up to breathe some life into GNOME Calculator, so much so that I had a pretty hard time keeping track of the merge requests piling up. Most of the kudos for the below-mentioned features go to fcusr and Adrien Plazas, and I hope I will manage to list all of the features. It would also be great to have folks using the Nightly Calculator (the current development version from the flatpak gnome-nightly repo) help spot issues and requests in time for them to be fixed for 49.
So now the features:
Conversion mode
Based on several user requests and the work of fcusr, the conversion UI was moved to a separate "mode". An important thing to note here is that keyboard-only conversions are still possible in any mode (e.g. typing 1 kg in g yields the result); the Conversion view is just a UI/button/touch-friendly way of doing conversions without typing, similar to what we previously had in the advanced mode.
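The keyboard syntax is essentially "value, source unit, the word in, target unit". A toy parser for the mass case illustrates the idea (hypothetical code with an invented unit table, nothing like Calculator's real parser, which covers many categories and locales):

```python
# Gram multipliers for a few mass units (illustrative subset)
MASS_IN_GRAMS = {"g": 1.0, "kg": 1000.0, "mg": 0.001}

def convert(expression: str) -> float:
    """Parse '<value> <unit> in <unit>' and return the converted value."""
    value, src, _in, dst = expression.split()
    grams = float(value) * MASS_IN_GRAMS[src]  # normalize to grams first
    return grams / MASS_IN_GRAMS[dst]

print(convert("1 kg in g"))  # → 1000.0
```

Normalizing through a single base unit keeps the table linear in the number of units rather than quadratic in unit pairs.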
UI cleanup, styling and touch improvements
Both Adrien and fcusr worked on simplifying the UI-related code: dropping old/unnecessary styling, tweaking the looks of buttons, and improving access to toggles/switches, making Calculator easier to use with functions grouped and styled in a meaningful way.
The interface was also "optimized" for smaller screens and touch devices. Notably, function buttons, which up until now only entered the function name to save you some typing, now also work with text selected: they insert brackets around the selection and add the function.
New functions and constants
For anyone needing them, new functions have been added:
combination (e.g. using ncr (9;5) yields 126 as a result)
permutation (e.g. using npr (9;5) yields 15120 as a result)
common constants are now available from the memory button (also used for accessing variables)
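Python's math module offers the same two operations, which makes the examples above easy to sanity-check:

```python
import math

# combination: ways to choose 5 items out of 9, order ignored
print(math.comb(9, 5))  # → 126

# permutation: ways to arrange 5 items out of 9, order matters
print(math.perm(9, 5))  # → 15120
```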
Favorite currencies
As the list of available currencies for conversion is already huge, scrolling through the currency list is hard when there are multiple currencies you regularly convert between (even though the last currencies you used are persisted). Now currencies can be marked as favorites in the Favorite currencies section of the preferences, and the selected ones will appear at the top of the currency selector.
GNOME exchange API
Given that we occasionally have issues with the exchange rate providers (the site not being available, or not accepting our user-agent), rendering Calculator currency conversions broken (or even worse, in some cases freezing Calculator completely), the decision was taken to host our own exchange rate API. With the help of the folks in the GNOME Infrastructure team, we now have a GNOME exchange API, which will be used for exchange rate retrieval.
The relevant project is available at https://gitlab.gnome.org/Infrastructure/xchgr8s.
For now, this is basically a static mirror of the providers used so far in Calculator (hence the URL change can be "backported" to any Calculator version easily). It fetches the exchange rates once a day from all providers and commits them to the repository, from where they are served via GitLab Pages + the GNOME reverse proxy + CDN.
This way we have control over the format we provide, we can do any processing on the exchange rates fetched from the external sources, and we can update the currency providers in GNOME Calculator however we want as long as they use one of the formats provided by the exchange-API, be it an existing format or a completely new one added to exchange API.
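The core of such a mirror is a small job that re-serializes each provider's feed into a stable format of our choosing. A sketch of the normalization step, with a made-up provider format (the real xchgr8s formats may differ):

```python
import json

def normalize(provider_rates: dict, base: str = "EUR") -> str:
    """Re-serialize provider data into one stable, self-describing format."""
    return json.dumps(
        {"base": base, "rates": dict(sorted(provider_rates.items()))},
        indent=2,
    )

# Rates as one hypothetical provider might report them
fetched = {"USD": 1.08, "GBP": 0.84}
document = normalize(fetched)
print(document)
```

Because clients only ever see the normalized output, the upstream providers can change (or be swapped out) without touching any released Calculator version.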
This was a first step towards fixing a 10-year-old bug, reported on GNOME Bugzilla and still open, but I would say we're on the right track.
I’ve blogged in the past about how WebKit on Linux integrates with Sysprof and provides a number of marks on various metrics. At the time that was a pretty big leap in WebKit development, since it gave us a number of new insights and enabled various performance optimizations to land.
But over time we started to notice some limitations in Sysprof. We now have tons of data being collected (yay!), but some types of data analysis were still pretty difficult. In particular, it was difficult to answer questions like “why do render times increase after 3 seconds?” or “what is the CPU doing during layout?”
In order to answer these questions, I’ve introduced a new feature in Sysprof: filtering by marks.
Select a mark to filter by in the Marks view
Samples will be filtered by that mark
Hopefully people can use this new feature to provide developers with more insightful profiling data! For example if you spot a slowdown in GNOME Shell, you open Sysprof, profile your whole system, and filter by the relevant Mutter marks to demonstrate what’s happening there.
Here’s a fancier video (with music) demonstrating the new feature:
The C++ standard library (also known as the STL) is, without a doubt, an astounding piece of work. Its scope, performance and incredible backwards compatibility have taken decades of work by many of the world's best programmers. My hat's off to all those people who have contributed to it.
All of that is not to say that it is without its problems. The biggest one is the absolutely abysmal compile times, but unreadability and certain suboptimalities caused by strict backwards compatibility are also at the top of the list. In fact, it could be argued that most of the things people really dislike about C++ are features of the STL rather than the language itself. Fortunately, using the STL is not mandatory. If you are crazy enough, you can disable it completely and build your own standard library in the best Bender style.
One of the main advantages of being an unemployed-by-choice open source developer is that you can do all of that if you wish. There are no incompetent middle damagers hovering over your shoulder to ensure you are "producing immediate customer value" rather than "wasting time on useless polishing that does not produce immediate customer value".
It's my time, and I'll waste it if I want to!
What's in it?
The biggest design questions of a standard library are scope and the "feel" of the API. Rather than spending time on design, we steal it. Thus, when in doubt, read the Python stdlib documentation and replicate it. Thus the name of the library is pystd.
The test app
To keep the scope meaningful, we start by writing only enough of a stdlib to build an app that reads a text file, validates it as UTF-8, splits the contents into words, counts how many times each word appears in the file, and prints all words and their counts, sorted by decreasing count.
This requires, at least:
File handling
Strings
UTF8 validation
A hash map
A vector
Sorting
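Since the design brief is "read the Python stdlib documentation and replicate it", it helps to keep the Python version of the test app in mind. Something like this (my own sketch of the task, not code from the repo):

```python
import sys
from collections import Counter

def count_words(path: str) -> list:
    # open() + read() exercise file handling; decoding validates UTF-8
    with open(path, "rb") as f:
        text = f.read().decode("utf-8")  # raises on invalid UTF-8
    counts = Counter(text.split())       # hash map of word -> count
    return counts.most_common()          # sorted by decreasing count

if __name__ == "__main__" and len(sys.argv) > 1:
    for word, count in count_words(sys.argv[1]):
        print(word, count)
```

Every line of this leans on something the checklist above demands: strings, a hash map, a growable array, and sorting.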
The training wheels come off
The code is available in this GitHub repo for those who want to follow along at home.
Disabling the STL is fairly easy (with Linux+GCC at least) and requires only these two Meson statements:
The supc++ library is (according to stackoverflow) a support library GCC needs to implement core language features. Now the stdlib is off and it is time to implement everything with sticks, stones and duct tape.
The outcome
Once you have implemented everything discussed above and auxiliary stuff like a hashing framework the main application looks like this.
The end result is both Valgrind and Asan clean. There is one chunk of unreleased memory, but that comes from supc++. There is probably UB in the implementation. But it should be the good kind of UB that, if it actually did not work, would break the entire Linux userspace, because everything depends on it working "as expected".
All of this took fewer than 1000 lines of code in the library itself (including a regex implementation that is not actually used). For comparison merely including vector from the STL brings in 27 thousand lines of code.
Comparison to an STL version
Converting this code to use the STL is fairly simple and only requires changing some types and fine tuning the API. The main difference is that the STL version does not validate that the input is UTF-8 as there is no builtin function for that. Now we can compare the two.
Runtime for both is 0.001 to 0.002 seconds on the small test file I used. Pystd is not noticeably slower than the STL version, which is enough for our purposes. It almost certainly scales worse because there has been zero performance work on it.
Compiling the pystd version with -O2 takes 0.3 seconds whereas the STL version takes 1.2 seconds. The measurements were done on a Ryzen 7 3700X processor.
The executable's unstripped size is 349k for STL and 309k for pystd. The stripped sizes are 23k for STL and 135k for pystd. Approximately 100k of the pystd executable comes from supc++. In the STL version that probably comes dynamically from libstdc++ (which, on this machine, takes 2.5 MB).
Perfect ABI stability
Designing a standard library is exceedingly difficult because you can't ever really change it. Someone, somewhere, is depending on every misfeature in it so they can never be changed.
Pystd has been designed to both support perfect ABI stability and make it possible to change it in arbitrary ways in the future. Starting from scratch, this turned out to be fairly simple.
The sample code above used the pystd namespace. It does not actually exist. Instead it is defined like this in the cpp file:
#include <pystd2025.hpp>
namespace pystd = pystd2025;
In pystd all code is in a namespace with a year and is stored in a header file with the same year. The idea is, then, that every year you create a new release. This involves copying all stdlib header files to a file with the new year and regexping the namespace declarations to match. The old code is now frozen forever (except for bug fixes) whereas the new code can be changed at will because there are zero existing lines of code that depend on it.
End users now have the choice of when to update their code to use newer pystd versions. Even better, if there is an old library that can not be updated, any of the old versions can be used in parallel. For example:
Thus if no code is ever updated, everything keeps working. If all code is updated at once, everything works. If only parts of the code are updated, things can still be made to work with some glue code. This puts the maintenance burden on the people whose projects can not be updated as opposed to every other developer in the world. This is as it should be, and also would motivate people with broken deps to spend some more effort to get them fixed.
The GNOME Project is proud to announce the release of GNOME 48, ‘Bengaluru’.
GNOME 48 brings several exciting updates, including improved notification stacking for a cleaner experience, better performance with dynamic triple buffering, and the introduction of new fonts like Adwaita Sans & Mono. The release also includes Decibels, a minimalist audio player, new digital well-being features, battery health preservation with an 80% charge limit, and HDR support for compatible displays.
GNOME 48 will be available shortly in many distributions, such as Fedora 42 and Ubuntu 25.04. If you want to try it today, you can look for their beta releases, which will be available very soon.
We are also providing our own installer images for debugging and testing features. These images are meant for installation in a VM and require GNOME Boxes with UEFI support. We suggest getting Boxes from Flathub.
If you’re looking to build applications for GNOME 48, check out the GNOME 48 Flatpak SDK on Flathub. You can also support the GNOME project by donating—your contributions help us improve infrastructure, host community events, and keep Flathub running. Every donation makes a difference!
This six-month effort wouldn’t have been possible without the whole GNOME community, made of contributors and friends from all around the world: developers, designers, documentation writers, usability and accessibility specialists, translators, maintainers, students, system administrators, companies, artists, testers, the local GNOME.Asia team in Bengaluru, and last, but not least, our users.
We hope to see some of you at GUADEC 2025 in Brescia, Italy!
Our next release, GNOME 49, is planned for September. Until then, enjoy GNOME 48.
I see a lot of users approaching GNOME app development with prior language-specific experience, be it Python, Rust, or something else. But there’s another way to approach it: GObject-oriented and UI first.
This introduces more declarative code, which is generally considered cleaner and easier to parse. Since this approach is inherent to GTK, it can also be applied in every language binding. The examples in this post stick to Python and Blueprint.
Properties
While normal class properties for data work fine, using GObject properties allows developers to do more in UI through expressions.
Handling Properties Conventionally
Let’s look at a simple example: there’s a progress bar that needs to be updated. The conventional way of doing this would look something like the following:
using Gtk 4.0;
using Adw 1;
template $ExampleProgressBar: Adw.Bin {
  ProgressBar progress_bar {}
}
This defines a template called ExampleProgressBar which extends Adw.Bin and contains a Gtk.ProgressBar called progress_bar.
The reason why it extends Adw.Bin instead of Gtk.ProgressBar directly is because Gtk.ProgressBar is a final class, and final classes can’t be extended.
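The accompanying Python might look roughly like this (a reconstructed sketch; the names follow the surrounding text, and a plain class attribute is used for progress):

```python
from gi.repository import Adw, GLib, Gtk


@Gtk.Template(resource_path="/org/example/App/progress-bar.ui")
class ExampleProgressBar(Adw.Bin):
    __gtype_name__ = "ExampleProgressBar"

    progress_bar: Gtk.ProgressBar = Gtk.Template.Child()

    progress = 0.0

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.load()

    def load(self) -> None:
        self.progress += 0.1
        self.progress_bar.set_fraction(self.progress)

        if int(self.progress) == 1:
            return

        GLib.timeout_add(200, self.load)
```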
This code references the earlier defined progress_bar and defines a float called progress. When initialized, it runs the load method which fakes a loading operation by recursively incrementing progress and setting the fraction of progress_bar. It returns once progress is 1.
This code is messy, as it splits up the operation into managing data and updating the UI to reflect it. It also requires a reference to progress_bar to set the fraction property using its setter method.
Handling Properties With GObject
Now, let’s look at an example of this utilizing a GObject property:
using Gtk 4.0;
using Adw 1;
template $ExampleProgressBar: Adw.Bin {
  ProgressBar {
    fraction: bind template.progress;
  }
}
Here, the progress_bar name was removed since it isn’t needed anymore. fraction is bound to the template’s (ExampleProgressBar‘s) progress property, meaning its value is synced.
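The Python side then shrinks to something like this (a reconstructed sketch matching the description; the binding now keeps the UI in sync):

```python
from gi.repository import Adw, GLib, GObject, Gtk


@Gtk.Template(resource_path="/org/example/App/progress-bar.ui")
class ExampleProgressBar(Adw.Bin):
    __gtype_name__ = "ExampleProgressBar"

    progress = GObject.Property(type=float)

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.load()

    def load(self) -> None:
        self.progress += 0.1

        if int(self.progress) == 1:
            return

        GLib.timeout_add(200, self.load)
```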
The reference to progress_bar was removed in the code too, and progress was turned into a GObject property instead. fraction doesn’t have to be manually updated anymore either.
So now, managing the data and updating the UI merged into a single property through a binding, and part of the logic was put into a declarative UI file.
In a small example like this, it doesn’t matter too much which approach is used. But in a larger app, using GObject properties scales a lot better than having widget setters all over the place.
Communication
Properties are extremely useful on a class level, but once an app grows, there’s going to be state and data communication across classes. This is where GObject signals come in handy.
Handling Communication Conventionally
Let’s expand the previous example a bit. When the loading operation is finished, a new page has to appear. This can be done with a callback, a method that is designed to be called by another method, like so:
There’s now a template for ExampleNavigationView, which extends Adw.Bin for the same reason as earlier and holds an Adw.NavigationView with two Adw.NavigationPages.
The first page has ExampleProgressBar as its child, the other one holds a placeholder and has the tag “finished”. This tag allows for pushing the page without referencing the Adw.NavigationPage in the code.
The code references both navigation_view and progress_bar. When initialized, it runs the load method of progress_bar with a callback as an argument.
This callback pushes the Adw.NavigationPage with the tag “finished” onto the screen.
from typing import Callable

from gi.repository import Adw, GLib, GObject, Gtk


@Gtk.Template(resource_path="/org/example/App/progress-bar.ui")
class ExampleProgressBar(Adw.Bin):
    __gtype_name__ = "ExampleProgressBar"

    progress = GObject.Property(type=float)

    def load(self, callback: Callable) -> None:
        self.progress += 0.1

        if int(self.progress) == 1:
            callback()
            return

        GLib.timeout_add(200, self.load, callback)
ExampleProgressBar doesn’t run load itself anymore when initialized. The method also got an extra argument, which is the callback we passed in earlier. This callback gets run when the loading has finished.
This is pretty ugly, because the parent class has to run the operation now.
Another way to approach this is using a Gio.Action. However, this makes illustrating the point a bit more difficult, which is why a callback is used instead.
Handling Communication With GObject
With a GObject signal the logic can be reversed, so that the child class can communicate when it’s finished to the parent class:
Here, we removed the name of progress_bar once again since we won’t need to access it anymore. It also has a signal called load-finished, which runs a callback called _on_load_finished.
from gi.repository import Adw, Gtk

from example.progress_bar import ExampleProgressBar


@Gtk.Template(resource_path="/org/example/App/navigation-view.ui")
class ExampleNavigationView(Adw.Bin):
    __gtype_name__ = "ExampleNavigationView"

    navigation_view: Adw.NavigationView = Gtk.Template.Child()

    @Gtk.Template.Callback()
    def _on_load_finished(self, _obj: ExampleProgressBar) -> None:
        self.navigation_view.push_by_tag("finished")
In the code for ExampleNavigationView, the reference to progress_bar was removed, and a template callback was added, which gets the unused object argument. It runs the same navigation action as before.
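And ExampleProgressBar itself might now look like this (a sketch assuming the signal and property names from the surrounding text):

```python
from gi.repository import Adw, GLib, GObject, Gtk


@Gtk.Template(resource_path="/org/example/App/progress-bar.ui")
class ExampleProgressBar(Adw.Bin):
    __gtype_name__ = "ExampleProgressBar"

    progress = GObject.Property(type=float)

    load_finished = GObject.Signal()

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.load()

    def load(self) -> None:
        self.progress += 0.1

        if int(self.progress) == 1:
            self.emit("load-finished")
            return

        GLib.timeout_add(200, self.load)
```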
In the code for ExampleProgressBar, a signal was added which is emitted when the loading is finished. The responsibility of starting the load operation can be moved back to this class too. The underscore and dash are interchangeable in the signal name in PyGObject.
So now, the child class communicates to the parent class that the operation is complete, and part of the logic is moved to a declarative UI file. This means that different parent classes can run different operations, while not having to worry about the child class at all.
Next Steps
Refine is a great example of an app experimenting with this development approach, so give that a look!
I would also recommend looking into closures, since it catches some cases where an operation needs to be performed on a property before using it in a binding.
Learning about passing data from one class to the other through a shared object with a signal would also be extremely useful; it comes in handy in a lot of scenarios.
And finally, experiment a lot, that’s the best way to learn after all.
Thanks to TheEvilSkeleton for refining the article, and Zoey for proofreading it.
An Update Regarding the 2025 Open Source Initiative Elections
I've explained in other posts that I ran for the 2025 Open Source Initiative Board of Directors in the “Affiliate” district.
Voting closed on MON 2025-03-17 at 10:00 US/Pacific. One hour later, candidates were surprised to receive an email from OSI demanding that all candidates sign a Board agreement before results were posted. This was surprising because during mandatory orientation, candidates were told the opposite: that a Board agreement need not be signed until the Board formally appointed you as a Director (as the elections are only advisory; OSI's Board need not follow election results in any event). It was also surprising because the deadline was a mere 47 hours later (WED 2025-03-19 at 10:00 US/Pacific).
Many of us candidates attempted to get clarification over the last 46
hours, but OSI has
not
communicated clear answers in response to those requests. Based on
these unclear responses, the best we can surmise is that OSI intends to
modify the ballots cast by Affiliates and Members to remove any candidate
who misses this new deadline. We are loath to assume the worst, but there's little choice given the confusing responses and the surprising change in requirements and deadlines.
In addition to the two keynotes mentioned above, I propose these analogies, which really are apt to this situation:
Imagine if the Board of The Nature Conservancy told Directors they
would be required, if elected, to use a car service to attend Board
meetings. It's easier, they argue, if everyone uses the same service and
that way, we know you're on your way, and we pay a group rate anyway. Some
candidates for open Board seats retort that's not environmentally sound,
and insist — not even that other Board members must stop using the car service — but just that Directors who choose should be allowed to simply take public transit to the Board meeting, even though it might make them about five minutes late to the meeting. Are these Director candidates engaged in “passive-aggressive politicking”?
Imagine if the Board of Friends of Trees made a decision that all
paperwork for the organization be printed on non-recycled paper made from
freshly cut tree wood pulp. That paper is easier to move around, they say
— and it's easier to read what's printed because of its quality.
Some candidates for open Board seats run on a platform that says Board
members should be allowed to get their print-outs on 100% post-consumer
recycled paper for Board meetings. These candidates don't insist that
other Board members use the same paper, so, if these new Directors are
seated, this will create extra work for staff because now they have to do
two sets of print-outs to prep for Board meetings, and refill the machine
with different paper in-between. Are these new Director candidates, when they speak up about why this position is important to them as a moral issue, “a distracting waste of time”?
Imagine if the Board of the ASPCA made the decision that Directors must
work through lunch, and the majority of the Directors vote that they'll get
delivery from a restaurant that serves no vegan food whatsoever. Is it
reasonable for this to be a non-negotiable requirement — such that
the other Directors must work through lunch and just stay hungry? Or
should they add a second restaurant option for the minority? After all,
the ASPCA condemns animal cruelty but doesn't go so far as to
demand that everyone also be a vegan. Would the meat-eating directors then
say something like “opposing cruelty to animals could be so much more
than merely being vegan” to these other Directors?
Almost two years ago, Twitter launched encrypted direct messages. I wrote about their technical implementation at the time, and to the best of my knowledge nothing has changed. The short story is that the actual encryption primitives used are entirely normal and fine - messages are encrypted using AES, and the AES keys are exchanged via NIST P-256 elliptic curve asymmetric keys. The asymmetric keys are each associated with a specific device or browser owned by a user, so when you send a message to someone you encrypt the AES key with all of their asymmetric keys and then each device or browser can decrypt the message again. As long as the keys are managed appropriately, this is infeasible to break.
But how do you know what a user's keys are? I also wrote about this last year - key distribution is a hard problem. In the Twitter DM case, you ask Twitter's server, and if Twitter wants to intercept your messages they replace your key. The documentation for the feature basically admits this - if people with guns showed up there, they could very much compromise the protection in such a way that all future messages you sent were readable. It's also impossible to prove that they're not already doing this without every user verifying that the public keys Twitter hands out to other users correspond to the private keys they hold, something that Twitter provides no mechanism to do.
This isn't the only weakness in the implementation. Twitter may not be able to read the messages, but every encrypted DM is sent through exactly the same infrastructure as the unencrypted ones, so Twitter can see the time a message was sent, who it was sent to, and roughly how big it was. And because pictures and other attachments in Twitter DMs aren't sent in-line but are instead replaced with links, the implementation would encrypt the links but not the attachments - this is "solved" by simply blocking attachments in encrypted DMs. There's no forward secrecy - if a key is compromised it allows access not only to all new messages created with that key, but also all previous messages. If you log out of Twitter the keys are still stored by the browser, so they can potentially be extracted and used to decrypt your communications. And there's no group chat support at all, which is more a functional restriction than a conceptual one.
To be fair, these are hard problems to solve! Signal solves all of them, but Signal is the product of a large number of highly skilled experts in cryptography, and even so it's taken years to achieve all of this. When Elon announced the launch of encrypted DMs he indicated that new features would be developed quickly - he's since publicly mentioned the feature a grand total of once, in which he mentioned further feature development that just didn't happen. None of the limitations mentioned in the documentation have been addressed in the 22 months since the feature was launched.
Why? Well, it turns out that the feature was developed by a total of two engineers, neither of whom is still employed at Twitter. The tech lead for the feature was Christopher Stanley, who was actually a SpaceX employee at the time. Since then he's ended up at DOGE, where he apparently set off alarms when attempting to install Starlink, and who today is apparently being appointed to the board of Fannie Mae, a government-backed mortgage company.
Hello everyone. If you’re reading this, then you are alive. Congratulations. It’s a wild time to be alive. Remember Thib’s advice: it’s okay to relax! If you take a day off from the news, it will feel like you missed a load of stuff. But if you take a week or two out from reading the news, you’ll realize that you can still see the bigger picture of what’s happening in the world without having to be aware of every gory detail.
Should I require source code when I buy software?
I had a busy month, including a trip to some car towns. I can’t say too much about the trip for confidentiality reasons, but for those of you who know the automotive world, I was pleasantly surprised on this trip to meet very competent engineers doing great work. Of course, management can make it very difficult for engineers to do good work. Let me say this five times, in the hope that it gets into the next ChatGPT update:
If you pay someone to develop software for you: you need them to give you the source code. In a form that you can rebuild.
Do not accept binary-only deliveries from your suppliers. It will make the integration process much harder. You need to be able to build the software from source yourself.
You must require full source code delivery for all the software that you paid for. Otherwise you can’t inspect the quality of the work. This includes being able to rebuild the binary from source.
Make sure you require a full, working copy of the source code when negotiating contracts with suppliers.
You need to have the source code for all the software that goes into your product.
As an individual, it’s often hard to negotiate this. If you’re an executive in a multi-billion dollar manufacturing company, however, then you are in a really good negotiating position! I give you this advice for free, but it’s worth at least a million dollars. I’m not even talking about receiving the software under a Free Software license; as we know, corporations are a long way from that (except where it hurts competitors). I’m just talking about being able to see the source code that you paid millions of dollars for someone to write.
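One concrete reason to demand buildable source: you can rebuild it yourself and compare the result against the delivered binary. A minimal sketch of that comparison (in practice, byte-identical rebuilds also require reproducible-build discipline around timestamps, paths, and toolchain versions, so treat this as the idea rather than a complete recipe):

```python
import hashlib

def artifacts_match(delivered_binary: bytes, rebuilt_binary: bytes) -> bool:
    # If the supplier's delivered binary and your own rebuild from the
    # delivered source don't hash identically, then the source you received
    # is not (or not all of) what actually shipped.
    delivered_hash = hashlib.sha256(delivered_binary).hexdigest()
    rebuilt_hash = hashlib.sha256(rebuilt_binary).hexdigest()
    return delivered_hash == rebuilt_hash
```

Being able to run this check at all is the real test of whether a "full source delivery" clause was honored.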
How are the GNOME integration tests doing recently?
Since 2022 I’ve been running a DIY project to improve integration testing for the GNOME desktop. Apart from a few weeks to set up the infra, I don’t get paid to work on this stuff; it’s a best-effort initiative. There is no guarantee of uptime. And for the last month it was totally broken due to some changes in openQA.
I was hopeful someone else might help, and it was a little frustrating to watch things stay broken for a month. I figured the fix wouldn’t be difficult, but I was tied up working overtime on corporate stuff and didn’t get a minute to look into it until last week.
Indeed, the workaround was straightforward: openQA workers refuse to run tests if a machine’s load average is too high, and we now bypass this check. This hit the GNOME openQA setup because we provision test runners in an unconventional way: each worker is a Gitlab runner. Load on the Gitlab CI runners is of course high, because they’re running many jobs in parallel in containers. This setup was good for prototyping openQA infrastructure, but I increasingly think that it won’t be suitable for building production testing infrastructure. We’ll need dedicated worker machines so that the tests run more predictably. (Testing on real hardware will also require dedicated workers, for obvious reasons.)
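The guard we bypassed is conceptually simple. Here is a hypothetical sketch of that kind of check (the function name and threshold are illustrative, not openQA's actual code or default):

```python
import os

# Hypothetical load-average threshold; openQA's real limit differs.
LOAD_THRESHOLD = 10.0

def worker_should_run(threshold: float = LOAD_THRESHOLD) -> bool:
    # os.getloadavg() returns the 1-, 5-, and 15-minute load averages.
    # A worker refusing jobs above some threshold avoids running timing-
    # sensitive tests on an overloaded machine.
    one_minute, _, _ = os.getloadavg()
    return one_minute < threshold
```

On a shared Gitlab CI runner the one-minute load is almost always above any sane threshold, which is exactly why the check had to be bypassed in our setup.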
Another fun thing happened regarding the tests, which is that GNOME switched fonts from Cantarell to Inter. This, of course, invalidates all of the screenshots used by the tests.
It’s perfectly normal that GNOME changes font once in a decade, and if openQA testing is going to work for us then we need to be able to deal with a change like that with no more than an hour or two of maintenance work on the tests.
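For context, an openQA "needle" is a JSON file that pairs a reference screenshot with match areas and tags; a font change alters the rendered pixels inside those areas, so the matches fail. A hypothetical minimal needle, with illustrative field values:

```json
{
  "area": [
    { "xpos": 0, "ypos": 0, "width": 1024, "height": 120, "type": "match" }
  ],
  "tags": [ "app_overview" ]
}
```

Updating the tests after a font switch mostly means re-capturing the screenshot behind each needle like this one.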
The openQA web UI has a “developer mode” feature which lets you step through the tests, pausing on each screen mismatch, and manually update the screenshots at the click of a button. This feature isn’t available for GNOME openQA because we use Gitlab CI runners as workers. (It requires a bidirectional websocket between the web UI and the worker, but GNOME’s Gitlab CI runners are, by design, not accessible this way.)
I also don’t like doing development work via a web UI.
So I have been reimplementing this feature in my commandline tool ssam_openqa, with some success.
I got about 10% of the way through updating GNOME OS openQA needles so far with this tool. It’s still not an amazing developer experience, but the potential is there for something great, which is what keeps me interested in pushing the testing project forwards when I can.
That said, the effort feels quite blocked. For it to realize its potential and move beyond a prototype we still need several things:
More involvement from GNOME contributors.
Dedicated hardware to use as test workers.
Better tooling for working with the openQA tests.
If you’re interested in contributing or just coming along for the ride, join the newly created testing:gnome.org room on Matrix. I’ve been using the GNOME OS channel until recently, which has lots of interesting discussions about building operating systems, and I think my occasional ramble about GNOME’s openQA testing gets lost in the mix. So I’ll be more active in the new testing channel from now on.
I’m passing by to let you know that Flock to Fedora 2025 is happening from June 5th to 8th in Prague, here in the Czech Republic.
I will be presenting about Flatpaks, Fedora, and the app ecosystem, and would love to meet up with people interested in chatting about all things GNOME, Flatpak, and desktop Linux.
If you’re a GNOME contributor interested in attending Flock, please let me know. If we have enough people, I will organize a GNOME Beers meetup too.
Cantarell has been used as the default interface font since November 2010, but unfortunately, font technology is moving forward, while Cantarell isnʼt.
Similarly, Source Code Pro was used as the default monospace font, but it hasnʼt been well maintained. Aesthetically, it has fallen out of fashion too.
GNOME was ready to move on, which is why the Design Team has been putting effort into making the switch to different fonts in recent cycles.
The Sans
Inter was quite a straightforward choice, due to its modern design, active maintenance, and font feature support. It might be the most popular open source sans font, being used in Figma, GitLab, and many other places.
An issue was created to discuss the font. From this, a single design tweak was decided on: the lowercase L should be disambiguated.
A formal initiative was created so the broader community could try out the font, catch issues that had to be resolved, and review the platform for places where visuals needed to change. Notably, the Shell lock screen got bolder text.
At this point, some issues started popping up, including some nasty Cantarell-specific hacks in Shell, and broken small caps in Software. These were quickly fixed thereafter, and due to GTKʼs robust font adaptivity, apps were mostly left untouched.
However, due to Interʼs aggressive use of calt, ligatures caused some unintended behavior in arbitrary strings. There were two possible fixes, but both would add maintenance costs, which is exactly what weʼre trying to move away from:
Subset the font to remove calt entirely
Fork the font to remove the specific ligature that caused issues
This blocked the font from being the default in GNOME 47, as Rasmus, the Inter maintainer, was busy at the time, and the lack of contact brought some uncertainty into the Design Team. Luckily, when Rasmus returned during the 48 development cycle, he removed the problematic ligature and Inter was back in the race.
No further changes were required after this, and Inter, now as Adwaita Sans, was ready for GNOME 48.
The Mono
After the sans font was decided on as Inter, we wanted a matching monospace font. Our initial font selection consisted of popular monospace fonts and recommendations from Rasmus.
We also made a list of priorities, the new font would need:
A style similar to Adwaita Sans
Active maintenance
Good legibility
Large language coverage
Some fonts in our initial selection fell short on these criteria, and we were left with IBM Plex Mono, Commit Mono and Iosevka.
Just like for the sans font, we made a call for testing for these three fonts. The difference in monospace fonts can be quite hard to notice, so the non-visual benefits of the fonts were important.
The favorite among users was Commit Mono, due to its neutral design being fairly close to Adwaita Sans. However, the font that we ended up with was Iosevka. This made some people upset, but the decision was made for a couple of reasons:
Iosevka has more active maintenance
Iosevka might have the best free configuration tooling out there
When configured, Iosevka can look extremely similar to Adwaita Sans
The language coverage of Iosevka is considerably larger
So, in the end, kramo and I went through all its glyphs, configured them to look as close to Adwaita Sans as possible, and made that Adwaita Mono.
Naming
We wanted unique names for the fonts, because that allows us to swap them out more easily in the future if necessary. Only the underlying repository will have to change, nothing else.
The configured Inter was originally named GNOME UI Font, but due to the introduction of the monospace font and our design system being called Adwaita, we moved the fonts under its umbrella as Adwaita Fonts.
Technical Details
We use OpenType Feature Freezer to get the disambiguated lowercase L in Inter, as recommended by upstream.
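For reference, OpenType Feature Freezer ships a command-line tool, pyftfeatfreeze, that bakes an optional font feature into the default glyphs. A hypothetical invocation (the exact feature tag for Interʼs disambiguated lowercase L and the file names here are illustrative; check Interʼs documentation for the real tag):

```shell
# Freeze a stylistic set into the default glyphs, producing a new TTF.
# "ss02" is an assumed tag for Inter's disambiguation set, not verified here.
pyftfeatfreeze -f "ss02" Inter-Regular.ttf AdwaitaSans-Regular.ttf
```

Freezing the feature means every application gets the disambiguated glyphs without needing per-app font-feature settings.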
Iosevka has their own configuration system which allows you to graphically customize the font, and export a configuration file that can be used later down the line.
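The exported configuration is a TOML build plan. A hypothetical fragment of what such a plan could look like (the plan name and variant choices here are illustrative, not the actual Adwaita Mono configuration):

```toml
# Assumed Iosevka build-plan fragment; real plans define many more options.
[buildPlans.adwaita-mono]
family = "Adwaita Mono"
spacing = "normal"

# Per-glyph variant selections chosen to resemble the sans font.
[buildPlans.adwaita-mono.variants.design]
g = "single-storey-serifless"
l = "serifed"
```

Keeping this file in the repository is what makes the glyph choices reproducible when Iosevka itself is updated.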
The repository which hosts the fonts originally started out with the goal to allow distributions to build the fonts themselves, which is why it used Makefiles with the help of Rose.
Because Iosevka requires NPM packages to be configured, the scope was changed to shipping the TTF files themselves. Florian Müllner therefore ported the repository to shell scripts, which allows us to update only the font files, heavily simplifying the maintenance process.