Allan Day

@aday

GNOME Foundation Update, 2026-02-19

Welcome to another GNOME Foundation update post, covering highlights from the past two weeks. It’s been a busy time, particularly due to conference planning and our upcoming audit – read on to find out more!

Linux App Summit 2026

We were thrilled to be able to announce the location and dates of this year’s Linux App Summit this week. The conference will happen in Berlin on the 16th and 17th of May, at Betahaus Berlin. More information is available on the LAS website.

As usual, we are very pleased to be collaborating with KDE on this year’s LAS. Our partnership on LAS has been a real success that we hope to continue.

Travel sponsorship for LAS 2026 is available for Foundation members through the Travel Committee, so head over to the travel page if you would like to attend and need financial support.

February’s Board meeting

The Board of Directors held its regular monthly meeting last week, on 9th February. Highlights from the meeting included:

  • We finally caught up on our minutes, approving the minutes from a total of nine meetings. This was a big relief, and hopefully we will be able to stay on top of the minutes now that we’re caught up.
  • The Board was thrilled to formally add Nirbheek Chauhan as a member of the Travel Committee. Many contributors will know Nirbheek as a longstanding GStreamer hacker, and he’s already been doing some great work to help with travel. Thanks Nirbheek!
  • The Board approved a new document retention and destruction policy, which is something that we are encouraged to have by regulators.
  • I gave an update on the operational highlights from the last month, including fundraising, conference planning, and audit preparation.
  • The Board considered a proposal for an exciting new program that we’re hoping to launch very soon. More details to follow.

The next Board meeting is scheduled for March 9th.

Audit submissions

As I’ve mentioned in previous updates, the GNOME Foundation is due to be audited very soon. This is a routine occurrence for non-profits like us, but this is our first formal audit, so there’s a good deal of learning and setup to be done.

Last week was the deadline to submit all the documentation for the audit, which meant that many of us were extremely busy finalising numbers, filling in spreadsheets, and tidying up other documentation ready to send it all to the auditors.

Our finance team *really* went the extra mile for us to get everything ready on time, so I’d like to give them a huge thank you for helping us out.

The audit inspection itself will happen in the first week of March, so preparations continue, as we assemble and organise our records, update our policies, and so on.

GUADEC 2026

Planning for this summer’s conference has continued over the past two weeks. In case you missed it, the location and dates have been announced, and accommodation bookings are open at a reduced rate. In the background we are gearing up to open the call for papers, and the sponsorship effort is on its way. Now is a good time to start thinking about any talk proposals that you’d like to submit.

Membership certificates

A cool community effort is currently underway to provide certificates for GNOME Foundation members. This is a great idea in my opinion, as it will allow contributors to get official recognition which can be used for job applications and so on. More volunteers to help out would definitely be welcome.

That’s it for this week. Thanks for reading, and feel free to ask questions in the comments.

Andy Wingo

@wingo

free trade and the left, bis: from cobden to lenin

A week ago we discussed free trade, and specifically took a look at the classical mechanism by which free trade is supposed to improve overall outcomes, as measured by GDP.

As I described it, the value proposition of free trade is ambiguous at best: there is an intangible sense that a country might have a higher GDP with lower trade barriers, but with a side serving of misery as international competition forces some local industries to close, and without any guarantee about how that trade advantage would be distributed among the population. Why bother? And why is my news feed full of EU commissioners signing new trade agreements? Where are these ideas coming from?

stave 2

logo of the cobden club, with motto 'free trade, peace, goodwill among nations'

I asked around, placed some orders, and a week later a copy of Marc-William Palen’s Pax Economica came in the mail. I was hoping for a definitive, reasoned argument for or against free trade, from a leftist’s perspective (whatever that means). The book was both more and less than that. Less, in the sense that its narrative is most tightly woven in the century before the end of the second World War, becoming loose and frayed as it breezes through decolonization, the rise of neoliberalism, the end of history, and what it describes as our current neomercantilist moment. Less, also, in that Palen felt no need to summarize the classic economic arguments for free trade, leaving me to clumsily do so in the previous stave. And yet, the story it tells fills in gaps in my understanding that I didn’t even know I had.

To pick up one thread from the book, let’s go back to 1815. British parliament passes the Corn Laws, establishing a price floor for imported grain. This trade barrier essentially imposes a significant tax on all people who eat, to the profit of a small number of landowners. A movement to oppose these laws develops over the next 30 years, with Richard Cobden as its leading advocate. One of the arguments of the Anti-Corn Law League, which was actually a thing, is that cheaper food is good for workers; though wages might decline as bosses realize they don’t need to pay so much to keep their workers alive, relatively speaking workers will do better. More money left over at the end of the month also means more demand for other manufactured products, which is good for growing industry.

In the end, bad harvests in 1845 led to shortages and famine (yes, that one) and eventually a full repeal of the laws in 1846. Perhaps the Anti-Corn Law League’s success was inevitable: a bad harvest year is a stochastic certainty, and not having enough food is the kind of problem no government can ignore. In any case, the episode does prove the Corn Laws to be a front in a class war, and their repeal was a victory for the left, even if it occurred under a Conservative government, and even if the campaign was essentially middle-class Liberal in character.

The repeal campaign was not just about domestic cost of living, however. Its exponents argued that free trade among nations was an essential step to a world of peaceful international cooperation. Palen’s book puts Cobden in context by comparison to Friedrich List, who, inspired by a stint in America in the 1820s, starts from the premise that for a nation to be great, it needs an empire of colonies to exploit, and to conquer and defend colonies, it needs a developed domestic industry and navy; and for a nation to develop its own industry, it needs protectionism. The natural state of empire is not exactly one of free trade with its neighbors.

The “commercial peace” movement that Palen describes cuts List’s argument short at “empire”: because there is no empire without war, a peace movement should scratch away at the causes of war, and the causes of the causes; a world living in pax economica would avoid imperial conflict by tying nations together through trade. It sounds idealistic, and it is, but it’s easy to forget that today we wage war through trade also: blockades and sanctions are often followed by bombs. Palen’s book draws clear lines from Cobdenism through such disparate groups as women’s peace societies, Christian internationalists, pre-war German socialists, and Lenin himself.

Marx understood history as a process of development, consisting of stages through which a society must necessarily pass on its way to socialism. This allies him with capitalism in many ways; he viewed free trade as a step towards a higher form of capitalism, which would necessarily lead to socialism. This, to me, is not a convincing argument in 2026: not only has the mechanistic vision of history failed to fruit, but its mechanism of plant closures and capital flight can be cruel and hard to campaign for politically. And yet, I think we do need a healthy dose of internationalism to remedy the ills of the present day: a jolt of ideals and even idealism to remind us that we are all travelling together on this spaceship Earth, and that those on the other side of a political line are just as much our brothers and sisters as those on “our” side.

i went seeking clarity

When you tend Marxist, you know in your bones that although the road to socialism is rough and winding, the winds of history are always at your back; there is a baked-in inevitability of success that softens defeat. There is something similar in the Christian and feminist narrative strands that Palen weaves: a sense not that victory is inevitable, at least in this lifetime, but that fighting for it is a moral imperative, and that God is on your side. The campaign for free trade was a means to a moral end, one of international peace and shared prosperity. And this, in 2026, sounds... good, actually?

Again from our 2026 perspective, I cannot help but agree that a trade barrier is often an act of war; preliminary, yes, but on the spectrum. I have had enough freedom fries in my life to have developed an allergy to anything that tastes of my-side-of-the-line-is-better-than-yours. Though I have not yet read Klein and Pettis’s deliciously titled Trade Wars are Class Wars, I do know that among the 1.5 million people who died as a result of the sanctions on Iraq in the 1990s, Saddam Hussein was not on the list. Sometimes I feel like we learned the lessons of Cobdenism backwards: in order to keep the people starving, we must impose anew the Corn Laws.

Palen’s book leaves me with one doubt, and one big question. The doubt is, to what extent do the lessons of the early 1800s apply today? Ricardo’s contemporary comparative advantage theories presupposed that capital was relatively fixed in space; nowadays this is much less the case. The threat of moving the plant elsewhere is always present in all union drives everywhere. Though history rhymes, it does not repeat; it will take some creativity to transplant pax economica to the soils of the 21st century.

The bigger question, though, is as regards the morality of protectionism as practiced by more and less developed economies: when is it morally right for a country to erect trade barriers? Palen’s book does not pretend to answer this question. And yet, this issue was foremost in our minds, as we shut down Seattle in 1999, as we died in Genoa in 2001. (Forgive the collectivism, if you aren’t of this tribe (yet?), but it was a lived experience.) Free trade was a moral cause in 1835; how did it become immoral in 1995, at least to us?

world without end

Well. To answer that question, we need a history that picks up where Palen leaves off, and we have something like it in Quinn Slobodian’s Globalists, which we will look at next time. But before we go, two reflections.

One, in Europe we have kept the Corn Laws on the books, in a way, in the form of the EU Common Agricultural Policy (CAP). In France the dominant discourse is very much in opposition to the free trade agreement with Mercosur, and the main reason is the threat to French farmers. The tradeoff to get the Mercosur agreement over the line was additional subsidies under the CAP, which are a form of trade barrier. And yet, the way the CAP is structured allocates most of the money in proportion to the surface area of a farm, which is to say, to the largest agribusinesses and to the largest landowners. Greenpeace just put out an excellent briefing arguing that the CAP is just a subsidy to the heirs of the Duchess of Alba and their ilk. Again, are we running the 19th century in reverse?

Secondly, and harder to explain... in the 2000s I listened a lot to an anarchist radio show hosted by Lyn Gerry, Unwelcome Guests. (Have you heard the eponymous tune? It makes me shiver every time.) Anyway I remember one episode which discussed the gift economy and hunter-gatherer economics, in which a researcher asked a member of such a community what he would do if he came into a lot of food at one time: the response, as I recall, was that he would store it “in his brother”. He would give it to others. One day, if he needed it, they would give to him.

I know that our world does not work this way, but there is an element of truth here, in that it’s not reasonable for France to grow everything that it eats, to never trade what it grows, to make all its own solar panels, to write all software used within its borders. We live richer lives when we share and learn from each other, without regard to which side of the line our home is on.

next

Still here? Gosh me too. Next time we will look at what the kids call the “1900s” and perhaps approach present day. Until then, commercial peace be with you!

Adrian Vovk

@adrianvovk

GNOME OS Hackfest @ FOSDEM 2026

For a few days leading up to FOSDEM 2026, the GNOME OS developers met for a GNOME OS hackfest. Here are some of the things we talked about!

Stable

The first big topic on our to-do list was GNOME OS stable. We started by defining the milestone: we can call GNOME OS “stable” when we settle on a configuration that we’re willing to support long-term. The most important blocker here is systemd-homed: we know that we want the stable release of GNOME OS to use systemd-homed, and we don’t want to have to support pre-homed GNOME OS installations forever. We discussed the possibility of building a migration script to move people onto systemd-homed once it’s ready, but it’s simply too difficult and dangerous to deploy this in practice.

We did, however, agree that we can already start promoting GNOME OS a bit more heavily, provided that we make very clear that this is an unstable product for very early adopters, who would be willing to occasionally reinstall their system (or manually migrate it).

We also discussed the importance of project documentation. GNOME OS’s documentation isn’t in a great state at the moment, and this makes it especially difficult to start contributing. BuildStream, which is GNOME OS’s build system, has a workflow that is unfamiliar to most people who may want to contribute. Despite its comprehensive documentation, there’s no easy “quick start” reference for the most common tasks, and so it is ultimately a source of friction for potential contributors. This is especially unfortunate given the current excitement around building next-gen “distroless” operating systems. Our user documentation is also pretty sparse. Finally, the little documentation we do have is spread across different places (markdown committed to git, GitLab Wiki pages, the GNOME OS website, etc) and this makes it very difficult for people to find it.

Fixing /etc

Next we talked about the situation with /etc on GNOME OS. /etc has been a bit of an unsolved problem in the UAPI group’s model of immutability: ideally all default configuration can be loaded from /usr, and so /etc would remain entirely for overrides by the system administrator. Unfortunately, this isn’t currently the case, so we must have some solution to keep track of both upstream defaults and local changes in /etc.

Until now, GNOME OS had a complicated set-up where parts of /usr were symlinked into /etc. To change any of these files, the user would have to break the symlinks and replace them with normal files, potentially requiring copies of entire directories. This caused loads of issues, with the broken symlinks letting /etc slowly drift away from the changing defaults in /usr.

For years, we’ve known that the solution would be overlayfs. This kernel filesystem allows us to mount the OS’s defaults underneath a writable layer for administrator overrides. For various reasons, however, we’ve struggled to deploy this in practice.

Modern systemd has native support for this arrangement via systemd-confext, and we decided to just give it a try at the hackfest. A few hours later, Valentin had a merge request to transition us to the new scheme. We’ve now fully rolled this out, and so the issue is solved in the latest GNOME OS nightlies.

FEX and Flatpak

Next, we discussed integrating FEX with Flatpak so that we can run x86 apps on ARM64 devices.

Abderrahim kicked off the topic by telling us about fexwrap, a script that grafts two different Flatpak runtimes together to successfully run apps via FEX. After studying this implementation, we discussed what proper upstream support might look like.

Ultimately, we decided that the first step will be a new Flatpak runtime extension that bundles FEX, the required extra libraries, and the “thunks” (glue libraries that let x86 apps call into native ARM GPU drivers). From there, we’ll have to experiment and see what integrations Flatpak itself needs to make everything work seamlessly.

Abderrahim has already started hacking on this upstream.

Amutable

The Amutable crew were in Brussels for FOSDEM, and a few of them stopped in to attend our hackfest. We had some very interesting conversations! From a GNOME OS perspective, we’re quite excited about the potential overlap between our work and theirs.

We also used the opportunity to discuss GNOME OS, of course! For instance, we were able to resolve some kernel VFS blockers for GNOME OS delta updates and Flatpak v2.

mkosi

For a few years, we’ve been exploring ways to factor out GNOME OS’s image build scripts into a reusable component. This would make it trivial for other BuildStream-based projects to distribute themselves as UAPI DDIs. It would also allow us to ship device-specific builds of GNOME OS, which are necessary to target mobile devices like the Fairphone 5.

At Boiling the Ocean 7, we decided to try an alternative approach. What if we could drop our bespoke image build steps, and just use mkosi? There, we threw together a prototype and successfully booted to login. With the concept proven, I put together a better prototype in the intervening months. This prompted a discussion with Daan, the maintainer of mkosi, and we ultimately decided that mkosi should just have native BuildStream support upstream.

At the hackfest, Daan put together a prototype for this native support. We were able to use his modified build of mkosi to build a freedesktop-sdk BuildStream image, package it up as a DDI, boot it in a virtual machine, set the machine up via systemd-firstboot, and log into a shell. Daan has since opened a pull request, and we’ll continue iterating on this approach in the coming months.


Overall, this hackfest was extremely productive! I think it’s pretty likely that we’ll organize something like this again next year!

Andy Wingo

@wingo

two mechanisms for dynamic type checks

Today, a very quick note on dynamic instance type checks in virtual machines with single inheritance.

The problem is that given an object o whose type is t, you want to check if o actually is of some more specific type u. To my knowledge, there are two sensible ways to implement these type checks.

if the set of types is fixed: dfs numbering

Consider a set of types T := {t, u, ...} and a set of edges S := {<t|ε, u>, ...} indicating that t is the direct supertype of u, or ε if u is a top type. S should not contain cycles and is thus a directed acyclic graph rooted at ε.

First, compute a pre-order and post-order numbering for each t in the graph by doing a depth-first search over S from ε. Something like this:

def visit(t, counter):
    t.pre_order = counter
    counter = counter + 1
    for u in S[t]:
        counter = visit(u, counter)
    t.post_order = counter
    return counter

Then at run-time, when making an object of type t, you arrange to store the type’s pre-order number (its tag) in the object itself. To test if the object is of type u, you extract the tag from the object and check if (tag - u.pre_order) mod 2ⁿ < u.post_order - u.pre_order.

Two notes, probably obvious but anyway: one, you know the numbering for u at compile-time and so can embed those variables as immediates. Two, if the type has no subtypes, the test can be a simple equality check.
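
To make this concrete, here is a small sketch in the same Python-ish style as the visit() function above. The helper name and the 32-bit tag width are my own assumptions rather than anything from a particular implementation; the point is that the range test u.pre_order <= tag < u.post_order collapses into a single unsigned comparison, with mod 2ⁿ (n being the tag width in bits) modelling wraparound.

TAG_BITS = 32  # assumed width of the tag field

def is_instance_of(tag, u):
    # Equivalent to: u.pre_order <= tag < u.post_order, done as one
    # unsigned comparison: the subtraction wraps modulo 2**TAG_BITS, so
    # tags below u.pre_order become huge and fail the test.
    return (tag - u.pre_order) % (1 << TAG_BITS) < u.post_order - u.pre_order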

Note that this approach applies only if the set of types T is fixed. This is the case when statically compiling a WebAssembly module in a system that doesn’t allow modules to be instantiated at run-time, like Wastrel. Interestingly, it can also be the case in JIT compilers, when modeling types inside the optimizer.

if the set of types is unbounded: the display hack

If types may be added to a system at run-time, maintaining a sorted set of type tags may be too much to ask. In that case, the standard solution is something I learned of as the display hack, but whose name is apparently ungooglable. It is described in a 4-page technical note by Norman H. Cohen, from 1991: Type-Extension Type Tests Can Be Performed In Constant Time.

The basic idea is that each type t should have an associated sorted array of supertypes, starting with its top type and ending with t itself. Each t also has a depth, indicating the number of edges between it and its top type. A type u is then a subtype of t if u[t.depth] = t, provided t.depth <= u.depth.
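
Here is a minimal sketch of that test, again in Python; the class and field names are mine rather than Cohen’s, but the shape of the check is the same.

class Type:
    def __init__(self, parent=None):
        # supertypes[d] holds the ancestor at depth d; the last entry is
        # the type itself, so len(supertypes) == depth + 1.
        self.supertypes = (parent.supertypes if parent else []) + [self]
        self.depth = len(self.supertypes) - 1

def is_subtype(u, t):
    # u is a subtype of t iff t appears in u's display at t's own depth.
    return t.depth <= u.depth and u.supertypes[t.depth] is t

animal = Type()
cat = Type(animal)
assert is_subtype(cat, animal) and not is_subtype(animal, cat)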

There are some tricks one can do to optimize out the depth check, but it’s probably a wash given the check performs a memory access or two on the way. But the essence of the whole thing is in Cohen’s paper; go take a look!

Jan Vitek notes in a followup paper (Efficient Type Inclusion Tests) that Christian Queinnec discovered the technique around the same time. Vitek also mentions the DFS technique, but as prior art, apparently already deployed in DEC Modula-3 systems. The term “display” was bouncing around in the 80s to describe some uses of arrays; I learned it from Dybvig’s implementation of flat closures, and he in turn learned it from Cardelli. I don’t know though where “display hack” comes from.

That’s it! If you know of any other standard techniques for type checks with single-inheritance subtyping, do let me know in the comments. Until next time, happy hacking!

Addendum: Thanks to kind readers, I have some new references! Michael Schinz refers to Yoav Zibin’s PhD thesis as a good overview. Alex Bradbury points to a survey article by Roland Ducournau as describing the DFS technique as “Schubert numbering”. CF Bolz-Tereick unearthed the 1983 Schubert paper, and it is a weird one. Still, I can’t but think that the DFS technique was known earlier; I have a 1979 graph theory book by Shimon Even that describes a test for “separation vertices” that is precisely the same, though it does not mention the application to type tests. Many thanks also to fellow traveller Max Bernstein for related discussions.

Crosswords 0.3.17: Circle Bound

It’s time for another Crosswords release. This is relatively soon after the last one, but I have an unofficial rule that Crosswords is released after three bloggable features. We’ve been productive and blown way past that bar in only a few months, so it’s time for an update.

This round, we redid the game interface (for GNOME Circle) and added content to the editor. The editor also gained printing support, and we expanded support for Adwaita accent colors. In detail:

New Layout

GNOME Crosswords’ new look — now using the accent color

I applied for GNOME Circle a couple years ago, but it wasn’t until this past GUADEC that I was able to sit down together with Tobias to take a closer look at the game. We sketched out a proposed redesign, and I’ve been implementing it for the last four months. The result: a much cleaner look and workflow. I really like the way it has grown.

Initial redesign

Overall, I’m really happy with the way it looks and feels so far. The process has been relatively smooth (details), though it’s clear that the design team has limited resources to spend on these efforts. They need more help, and I hope that team can grow. Here’s how the game looks now:

I really could use help with the artwork for this project! Jakub made some sketches and I tried to convert them to SVG, but have reached the limits of my Inkscape skills. If you’re interested in helping and want to get involved in GNOME Design artwork, this could be a great place to start. Let me know!

Indicator Hints

Time for some crossword nerdery:

Indicator Hints Dialog Main Screen

One thing that characterizes cryptic crosswords is that their clues feature wordplay. A key part of the wordplay is called an “indicator hint”. These hints are a word — or words — that tell you to transform neighboring words into parts of the solution. These transformations could be things like rearranging the letters (anagrams) or reversing them. The example in the dialog screenshot below might give a better sense of how these work. There’s a whole universe built around this.

Indicator Hint Dialog with an example

Good clues always use evocative indicator hints to entertain or mislead the solver. To help authors, I install a database of common indicator hints compiled by George Ho and show a random subset. His list also includes how frequently they’re used, which can be used to make a clue harder or easier to solve.

Indicator Hints Dialog with full list of indicators

Templates and Settability

I’ve always been a bit embarrassed about the New Puzzle dialog. The dialog should be simple enough: select a puzzle type, puzzle size, and maybe a preset grid template. Unfortunately, it historically had a few weird bugs and the template thumbnailing code was really slow.  It could only render twenty or so templates before the startup time became unbearable. As a result, I only had a pitiful four or five templates per type of puzzle.

When Toluwaleke rewrote the thumbnail rendering to be blazing fast over the summer, it became possible to give this section a closer look. The result:

Note: The rendering issues with the theme words dialog are GTK Bug #7400

The new dialog now has almost a thousand curated blank grids to pick from, sorted by how difficult they are to fill. In addition, I added initial support to add Theme Words to the puzzle. Setting theme words will also filter the templates to only show those that fit. Some cool technical details:

  • The old dialog would load the ipuz files, convert them to svg, then render them to Pixbuf. That had both json + xml parse trees to navigate, plus a pixbuf transition. It was all inherently slow. I’ve thrown all that out.
  • The new code takes advantage of the fact that crossword grids are effectively bitfields: at build time I convert each row in a grid template into a u32 with each bit representing a block. That means that each crossword grid can be stored as an array of these u32s (see the sketch after this list). We use GResource and GVariant to load this file, so it’s mmapped and effectively instant to parse. At this point, the limiting factor in adding additional blank templates is curation/generation.
  • As part of this, I developed a concept called “settability” (documentation) to capture how easy or hard it is to fill in a grid. We use this to sort the grids, and to warn the user should they choose a harder grid. It’s a heuristic, but it feels pretty good to me. You can see it in the video in the sort order of the grids.
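
The following is a rough Python sketch of that bit-packing idea; it is illustrative only (the real implementation is C using GResource and GVariant), and the function names are hypothetical.

def pack_grid(rows):
    # Each row is a string such as "..#..", where '#' marks a block.
    packed = []
    for row in rows:
        bits = 0
        for column, cell in enumerate(row):
            if cell == '#':
                bits |= 1 << column
        packed.append(bits)  # fits in a u32 for grids up to 32 cells wide
    return packed

def is_block(packed, row, column):
    return bool(packed[row] & (1 << column))

grid = pack_grid(["...#.",
                  ".....",
                  "#...."])
assert is_block(grid, 0, 3) and not is_block(grid, 1, 0)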

User Testing

I had the good fortune to be able to sit with one of my coworkers and watch her use the editor. She’s a much more accomplished setter than I, and publishes her crosswords in newspapers. Watching her use the tool was really helpful as she highlighted a lot of issues with the application (list). It was also great to validate a few of my big design decisions, notably splitting grid creation from clue writing.

I’ve fixed most of the  easy issues she found, but she confirmed something I suspected: The big missing feature for the editor is an overlay indicating tricky cells and dead ends (bug). Victor proposed a solution (link) for this over the summer. This is now the top priority for the next release.

Thanks

  • George for his fabulous database of indicator words
  • Tobias for tremendous design work
  • Jakub for artwork sketches and ideas
  • Sophia for user feedback with the editor
  • Federico for a lot of useful advice, CI fixes, and cleanups
  • Vinson for build fixes and sanitation
  • Nicole for some game papercut fixes
  • Toluwaleke for printing advice and fixes
  • Rosanna for text help and encouragement/advice
  • Victor for cleaning up the docs

Until next time!

Jussi Pakkanen

@jpakkane

What's cooking with Pystd, the experimental C++ standard library?

Pystd is an experiment on what a C++ standard library without any backwards compatibility requirements would look like. Its design goals are in order of decreasing priority:

  • Fast build times
  • Simplicity of implementation
  • Good performance

It also has some design-antigoals:

  • Not compatible with the ISO C++ standard library
  • No support for weird corner cases like linked lists or types that can't be noexcept-moved
  • Do not reinvent things that are already in the C standard library (though you might provide a nicer UI to them)

Current status

There is a bunch of stuff implemented, like vector, several string types, hashmap, a B-tree based ordered map, regular expressions, unix path manipulation operations and so on. The latest addition has been sort algorithms, which include merge sort, heap sort and introsort.

None of these is "production quality". They will almost certainly have bugs. Don't rely on them for "real work". 

The actual library consists of approximately 4800 lines of headers and 4700 lines of source. Building the library and all test code on a Raspberry Pi using a single core takes 13 seconds. With 30 process invocations this means approximately 0.4 seconds per compilation.

For real world testing we have really only one data point, but in it build time was reduced by three quarters, the binary became smaller and the end result ran faster.

Portability

The code has been tested on Linux x86_64 and aarch64 as well as on macOS. It currently does not work with Visual Studio which has not implemented support for pack indexing yet.

Why should you consider using it?

Back in the 90s and 00s (I think) it was fashionable to write your own C++ standard library implementation. Eventually they all died and people moved to the one that comes with their compiler. Which is totally reasonable. So why would you now switch to something else?

For existing C++ applications you probably don't want to. The amount of work needed for a port is too much to be justified in most cases.

For green field projects things are more interesting. Maybe you just want to try something new just for the fun of it? That is the main reason why Pystd even exists, I wanted to try implementing the core building blocks of a standard library from scratch.

Maybe you want to provide "Go style" binaries that build fast and have no external deps? The size overhead of Pystd is only a few hundred k and the executables it yields only depend on libc (unless you use regexes, in which case they also depend on libpcre, but you can static link it if you prefer).

Resource constrained or embedded systems might also be an option. Libstdc++ takes a few megabytes. Pystd does require malloc, though (more specifically it requires aligned alloc) so for the smallest embedded targets you'd need to use something like the freestanding library. As an additional feature Pystd permits you to disable parts of the library that are not used (currently only regexes, but could be extended to things like threading and file system).

Compiler implementers might choose to test their performance with an unusual code base. For example GCC compiles most Pystd files in a flash but for some reason the B-tree implementation takes several seconds to build. I don't really know why because it does not do any heavy duty metaprogramming or such.

It might also be usable in teaching as a fairly small implementation of the core algorithms used today. Assuming anyone does education any more as opposed to relying on LLMs for everything.


Cassidy James Blaede

@cassidyjames

How I Designed My Cantina Birthday Party

Ever since my partner and I bought a house several years ago, I’ve wanted to throw a themed Star Wars party here. We’ve talked about doing a summer movie showing thing, we’ve talked about doing a Star Wars TV show marathon, and we’ve done a few birthday parties—but never the full-on themed party that I was dreaming up. Until this year!

For some reason, a combination of rearranging some of our furniture, the state of my smart home, my enjoyment of Star Wars: Outlaws, and my newfound work/life balance meant that this was the year I finally committed to doing the party.

Pitch

For the past few years I’ve thrown a two-part birthday party: we start out at a nearby bar or restaurant, and then head to the house for more drinks and games. I like this format as it gives folks a natural “out” if they don’t want to commit to the entire evening: they can just join the beginning and then head out, or they can just meet up at our house. I was planning to do the same this year, but decided: let’s go all-in at the house so we have more time for more fun. I knew I wanted:

  1. Trivia! I organized a fun little Star Wars trivia game for my birthday last year and really enjoyed how nerdy my friends were with it, so this year I wanted to do something similar. My good friend Dagan volunteered to put together a fresh trivia game, which was incredible.

  2. Sabacc. The Star Wars equivalent to poker, featured heavily in the Star Wars: Outlaws game as well as in Star Wars: Rebels, Solo: A Star Wars Story, and the Disney Galactic Starcruiser (though it’s Kessel sabacc vs. traditional sabacc vs. Corellian spike vs. Coruscant shift respectively… but I digress). I got a Kessel sabacc set for Christmas and have wanted to play it with a group of friends ever since.

  3. Themed drinks. Revnog is mentioned in Star Wars media including Andor as some sort of liquor, and spotchka is featured in the New Republic era shows like The Mandalorian and The Book of Boba Fett. There isn’t really any detail as to what each tastes like, but I knew I wanted to make some batch cocktails inspired by these in-universe drinks.

  4. Immersive environment. This meant smart lights, music, and some other aesthetic touches. Luckily over the years I’ve upgraded my smart home to feature nearly all locally-controllable RGB smart bulbs and fixtures; while during the day they simply shift from warm white to daylight and back, it means I can do a lot with them for special occasions. I also have networked speakers throughout the house, and a 3D printer.

About a month before the party, I got to work.

Aesthetic

For the party to feel immersive, I knew getting the aesthetic right was paramount. I also knew I wanted to send out themed invites to set the tone, so I had to start thinking about the whole thing early.

Star Wars: Outlaws title screen

Star Wars: Outlaws journal UI

Since I’d been playing Star Wars: Outlaws, that was my immediate inspiration. I also follow the legendary Louie Mantia on Mastodon, and had bought some of his Star Wars fonts from The Crown Type Company, so I knew at least partially how I was going to get there.

Invite

Initial invite graphic (address censored)

For the invite, I went with a cyan-on-black color scheme. This is featured heavily in Star Wars: Outlaws but is also an iconic Star Wars look (“A long time ago…”, movie end credits, Clone Wars title cards, etc.). I chose the Spectre font as it’s very readable but also very Star Wars. To give it some more texture (and as an easter egg for the nerds), I used Womprat Aurebesh offset and dimmed behind the heading. The whole thing was a pretty quick design, but it did its job and set the tone.

Website

I spent a bit more time iterating on the website, and it’s a more familiar domain for me than more static designs like the invite was. I especially like how the offset Aurebesh turned out on the headings, as it feels very in-universe to me. I also played with a bit of texture on the website to give it that lo-fi/imperfect tech vibe that Star Wars so often embraces.

For the longer-form body text, I wanted something even more readable than the more display-oriented fonts I’d used, so I turned to a good friend: Inter (also used on this site!). It doesn’t really look like Inter though… because I used almost every stylistic alternate that the font offers—explicitly to make it feel legible but also… kinda funky. I think it worked out well. Specifically, notice the lower-case “a”, “f”, “L”, “t”, and “u” shapes, plus the more rounded punctuation.

Screenshot of my website

Since I already owned blaede.family where I host extended family wishlists, recipes, and a Mastodon server, I resisted the urge to purchase yet another domain and instead went with a subdomain. cantina.blaede.family doesn’t quite stay totally immersive, but it worked well enough—especially for a presumably short-lived project like this.

Environment

Once I had the invite nailed down, I started working on what the actual physical environment would look like. I watched the bar/cantina scenes from A New Hope and Attack of the Clones, scoured concept art, and of course played more Outlaws. The main thing I came away thinking about was lighting!

Lighting

The actual cantinas are often not all that otherworldly, but lighting plays a huge role, both in color and in the overall dimness, with a lot of (sometimes colorful) accent lighting.

So, I got to work on setting up a lighting scene in Home Assistant. At first I was using the same color scheme everywhere, but I quickly found that distinct color schemes for different areas would feel more fun and interesting.

Lounge area

For the main lounge-type area, I went with dim orange lighting and just a couple of green accent lamps. This reminds me of Jabba’s palace and Boba Fett, and just felt… right. It’s sort of organic but would be a somewhat strange color scheme outside of Star Wars. It’s also the first impression people will get when coming into the house, so I wanted it to feel the most recognizably Star Wars-y.

Kitchen area

Next, I focused on the kitchen, where people would gather for drinks and snacks. We have white under-cabinet lighting which I wanted to keep for function (it’s nice to see what color your food actually is…), but I went with a bluish-purple (almost ultraviolet) and pink.

Coruscant

Coruscant bar from Attack of the Clones

While this is very different from a cantina on Tatooine, it reminded me of the Coruscant bar we see in Attack of the Clones as well as some of the environments in The Clone Wars and Outlaws. At one point I was going to attempt to make a glowing cocktail that would luminesce under black light—I ditched that, but the lighting stayed.

Table

Dining room sabacc table

One of the more important areas was, of course, the sabacc table (the dining room), which is adjacent to the kitchen. I had to balance ensuring the cards and chips are visible with that dim, dingy, underworld vibe. I settled on actually adding a couple of warm white accent lights (3D printed!) for visibility, then using the ceiling fan lights as a sabacc round counter (with a Zigbee button as the dealer token).

3D printed accent light

Lastly, I picked a few other colors for adjacent rooms: a more vivid purple for the bathroom, and red plus a rainbow LED strip for my office (where I set up split-screen Star Wars: Battlefront II on a PS2).

Office

Office area

I was pretty happy with the lighting at this point, but then I re-watched the Mos Eisley scenes and noticed some fairly simple accent lights: plain warm white cylinders on the tables.

Entrance Bar Handiwork

I threw together a simple print for my 3D printer and added some battery-powered puck lights underneath: perfection.

Cylinder light

First test of my cylinder lights

Music

With my networked speakers, I knew I wanted some in-universe cantina music—but I also knew the cantina song would get real old, real fast. Since I’d been playing Outlaws as well as a fan-made Holocard Cantina sabacc app, I knew there was a decent amount of in-universe music out there; luckily it’s actually all on YouTube Music.

Outer Rim Underworld Cantina

I made a looooong playlist including a bunch of that music plus some from Pyloon’s Saloon in Jedi: Survivor, Oga’s Cantina at Disney’s Galaxy’s Edge, and a select few tracks from other Star Wars media (Niamos!).

Sabacc

A big part of the party was sabacc; we ended up playing several games and really getting into it. To complement the cards and dice (from Hyperspace Props), I 3D printed chips and tokens that we used for the games.

Sabacc prints

3D printed sabacc tokens and chips

We started out simple with just the basic rules and no tokens, but after a couple of games, we introduced some simple tokens to make the game more interesting.

Playing sabacc

I had a blast playing sabacc with my friends and by the end of the night we all agreed: we need to play this more frequently than just once a year for my birthday!

Drinks

I’m a fan of batch cocktails for parties, because it means less time tending a bar and more time enjoying company—plus it gives you a nice opportunity for a themed drink or two that you can prepare ahead of time. I decided to make two batch cocktails: green revnog and spotchka.

Spotchka and revnog

Bottles of spotchka and revnog

Revnog is shown a few times in Andor, but it’s hard to tell what it looks like—one time it appears to be blue, but it’s also lit by the bar itself. When it comes to taste, the StarWars.com Databank just says it “comes in a variety of flavors.” However, one character mentions “green revnog” as being her favorite, so I decided to run with that so I could make something featuring objectively the best fruit in the galaxy: pear (if you know, you know).

Revnog

My take on green revnog

After a lot of experimenting, I settled on a spiced pear gin drink that I think is a nice balance between sweet, spiced, and boozy. The simple batch recipe came out to: 4 parts gin, 1 part St. George’s Spiced Pear Liqueur, 1 part pear juice, and 1 part lemon juice. It can be served directly on ice, or cut with sparkling water to tame it a bit.

Spotchka doesn’t get its own StarWars.com Databank entry, but is mentioned in a couple of entries about locations from an arc of The Mandalorian. All that can be gleaned is that it’s apparently glowing and blue (Star Wars sure loves its blue drinks!), and made from “krill” which in Star Wars is shrimp-like.

Spotchka

My take on spotchka

I knew blue curaçao would be critical for a blue cocktail, and after a bit of asking around for inspiration, I decided coconut cream would give it a nice opacity and lightness. The obvious other ingredients for me, then, were rum and pineapple juice. I wanted it to taste a little more complex than just a Malibu pineapple, so I raided my liquor supply until I found my “secret” ingredient: grapefruit vodka. Just a tiny bit of that made it taste really unique and way more interesting! The final ratios for the batch are: 4 parts coconut rum, 2 parts white rum, 2 parts blue curaçao, 1 part grapefruit vodka, 2 parts pineapple juice, 1 part coconut cream. Similar to the revnog, it can be served directly on ice or cut with sparkling water for a less boozy drink.

Summary

Overall, I had a blast hanging out, drinking cocktails, playing sabacc, and nerding out with my friends. The immersive-but-not-overbearing environment felt right; just one friend (the trivia master!) dressed up, which was perfect as I explicitly told everyone that costumes were not expected but left it open in case anyone wanted to dress up. The trivia, drinks, and sabacc all went over well, and a handful of us hung around until after 2 AM enjoying each other’s company. That’s a win in my book. :)

Martin Pitt

@pitti

Revisiting Google Cloud Performance for KVM-based CI

Summary from 2022

Back then, I evaluated Google Cloud Platform for running Cockpit’s integration tests. Nested virtualization on GCE was way too slow, crashy, and unreliable for our workload. Tests that ran in 35-45 minutes on bare metal (my laptop) took over 2 hours with 15 failures, timeouts, and crashes. The nested KVM simply wasn’t performant enough. On today’s Day of Learning, I gave this another shot, and was pleasantly surprised.

This Week in GNOME

@thisweek

#236 New Library

Update on what happened across the GNOME project in the week from February 06 to February 13.

Third Party Projects

Alexander Vanhee says

This week, the Bazaar app store received two major new features. The first is the new Library page, which combines the Installed page, Update dialog, and Transaction sidebar into a single view. It should make managing installed apps much more intuitive.

The second feature is support for user scope Flatpaks. Flatpaks installed in user scope are now listed in the Installed Apps list, where you can view or uninstall them just like other apps. Installing new user-scope Flatpaks is still not possible in the Flatpak version of the app due to an unresolved issue.

Install the app via Flathub

Arnis (kem-a) reports

AppManager v3.2.0 just got released

AppManager is a GTK/Libadwaita desktop utility written in Vala that makes installing and uninstalling AppImages on the Linux desktop painless. It supports both SquashFS and DwarFS AppImage formats, features a seamless background auto-update process, and leverages zsync delta updates for efficient bandwidth usage. Double-click any .AppImage to open a macOS-style drag-and-drop window; just drag to install and AppManager will move the app, wire up desktop entries, and copy icons.

Since last week’s release, many suggestions and feature requests were implemented and bugs fixed. Here are some highlights:

  • The app now runs on any Linux (yes, that’s right), even ones as old as Debian Bookworm or Bullseye, and of course Ubuntu LTS. Big thanks to the AppImage community devs who made it possible
  • Added grid view in app list
  • GitHub token support to significantly raise the limit on update requests
  • and many more …

Hit your in-app update button or Get it on Github

Anton Isaiev reports

I’d like to introduce RustConn, a modern connection manager for Linux with a GTK4/Wayland-native interface. Manage SSH, RDP, VNC, SPICE, Telnet, and Zero Trust connections from a single application. All core protocols use embedded Rust implementations — no external dependencies required. Supports import from Remmina, Asbru-CM, SSH config, Ansible, Royal TS, and MobaXterm. Credentials are stored securely via KeePassXC, GNOME Keyring, Bitwarden CLI, or 1Password CLI. Available on Flathub, Snap, AppImage, and OBS (deb/rpm).

https://github.com/totoshko88/RustConn https://flathub.org/apps/io.github.totoshko88.RustConn

bjawebos reports

Version 0.2.1 of Filmbook was released this week. This app helps you see which film you have loaded into which camera. That means you can concentrate fully on taking photos. The view of your exposed films has been optimised and Nido has contributed a new icon. I was particularly pleased about that. You can install Filmbook via Flathub: https://flathub.org/de/apps/page.codeberg.bjawebos.Filmbook

Shell Extensions

Anton Isaiev says

I released two GNOME Shell extensions:

  1. Browser Switcher — one-click default browser switching from the GNOME Shell panel. Auto-detects installed browsers, zero configuration, fully async. Useful when you need separate browsers for work and personal SSO. https://extensions.gnome.org/extension/8836/browser-switcher/ https://github.com/totoshko88/browser-switcher
  2. gp-gnome — GlobalProtect VPN integration for GNOME Shell. System tray indicator with connect/disconnect, MFA support, gateway selection, real-time status monitoring, and HIP resubmission. Designed for the official Palo Alto Networks Linux CLI (PanGPLinux). https://extensions.gnome.org/extension/8899/gp-gnome/ https://github.com/totoshko88/gp-gnome

Miscellaneous

GNOME OS

The GNOME operating system, development and testing platform

Ada Magicat ❤️🧡🤍🩷💜 reports

Valentin also created an extension for patented codecs that cannot be shipped with the base system. With the extension enabled, the system video player can play formats that it couldn’t before (like mp4/h264), while Nautilus will show thumbnails for those files. It also enables hardware-accelerated screen recordings with gnome-shell.

Enable the feature by running:

updatectl enable codecs-extra --now

Ada Magicat ❤️🧡🤍🩷💜 reports

Valentin changed the way we handle system configuration in /etc, moving it to systemd-confext. This makes the configuration less fragile and easier to update - a huge improvement to the atomicity of GNOME OS and an important step towards our goal of a stable system everyone can use.

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

Olav Vitters

@bkor

GUADEC 2026 accommodation

One of the things that I appreciate at a GUADEC (if available) is a common accommodation. Loads of attendees appreciated the shared accommodation in Vilanova i la Geltrú, Spain (GUADEC 2006). For GUADEC 2026, Deepesha announced one recommended accommodation, a student residence. GUADEC 2026 is at the same place as GUADEC 2012, meaning: A Coruña, Spain. I didn’t go to the 2012 one, though I heard it also had a shared accommodation. For those wondering where to stay, I suggest the recommended one.

Asman Malika

@malika

Career Opportunities: What This Internship Is Teaching Me About the Future

 Before Outreachy, when I thought about career opportunities, I mostly thought about job openings, applications, and interviews. Opportunities felt like something you wait for, or hope to be selected for.

This internship has changed how I see that completely.

I’m learning that opportunities are often created through contribution, visibility, and community, not just applications.

Opportunities Look Different in Open Source

Working with GNOME has shown me that contributing to open source is not just about writing code, it’s about building a public track record. Every merge request, every review cycle, every improvement becomes part of a visible body of work.

Through my work on Papers: implementing manual signature features, fixing issues, contributing to Poppler codebase and now working on digital signatures, I’m not just completing tasks. I’m building real-world experience in a production codebase used by actual users.

That kind of experience creates opportunities that don’t always show up on job boards:

  • Collaborating with experienced maintainers
  • Learning large-project workflows
  • Becoming known within a technical community
  • Developing credibility through consistent contributions

Skills That Expand My Career Options

This internship is also expanding what I feel qualified to do. I’m gaining experience with:

  • Building new features
  • Large, existing codebases
  • Code review and iteration cycles
  • Debugging build failures and integration issues
  • Writing clearer documentation and commit messages
  • Communicating technical progress

These are skills that apply across many roles, not just one job title. They open doors to remote collaboration, open-source roles, and product-focused engineering work.

Career Is Bigger Than Employment

One mindset shift for me is that career is no longer just about “getting hired.” It’s also about impact and direction.

I now think more about:

  • What kind of software I want to help build
  • What communities I want to contribute to
  • How accessible and user-focused tools can be
  • How I can support future newcomers the way my GNOME mentors supported me

Open source makes career feel less like a ladder and more like a network.

Creating Opportunities for Others

Coming from a non-traditional path into tech, I’m especially aware of how powerful access and guidance can be. Programs like Outreachy don’t just create opportunities for individuals, they multiply opportunities through community.

As I grow, I want to contribute not only through code, but also through sharing knowledge, documenting processes, and encouraging others who feel unsure about entering open source.

Looking Ahead

I don’t have every step mapped out yet. But I now have something better: direction and momentum.

I want to continue contributing to open source, deepen my technical skills, and work on tools that people actually use. Outreachy and GNOME have shown me that opportunities often come from showing up consistently and contributing thoughtfully.

That’s the path I plan to keep following.

Jussi Pakkanen

@jpakkane

C and C++ dependencies, don't dream it, be it!

Bill Hoffman, the original creator of the CMake language, gave a presentation at CppCon. At approximately 49 minutes in he starts talking about future plans for dependency management. He says, and I now quote him directly, that "in this future I envision", one should be able to do something like the following (paraphrasing).

Your project has dependencies A, B and C. Typically you get them from "the system" or a package manager. But then you'd need to develop one of the deps as well. So it would be nice if you could somehow download, say, A, build it as part of your own project and, once you are done, switch back to the system one.

Well, Mr. Hoffman, do I have wonderful news for you! You don't need to treasure these sensual daydreams any more. This so-called "future" you "envision" is not only the present, but in fact ancient past. This method of dependency management has existed in Meson for so long I don't even remember when it got added. Something like over five years at least.

How would you use such a wild and an untamed thing?

Let's assume you have a Meson project that is using some dependency called bob. The current build is using it from the system (typically via pkg-config, but the exact method is irrelevant). In order to build the source natively, first you need to obtain it. Assuming it is available in WrapDB, all you need to do is run this command:

meson wrap install bob

If it is not, then you need to do some more work. You can even tell Meson to check out the project's Git repo and build against current trunk if you so prefer. See documentation for details.

Then you need to tell Meson to use the internal one. There is a global option to switch all dependencies to be local, but in this case we want only this dependency to be built and get the remaining ones from the system. Meson has a builtin option for exactly this:

meson configure builddir -Dforce_fallback_for=bob

Starting a build would now reconfigure the project to use the internal dependency. Once you are done and want to go back to using system deps, run this command:

meson configure builddir -Dforce_fallback_for=

This is all you need to do. That is the main advantage of competently designed tools. They rose tint your world and keep you safe from trouble and pain. Sometimes you can see the blue sky through the tears in your eyes.

Oh, just one more deadly sting

If you keep watching, the presenter first asks the audience if this is something they would like. Upon receiving a positive answer, he then follows up with this [again quoting directly]:

So you should all complain to the DOE [presumably US Department of Energy] for not funding the SBIR [presumably some sort of grant or tender] for this.

Shaming your end users into advocating an authoritarian/fascist government to give large sums of money in a tender that only one for-profit corporation can reasonably win is certainly a plan.

Instead of working on this kind of a muscle man you can alternatively do what we in the Meson project did: JFDI. The entire functionality was implemented by maybe 3 to 5 people, some working part time but most being volunteers. The total amount of work it took is probably a fraction of the clerical work needed to deal with all the red tape that comes with a DoE tender process.

In the interest of full disclosure

While writing this blog post I discovered a corner case bug in our current implementation. At the time of writing it is only seven hours old, and not particularly beautiful to behold as it has not been fixed yet. And, unfortunately, the only thing I've come to trust is that bugfixes take longer than you would want them to.

Christian Hergert

@hergertme

Mid-life transitions

The past few months have been heavy for many people in the United States, especially families navigating uncertainty about safety, stability, and belonging. My own mixed family has been working through some of those questions, and it has led us to make a significant change.

Over the course of last year, my request to relocate to France while remaining in my role moved up and down the management chain at Red Hat for months without resolution, ultimately ending in a denial. That process significantly delayed our plans despite providing clear evidence of the risks involved to our family. At the beginning of this year, my wife and I moved forward by applying for long-stay visitor visas for France, a status that does not include work authorization.

While we were at our in-person visa appointment in Seattle, a shooting involving CBP occurred just a few parking spaces from where we normally park for medical outpatient visits back in Portland. It was covered in the news internationally, and you may have read about it. Moments like that have a way of clarifying what matters and how urgently change can feel necessary.

Our visas were approved quickly, which we’re grateful for. We’ll be spending the next year in France, where my wife has other Tibetan family. I’m looking forward to immersing myself in the language and culture and to taking that responsibility seriously. Learning French in mid-life will be humbling, but I’m ready to give it my full focus.

This move also means a professional shift. For many years, I’ve dedicated a substantial portion of my time to maintaining and developing key components across the GNOME platform and its surrounding ecosystem. These projects are widely used, including in major Linux distributions and enterprise environments, and they depend on steady, ongoing care.

For many years, I’ve been putting in more than forty hours each week maintaining and advancing this stack. That level of unpaid or ad-hoc effort isn’t something I can sustain, and my direct involvement going forward will be very limited. Given how widely this software is used in commercial and enterprise environments, long-term stewardship really needs to be backed by funded, dedicated work rather than spare-time contributions.

If you or your organization depend on this software, now is a good time to get involved. Perhaps by contributing engineering time, supporting other maintainers, or helping fund long-term sustainability.

The following is a short list of important modules where I’m roughly the sole active maintainer:

  • GtkSourceView – foundation for editors across the GTK eco-system
  • Text Editor – GNOME’s core text editor
  • Ptyxis – Default terminal on Fedora, Debian, Ubuntu, RHEL/CentOS/Alma/Rocky and others
  • libspelling – Necessary bridge between GTK and enchant2 for spellcheck
  • Sysprof – Whole-systems profiler integrating Linux perf, Mesa, GTK, Pango, GLib, WebKit, Mutter, and other statistics collectors
  • Builder – GNOME’s flagship IDE
  • template-glib – Templating and small language runtime for a scriptable GObject Introspection syntax
  • jsonrpc-glib – Provides JSONRPC communication with language servers
  • libpeas – Plugin library providing C/C++/Rust, Lua, Python, and JavaScript integration
  • libdex – Futures, Fibers, and io_uring integration
  • GOM – Data object binding between GObject and SQLite
  • Manuals – Documentation reader for our development platform
  • Foundry – Basically Builder as a command-line program and shared library, used by Manuals and a future Builder (hopefully)
  • d-spy – Introspect D-Bus connections
  • libpanel – Provides IDE widgetry for complex GTK/libadwaita applications
  • libmks – Qemu Mouse-Keyboard-Screen implementation with DMA-BUF integration for GTK

There are, of course, many other modules I contribute to, but these are the ones most in need of attention. I’m committed to making the transition as smooth as possible and am happy to help onboard new contributors or teams who want to step up.

My next chapter is about focusing on family and building stability in our lives.

Lucas Baudin

@lbaudin

Being a Mentor for Outreachy

I first learned about Outreachy reading Planet GNOME 10 (or 15?) years ago. At the time, I did not know much about free software and I was puzzled by this initiative, as it mixed politics and software in a way I was not used to.

Now I am a mentor for the December 2025 Outreachy cohort for Papers (aka GNOME Document Viewer), so I figured I would write a blog post to explain what Outreachy is and perpetuate the tradition! Furthermore, I thought it might be interesting to describe my experience as a mentor so far.

Papers and Outreachy logo

What is Outreachy?

Quoting the Outreachy website:

Outreachy provides [paid] internships to anyone from any background who faces underrepresentation, systemic bias, or discrimination in the technical industry where they are living.

These internships are paid and carried out in open-source projects. By way of anecdote, the program was initially organized by the GNOME community around 2006-2009 to encourage the participation of women in GNOME, and was progressively expanded to other projects later on. It was formally renamed Outreachy in 2015 and is now managed independently of GNOME, which participates simply as one open-source project among others.

Compared to the well-funded Summer of Code program by Google, Outreachy has a much more precarious financial situation, especially in recent years. Unsurprisingly, the evolution of politics in the US and elsewhere over the last few years does not help.

Therefore, most internships are nowadays funded directly by open-source projects (in our case the GNOME Foundation, you can donate and become a Friend of GNOME), and Outreachy still has to finance (at least) its staff (donations here).

Outreachy as a Mentor

So, I am glad that the GNOME Foundation was able to fund an Outreachy internship for the December 2025 cohort. As I am one of the Papers maintainers, I decided to volunteer to mentor an intern and came up with a project on document signatures. This was one of the first issues filed when Papers was forked from Evince, and I don't think I need to elaborate on how useful PDF signing is nowadays. Furthermore, Tobias had already made designs for this feature, so I knew that if we actually had an intern, we would know precisely what needed to be implemented.¹

Once the GNOME Internship Committee for Outreachy approved the project, it was submitted on the Outreachy website, and applicants were invited to start making contributions during the month of October, so that projects could then select interns (and applicants could decide whether they wanted to work for three months in this community). Applicants had already been pre-screened by Outreachy (303 applications were approved out of the 3461 received). We had questions and contributions from around half a dozen applicants, and that was already an enriching experience for me. For instance, it was interesting to see how newcomers to Papers could be puzzled by our documentation.

At this point, a crucial task was labeling some issues as "Newcomers". It is much harder than it looks (because sometimes things that seem simple actually aren't), and it is necessary to make sure that issues are not ambiguous, as applicants typically do not dare to ask questions (even, of course, when it is specified that questions are welcome!). Communication is definitely one of the hardest things.

In the end, I had to grade the applicants (another hard thing to do), and the Internship Committee selected Malika Asman, who agreed to participate as an intern! Malika has written about her experience so far in several posts on her blog.

¹ Outreachy internships do not have to be centered around programming; however, that is what I could offer guidance for.

Allan Day

@aday

GNOME Foundation Update, 2026-02-06

Welcome to another GNOME Foundation weekly update! FOSDEM happened last week, and we had a lot of activity around the conference in Brussels. We are also extremely busy getting ready for our upcoming audit, so there’s lots to talk about. Let’s get started.

FOSDEM

FOSDEM happened in Brussels, Belgium, last weekend, from 31st January to 1st February. There were lots of GNOME community members in attendance, and plenty of activities around the event, including talks and several hackfests. The Foundation was busy with our presence at the conference, plus our own fringe events.

Board hackfest

Seven of our nine directors met for an afternoon and a morning prior to FOSDEM proper. Face to face hackfests are something that the Board has done at various times previously, and have always been a very effective way to move forward on big ticket items. This event was no exception, and I was really happy that we were able to make it happen.

During the event we took the time to review the Foundation’s financials, and to make some detailed plans in a number of key areas. It’s exciting to see some of the initiatives that we’ve been talking about starting to take more shape, and I’m looking forward to sharing more details soon.

Advisory Board meeting

The afternoon of Friday 30th January was occupied with a GNOME Foundation Advisory Board meeting. This is a regular occurrence on the day before FOSDEM, and is an important opportunity for the GNOME Foundation Board to meet with partner organizations and supporters.

Turnout for the meeting was excellent, with Canonical, Google, Red Hat, Endless and PostmarketOS all in attendance. I gave a presentation on how the Foundation is currently performing, which seemed to be well received. We then had presentations and discussion amongst Advisory Board members.

I thought that the discussion was useful, and we identified a number of areas of shared interest. One of these was around how partners (companies, projects) can get clear points of contact for technical decision making in GNOME and beyond. Another positive theme was a shared interest in accessibility work, which was great to see.

We’re hoping to facilitate further conversations on these topics in future, and will be holding our next Advisory Board meeting in the summer prior to GUADEC. If there are any organizations out there that would like to join the Advisory Board, we would love to hear from you.

Conference stand

GNOME had a stand during both FOSDEM days, which was really busy. I worked the stand on the Saturday and had great conversations with people who came to say hi. We also sold a lot of t-shirts and hats!

I’d like to give a huge thank you to Maria Majadas who organized and ran our stand this year. It is incredibly exhausting work and we are so lucky to have Maria in our community. Please say thank you to her!

We also had plenty of other notable volunteers, including Julian Sparber, Ignacy Kuchciński, and Sri Ramkrishna. Richard Littauer, our previous Interim Executive Director, even took a shift on the stand.

Social

On the Saturday night there was a GNOME social event, hosted at a local restaurant. As always it was fantastic to get together with fellow contributors, and we had a good turnout with 40-50 people there.

Audit preparation

Moving on from FOSDEM, there has been plenty of other activity at the Foundation in recent weeks. The first of these is preparation for our upcoming audit. I have written a fair bit about this in these previous updates. The audit is a routine exercise, but this is also our first, so we are learning a lot.

The deadline for us to provide our documentation submission to the auditors is next Tuesday, so everyone on the finance side of the operation has been really busy getting all that ready. Huge thanks to everyone for their extra effort here.

GUADEC & LAS planning

Conference planning has been another theme in the past few weeks. For GUADEC, accommodation options have been announced, artwork has been produced, and local information is going up on the website.

Linux App Summit, which we co-organise with KDE, has been a bit delayed this year, but we have a venue now and are in the process of finalizing the budget. Announcements about the dates and location will hopefully be made quite soon.

Google verification

A relatively small task, but a good one to highlight: this week we facilitated (i.e. paid for) the assessment process for GNOME’s integration with Google services. This is an annual process we have to go through in order to keep Evolution Data Server working with Google.

Infrastructure optimization

Finally, Bart, along with Andrea, has been doing some work to optimize the resource usage of GNOME infrastructure. If you are using GNOME services you might have noticed some subtle changes as a result of this, like Anubis popping up more frequently.

That’s it for this week. Thanks for reading; I’ll see you next week!

Matthias Clasen

@mclasen

GTK hackfest, 2026 edition

As is by now a tradition, a few of the GTK developers got together in the days before FOSDEM to make plans and work on your favorite toolkit.

Code

We released gdk-pixbuf 2.44.5 with glycin-based XPM and XBM loaders, rounding out the glycin transition. Note that the XPM/XBM support will only appear in glycin 2.1. Another reminder is that gdk_pixbuf_new_from_xpm_data() was deprecated in gdk-pixbuf 2.44 and should not be used any more, as it does not allow for error handling in case the XPM loader is not available; if you still have XPM assets, please convert them to PNG, and use GResource to embed them into your application if you don’t want to install them separately.
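
If you are converting such assets, the replacement path is short. A minimal sketch, assuming the PNG has been compiled into your application’s GResource bundle under a made-up /org/example/app prefix:

#include <gdk-pixbuf/gdk-pixbuf.h>

/* icon.png must be listed in the application's .gresource.xml;
 * the /org/example/app prefix here is only an example */
static GdkPixbuf *
load_icon (GError **error)
{
  return gdk_pixbuf_new_from_resource ("/org/example/app/icon.png", error);
}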

We also released GTK 4.21.5, in time for the GNOME beta release. The highlights in this snapshot are still more SVG work (including support for SVG filters in CSS) and lots of GSK renderer refactoring. We decided to defer the session saving support, since early adopters found some problems with our APIs; once the main development branch opens for GTK 4.24, we will work on a new iteration and ask for more feedback.

Discussions

One topic that we talked about is unstable APIs, but no clear conclusion was reached. Keeping experimental APIs in the same shared object was seen as problematic (not just because of ABI checkers).  Making a separate shared library (and a separate namespace, for bindings) might not be easy.

Still on the topic of APIs, we decided that we want to bump our C runtime requirement to C11 in the next cycle, to take advantage of standard atomics, integer types and booleans. At the moment, C11 is a soft requirement through GLib. We also talked about GLib’s autoptrs, and were saddened by the fact that we still can’t use them without dropping MSVC. The defer proposal for C2y would not really work with how we use automatic cleanup for types, either, so we can’t count on the C standard to save us.
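
As a quick illustration of what the C11 bump buys (this is not GTK code, just a sketch of the standard facilities mentioned above):

#include <stdatomic.h> /* standard atomics instead of compiler-specific builtins */
#include <stdbool.h>   /* bool, true, false */
#include <stdint.h>    /* fixed-width integer types */

static _Atomic uint32_t counter;

static bool
first_call (void)
{
  /* returns true only for the first caller */
  return atomic_fetch_add (&counter, 1) == 0;
}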

Mechanics

We collected some ideas for improving project maintenance. One idea that came up was to look at automating issue tagging, so it is easier for people to pay closer attention to a subset of all open issues and MRs. Having more accurate labels on merge requests would allow people to get better notifications and avoid watching the whole project.

We also talked about the state of GTK3 and agreed that we want to limit changes in this very mature code base to crash and build fixes: the chances of introducing regressions in code that has long since been frozen are too high.

Accessibility

On the accessibility side, we are somewhat worried about the state of AccessKit. The code upstream is maintained, but we haven’t seen movement in the GTK implementation. We still default to the AT-SPI backend on Linux, but AccessKit is used on Windows and macOS (and possibly Android in the future); it would be nice to have consumers of the accessibility stack looking at the code and issues.

On the AT-SPI side we are still missing proper feature negotiation in the protocol; interfaces are now versioned on D-Bus, but there’s no mechanism to negotiate the supported set of roles or events between toolkits, compositors, and assistive technologies, which makes running newer applications on older OS versions harder.

We discussed the problem of the ARIA specification being mostly “stringly” typed in the attributes values, and how it impacts our more strongly typed API (especially with bindings); we don’t have a good generic solution, so we will have to figure out possible breaks or deprecations on a case by case basis.

Finally, we talked about a request by the LibreOffice developers on providing a wrapper for the AT-SPI collection interface; this API is meant to be used as a way to sidestep the array-based design, and perform queries on the accessible objects tree. It can be used to speed up iterating through large and sparse trees, like documents or spreadsheets. It’s also very AT-SPI specific, which makes it hard to write in a platform-neutral way. It should be possible to add it as a platform-specific API, like we did for GtkAtSpiSocket.

Carlos is working on landing the pointer query API in Mutter, which would address the last remnant of X11 use inside Orca.

Outlook

Some of the plans and ideas that we discussed for the next cycle include:

  • Bring back the deferred session saving
  • Add some way for applications to support the AT-SPI collection interface
  • Close some API gaps in GtkDropDown (8003 and 8004)
  • Bring some general purpose APIs from libadwaita back to GTK

Until next year, ❤

Cassidy James Blaede

@cassidyjames

ROOST at FOSDEM 2026

A few months ago I joined ROOST (Robust Open Online Safety Tools) to build the open source community that will help create, distribute, and maintain common tools and building blocks for online trust and safety. One of the first events I wanted to make sure we attended in order to build that community was of course FOSDEM, the massive annual gathering of open source folks in Brussels, Belgium.

Luckily for us, the timing aligned nicely with the v1 release of our first major online safety tool, Osprey, as well as its adoption by Bluesky and the Matrix.org Foundation. I wrote and submitted a talk for the FOSDEM crowd and the decentralized communications track, which was accepted. Our COO Anne Bertucio and I flew out to Brussels to meet up with folks, make connections, and learn how our open source tools could best serve open protocols and platforms.

Brunch with the Christchurch Call Foundation

On Saturday, ROOST co-hosted a brunch with the Christchurch Call Foundation, where we invited folks to discuss the intersection of open source and online safety. The event was relatively small, but we engaged in meaningful conversations and came away with several recurring themes. Non-exhaustively, some areas attendees were interested in: novel classifiers for unique challenges like audio recordings and pixel art; how to ethically source and train classifiers; and ways to work better together across platforms and protocols.

Personally I enjoyed meeting folks from Mastodon, GitHub, ATproto, IFTAS, and more in person for the first time, and I look forward to continuing several conversations that were started over coffee and fruit.

Talk

Our Sunday morning talk “Stop Reinventing in Isolation” (which you can watch on YouTube or at fosdem.org) filled the room and was really well-received.


Cassidy and Anne giving a talk. | Photos from @matrix@mastodon.matrix.org

In it we tackled three major topics: a crash course on what is “trust and safety”; why the field needs an open source approach; and then a bit about Osprey, our self-hostable automated rules engine and investigation tool that started as an internal tool built at Discord.

Q&A

We had a few minutes for Q&A after the talk, and the folks in the room spurred some great discussions. If there’s something you’d like to ask that isn’t covered by the talk or this Q&A, feel free to start a discussion! Also note that this gets a bit nerdy; if you’re not interested in the specifics of deploying Osprey, feel free to skip ahead to the Stand section.


When using Osprey with the decentralized Matrix protocol, would it be a policy server implementation?

Yes, in the Matrix model that’s the natural place to handle it. Chat servers are designed to check with the policy server before sending room events to clients, so it’s precisely where you’d want to be able to run automated rules. The Matrix.org Foundation is actively investigating how exactly Osprey can be used with this setup, and already has it deployed in its staging environment for testing.

Does it make sense to use Osprey for smaller platforms with fewer events than something like Matrix, Bluesky, or Discord?

This one’s a bit harder to answer, because Osprey is often the sort of tool you don’t “need” until you suddenly and urgently do. That said, it is designed as an in-depth investigation tool, and if that’s not something needed on your platform yet due to the types and volume of events you handle, it could be overkill. You might be better off starting with a moderation/review dashboard like Coop, which we expect to be able to release as v0 in the coming weeks. As your platform scales, you could then explore bringing Osprey in as a complementary tool to handle more automation and deeper investigation.

Does Osprey support account-level fraud detection?

Osprey itself is pretty agnostic to the types of events and metadata it handles; it’s more like a piece of plumbing that helps you connect a firehose of events to one end, write rules and expose those events for investigation in the middle, and then connect outgoing actions on the other end. So while it’s been designed for trust and safety uses, we’ve heard interest from platforms using it in a fraud prevention context as well.

What are the hosting requirements of Osprey, and what do deployments look like?

While you can spin Osprey up on a laptop for testing and development, it can be a bit beefy. Osprey is made up of four main components: worker, UI, database, and Druid as the analytics database. The worker and UI have low resource requirements, your database (e.g. Postgres) could have moderate requirements, but then Druid is what will have the highest requirements. The requirements will also scale with your total throughput of events being processed, as well as the TTLs you keep in Druid. As for deployments, Discord, Bluesky, and the Matrix.org Foundation have each integrated Osprey into their Kubernetes setups as the components are fairly standard Docker images. Osprey also comes with an optional coordinator, an action distribution and load-balancing service that can aid with horizontal scaling.

Stand

This year we were unable to secure a stand (there were already nearly 100 stands in just 5 buildings!), but our friends at Matrix graciously hosted us for several hours at their stand near the decentralized communications track room so we could follow up with folks after our talk. We blew through our shiny sticker supply as well as our 3D printed ROOST keychains (which I printed myself at home!) in just one afternoon. We’ll have to bring more to future FOSDEMs!

Stickers

When I handed people one of our hexagon stickers the reaction was usually some form of, “ooh, shiny!” but my favorite was when someone essentially said, “Oh, you all actually know open source!” That made me proud, at least. :)

Interesting Talks

Lastly, I always like to shout out interesting talks I attended or caught on video later so others can enjoy them on their own time. I recommend checking out:

This Week in GNOME

@thisweek

#235 Integrating Fonts

Update on what happened across the GNOME project in the week from January 30 to February 06.

GNOME Core Apps and Libraries

GTK

Cross-platform widget toolkit for creating graphical user interfaces.

Emmanuele Bassi says

The GTK developers published the report for the 2026 GTK hackfest on their development blog. Lots of work and plans for the next 12 months:

  • session save/restore
  • toolchain requirements
  • accessibility
  • project maintenance

and more!

Glycin

Sandboxed and extendable image loading and editing.

Sophie (she/her) reports

Glycin 2.1.beta has been released. Starting with this version, the JPEG 2000 image format is supported by default. This was made possible by a new JPEG 2000 implementation that is completely written in safe Rust.

While this image format isn’t in widespread use for standalone images, many PDFs contain JPEG 2000 images, since PDF 1.5 and PDF/A-2 support embedding them. Therefore, images extracted from PDFs frequently have the JPEG 2000 format.

GNOME Circle Apps and Libraries

Resources

Keep an eye on system resources

nokyan says

This week marks the release of Resources 1.10, with support for new hardware and software, and improvements all around! Here are some highlights:

  • Added support for AMD NPUs using the amdxdna driver
  • Improved accessibility for screen reader users and keyboard users
  • Vastly improved app detection
  • Significantly cut down CPU usage
  • Searching for multiple process names at once is now possible using the “|” operator in the search field

In-depth release notes can be found on GitHub.

Resources is available on Flathub.

gtk-rs

Safe bindings to the Rust language for fundamental libraries from the GNOME stack.

Julian 🍃 announces

I’ve added another chapter for the gtk4-rs book. It describes how to use gettext to make your app available in other languages: https://gtk-rs.org/gtk4-rs/stable/latest/book/i18n.html

Third Party Projects

Ronnie Nissan announces

This week I released Sitra, an app to install and manage fonts from Google Fonts. It also helps devs integrate fonts into their projects using the Fontsource npm packages and CDN.

The app is a replacement for the Font Downloader app, which has been abandoned for a while.

Sitra can be downloaded from Flathub

Arnis (kem-a) announces

AppManager is a GTK/Libadwaita desktop utility written in Vala that makes installing and uninstalling AppImages on the Linux desktop painless. It supports both SquashFS and DwarFS AppImage formats, features a seamless background auto-update process, and leverages zsync delta updates for efficient bandwidth usage. Double-click any .AppImage to open a macOS-style drag-and-drop window; just drag to install, and AppManager will move the app, wire up desktop entries, and copy icons.

And of course, it’s available as an AppImage. Get it on GitHub

Parabolic

Download web video and audio.

Nick reports

Parabolic V2026.2.0 is here!

This release contains a complete overhaul of the downloading engine as it was rewritten from C++ to C#. This will provide us with more stable performance and faster iteration of highly requested features (see the long list below!!). The UIs for both Windows and Linux were also ported to C# and got a face lift, providing a smoother and more beautiful downloading experience.

Besides the rewrite, this release also contains many new features (including quality and subtitle options for playlists - finally!) and plenty of bug fixes with an updated yt-dlp.

Here’s the full changelog:

  • Parabolic has been rewritten in C# from C++
  • Added arm64 support for Windows
  • Added support for playlist quality options
  • Added support for playlist subtitle options
  • Added support for reversing the download order of a playlist
  • Added support for remembering the previous Download Immediately selection in the add download dialog
  • Added support for showing yt-dlp’s sleeping pauses within download rows
  • Added support for enabling nightly yt-dlp updates within Parabolic
  • Redesigned both platform application designs for a faster and smoother download experience
  • Removed documentation pages as Parabolic shows in-app documentation when needed
  • Fixed an issue where translator-credits were not properly displayed
  • Fixed an issue where Parabolic crashed when adding large amounts of downloads from a playlist
  • Fixed an issue where Parabolic crashed when validating certain URLs
  • Fixed an issue where Parabolic refused to start due to keyring errors
  • Fixed an issue where Parabolic refused to start due to VC errors
  • Fixed an issue where Parabolic refused to start due to version errors
  • Fixed an issue where opening the about dialog would freeze Parabolic for a few seconds
  • Updated bundled yt-dlp

Shell Extensions

subz69 reports

I just released Pigeon Email Notifier, a new GNOME Shell extension for Gmail and Microsoft email notifications using GNOME Online Accounts. Supports priority-only mode, persistent and sound notifications.

Miscellaneous

Arjan reports

PyGObject 3.55.3 has been released. It’s the third development release (it’s not available on PyPI) in the current GNOME release cycle.

The main achievements for this development cycle, leading up to GNOME 50, are:

  • Support for do_dispose and do_constructed methods in Python classes. do_constructed is called after an object has been constructed (as a post-init method), and do_dispose is called when a GObject is disposed.
  • Removal of duplicate marshalling code for fields, properties, constants, and signal closures.
  • Removal of old code, most notably pygtkcompat and wrappers for GLib.OptionContext/OptionGroup.
  • Under the hood, toggle references have been replaced by normal references, and PyGObject sinks “floating” objects by default.

Notable changes for this release include:

  • Type annotations for GLib and GObject overrides. This makes it easier for pygobject-stubs to generate type hints.
  • Updates to the asyncio support.

A special thanks to Jamie Gravendeel, Laura Kramolis, and K.G. Hammarlund for test-driving the unstable versions.

All changes can be found in the Changelog.

This release can be downloaded from GitLab and the GNOME download server. If you use PyGObject in your project, please give it a spin and see if everything works as expected.

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

The Hobby Lives On

Maintaining an open source project in your free time is incredibly rewarding. A large project full of interesting challenges, limited only by your time and willingness to learn. Years of work add up to something you’ve grown proud of. Who would’ve thought an old project on its last legs could turn into something beautiful?

The focus is intense. So many people using the project, always new things to learn and improve. Days fly by when time allows for it. That impossible feature sitting in the backlog for years, finally done. That slow part of the application, much faster now. This flow state is pretty cool, might as well tackle a few more issues while it lasts.

Then comes the day. The biggest release yet is out the door. More tasks remain on the list, but it’s just too much. That release took so much effort, and the years are adding up. You can’t keep going like this. You wonder, is this the beginning of the end? Will you finally burn out, like so many before you?

A smaller project catches your eye. Perhaps it would be fun to work on something else again. Maybe it doesn’t have to be as intense? Looks like this project uses a niche programming language. Is it finally time to learn another one? It’s an unfamiliar project, but it’s pretty fun. It tickles the right spots. All the previous knowledge helps.

You work on the smaller project for a while. It goes well. That larger project you spent years on lingers. So much was accomplished. It’s not done yet, but software is never done. The other day, someone mentioned this interesting feature they really wanted. Maybe it wouldn’t hurt to look into it? It’s been a while since the last feature release. Maybe the next one doesn’t have to be as intense? It’s pretty fun to work on other projects sometimes, too.

The hobby lives on. It’s what you love doing, after all.

Lucas Baudin

@lbaudin

Drawing and Writing on PDFs in Papers (and new blog)

Nearly 10 years ago, I first looked into this for Evince but quickly gave up. A year and a half ago, I tried again, this time in Papers. After several merge requests in poppler and in Papers, ink and free text annotation support has just landed in the Papers repository!

Therefore, it is now possible to draw on documents and add text, for instance to fill forms. Here is a screenshot with the different tools:

Papers with the new drawing tools

This is the result of the joint work of several people who designed, developed, and tested all the little details. It required adding support for ink and free text annotations in the GLib bindings of poppler, then adding support for highlight ink annotations there. Several things then got in the way of adding those in Papers; among other things, it became clear that an undo/redo mechanism was necessary, and annotation management was entangled with the main view widget. It was also an opportunity to improve document forms, which are now more accessible.

This can be tested directly from the GNOME Nightly Flatpak repository, and new issue reports are welcome.

Also, this is a new blog and I never quite introduced myself: I actually started developing with GTK back on GTK 2, at a time when GTK 3 was looming. Then I took a long break and delved into desktop development again two years ago. The features that just got merged were, in fact, my first contributions to Papers. They are also the ones that took the longest to be merged! I became one of the Papers maintainers last March, joining Pablo (who welcomed me into this community and has since stepped back from maintenance), Markus, and Qiu.

Next time, a post about our participation in Outreachy with Malika's internship!

Asman Malika

@malika

Mid-Point Project Progress: What I’ve Learned So Far

Screenshots: the manual signature implementation (dark mode), and the view when there is no added signature (light mode).

Reaching the midpoint of this project feels like a good moment to pause, not because the work is slowing down, but because I finally have enough context to see the bigger picture.

At the start, everything felt new: the codebase, the community, the workflow, and even the way problems are framed in open source. Now, halfway through, things are starting to connect.

Where I Started

When I began working on Papers, my main focus was understanding the codebase and how contributions actually happen in a real open-source project. Reading unfamiliar code, following discussions, and figuring out where my work fit into the larger system was challenging.

Early on, progress felt slow. Tasks that seemed small took longer than expected, mostly because I was learning how the project works, not just what to code. But that foundation has been critical.

Photo: Build failure I encountered during development

What I’ve Accomplished So Far

At this midpoint, I’m much more comfortable navigating the codebase and understanding the project’s architecture. I’ve worked on the manual signature feature and related fixes, which required carefully reading existing implementations, asking questions, and iterating based on feedback. I’m now working on the digital signature implementation, which is one of the most complex parts of the project and builds directly on the foundation laid by the earlier work.

Beyond the technical work, I’ve learned how collaboration really functions in open source:

  • How to communicate progress clearly
  • How to receive and apply feedback
  • How to break down problems instead of rushing to solutions

These skills have been just as important as writing code.

Challenges Along the Way

One of the biggest challenges has been balancing confidence and humility, knowing when to try things independently and when to ask for help. I’ve also learned that progress in open source isn’t always linear. Some days are spent coding, others reading, debugging, or revisiting decisions.

Another challenge has been shifting my mindset from “just making it work” to thinking about maintainability, users, and future contributors. That shift takes time, but it’s starting to stick.

What’s Changed Since the Beginning

The biggest change is how I approach problems.

I now think more about who will use the feature, who might read this code later, and how my changes fit into the overall project. Thinking about the audience, both users of Papers and fellow contributors, has influenced how I write code, documentation, and even this blog.

I’m also more confident participating in discussions and expressing uncertainty when I don’t fully understand something. That confidence comes from realizing that learning in public is part of the process.

Looking Ahead

The second half of this project feels more focused. With the groundwork laid, I can move faster and contribute more meaningfully. My goal is to continue improving the quality of my contributions, take on more complex tasks, and deepen my understanding of the project.

Most importantly, I want to keep learning about open source, about collaboration, and about myself as a developer.

Final Thoughts

This midpoint has reminded me that growth isn’t always visible day to day, but it becomes clear when you stop and reflect. I’m grateful for the support, feedback, and patience from the GNOME community, especially my mentor Lucas Baudin. And I’m so excited to see how the rest of the project unfolds.

AI predictions for 2026

It’s a crazy time to be part of the tech world. I’m happy to be sat on the fringes here, but I want to try and capture a bit of the madness, so in a few years we can look back on this blogpost and think “Oh yes, shit was wild in 2026”.

(insert some AI slop image here of a raccoon driving a racing car or something)

I have read the blog of Geoffrey Huntley for about 5 years since he famously right-clicked all the NFTs. Smart & interesting guy. I’ve also known the name Steve Yegge for a while, he has done enough notable things to get the honour of an entry in Wikipedia. Recently they’ve both written a lot about generating code with LLMs. I mean, I hope in 2026 we’ve all had some fun feeding freeform text and code into LLMs and playing with the results, they are a fascinating tool. But these two dudes are going into what looks like a sort of AI psychosis, where you feed so many LLMs into each other that you can see into the future, and in the process give most of your money to Anthropic.

It’s worth reading some of their articles if you haven’t, there are interesting ideas in there, but I always pick up some bad energy. They’re big on the hook that, if you don’t study their techniques now, you’ll be out of a job by summer 2026. (Mark Zuckerborg promised this would happen by summer 2025, but somehow I still have to show up for work five days every week). The more I hear this, the more it feels like a sort of alpha-male flex, except online and in the context of the software industry. The alpha tech-bro is here, and he will Vibe Code the fuck out of you. The strong will reign, and the weak will wither. Is that how these guys see the world? Is that the only thing they think we can do with these here computers, is compete with each other in Silicon Valley’s Hunger Games?

I felt a bit dizzy when I saw Geoffrey’s recent post about how he was now funded by cryptocurrency gamblers (“two AI researchers are now funded by Solana”) who are betting on his project and gifting him the fees. I didn’t manage to understand what the gamblers would win. It seemed for a second like an interesting way to fund open research, although “Patreon but it’s also a casino” is definitely a turn for the weird. Steve Yegge jumped on the bandwagon the same week (“BAGS and the Creator Economy”) and, without breaking any laws, gave us the faintest hint that something big is happening over there.

Well…

You’ll be surprised to know that both of them bailed on it within a week. I’m not sure why — I suspect maybe the gamblers got too annoying to deal with — but it seems some people lost some money. Although that’s really the only possible outcome from gambling. I’m sure the casino owners did OK out of it. Maybe it’s still wise to be wary of people who message you out of the blue wanting to sell you cryptocurrency.

The excellent David Gerard had a write up immediately on Pivot To AI: “Steve Yegge’s Gas Town: Vibe coding goes crypto scam”. (David is not a crypto scammer and has a good old fashioned Patreon where you can support his journalism). He talks about addiction to AI, which I’m sure you know is a real thing.

Addictive software was perfected back in the 2010s by social media giants. The same people who had been iterating on gambling machines for decades moved to California and gifted us infinite scroll. OpenAI and Anthropic are based in San Francisco. There’s something inherently addictive about a machine that takes your input, waits a second or two, and gives you back something that’s either interesting or not. Next time you use ChatGPT, look at how the interface leans into that!

(Pivot To AI also have a great writeup of this: “Generative AI runs on gambling addiction — just one more prompt, bro!”)

So, here we are in January 2026. There’s something very special about this post “Stevey’s Birthday Blog”. Happy birthday, Steve, and I’m glad you’re having fun. That said, I do wonder if we’ll look back in years to come on this post as something of an inflection point in the AI bubble.

All through December I had weird sleeping patterns while I was building Gas Town. I’d work late at night, and then have to take deep naps in the middle of the day. I’d just be working along and boom, I’d drop. I have a pillow and blanket on the floor next to my workstation. I’ll just dive in and be knocked out for 90 minutes, once or often twice a day. At lunch, they surprised me by telling me that vibe coding at scale has messed up their sleep. They get blasted by the nap-strike almost daily, and are looking into installing nap pods in their shared workspace.

Being addicted to something such that it fucks with your sleeping patterns isn’t a new invention. Ask around the punks in your local area. Humans can do amazing things. That story starts way before computers were invented. Scientists in the 16th century were absolute nutters who would like… drink mercury in the name of discovery. Isaac Newton came up with his theory of optics by skewering himself in the eye. (If you like science history, have a read of Neal Stephenson’s Baroque Cycle 🙂) Coding is fun and making computers do cool stuff can be very addictive. That story starts long before 2026 as well. Have you heard of the demoscene?

Part of what makes Geoffrey Huntley and Steve Yegge’s writing compelling is that they are telling very interesting stories. They are leaning on existing cultural work to do that, of course. Every time I think about Geoffrey’s 5-line bash loop that feeds an LLM’s output back into its input, the name reminds me of my favourite TV show when I was 12.

Ralph Wiggum with his head glued to his shoulder. "Miss Hoover? I glued my head to my shoulder."

Which is certainly better than the “human centipede” metaphor I might have gone with. I wasn’t built for this stuff.

The Gas Town blog posts are similarly filled with steampunk metaphors and Steve Yegge’s blog posts are interspersed with generated images that, at first glance, look really cool. “Gas Town” looks like a point and click adventure, at first glance. In fact it’s a CLI that gives kooky names to otherwise dry concepts,… but look at the pictures! You can imagine gold coins spewing out of a factory into its moat while you use it.

All the AI images in his posts look really cool at first glance. The beauty of real art is often in the details, so let’s take a look.

What is that tower on the right? There’s an owl wearing goggles about to land on a tower… which is also wearing goggles?

What’s that tiny train on the left, with indistinct creatures about the size of a fox’s fist? I don’t know who on earth is on that bridge on the right, some horrific chimera of weasel and badger. The panda is stoically ignoring the horrors of his creation like a good industrialist.

What is the time on the clock tower? Where is the other half of the fox? Is the clock powered by …. oh no.

Gas Town here is a huge factory with 37 chimneys all emitting good old sulphur and carbon dioxide, as God intended. But one question: if you had a factory that could produce large quantities of gold nuggets, would you store them on the outside?

Good engineering involves knowing when to look into the details, and when not to. Translating English to code with an LLM is fun and you can get some interesting results. But if you never look at the details, somewhere in your code is a horrific weasel badger chimera, a clock with crooked hands telling a time that doesn’t exist, and half a fox. Your program could make money… or it could spew gold coins all around town where everyone can grab them.

So… my AI predictions for 2026. Let’s not worry too much about code. People and communities and friendships are the thing.

The human world is 8 billion people. Many of us make a modest living growing and selling vegetables or fixing cars or teaching children to read and write. The tech industry is a big bubble that’s about to burst. Computers aren’t going anywhere, and our open source communities and foundations aren’t going anywhere. People and communities and friendships are the main thing. Helping out in small ways with some of the bad shit going on in the world. You don’t have to solve everything. Just one small step to help someone is more than many people do.

Pay attention to what you’re doing. Take care of the details. Do your best to get a good night’s sleep.

AI in 2026 is going to go about like this:

Christian Schaller

@cschalle

Can AI help ‘fix’ the patent system?

So one thing I think anyone involved with software development for the last few decades can see is the problem of the “forest of bogus patents”. I have recently been trying to use AI to look at patents in various ways. One idea I had was: could AI help improve the quality of patents and free us from obvious ones?

Let’s start with the justification for patents existing at all. The most common argument for the patent system I hear is this one: “Patents require public disclosure of inventions in exchange for protection. Without patents, inventors would keep innovations as trade secrets, slowing overall technological progress.” This reasoning makes sense to me, but it is also screamingly obvious that for it to hold true, you need to ensure the patents granted are genuinely inventions that would otherwise stay hidden as trade secrets. If you allow patents on things that are obvious to someone skilled in the art, you are not enhancing technological progress, you are hampering it, because the next person along will be blocked from doing the obvious thing.

So based on this justification, the question becomes: does, for example, the US Patent Office do a good job of filtering out obvious patents? I believe the answer is “no, they don’t”. Having worked in software for many decades now, it is very clear to me that the patent office does very little to avoid patents getting approved for obvious things. There are many reasons why that happens, I think. First of all, if you are an expert in a field, you would likely not be working as a case worker in the patent office; no disrespect to the people working there, but if you are a really good software engineer, for instance, there are much better-paying options elsewhere, and the same goes for medical researchers. So we have people with a limited understanding of the field of the patent trying to evaluate whether the patent is obvious or not.

Secondly, the patent office is not really incentivized to deny patents, as it is a self-funded entity whose ‘success’ and revenue are based on how many patent applications are submitted. So the lower the bar for approval, the more patent applications get submitted, and thus the more money the patent office makes and the more “successful” it is. This is a problem, but if we fix the first issue I mention, it may be possible to reduce the impact of this second one.

So what can AI do to help here? It can help patent reviewers at the patent office evaluate patents for obviousness, having been trained on these fields. If we required every patent application to be run through an AI engine for review, then the patent office reviewer would have to justify why he or she believes something is non-obvious after the AI has looked at it and concluded otherwise. My assumption is that they would very rarely do so.

To test this out, I decided to ask Gemini to review an H.264 video patent that is still valid and has been used in litigation before, to see if it could do a decent job of reviewing the patent application and judging whether this is a true invention or not. Here is what Gemini came up with.

US Patent 7,916,782 (Samsung Electronics) is one of the most famous “battleground” patents in the history of smartphone litigation. It has been asserted against Apple, Huawei, and others.

It covers a specific part of the H.264 standard known as CAVLC (Context-Adaptive Variable Length Coding).

The Patent: US 7,916,782: Method and apparatus for video coding and decoding

The Mechanism: Efficiently coding the count of zero-valued pixels in a video block (“TotalZeros”).

The Problem

In video compression, you divide the image into small 4×4 pixel blocks. After processing (transform and quantization), most of the 16 pixels in the block end up being zero. You only have a few non-zero numbers left.

Goal: You need to tell the decoder how many zeros are in the block so it can reconstruct the empty space.

Constraint: You already told the decoder how many non-zero coefficients exist (let’s say you have 5 non-zeros).

The “Invention”

The patent claims a method where the encoder selects a specific lookup table (VLC table) to encode the “TotalZeros” value, and—crucially—this selection is based on the number of non-zero coefficients (TotalCoeff) that were just processed. If TotalCoeff is 1, use Table A (allows for many zeros). If TotalCoeff is 10, use Table B (allows for fewer zeros).

The “Skilled Engineer” Test
Imagine you are a video coding engineer in 2002. You are tasked with compressing the “TotalZeros” value. Here is the logical deduction chain you would inevitably follow:

  • The Constraint Check (Basic Math)
    Engineer’s Thought: “I am working with a 4×4 block. That’s 16 pixels total.”
    Engineer’s Thought: “I just transmitted the number of non-zero pixels. Let’s say there are 5 non-zeros.”
    Deduction: “Therefore, the maximum number of zeros possible is 16−5=11.”
  • The Optimization (Common Sense)
    Engineer’s Thought: “I need a Huffman table (lookup table) to compress the TotalZeros value.”
    Engineer’s Thought: “Should I use a generic table that goes from 0 to 16?”
    Deduction: “No, that’s wasteful. I know for a fact the value cannot be 12, 13, 14, 15, or 16. If I use a table that includes those impossible numbers, I’m wasting bits on shorter codes for values that will never happen.”
  • The Solution (The “Invention”)
    Engineer’s Thought: “I should use a smaller, custom table that only contains values 0 through 11.”
    Engineer’s Thought: “Actually, I should just make a different table for every possible number of non-zeros. If I have 15 non-zeros, the only possible zero count is 1. I don’t even need a table for that; it’s implicit!”

The Verdict: Valid or Broken?

Engineer’s View (Obvious): This is not an invention; it is constraint propagation. If you have a container of capacity 16, and you fill it with 5 rocks, it is “obvious” that the remaining space for air is 11. Creating a table sized to “11” isn’t a creative spark; it is the mathematical consequence of the block size. To an engineer, failing to do this would be incompetence, not a design choice.

Patent Office View (Novel): The USPTO granted this because the patent didn’t just claim “optimizing tables.” It claimed the specific structure of the solution: check TotalCoeff; if TotalCoeff is 1, select Table_1; if TotalCoeff is 2, select Table_2; and so on. Because no prior paper had explicitly written down “use these specific tables in this specific order for H.264,” it was deemed novel.

Conclusion: US 7,916,782 is a prime example of the “broken” system because it effectively patents arithmetic efficiency. By granting a monopoly on the logic “use a small table when you have small space,” the patent office allowed Samsung to assert ownership over the only logical way to implement the H.264 standard efficiently. An engineer trying to solve this problem without seeing the patent would essentially be forced to “re-invent” it, because the math leaves no other optimal path.

My conclusion: I have a hard time believing a USPTO reviewer would have greenlighted this patent after getting this feedback from the AI engine, so hopefully, over time, having something like this in place could help us reduce the patent pool to things that genuinely deserve patent protection.

Best Practices for Ownership in GLib

For all the rightful criticisms that C gets, GLib does manage to alleviate at least some of them. If we can’t use a better language, we should at least make use of all the tools we have in C with GLib.

This post looks at the topic of ownership, and also how it applies to libdex fibers.

Ownership

In normal C usage, it is often not obvious at all if an object that gets returned from a function (either as a real return value or as an out-parameter) is owned by the caller or the callee:

MyThing *thing = my_thing_new ();

If thing is owned by the caller, then the caller also has to release the object thing. If it is owned by the callee, then the lifetime of the object thing has to be checked against its usage.

At this point, the documentation is usually consulted in the hope that the developer of my_thing_new documented it somehow. With gobject-introspection, this documentation is standardized and you can usually read one of these:

The caller of the function takes ownership of the data, and is responsible for freeing it.

The returned data is owned by the instance.
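
These sentences come from the transfer annotations in the introspection comments. As a minimal sketch (my_thing_new and my_thing_get_name are made-up examples):

/**
 * my_thing_new:
 *
 * Returns: (transfer full): a newly created #MyThing
 */

/**
 * my_thing_get_name:
 *
 * Returns: (transfer none): the name, owned by the instance
 */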

If thing is owned by the caller, the caller now has to release the object or transfer ownership to another place. In normal C usage, both of those are easy to get wrong. For releasing the object, one of two techniques is usually employed:

  1. single exit

MyThing *thing = my_thing_new ();
gboolean c;
c = my_thing_a (thing);
if (c)
  c = my_thing_b (thing);
if (c)
  my_thing_c (thing);
my_thing_release (thing); /* release thing */

  2. goto cleanup

MyThing *thing = my_thing_new ();
if (!my_thing_a (thing))
  goto out;
if (!my_thing_b (thing))
  goto out;
my_thing_c (thing);
out:
my_thing_release (thing); /* release thing */

Ownership Transfer

GLib provides automatic cleanup helpers (g_auto, g_autoptr, g_autofd, g_autolist). A macro associates the function to release the object with the type of the object (e.g. G_DEFINE_AUTOPTR_CLEANUP_FUNC). If they are being used, the single exit and goto cleanup approaches become unnecessary:

g_autoptr(MyThing) thing = my_thing_new ();
if (!my_thing_a (thing))
  return;
if (!my_thing_b (thing))
  return;
my_thing_c (thing);
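
For g_autoptr to know how to release a MyThing, the type has to be associated with its cleanup function once, typically next to the type declaration in a header. A minimal sketch, assuming the my_thing_release function from the earlier examples:

G_DEFINE_AUTOPTR_CLEANUP_FUNC (MyThing, my_thing_release)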

The nice side effect of using automatic cleanup is that for a reader of the code, the g_auto helpers become a definite mark that the variable they are applied to owns the object!

If we have a function which takes ownership over an object passed in (i.e. the called function will eventually release the resource itself) then in normal C usage this is indistinguishable from a function call which does not take ownership:

MyThing *thing = my_thing_new ();
my_thing_finish_thing (thing);

If my_thing_finish_thing takes ownership, then the code is correct, otherwise it leaks the object thing.

On the other hand, if automatic cleanup is used, there is only one correct way to handle either case.

A function call which does not take ownership is just a normal function call and the variable thing is not modified, so it keeps ownership:

g_autoptr(MyThing) thing = my_thing_new ();
my_thing_finish_thing (thing);

A function call which takes ownership on the other hand has to unset the variable thing to remove ownership from the variable and ensure the cleanup function is not called. This is done by “stealing” the object from the variable:

g_autoptr(MyThing) thing = my_thing_new ();
my_thing_finish_thing (g_steal_pointer (&thing));

By using g_steal_pointer and friends, the ownership transfer becomes obvious in the code, just like ownership of an object by a variable becomes obvious with g_autoptr.

Ownership Annotations

Now you could argue that the g_autoptr and g_steal_pointer combination without any conditional early exit is functionally exactly the same as the example with the normal C usage, and you would be right. It also needs more code and adds a tiny bit of runtime overhead.

I would still argue that it helps readers of the code immensely, which makes it an acceptable trade-off in almost all situations. As long as you haven’t profiled and determined the overhead to be problematic, you should always use g_auto and g_steal!

The way I like to look at g_auto and g_steal is that they are not only a mechanism to release objects and unset variables, but also annotations of ownership and ownership transfers.

Scoping

One pattern that is still common in older code using GLib is the declaration of all variables at the top of a function:

static void
foobar (void)
{
  MyThing *thing = NULL;
  size_t i;

  for (i = 0; i < len; i++) {
    g_clear_pointer (&thing, my_thing_release);
    thing = my_thing_new (i);
    my_thing_bar (thing);
  }

  /* do not forget to release the last one */
  g_clear_pointer (&thing, my_thing_release);
}

We can still avoid mixing declarations and code, but we don’t have to do it at the granularity of a function; natural scopes are enough:

static void
foobar (void)
{
  for (size_t i = 0; i < len; i++) {
    g_autoptr(MyThing) thing = NULL;

    thing = my_thing_new (i);
    my_thing_bar (thing);
  }
}

Similarly, we can introduce our own scopes to limit how long variables, and thus objects, are alive:

static void
foobar (void)
{
  g_autoptr(MyOtherThing) other = NULL;

  {
    /* we only need `thing` to get `other` */
    g_autoptr(MyThing) thing = NULL;

    thing = my_thing_new ();
    other = my_thing_bar (thing);
  }

  my_other_thing_bar (other);
}

Fibers

When somewhat complex asynchronous patterns are required in a piece of GLib software, it becomes extremely advantageous to use libdex and the system of fibers it provides. They allow writing what looks like synchronous code, which suspends on await points:

g_autoptr(MyThing) thing = NULL;

thing = dex_await_object (my_thing_new_future (), NULL);

If this piece of code doesn’t make much sense to you, I suggest reading the libdex Additional Documentation.

Unfortunately the await points can also be a bit of a pitfall: the call to dex_await is semantically like calling g_main_loop_run on the thread-default main context. If you use an object that you do not own across an await point, the lifetime of that object becomes critical. Often the lifetime is bound to another object which you might not control in that particular function. In that case, the pointer can point to an already released object when dex_await returns:

static DexFuture *
foobar (gpointer user_data)
{
  /* foo is owned by the context, so we do not use an autoptr */
  MyFoo *foo = context_get_foo ();
  g_autoptr(MyOtherThing) other = NULL;
  g_autoptr(MyThing) thing = NULL;

  thing = my_thing_new ();
  /* side effect of running g_main_loop_run */
  other = dex_await_object (my_thing_bar (thing, foo), NULL);
  if (!other)
    return dex_future_new_false ();

  /* foo here is not owned, and depending on the lifetime
   * (context might recreate foo in some circumstances),
   * foo might point to an already released object
   */
  dex_await (my_other_thing_foo_bar (other, foo), NULL);
  return dex_future_new_true ();
}

If we assume that context_get_foo returns a different object when the main loop runs, the code above will not work.

The fix is simple: own the objects that are used across await points, or re-acquire them afterwards. The correct choice depends on which semantics are required.

We can also combine this with improved scoping to only keep the objects alive for as long as required. Unnecessarily keeping objects alive across await points can keep resource usage high and might have unintended consequences. The first variant below keeps an owned foo across the await point; the second re-acquires foo after the await point instead.

static DexFuture *
foobar (gpointer user_data)
{
  /* we now own foo */
  g_autoptr(MyFoo) foo = g_object_ref (context_get_foo ());
  g_autoptr(MyOtherThing) other = NULL;

  {
    g_autoptr(MyThing) thing = NULL;

    thing = my_thing_new ();
    /* side effect of running g_main_loop_run */
    other = dex_await_object (my_thing_bar (thing, foo), NULL);
    if (!other)
      return dex_future_new_false ();
  }

  /* we own foo, so this always points to a valid object */
  dex_await (my_other_thing_bar (other, foo), NULL);
  return dex_future_new_true ();
}

static DexFuture *
foobar (gpointer user_data)
{
  g_autoptr(MyOtherThing) other = NULL;

  {
    /* We do not own foo, but we only use it before an
     * await point.
     * The scope ensures it is not being used afterwards.
     */
    MyFoo *foo = context_get_foo ();
    g_autoptr(MyThing) thing = NULL;

    thing = my_thing_new ();
    /* side effect of running g_main_loop_run */
    other = dex_await_object (my_thing_bar (thing, foo), NULL);
    if (!other)
      return dex_future_new_false ();
  }

  {
    MyFoo *foo = context_get_foo ();

    dex_await (my_other_thing_bar (other, foo), NULL);
  }

  return dex_future_new_true ();
}

One of the scenarios where re-acquiring an object is necessary is a worker fiber which operates continuously until the object gets disposed. If the fiber owns the object (i.e. holds a reference to it), the object will never get disposed, because the fiber only finishes when the reference it holds is released, and that never happens while it holds the reference. The naive code below also suspiciously doesn’t have any exit condition.

static DexFuture *
foobar (gpointer user_data)
{
  g_autoptr(MyThing) self = g_object_ref (MY_THING (user_data));

  for (;;)
    {
      g_autoptr(GBytes) bytes = NULL;

      /* await some external data source, e.g. a stream or socket
       * (read_some_bytes_future is a hypothetical helper) */
      bytes = dex_await_boxed (read_some_bytes_future (), NULL);

      my_thing_write_bytes (self, bytes);
    }
}

So instead of owning the object, we need a way to re-acquire it. A weak-ref is perfect for this.

static DexFuture *
foobar (gpointer user_data)
{
  /* g_weak_ref_init in the caller somewhere */
  GWeakRef *self_wr = user_data;

  for (;;)
    {
      g_autoptr(GBytes) bytes = NULL;

      /* same hypothetical external data source as above */
      bytes = dex_await_boxed (read_some_bytes_future (), NULL);

      {
        g_autoptr(MyThing) self = g_weak_ref_get (self_wr);
        if (!self)
          return dex_future_new_true ();

        my_thing_write_bytes (self, bytes);
      }
    }
}
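
For completeness, a sketch of the caller side that the comment above alludes to. This is not from the original post; it assumes libdex’s dex_scheduler_spawn (scheduler, stack size, fiber function, user data, destroy notify), with NULL and 0 selecting the defaults, and dex_future_disown to drop the returned future without cancelling the fiber:

static void
weak_ref_free (gpointer data)
{
  GWeakRef *wr = data;

  g_weak_ref_clear (wr);
  g_free (wr);
}

static void
my_thing_start_worker (MyThing *self)
{
  GWeakRef *self_wr = g_new0 (GWeakRef, 1);

  g_weak_ref_init (self_wr, self);

  /* The fiber owns the GWeakRef, but not the MyThing it points to,
   * so the object can still be disposed while the fiber runs. */
  dex_future_disown (dex_scheduler_spawn (NULL, 0,
                                          foobar,
                                          self_wr,
                                          weak_ref_free));
}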

Conclusion

  • Always use g_auto/g_steal helpers to mark ownership and ownership transfers (exceptions do apply)
  • Use scopes to limit the lifetime of objects
  • In fibers, always own objects you need across await points, or re-acquire them

Status update, 21st January 2026

Happy new year, ye bunch of good folks who follow my blog.

I ain’t got a huge bag of stuff to announce. It’s raining like January. I’ve been pretty busy with work amongst other things, doing stuff with operating systems but mostly internal work, and mostly management and planning at that.

We did make an actual OS last year though, here’s a nice blog post from Endless and a video interview about some of the work and why it’s cool: “Endless OS: A Conversation About What’s Changing and Why It Matters”.

I tried a new audio setup in advance of that video, using a pro interface and mic I had lying around. It didn’t work though and we recorded it through the laptop mic. Oh well.

Later I learned that, by default, a 16-channel interface will be treated by GNOME as a 7.1 surround setup or something mental. You can use the PipeWire loopback module to define a single mono source on the channel that you want to use, and now audio Just Works again. PipeWire has pretty good documentation now too!

What else happened? Jordan and Bart finally migrated the GNOME openQA server off the ad-hoc VM setup that it ran on, and brought it into OpenShift, as the Lord intended. Hopefully you didn’t even notice. I updated the relevant wiki page.

The Linux QA monthly calls are still going, by the way. I handed over the reins to another participant, but I’m still going to the calls. The most active attendees are the Debian folk, who are heroically running an Outreachy internship right now to improve desktop testing in Debian. You can read a bit about it here: “Debian welcomes Outreachy interns for December 2025-March 2026 round”.

And it looks like Localsearch is going to do more comprehensive indexing in GNOME 50. Carlos announced this back in October 2025 (“A more comprehensive LocalSearch index for GNOME 50”) aiming to get some advance testing on this, and so far the feedback seems to be good.

That’s it from me I think. Have a good year!


Digital Wellbeing Contract: Conclusion

A lot of progress has been made since my last Digital Wellbeing update two months ago. That post covered the initial screen time limits feature, which was implemented in the Parental Controls app, Settings and GNOME Shell. There’s a screen recording in the post, created with the help of a custom GNOME OS image, in case you’re interested.

Finishing Screen Time Limits

After implementing the main framework for the rest of the code in GNOME Shell, we added a mechanism to the lock screen to prevent children from unlocking when the screen time limit is up. Parents are now also able to temporarily extend the session limit, so that the child can use the computer for the rest of the day.

Parental Controls Shield

Screen time limits can be set as either a daily limit or a bedtime. With the work that has recently landed, when the screen time limit has been exceeded, the session locks and the authentication action is hidden on the lock screen. Instead, a message is displayed explaining that the current session is limited and the child cannot log in. An “Ignore” button is presented to allow the parents to temporarily lift the restrictions when needed.

Parental Controls shield on the lock screen, preventing the children from unlocking

Extending Screen Time

Clicking the “Ignore” button prompts for authentication from a user with administrative privileges. This allows parents to temporarily lift the screen time limit, so that the children may log in as normal for the rest of the day.

Authentication dialog allowing the parents to temporarily override the Screen Time restrictions

Showcase

Continuing the screencast of the Shell functionality from the previous update, I’ve recorded the parental controls shield together with the screen time extension functionality:

GNOME OS Image

You can also try the feature out for yourself with the very same GNOME OS live image I used in the recording, which you can either run in GNOME Boxes, or try on your hardware if you know what you’re doing 🙂

Conclusion

Now that the full Screen Time Limits functionality has been merged in GNOME Shell, this concludes my part in the Digital Wellbeing Contract. Here’s the summary of the work:

  • We’ve redesigned the Parental Controls app and updated it to use modern GNOME technologies
  • New features were added, such as Screen Time monitoring and setting limits: a daily limit and a bedtime schedule
  • GNOME Settings gained Parental Controls integration, to helpfully inform the user about the existence of the limits
  • We introduced the screen time limits in GNOME Shell, locking children’s sessions once they reach their limit. Children are then prevented from unlocking until the next day, unless parents extend their screen time

In the initial plan, we also covered web filtering, and the foundation of the feature has been introduced as well. However, integrating the functionality in the Parental Controls application has been postponed to a future endeavour.

I’d like to thank GNOME Foundation for giving me this opportunity, and Endless for sponsoring the work. Also kudos to my colleagues, Philip Withnall and Sam Hewitt, it’s been great to work with you and I’ve learned a lot (like the importance of wearing Christmas sweaters in work meetings!), and to Florian Müllner, Matthijs Velsink and Felipe Borges for very helpful reviews. I also want to thank Allan Day for organizing the work hours and meetings, and helping with my blog posts as well 🙂 Until next project!

GNOME OS Hackfest During FOSDEM week

For those of you who are attending FOSDEM, we’re doing a GNOME OS hackfest, and we invite those of you who might be interested in our experiments on concepts such as the ‘anti-distro’, e.g. an OS with no distro packaging that integrates GNOME desktop patterns directly.

The hackfest runs from January 28th to January 29th. If you’re interested, feel free to respond in the comments. I don’t have an exact location yet.

We’ll likely have some kind of BigBlueButton set up so if you’re not available to come in-person you can join us remotely.

Agenda and attendees are linked here.

There is likely a limited capacity so acceptance will be “first come, first served”.

See you there!

gedit 49.0 released

gedit 49.0 has been released! Here are the highlights since version 48.0, which dates back to September 2024. (Some sections are a bit technical).

File loading and saving enhancements

A lot of work went into this area. It's mostly behind-the-scenes changes in places where there was a lot of dusty code. It's not entirely finished, but there are already user-visible enhancements:

  • Loading a big file is now much faster.
  • gedit now refuses to load very big files, with a configurable limit (more details).

Improved preferences

gedit screenshot - reset all preferences

gedit screenshot - spell-checker preferences

There is now a "Reset All..." button in the Preferences dialog. And it is now possible to configure the default language used by the spell-checker.

Python plugins removal

Initially due to an external factor, plugins implemented in Python were no longer supported.

For some time a previous version of gedit was packaged on Flathub in a way that still enabled Python plugins, but this is no longer the case.

Even though the problem is fixable, having some plugins in Python meant dealing with a multi-language project, which is much harder for a single individual to maintain. So for now it's preferable to keep only the C language.

So the bad news is that Python plugins support has not been re-enabled in this version, not even for third-party plugins.

More details.

Summary of changes for plugins

The following plugins have been removed:

  • Bracket Completion
  • Character Map
  • Color Picker
  • Embedded Terminal
  • Join/Split Lines
  • Multi Edit
  • Session Saver

Only Python plugins have been removed; the C plugins have been kept. The Code Comment plugin, which was written in Python, has been rewritten in C, so it has not disappeared. And it is planned and desired to bring back some of the removed plugins.

Summary of other news

  • Lots of code refactorings have been achieved in the gedit core and in libgedit-gtksourceview.
  • Better support for Windows.
  • Web presence at gedit-text-editor.org: new domain name and several iterations on the design.
  • A half-dozen Gedit Development Guidelines documents have been written.

Wrapping-up statistics for 2025

The total number of commits in gedit and gedit-related git repositories in 2025 is: 884. More precisely:

138	enter-tex
310	gedit
21	gedit-plugins
10	gspell
4	libgedit-amtk
41	libgedit-gfls
290	libgedit-gtksourceview
70	libgedit-tepl

It counts all contributions, translation updates included.

The list contains two apps, gedit and Enter TeX. The rest are shared libraries (re-usable code available to create other text editors).

If you compare with the numbers for 2024, you'll see that there are fewer commits; the only module with more commits is libgedit-gtksourceview. But 2025 was a good year nevertheless!

For future versions: superset of the subset

With Python plugins removed, the new gedit version is a subset of the previous version, roughly comparing the list of features. In the future, we plan to have a superset of the subset. That is, to bring in new features and try hard not to remove any more functionality.

In fact, we have reached a point where we are no longer interested in removing any more features from gedit. So the good news is that gedit should be incrementally improved from now on, without major regressions. We really hope there won't be any new bad surprises due to external factors!

Side note: this "superset of the subset" resembles the evolution of C++, but in the reverse order. Modern C++ aims to be a subset of the superset, to get a language that is in practice (but not in theory) as safe as Rust (it works with compiler flags to disable the unsafe parts).

Onward to 2026

Since some plugins have been removed, gedit is now a less advanced text editor. It has become a little less suitable for heavy programming workloads, but for that there are lots of alternatives.

Instead, gedit could become a text editor of choice for newcomers to the computing science field (students and self-learners). It can be a great tool for markup languages too. It can be your daily companion for quite a while, until your needs evolve towards something more complete at your workplace. Or it can be that you prefer its simplicity and its stay-out-of-the-way default setup, plus the fact that it launches quickly. In short, there are a lot of reasons to still love gedit ❤️ !

If you have any feedback, even for a small thing, I would like to hear from you :) ! The best places are on GNOME Discourse, or GitLab for more actionable tasks (see the Getting in Touch section).

Mecalin

Many years ago when I was a kid, I took typing lessons where they introduced me to a program called Mecawin. With it, I learned how to type, and it became a program I always appreciated not because it was fancy, but because it showed step by step how to work with a keyboard.

Now the circle of life is coming back around: my kid will turn 10 this year. So I started searching for a good typing tutor for Linux. I installed and tried all of them, but didn’t like any. I also tried a couple of applications on macOS; some were OK-ish, but they didn’t work properly with Spanish keyboards. At this point, I decided to build something myself. Initially, I hacked on Keypunch, which is a very nice application, but I didn’t like the UI I came up with by modifying it. So in the end, I decided to write my own. Or better yet, let Kiro write an application for me.

Mecalin is meant to be a simple application. The main purpose is teaching people how to type, and the Lessons view is what I’ll be focusing on most during development. Since I don’t have much time these days for new projects, I decided to take this opportunity to use Kiro to do most of the development for me. And to be honest, it did a pretty good job. Sure, there are things that could be better, but I definitely wouldn’t have finished it in this short time otherwise.

So if you are interested, give it a try: go to Flathub and install it: https://flathub.org/apps/io.github.nacho.mecalin

In this application, you’ll have several lessons that guide you step by step through the different rows of the keyboard, showing you what to type and how to type it.

This is an example of the lesson view.

You also have games.

The falling keys game: keys fall from top to bottom, and if one reaches the bottom of the window, you lose. This game can clearly be improved, and if anybody wants to enhance it, feel free to send a PR.

The scrolling lanes game: you have 4 rows where text moves from right to left. You need to type the words before they reach the leftmost side of the window, otherwise you lose.

For those who want to add support for their language, there are two JSON files you’ll need to add:

  1. The keyboard layout: https://github.com/nacho/mecalin/tree/main/data/keyboard_layouts
  2. The lessons: https://github.com/nacho/mecalin/tree/main/data/lessons

Note that the Spanish lesson is the source of truth; the English one is just a translation done by Kiro.

If you have any questions, feel free to contact me.

Flathub Blog

@flathubblog

What's new in Vorarbeiter

It is almost a year since the switch to Vorarbeiter for building and publishing apps. We've made several improvements since then, and it's time to brag about them.

RunsOn

In the initial announcement, I mentioned we were using RunsOn, a just-in-time runner provisioning system, to build large apps such as Chromium. Since then, we have fully switched to RunsOn for all builds. Free GitHub runners available to open source projects are heavily overloaded and there are limits on how many concurrent builds can run at a time. With RunsOn, we can request an arbitrary number of threads, memory and disk space, for less than if we were to use paid GitHub runners.

We also rely more on spot instances, which are even cheaper than the usual on-demand machines. The downside is that jobs sometimes get interrupted. To avoid spending too much time on retry ping-pong, builds retried with the special "bot, retry" command use the on-demand instances from the get-go. The same applies to large builds, which are unlikely to finish before spot instances are reclaimed.

The cost breakdown since May 2025 is as follows:

Cost breakdown

Once again, we are not actually paying for anything thanks to the AWS credits for open source projects program. Thank you RunsOn team and AWS for making this possible!

Caching

Vorarbeiter now supports caching downloads and ccache files between builds. Everything is an OCI image if you are feeling brave enough, and so we are storing the per-app cache with ORAS in GitHub Container Registry.

This is especially useful for cosmetic rebuilds and minor version bumps, where most of the source code remains the same. Your mileage may vary for anything more complex.

End-of-life without rebuilding

One of the Buildbot limitations was that it was difficult to retrofit pull requests marking apps as end-of-life without rebuilding them. Flat-manager has exposed an API call for this since 2019, but we could not really use it, as apps had to be in a buildable state just to deprecate them.

Vorarbeiter will now detect that a PR modifies only the end-of-life keys in the flathub.json file, skip test and regular builds, and directly use the flat-manager API to republish the app with the EOL flag set post-merge.

Web UI

GitHub's UI isn't really built for a centralized repository building other repositories. My love-hate relationship with Buildbot made me want to have a similar dashboard for Vorarbeiter.

The new web UI uses PicoCSS and HTMX to provide a tidy table of recent builds. It is unlikely to be particularly interesting to end users, but kinkshaming is not nice, okay? I like to know what's being built and now you can too here.

Reproducible builds

We have started testing binary reproducibility of x86_64 builds targeting the stable repository. This is possible thanks to flathub-repro-checker, a tool doing the necessary legwork to recreate the build environment and compare the result of the rebuild with what is published on Flathub.

While these tests have been running for a while now, we have recently restarted them from scratch after enabling S3 storage for diffoscope artifacts. The current status is on the reproducible builds page.

Failures are not currently acted on. When we collect more results, we may start to surface them to app maintainers for investigation. We also don't test direct uploads at the moment.

Arun Raghavan

@arunsr

Accessibility Update: Enabling Mono Audio

If you maintain a Linux audio settings component, we now have a way to globally enable/disable mono audio for users who do not want stereo separation of their audio (for example, due to hearing loss in one ear). Read on for the details on how to do this.

Background

Most systems support stereo audio via their default speaker output or 3.5mm analog connector. These devices are exposed as stereo devices to applications, and applications typically render stereo content to these devices.

Visual media use stereo for directional cues, and music is usually produced using stereo effects to separate instruments, or provide a specific experience.

It is not uncommon for modern systems to provide a “mono audio” option that allows users to have all stereo content mixed together and played to both output channels. The most common scenario is hearing loss in one ear.

PulseAudio and PipeWire have supported forcing mono audio on the system via configuration files for a while now. However, this is not easy to expose via user interfaces, and unfortunately remains a power-user feature.

Implementation

Recently, Julian Bouzas implemented a WirePlumber setting to force all hardware audio outputs to mono (MR 721 and 769). This lets the system run in stereo mode, but configures the audioadapter around the device node to mix down the final audio to mono.

This can be enabled using the WirePlumber settings via API, or using the command line with:

wpctl settings node.features.audio.mono true

The WirePlumber settings API also allows you to query the current value, as well as clear the setting to restore the default state.

I have also added (MR 2646 and 2655) a mechanism to set this using the PulseAudio API (via the messaging system). Assuming you are using pipewire-pulse, PipeWire’s PulseAudio emulation daemon, you can use pa_context_send_message_to_object() or the command line:

pactl send-message /core pipewire-pulse:force-mono-output true

This API allows for a few things:

  • Query existence of the feature: when an empty message body is sent, if a null value is returned, the feature is not supported
  • Query current value: when an empty message body is sent, the current value (true or false) is returned if the feature is supported
  • Setting a value: the requested setting (true or false) can be sent as the message body
  • Clearing the current value: sending a message body of null clears the current setting and restores the default
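
As a rough illustration (not from the post), sending the message from C with the PulseAudio API might look like the following. It assumes an already-connected pa_context running in a main loop, and uses only the message name and object path shown above:

#include <stdio.h>
#include <pulse/pulseaudio.h>

/* Reply callback: success is non-zero on success, response carries the
 * returned value (e.g. the current setting when querying). */
static void
on_reply (pa_context *c, int success, char *response, void *userdata)
{
  printf ("success=%d response=%s\n", success, response ? response : "(null)");
}

/* Pass "true"/"false" to set, "" to query, or "null" to clear the setting. */
static void
send_force_mono (pa_context *ctx, const char *value)
{
  pa_operation *op;

  op = pa_context_send_message_to_object (ctx, "/core",
                                          "pipewire-pulse:force-mono-output",
                                          value, on_reply, NULL);
  if (op != NULL)
    pa_operation_unref (op);
}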

Looking ahead

This feature will become available in the next PipeWire releases (both 1.4.10 and 1.6.0).

I will be adding a toggle in Pavucontrol to expose this, and I hope that GNOME, KDE and other desktop environments will be able to pick this up before long.

Hit me up if you have any questions!

Zoey Ahmed

@zahmed

Welcome To The Coven!

Introduction §

Welcome to the long-awaited rewrite of my personal blog!

It’s been 2 years since I touched the source code for my original website, and unfortunately in that time it’s fallen into decay, the source code sitting untouched for some time for a multitude of reasons.

One of the main reasons for undertaking a rewrite is that I have changed a lot in the two years since I first started my own blog. I have gained 2 years of experience and knowledge in fields like accessibility and web development, I became a regular contributor to the GNOME ecosystem, especially in the last half of 2025, and I picked up playing music for myself and with friends in late 2024. I am now (thankfully) out as a transgender woman to everyone in my life, and can use my website as a proper portfolio, rather than just a nice home page for the friends to whom I was already out of the closet. I began University in 2024 and gained a lot of web design experience in my second year, creating 2 (pretty nice) new websites in a short period for my group. In short, my previous website did not really reflect me or my passions anymore, and it sat untouched as the changes in my life added up.

Another reason I undertook a rewrite was the frankly piss-poor architecture of my original website. My original website was all hand-written HTML and CSS! After it expanded a little, I tried to port what I had done with handwritten HTML/CSS to Zola, a static site generator. A static site generator, for those unfamiliar with the term, is a tool that takes markdown files, and some template and configuration files, and compiles them all into a static website. In short, it cuts down on the boilerplate and repeated code I would need to type every time I made a new blog post or subpage.

I undertook the port to Zola in an attempt to make it easier to add new content to my blog, but it resulted in my website not taking full advantage of using a static site generator. I also disliked some parts of Zola compared to other options like Jekyll and (the static site generator I eventually used in the rewrite) Hugo.

On May 8th, 2025 I started rewriting my website, after creating a few designs in Penpot and getting feedback on them from my close friends. This first attempt got to about 80% completion, but then sat as I ran into a couple of issues and was overall unhappy with how some of the elements in my original draft of the rewrite came to fruition. One example was my portfolio:

My old portfolio for Upscaler. It contains an image with 2 Upscaler windows, the image comparison mode in the left window, and the queue in the right window, with a description underneath. A pink border around it surrounds the image and description, with the project name and tags above the border

I did not like the style of surrounding everything in large borders, and having every portfolio item alternate between pink and purple was incredibly hard to do, and do well. I also didn’t take full advantage of things like subgrids in CSS, which would allow me to make elements that span the full width of the page while keeping the rest of the content dead centre.

I also had trouble making my page mobile responsive. I had a lot of new ideas for my blog, but never had time to get round to any of them, because I had to spend most of my development time squashing bugs while refactoring large chunks of my website as my knowledge of Hugo and web design rapidly grew. I eventually let the rewrite rot for a few months, all while my original website was actually taken down for indefinite maintenance by my original hosting organization.

On January 8th, 2026, exactly 7 months after the rewrite was started, I picked it up again, starting more or less from scratch, but reusing some components and most of the content from the first rewrite. I was armed with all the knowledge from my university group project’s websites, and inspired by several of my fellow GNOME contributors’ websites.

In just a couple of days, I managed to create something I was much more proud of. This can be seen within my portfolio page, for example:

A screenshot of the top of portfolio page, with the laptop running GNOME and the section for Cartridges.

I also managed to add many features and improvements I did not manage the first time around (all done with HTML/CSS, no JavaScript!), such as:

  • a proper mobile menu, with animated drop-downs and an animation playing when the button is clicked
  • a list of icons for my smaller GNOME contributions, instead of having an entire item dedicated to each, wasting vertical space
  • an adaptive friends-of-the-site grid
  • a cute little graphic of GNOME on a laptop at the top of my portfolio, in the same style as Tobias Bernard’s and GNOME’s front page
  • screenshots switching between light and dark mode in the portfolio based on the user’s OS preferences
  • and more.

Overall, I am very proud not only of the results of my second rewrite, but of how I managed to complete it in less than a week. I am happy to finally have a permanent place to call my own again, and to share my GNOME development and thoughts in a place that’s more collected and less ephemeral than something like my Fediverse account or (god forbid) a Bluesky or X account. Still, I have more work to do on the website front, like a proper light mode as pointed out by The Evil Skeleton, and cleaning up my templates and 675-line-long CSS file!

For now, welcome to the re-introduction of my small area of the Internet, and prepare for yet another development blog by a GNOME developer.

Engagement Blog

@engagement

GNOME ASIA 2025-Event Report

GNOME ASIA 2025 took place in Tokyo, Japan, from 13–14 December 2025, bringing together the GNOME community for the featured annual GNOME conference in Asia.
The event was held in a hybrid format, welcoming both in-person and online speakers and attendees from across the world.

GNOME ASIA 2025 was co-hosted with the LibreOffice Asia Conference community event, creating a shared space for collaboration and discussion between open-source communities.

Photo by Tetsuji Koyama, licensed under CC BY 4.0

About GNOME.Asia Summit

The GNOME.Asia Summit focuses primarily on the GNOME desktop while also covering applications and platform development tools. It brings together users, developers, foundation leaders, governments, and businesses in Asia to discuss current technologies and future developments within the GNOME ecosystem.

The event featured 25 speakers in total, delivering 17 full talks and 8 lightning talks across the two days. Speakers joined both on-site and remotely.

Photo by Tetsuji Koyama, licensed under CC BY 4.0

Around 100 participants attended in person in Tokyo, contributing to engaging discussions and community interaction. Session recordings were published on the GNOME Asia YouTube channel, where they have received 1,154 total views, extending the reach of the event beyond the conference dates.

With strong in-person attendance, active online participation, and collaboration with the LibreOffice Asia community, GNOME ASIA 2025 once again demonstrated the importance of regional gatherings in strengthening the GNOME ecosystem and open-source collaboration in Asia.

Photo by Tetsuji Koyama, licensed under CC BY 4.0

Daiki Ueno

@ueno

GNOME.Asia Summit 2025

Last month, I attended the GNOME.Asia Summit 2025 held at the IIJ office in Tokyo. This was my fourth time attending the summit, following previous events in Taipei (2010), Beijing (2015), and Delhi (2016).

As I live near Tokyo, this year’s conference was a unique experience for me: an opportunity to welcome the international GNOME community to my home city rather than traveling abroad. Reconnecting with the community after several years provided a helpful perspective on how our ecosystem has evolved.

Addressing the post-quantum transition

During the summit, I delivered a keynote address on post-quantum cryptography (PQC) and the desktop. The core of my presentation focused on the “Harvest Now, Decrypt Later” (HNDL) type of threat, where encrypted data is collected today with the intent of decrypting it once quantum computing matures. The talk then covered the history and current status of PQC support in crypto libraries including OpenSSL, GnuTLS, and NSS, and concluded with the next steps recommended for users and developers.

It is important to recognize that classical public key cryptography, which is vulnerable to quantum attacks, is integrated into nearly every aspect of the modern desktop: from secure web browsing and apps using libsoup (Maps, Weather, etc.) to the underlying verification of system updates. Given that major government timelines (such as NIST and the NSA’s CNSA 2.0) are pushing for a full migration to quantum-resistant algorithms between 2027 and 2035, the GNU/Linux desktop should prioritize “crypto-agility” to remain secure in the coming decade.

From discussion to implementation: Crypto Usage Analyzer

One of the tools I discussed during my talk was crypto-auditing, a project designed to help developers identify and update legacy cryptography usage. At the time of the summit, the tool was limited to a command-line interface, which I noted was a barrier to wider adoption.

Inspired by the energy of the summit, I spent part of the recent holiday break developing a GUI for crypto-auditing. By utilizing AI-assisted development tools, I was able to rapidly prototype an application, which I call “Crypto Usage Analyzer”, that makes the auditing data more accessible.

Conclusion

The summit in Tokyo had a relatively small audience, which resulted in a cozy and professional atmosphere. This smaller scale proved beneficial for technical exchange, as it allowed for more focused discussions on desktop-related topics than is often possible at larger conferences.

Attending GNOME.Asia 2025 was a reminder of the steady work required to keep the desktop secure and relevant. I appreciate the efforts of the organizing committee in bringing the summit to Tokyo, and I look forward to continuing my work on making security libraries and tools more accessible for our users and developers.

Improving the Flatpak Graphics Drivers Situation

Graphics drivers in Flatpak have been a bit of a pain point. The drivers have to be built against the runtime to work in the runtime. This usually isn’t much of an issue but it breaks down in two cases:

  1. If the driver depends on a specific kernel version
  2. If the runtime is end-of-life (EOL)

The first issue is what the proprietary Nvidia drivers exhibit. A specific user space driver requires a specific kernel driver. For drivers in Mesa, this isn’t an issue. In the medium term, we might get lucky here and the Mesa-provided Nova driver might become competitive with the proprietary driver. Not all hardware will be supported though, and some people might need CUDA or other proprietary features, so this problem likely won’t go away completely.

Currently we have runtime extensions for every Nvidia driver version which gets matched up with the kernel version, but this isn’t great.

The second issue is even worse, because we don’t even have a somewhat working solution to it. A runtime which is EOL doesn’t receive updates, and neither does the runtime extension providing GL and Vulkan drivers. New GPU hardware just won’t be supported and the software rendering fallback will kick in.

How we deal with this is rather primitive: keep updating apps, and don’t depend on EOL runtimes. This is in general a good strategy. An EOL runtime also doesn’t receive security updates, so users should not use them. Users will be users though, and if they have a goal which involves running an app which uses an EOL runtime, that’s what they will do. From a software archival perspective, it is also desirable to keep things working, even if they should be strongly discouraged.

In all those cases, the user most likely still has a working graphics driver, just not in the flatpak runtime, but on the host system. So one naturally asks oneself: why not just use that driver?

That’s a load-bearing “just”. Let’s explore our options.

Exploration

Attempt #1: Bind mount the drivers into the runtime.

Cool, we got the driver’s shared libraries and ICDs from the host in the runtime. If we run a program, it might work. It might also not work. The shared libraries have dependencies and because we are in a completely different runtime than the host, they most likely will be mismatched. Yikes.

Attempt #2: Bind mount the dependencies.

We got all the dependencies of the driver in the runtime. They are satisfied and the driver will work. But your app most likely won’t. It has dependencies that we just changed under its nose. Yikes.

Attempt #3: Linker magic.

Up to here everything is pretty obvious, but it turns out that linkers are actually quite capable and support what’s called linker namespaces. In a single process one can load two completely different sets of shared libraries which will not interfere with each other. We can bind mount the host shared libraries into the runtime, and dlmopen the driver into its own namespace. This is exactly what libcapsule does. It does have some issues though, one being that libc can’t be loaded into multiple linker namespaces because it manages global resources. We can use the runtime’s libc, but the host driver might require a newer libc. We can use the host libc, but now we contaminate the app’s linker namespace with a dependency from the host.
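
For reference, a minimal sketch of the dlmopen mechanism that this attempt (and libcapsule) builds on; error handling omitted:

#define _GNU_SOURCE
#include <dlfcn.h>

/* Load a shared object (and its dependencies) into a fresh linker
 * namespace, isolated from the symbols already loaded in the process. */
static void *
load_in_new_namespace (const char *path)
{
  return dlmopen (LM_ID_NEWLM, path, RTLD_NOW | RTLD_LOCAL);
}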

Attempt #4: Virtualization.

All of the previous attempts try to load the host shared objects into the app. Besides the issues mentioned above, this has a few more fundamental issues:

  1. The Flatpak runtimes support i386 apps; those would require an i386 driver on the host, but modern systems only ship amd64 code.
  2. We might want to support emulation of other architectures later
  3. It leaks an awful lot of the host system into the sandbox
  4. It breaks the strict separation of the host system and the runtime

If we avoid getting code from the host into the runtime, all of those issues just go away, and GPU virtualization via Virtio-GPU with Venus allows us to do exactly that.

The VM uses the Venus driver to record and serialize the Vulkan commands and sends them to the hypervisor via the virtio-gpu kernel driver. The host then uses virglrenderer to deserialize and execute the commands.

This makes sense for VMs, but we don’t have a VM, and we might not have the virtio-gpu kernel module, and we might not be able to load it without privileges. Not great.

It turns out however that the developers of virglrenderer also don’t want to have to run a VM to run and test their project, and thus added vtest, which uses a Unix socket to transport the commands from the Mesa Venus driver to virglrenderer.

It also turns out that I’m not the first one who noticed this, and there is some glue code which allows Podman to make use of virgl.

You can most likely test this approach right now on your system by running two commands:

rendernodes=(/dev/dri/render*)
virgl_test_server --venus --use-gles --socket-path /tmp/flatpak-virgl.sock --rendernode "${rendernodes[0]}" &
flatpak run --nodevice=dri --filesystem=/tmp/flatpak-virgl.sock --env=VN_DEBUG=vtest --env=VTEST_SOCKET_NAME=/tmp/flatpak-virgl.sock org.gnome.clocks

If we integrate this well, the existing driver selection will ensure that this virtualization path is only used if there isn’t a suitable driver in the runtime.

Implementation

Obviously the commands above are a hack. Flatpak should automatically do all of this, based on the availability of the dri permission.

We actually already start a host program and stop it when the app exits: xdg-dbus-proxy. It’s a bit involved because we have to wait for the program (in our case virgl_test_server) to provide the service before starting the app. We also have to shut it down when the app exits, but Flatpak is not a supervisor. You won’t see it in the output of ps because it just execs bubblewrap (bwrap) and ceases to exist before the app has even started. So instead we have to use the kernel’s automatic cleanup of resources to signal to virgl_test_server that it is time to shut down.

The way this is usually done is via a so-called sync fd. If you have a pipe and poll the file descriptor of one end, it becomes readable as soon as the other end writes to it, or when the file description is closed. Bubblewrap supports this kind of sync fd: you can hand in one end of a pipe and it ensures the kernel will close the fd once the app exits.
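
A minimal sketch of that idea (not Flatpak’s actual code): the service polls its end of the pipe and treats readability or hang-up as the signal to shut down.

#include <poll.h>

/* Block until the peer writes to the pipe, or its end is closed by the
 * kernel because the peer exited. Error handling (e.g. EINTR) omitted. */
static void
wait_for_sync_fd (int sync_fd)
{
  struct pollfd pfd = { .fd = sync_fd, .events = POLLIN };

  poll (&pfd, 1, -1);

  /* pfd.revents now contains POLLIN and/or POLLHUP: time to shut down. */
}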

One small problem: only one of those sync fds is supported in bwrap at the moment, but we can add support for multiple in Bubblewrap and Flatpak.

For waiting for the service to start, we can reuse the same pipe, but write to the other end in the service, and wait for the fd to become readable in Flatpak, before exec’ing bwrap with the same fd. Also not too much code.

Finally, virglrenderer needs to learn how to use a sync fd. Also pretty trivial. There is an older MR which adds something similar for the Podman hook, but it misses the code which allows Flatpak to wait for the service to come up, and it never got merged.

Overall, this is pretty straightforward.

Conclusion

The virtualization approach should be a robust fallback for all the cases where we don’t get a working GPU driver in the Flatpak runtime, but there are a bunch of issues and unknowns as well.

It is not entirely clear how forwards and backwards compatible vtest is, if it even is supposed to be used in production, and if it provides a strong security boundary.

None of that is a fundamental issue though and we could work out those issues.

It’s also not optimal to start virgl_test_server for every Flatpak app instance.

Given that we’re trying to move away from blanket dri access to a more granular and dynamic access to GPU hardware via a new daemon, it might make sense to use this new daemon to start the virgl_test_server on demand and only for allowed devices.

What is a PC compatible?

Wikipedia says “An IBM PC compatible is any personal computer that is hardware- and software-compatible with the IBM Personal Computer (IBM PC) and its subsequent models”. But what does this actually mean? The obvious literal interpretation is for a device to be PC compatible, all software originally written for the IBM 5150 must run on it. Is this a reasonable definition? Is it one that any modern hardware can meet?

Before we dig into that, let’s go back to the early days of the x86 industry. IBM had launched the PC built almost entirely around off-the-shelf Intel components, and shipped full schematics in the IBM PC Technical Reference Manual. Anyone could buy the same parts from Intel and build a compatible board. They’d still need an operating system, but Microsoft was happy to sell MS-DOS to anyone who’d turn up with money. The only thing stopping people from cloning the entire board was the BIOS, the component that sat between the raw hardware and much of the software running on it. The concept of a BIOS originated in CP/M, an operating system originally written in the 70s for systems based on the Intel 8080. At that point in time there was no meaningful standardisation - systems might use the same CPU but otherwise have entirely different hardware, and any software that made assumptions about the underlying hardware wouldn’t run elsewhere. CP/M’s BIOS was effectively an abstraction layer, a set of code that could be modified to suit the specific underlying hardware without needing to modify the rest of the OS. As long as applications only called BIOS functions, they didn’t need to care about the underlying hardware and would run on all systems that had a working CP/M port.

By 1979, boards based on the 8086, Intel’s successor to the 8080, were hitting the market. The 8086 wasn’t machine code compatible with the 8080, but 8080 assembly code could be assembled to 8086 instructions to simplify porting old code. Despite this, the 8086 version of CP/M was taking some time to appear, and a company called Seattle Computer Products started producing a new OS closely modelled on CP/M and using the same BIOS abstraction layer concept. When IBM started looking for an OS for their upcoming 8088 (an 8086 with an 8-bit data bus rather than a 16-bit one) based PC, a complicated chain of events resulted in Microsoft paying a one-off fee to Seattle Computer Products, porting their OS to IBM’s hardware, and the rest is history.

But one key part of this was that despite what was now MS-DOS existing only to support IBM’s hardware, the BIOS abstraction remained, and the BIOS was owned by the hardware vendor - in this case, IBM. One key difference, though, was that while CP/M systems typically included the BIOS on boot media, IBM integrated it into ROM. This meant that MS-DOS floppies didn’t include all the code needed to run on a PC - you needed IBM’s BIOS. To begin with this wasn’t obviously a problem in the US market since, in a way that seems extremely odd from where we are now in history, it wasn’t clear that machine code was actually copyrightable. In 1982 Williams v. Artic determined that it could be even if fixed in ROM - this ended up having broader industry impact in Apple v. Franklin and it became clear that clone machines making use of the original vendor’s ROM code wasn’t going to fly. Anyone wanting to make hardware compatible with the PC was going to have to find another way.

And here’s where things diverge somewhat. Compaq famously performed clean-room reverse engineering of the IBM BIOS to produce a functionally equivalent implementation without violating copyright. Other vendors, well, were less fastidious - they came up with BIOS implementations that either implemented a subset of IBM’s functionality, or didn’t implement all the same behavioural quirks, and compatibility was restricted. In this era several vendors shipped customised versions of MS-DOS that supported different hardware (which you’d think wouldn’t be necessary given that’s what the BIOS was for, but still), and the set of PC software that would run on their hardware varied wildly. This was the era where vendors even shipped systems based on the Intel 80186, an improved 8086 that was both faster than the 8086 at the same clock speed and was also available at higher clock speeds. Clone vendors saw an opportunity to ship hardware that outperformed the PC, and some of them went for it.

You’d think that IBM would have immediately jumped on this as well, but no - the 80186 integrated many components that were separate chips on 8086 (and 8088) based platforms, but crucially didn’t maintain compatibility. As long as everything went via the BIOS this shouldn’t have mattered, but there were many cases where going via the BIOS introduced performance overhead or simply didn’t offer the functionality that people wanted, and since this was the era of single-user operating systems with no memory protection, there was nothing stopping developers from just hitting the hardware directly to get what they wanted. Changing the underlying hardware would break them.

And that’s what happened. IBM was the biggest player, so people targeted IBM’s platform. When BIOS interfaces weren’t sufficient they hit the hardware directly - and even if they weren’t doing that, they’d end up depending on behavioural quirks of IBM’s BIOS implementation. The market for DOS-compatible but not PC-compatible mostly vanished, although there were notable exceptions - in Japan the PC-98 platform achieved significant success, largely as a result of the Japanese market being pretty distinct from the rest of the world at that point in time, but also because it actually handled Japanese at a point where the PC platform was basically restricted to ASCII or minor variants thereof.

So, things remained fairly stable for some time. Underlying hardware changed - the 80286 introduced the ability to access more than a megabyte of address space and would promptly have broken a bunch of things except IBM came up with an utterly terrifying hack that bit me back in 2009, and which ended up sufficiently codified into Intel design that it was one mechanism for breaking the original XBox security. The first 286 PC even introduced a new keyboard controller that supported better keyboards but which remained backwards compatible with the original PC to avoid breaking software. Even when IBM launched the PS/2, the first significant rearchitecture of the PC platform with a brand new expansion bus and associated patents to prevent people cloning it without paying off IBM, they made sure that all the hardware was backwards compatible. For decades, PC compatibility meant not only supporting the officially supported interfaces, it meant supporting the underlying hardware. This is what made it possible to ship install media that was expected to work on any PC, even if you’d need some additional media for hardware-specific drivers. It’s something that still distinguishes the PC market from the ARM desktop market. But it’s not as true as it used to be, and it’s interesting to think about whether it ever was as true as people thought.

Let’s take an extreme case. If I buy a modern laptop, can I run 1981-era DOS on it? The answer is clearly no. First, modern systems largely don’t implement the legacy BIOS. The entire abstraction layer that DOS relies on isn’t there, having been replaced with UEFI. When UEFI first appeared it generally shipped with a Compatibility Support Module, a layer that would translate BIOS interrupts into UEFI calls, allowing vendors to ship hardware with more modern firmware and drivers without having to duplicate them to support older operating systems [1]. Is this system PC compatible? By the strictest of definitions, no.

Ok. But the hardware is broadly the same, right? There’s projects like CSMWrap that allow a CSM to be implemented on top of stock UEFI, so everything that hits BIOS should work just fine. And well yes, assuming they implement the BIOS interfaces fully, anything using the BIOS interfaces will be happy. But what about stuff that doesn’t? Old software is going to expect that my Sound Blaster is going to be on a limited set of IRQs and is going to assume that it’s going to be able to install its own interrupt handler and ACK those on the interrupt controller itself and that’s really not going to work when you have a PCI card that’s been mapped onto some APIC vector, and also if your keyboard is attached via USB or SPI then reading it via the CSM will work (because it’s calling into UEFI to get the actual data) but trying to read the keyboard controller directly won’t [2], so you’re still actually relying on the firmware to do the right thing but it’s not, because the average person who wants to run DOS on a modern computer owns three fursuits and some knee length socks and while you are important and vital and I love you all you’re not enough to actually convince a transglobal megacorp to flip the bit in the chipset that makes all this old stuff work.

But imagine you are, or imagine you’re the sort of person who (like me) thinks writing their own firmware for their weird Chinese Thinkpad knockoff motherboard is a good and sensible use of their time - can you make this work fully? Haha no of course not. Yes, you can probably make sure that the PCI Sound Blaster that’s plugged into a Thunderbolt dock has interrupt routing to something that is absolutely no longer an 8259 but is pretending to be so you can just handle IRQ 5 yourself, and you can probably still even write some SMM code that will make your keyboard work, but what about the corner cases? What if you’re trying to run something built with IBM Pascal 1.0? There’s a risk that it’ll assume that trying to access an address just over 1MB will give it the data stored just above 0, and now it’ll break. It’d work fine on an actual PC, and it won’t work here, so are we PC compatible?

That’s a very interesting abstract question and I’m going to entirely ignore it. Let’s talk about PC graphics [3]. The original PC shipped with two different optional graphics cards - the Monochrome Display Adapter and the Color Graphics Adapter. If you wanted to run games you were doing it on CGA, because MDA had no mechanism to address individual pixels so you could only render full characters. So, even on the original PC, there was software that would run on some hardware but not on other hardware.

Things got worse from there. CGA was, to put it mildly, shit. Even IBM knew this - in 1984 they launched the PCjr, intended to make the PC platform more attractive to home users. As well as maybe the worst keyboard ever to be associated with the IBM brand, IBM added some new video modes that allowed displaying more than 4 colours on screen at once [4], and software that depended on that wouldn’t display correctly on an original PC. Of course, because the PCjr was a complete commercial failure, it wouldn’t display correctly on any future PCs either. This is going to become a theme.

There’s never been a properly specified PC graphics platform. BIOS support for advanced graphics modes [5] ended up specified by VESA rather than IBM, and even then getting good performance involved hitting hardware directly. It wasn’t until Microsoft specced DirectX that anything was broadly usable even if you limited yourself to Microsoft platforms, and this was an OS-level API rather than a hardware one. If you stick to BIOS interfaces then CGA-era code will work fine on graphics hardware produced up until the 20-teens, but if you were trying to hit CGA hardware registers directly then you’re going to have a bad time. This isn’t even a new thing - even if we restrict ourselves to the authentic IBM PC range (and ignore the PCjr), by the time we get to the Enhanced Graphics Adapter we’re not entirely CGA compatible. Is an IBM PC/AT with EGA PC compatible? You’d likely say “yes”, but there’s software written for the original PC that won’t work there.

And, well, let’s go even more basic. The original PC had a well defined CPU frequency and a well defined CPU that would take a well defined number of cycles to execute any given instruction. People could write software that depended on that. When CPUs got faster, some software broke. This resulted in systems with a Turbo Button - a button that would drop the clock rate to something approximating the original PC so stuff would stop breaking. It’s fine, we’d later end up with Windows crashing on fast machines because hardware details will absolutely bleed through.

So, what’s a PC compatible? No modern PC will run the DOS that the original PC ran. If you try hard enough you can get it into a state where it’ll run most old software, as long as it doesn’t have assumptions about memory segmentation or your CPU or want to talk to your GPU directly. And even then it’ll potentially be unusable or crash because time is hard.

The truth is that there’s no way we can technically describe a PC Compatible now - or, honestly, ever. If you sent a modern PC back to 1981 the media would be amazed and also point out that it didn’t run Flight Simulator. “PC Compatible” is a socially defined construct, just like “Woman”. We can get hung up on the details or we can just chill.


  1. Windows 7 is entirely happy to boot on UEFI systems except that it relies on being able to use a BIOS call to set the video mode during boot, which has resulted in things like UEFISeven to make that work on modern systems that don’t provide BIOS compatibility ↩︎

  2. Back in the 90s and early 2000s operating systems didn’t necessarily have native drivers for USB input devices, so there was hardware support for trapping OS accesses to the keyboard controller and redirecting that into System Management Mode, where some software that was invisible to the OS would speak to the USB controller and then fake a response. Anyway, that’s how I made a laptop that could boot unmodified Mac OS X ↩︎

  3. (my name will not be Wolfwings Shadowflight) ↩︎

  4. Yes yes ok 8088 MPH demonstrates that if you really want to you can do better than that on CGA ↩︎

  5. and by advanced we’re still talking about the 90s, don’t get excited ↩︎

Christian Hergert

@hergertme

pgsql-glib

Much like the s3-glib library I put together recently, I had another itch to scratch. What would it look like to have a PostgreSQL driver that used futures and fibers with libdex? This was something I wondered about more than a decade ago when writing the libmongoc network driver for 10gen (later MongoDB).

pgsql-glib is that library: a wrapper I made around the venerable libpq PostgreSQL state-machine library. It runs operations on fibers and awaits FD I/O, making something that feels synchronous even though it is not.

It also allows for something more “RAII-like” using g_autoptr(), which interacts very nicely with fibers.
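
As a rough illustration of what that combination can look like, here is a minimal sketch. The PgsqlConnection/PgsqlResult types and pgsql_connection_execute() below are placeholder names, not the real pgsql-glib API (see the documentation linked below); the point is that the call suspends the current fiber while waiting on the socket, and g_autoptr() cleans up when the scope exits.

    #include <glib.h>

    /* Hypothetical sketch: the pgsql_* names are illustrative only.
     * On a fiber, the query call yields instead of blocking the thread,
     * and g_autoptr() gives the "RAII-like" cleanup mentioned above. */
    static void
    list_users (PgsqlConnection *connection)
    {
      g_autoptr(GError) error = NULL;
      g_autoptr(PgsqlResult) result =
        pgsql_connection_execute (connection, "SELECT id, name FROM users", &error);

      if (result == NULL)
        {
          g_warning ("Query failed: %s", error->message);
          return;
        }

      /* ...iterate over the result rows here... */
    }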

API Documentation can be found here.

Felipe Borges

@felipeborges

Looking for Mentors for Google Summer of Code 2026

It is once again that pre-GSoC time of year where I go around asking GNOME developers for project ideas they are willing to mentor during Google Summer of Code. GSoC is approaching fast, and we should aim to get a preliminary list of project ideas by the end of January.

Internships offer an opportunity for new contributors to join our community and help us build the software we love.

@Mentors, please submit new proposals in our Project Ideas GitLab repository.

Proposals will be reviewed by the GNOME Internship Committee and posted at https://gsoc.gnome.org/2026. If you have any questions, please don’t hesitate to contact us.

Lennart Poettering

@mezcalero

Mastodon Stories for systemd v259

On Dec 17 we released systemd v259 into the wild.

In the weeks leading up to that release (and since then) I have posted a series of serieses of posts to Mastodon about key new features in this release, under the #systemd259 hash tag. In case you aren't using Mastodon, but would like to read up, here's a list of all 25 posts:

I intend to do a similar series of serieses of posts for the next systemd release (v260), hence if you haven't left tech Twitter for Mastodon yet, now is the opportunity.

My series for v260 will begin in a few weeks most likely, under the #systemd260 hash tag.

In case you are interested, here is the corresponding blog story for systemd v258, here for v257, and here for v256.

Sophie Herold

@sophieherold

GNOME in 2025: Some Numbers

As some of you know, I like aggregating data. So here are some random numbers about GNOME in 2025. This post is not about making any point with the numbers I’m sharing. It’s just for fun.

So, what is GNOME? In total, 6 692 516 lines of code. Of that, 1 611 526 are from apps. The remaining 5 080 990 are in libraries and other components, like the GNOME Shell. These numbers cover “the GNOME ecosystem,” that is, the combination of all Core, Development Tools, and Circle projects. This currently includes exactly 100 apps. We summarize everything that’s not an app under the name “components.”

GNOME 48 was at least 90 % translated for 33 languages. In GNOME 49 this increased to 36 languages. That’s a record in the data that I have, going back to GNOME 3.36 in 2020. The languages besides American English are: Basque, Brazilian Portuguese, British English, Bulgarian, Catalan, Chinese (China), Czech, Danish, Dutch, Esperanto, French, Galician, Georgian, German, Greek, Hebrew, Hindi, Hungarian, Indonesian, Italian, Lithuanian, Persian, Polish, Portuguese, Romanian, Russian, Serbian, Serbian (Latin), Slovak, Slovenian, Spanish, Swedish, Turkish, Uighur, and Ukrainian. There are 19 additional languages that are translated 50 % or more. So maybe you can help with translating GNOME to Belarusian, Catalan (Valencian), Chinese (Taiwan), Croatian, Finnish, Friulian, Icelandic, Japanese, Kazakh, Korean, Latvian, Malay, Nepali, Norwegian Bokmål, Occitan, Punjabi, Thai, Uzbek (Latin), or Vietnamese in 2026?

Talking about languages. What programming languages are used in GNOME? Let’s look at GNOME Core apps first. Almost half of all apps are written in C. Note that for these data, we are counting TypeScript under JavaScript.

C: 44.8%, Vala: 20.7%, Rust: 10.3%, Python: 6.9%, JavaScript: 13.8%, C++: 3.45%.
Share of GNOME Core apps by programming language.

The language distribution for GNOME Circle apps looks quite different, with Rust (41.7 %) and Python (29.2 %) being the most popular languages.

C: 6%, Vala: 13%, Rust: 42%, Python: 29%, JavaScript: 10%, Crystal: 1%
Share of GNOME Circle apps by programming language.

Overall, we can see that with C, JavaScript/TypeScript, Python, Rust, and Vala, there are five programming languages that are commonly used for app development within the GNOME ecosystem.

But what about components within GNOME? The default language for libraries is still C. More than three-quarters of the lines of code for components are written in it. The components with the largest codebase are GTK (820 000), GLib (560 000), and Mutter (390 000).

Lines of code for components within the GNOME ecosystem.

But what about the remaining quarter? Lines of code are, of course, a questionable metric. For Rust, close to 400 000 lines of code are actually bindings for libraries. The majority of this code is automatically generated. Similarly, 100 000 lines of Vala code are in the Vala repository itself. But there are important components within GNOME that are not written in C: Orca, our screen reader, boasts 110 000 lines of Python code. Half of GNOME Shell is written in JavaScript, adding 65 000 lines of JavaScript code. Librsvg and glycin are libraries written in Rust that also provide bindings to other languages.

We are slowly approaching the end of the show. Let’s take a look at the GNOME Circle apps most popular on Flathub. I don’t trust the installation statistics on Flathub, since I have seen indications that for some apps, the number of installations is surprisingly high and cyclic. My guess is that some Linux distribution is installing these apps regularly as part of their test pipeline. Therefore, we instead check how many people have installed the latest update for the app. Not a perfect number either, but something that looks much more reliable. The top five apps are: Blanket, Eyedropper, Newsflash, Fragments, and Shortwave. Sometimes it takes less than 2 000 lines of code to create popular software.

And there are 862 people supporting the GNOME Foundation with a recurring donation. Will you join them for 2026 on donate.gnome.org?

Joan Torres López

@joantolo

Remote Login Design

GNOME 46 introduced remote login. This post explores the architecture primarily through diagrams and tables for a clearer understanding.

Components overview


There are 4 components involved: the remote client, the GRD dispatcher daemon, the GRD handover daemon and the GDM daemon:

  • Remote Client (remote user): connects remotely via RDP and supports the RDP Server Redirection method.
  • Dispatcher (GRD, system-level daemon): handles initial connections, peeks the routing token, and orchestrates handovers.
  • Handover (GRD, user-level daemon): runs inside sessions (greeter or user) and provides the remote client with remote access to the session.
  • GDM (system-level daemon): manages displays and sessions (greeter or user).

API Overview


The components communicate with each other through D-Bus interfaces; a small illustrative GDBus sketch follows at the end of this overview:

Exposed by GDM

org.gnome.DisplayManager.RemoteDisplayFactory

    • Method CreateRemoteDisplay
      Requests GDM to start a headless greeter. Accepts a RemoteId argument.

org.gnome.DisplayManager.RemoteDisplay

    • Property RemoteId
      The unique ID generated by the Dispatcher.
    • Property SessionId
      The session ID of the created session wrapped by this display.

 

Exposed by GRD Dispatcher

org.gnome.RemoteDesktop.Dispatcher

    • Method RequestHandover
      Returns the object path of the Handover interface matching the caller’s session ID.

org.gnome.RemoteDesktop.Handover

Dynamically created. One for each remote session.

    • Method StartHandover
      Initiates the handover process. Receives one-time username/password, returns certificate and key used by dispatcher.
    • Method TakeClient
      Gives the file descriptor of the remote client’s connection to the caller.
    • Signal TakeClientReady
      Informs that a file descriptor is ready to be taken.
    • Signal RedirectClient
      Instructs the source session to redirect the remote client to the destination session.
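
To make the shape of these calls a bit more concrete, here is a minimal GDBus sketch of a handover daemon asking the dispatcher for its Handover object path. Only the interface and method names come from the overview above; the bus type, well-known name, and object path below are assumptions for illustration, not taken from the actual implementation.

    #include <gio/gio.h>

    /* Illustrative only: the bus name and object path are guesses, and the
     * dispatcher may live on a different bus in the real implementation. */
    static char *
    request_handover_path (GError **error)
    {
      g_autoptr(GDBusConnection) bus = g_bus_get_sync (G_BUS_TYPE_SYSTEM, NULL, error);
      if (bus == NULL)
        return NULL;

      g_autoptr(GVariant) reply =
        g_dbus_connection_call_sync (bus,
                                     "org.gnome.RemoteDesktop",             /* assumed bus name */
                                     "/org/gnome/RemoteDesktop/Dispatcher", /* assumed object path */
                                     "org.gnome.RemoteDesktop.Dispatcher",
                                     "RequestHandover",
                                     NULL,                    /* no arguments */
                                     G_VARIANT_TYPE ("(o)"),  /* returns an object path */
                                     G_DBUS_CALL_FLAGS_NONE,
                                     -1, NULL, error);
      if (reply == NULL)
        return NULL;

      char *handover_path = NULL;
      g_variant_get (reply, "(o)", &handover_path);
      return handover_path;
    }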

Flow Overview


Flow phase 1: Initial connection to greeter session

1. Connection:

    • Dispatcher receives a new connection from a Remote Client. Peeks the first bytes and doesn’t find a routing token. This means this is a new connection.

2. Authentication:

    • Dispatcher authenticates the Remote Client using system level credentials.

3. Session Request:

    • Dispatcher generates a unique remote_id (also known as routing token), and calls CreateRemoteDisplay() on GDM with this remote_id.

4. Registration:

    • GDM starts a headless greeter session.
    • GDM exposes RemoteDisplay object with RemoteId and SessionId.
    • Dispatcher detects new object. Matches RemoteId. Creates Handover D-Bus interface for this SessionId.

5. Handover Setup:

    • Handover is started in the headless greeter session.
    • Handover calls RequestHandover() to get its D-Bus object path with the Handover interface.
    • Handover calls StartHandover() with autogenerated one-time credentials. Gets from that call the certificate and key (to be used when Remote Client connects).

6. Redirection (The “Handover”):

    • Dispatcher performs RDP Server Redirection sending the one-time credentials, routing token (remote_id) and certificate.
    • Remote Client disconnects and reconnects.
    • Dispatcher peeks bytes; finds valid routing token.
    • Dispatcher emits TakeClientReady on the Handover interface.

7. Finalization:

    • Handover calls TakeClient() and gets the file descriptor of the Remote Client‘s connection.
    • Remote Client is connected to the headless greeter session.

Flow phase 2: Session transition (from greeter to user)

1. Session Creation:

    • User authenticates.
    • GDM starts a headless user session.

2. Registration:

    • GDM exposes a new RemoteDisplay with the same RemoteId and a new SessionId.
    • Dispatcher detects a RemoteId collision.
    • State Update: Dispatcher creates a new Handover D-Bus interface (dst) to be used by the New Handover in the headless user session.
    • The Existing Handover remains connected to its original Handover interface (src).

3. Handover Setup:

    • New Handover is started in the headless user session.
    • New Handover calls RequestHandover() to obtain its D-Bus object path with the Handover interface.
    • New Handover calls StartHandover() with new one-time credentials and receives the certificate and key.

4. Redirection Chain:

    • Dispatcher receives StartHandover() from dst.
    • Dispatcher emits RedirectClient on src (headless greeter session) with the new one-time credentials.
    • Existing Handover receives the signal and performs RDP Server Redirection.

5. Reconnection:

    • Remote Client disconnects and reconnects.
    • Dispatcher peeks bytes and finds a valid routing token (remote_id).
    • Dispatcher resolves the remote_id to the destination Handover (dst).
    • Dispatcher emits TakeClientReady on dst.

6. Finalization:

    • New Handover calls TakeClient() and receives the file descriptor of the Remote Client‘s connection.
    • Remote Client is connected to the headless user session.

 

Disclaimer

Please note that while this post outlines the basic architectural structure and logic, it is not guaranteed to match the actual implementation exactly at any given time. The codebase is subject to ongoing refactoring and potential improvements.

Marcus Lundblad

@mlundblad

Xmas & New Year's Maps


 

 

It's that time of year again, (Northern Hemisphere) winter, when the year's drawing to an end. Which means it's time for the traditional Christmas Maps blog post.

 

Sometimes you hear claims about Santa Claus living at the North Pole (though in Rovaniemi, Finland, I bet they would disagree…). Turns out there's a North Pole near Fairbanks, Alaska as well:


  😄

OK, enough smalltalk… now on to what's happened since the last update (for the GNOME 49 release in September).

Sidebar Redesign

Our old design for showing information about places has revolved around the trusty old “popover” menu design, which has served us pretty well. But it also had its drawbacks.

For one, it was never a good fit for small screen sizes (such as on phones). Therefore we had our own “home-made” place bar design, with a separate dialog opening up when clicking the bar to reveal the full details.

After some discussions and thinking about this, I decided to try out a new approach utilizing the MultiLayout component from libadwaita, which gives us an adaptive “auxiliary view” widget that works as a sidebar on desktop and a bottom sheet on mobile.

Now the routeplanner and place information views have been consolidated to reside in this new widget.

Clicking the route button will now open the sidebar showing the routeplanner, or the bottom sheet, depending on the mode.

And clicking a place icon on the map, or selecting a search result, will open the place information, also shown in the sidebar or bottom sheet.

Route planner showing in sidebar in desktop mode

Routeplanner showing in bottom sheet in mobile/narrow mode

Routeplanner showing public transit itineraries in bottom sheet

Showing place information in sidebar in desktop mode

Showing place information in bottom sheet in mobile mode

 Redesigning Public Transit Itinerary Rendering

The display of public transit itineraries has also seen an overhaul.

First I did a bit of a redesign of the rows representing journey legs, taking some cues from the Adwaita ExpanderRow style. This improves a bit on the old style, which had been carried over from GTK 3.

List of journey legs, with the arrow indicating the possibility to expand to reveal more information

  

List of journey legs, with one leg “expanded” to show intermediate stops made by a train

 

Improving further on this, Jalen Ng contributed a merge request utilizing Adwaita WrapBoxes in the overview list to show more complete information about the different steps of each presented itinerary option when searching for travel options with public transit.

Showing list of transit itineraries each consisting of multiple journey legs

Jalen also started a redesign of the rendering of itineraries (this merge request is still being worked on).

Redesign of the transit itinerary display, showing each leg as a “track segment” using the line's color

 Hide Your Location

We also added the option to hide the marker showing your own location. One use for this is, for example, if you want to make screenshots without revealing your exact location.

Menu to toggle showing your location marker

And that's not all…

On top of this, some other things: James Westman added support for global-state expressions to libshumate's vector tile implementation. This should allow us to, e.g., refactor the implementation of light and dark styles and language support in our map style without “recompiling” the stylesheet at runtime.
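
As a rough idea of what such an expression can look like (a sketch only: the "dark" key name is made up, and the exact syntax follows the general vector style expression spec rather than anything Maps-specific), a layer color could switch on a global flag like this:

    ["case",
      ["==", ["global-state", "dark"], true], "#242424",
      "#f6f5f4"]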

James also fixed a bug that sometimes caused the application to freeze when dragging the window between screens while a route is being displayed.

This fix has been backported to the 49.3 and 48.8 releases, which have been tagged today as an early holiday gift.

And that's all for now, merry holidays, and happy new year!

Aryan Kaushik

@aryan20

Introducing Open Forms

Introducing Open Forms!

The problem

Ever been to a conference where you set up a booth or try to get quick feedback by running around the grounds, and felt the awesome feeling of -

  • Captive portal logout
  • Timeouts
  • Flaky Wi-Fi drivers on Linux devices
  • Poor bandwidth or dead zones
  • Do I need to continue?

Meme showcasing wifi fails when using forms

While setting up the Ubuntu booth, we saw an issue: the Wi-Fi on the Linux tablet was not working.

After lots of effort it started to work, but as soon as we logged into the captive portal, the chip failed and no Wi-Fi was detected. And the solution? A trusty old restart, just for the cycle to repeat. (Just to be clear, the Wi-Fi was great, but it didn't like that device.)

Meme showing a person giving their child a book on 'Wifi drivers on Linux' as something to cry about

We eventually fixed that by providing a hotspot from a mobile phone, but that locked the phone to the booth, or else it would disconnect.

Now, it may seem like a one-off inconvenience, but at any conference, summit, or event, this pattern can be seen where one of the issues listed above occurs repeatedly.

So, I thought, there might be something to fix this. But no such project existed that wasn't reliant on the web :(

The solution

So, I built one: a native, local-first, open source, non-answer-peeking form application.

With Open Forms, your data stays on your device; it works without a network and never depends on external services. This makes it reliable in chaotic, unreliable, or privacy-first environments.

Just provide it a JSON config (yes, I know, I'm trying to provide a GUI for it instead), select the CSV location, and start collecting form inputs.
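
To give a feel for what such a config might look like, here is a purely illustrative sketch; the actual key names and structure used by Open Forms may differ, so check the repository for the real format:

    {
      "title": "Booth feedback",
      "fields": [
        { "type": "entry",    "label": "Name", "required": true },
        { "type": "radio",    "label": "How did you hear about us?",
          "options": ["Social media", "A friend", "Other"] },
        { "type": "checkbox", "label": "Keep me posted about future events" },
        { "type": "spinner",  "label": "Rate the booth (1-10)", "min": 1, "max": 10 }
      ]
    }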

Open Forms opening page

No waiting for WiFi, no unnecessary battery drains, no timeouts, just simple open forms.

The application is pretty new (built over the weekend) and supports -

  • Taking input via Entry, Checkbox, Radio, Date, Spinner (Int range)
  • Outputs submissions to a CSV
  • Can mark fields to be required before submissions
  • Add images, headings, and CSS styling
  • Multi-tabbed to open more than one form at a time.

Open Forms inputs Open Forms inputs continued

Planned features

  • Creating form configs directly from the GUI
  • A11y improvements (Yes, Federico, I swear I will improve that)
  • Hosting on Flathub (would love guidance regarding it)

But, any software can be guided properly only by its users! So, If you’ve ever run into Wi-Fi issues while collecting data at events, I’d love for you to try Open Forms and share feedback, feature requests, bug reports, or even complaints.

The repository is at Open Forms GitHub. The latest release is packaged as a Flatpak.

Engagement Blog

@engagement

GUADEC 2026 will be held in A Coruña, Spain

We are happy to announce that GUADEC 2026 will take place in A Coruña, Spain, from July 16th to 21st, 2026.
As in recent years, the conference will be organized as a hybrid event, giving participants the opportunity to join either in person or online.

The first three days, July 16th–18th, will be dedicated to talks, followed by BoF and workshop sessions on July 19th and 20th. The final day, July 21st, will be a self-organized free exploration day.

While the GUADEC team will share ideas and suggestions for this day, there will be no officially organized activities. The call for proposals and registration will open soon, and further updates will be shared on guadec.org in the coming weeks.

Organizations interested in sponsoring GUADEC 2026 are welcome to contact us at guadec@gnome.org.

About the city:
Hosted on Spain’s Atlantic coast, A Coruña offers a memorable setting for the conference, from the iconic Tower of Hercules, the world’s oldest Roman lighthouse still in use, to one of Europe’s longest seaside promenades and the city’s famous glass-fronted balconies that give it the nickname “City of Glass.”

Mid-December News

Misc news for the past month about the gedit text editor, mid-December edition! (Some sections are a bit technical).

(By the way, the "mid-month" news is especially useful for December/January, when one thinks about it ;-) ).

gedit now refuses to load very large files

It was part of the common-bugs, and it is now fixed! New versions of gedit will refuse to load very large files or content read from stdin.

The limit is configurable with the GSettings key: org.gnome.gedit.preferences.editor max-file-size

By default the limit is set to 200 MB. The setting is not exposed in the Preferences dialog (there are a few other such settings).
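
For example, to raise the limit you can set the key from the command line. This is a sketch: the value unit appears to be megabytes given the 200 MB default, but double-check the schema description before relying on that.

    gsettings set org.gnome.gedit.preferences.editor max-file-size 500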

There are technically two cases:

  • First the file size - if available - is checked. If it exceeds the limit, the error is directly returned without trying to read the content.
  • Then the content is read and it is ensured that the maximum number of bytes is not reached. The check here is necessary for reading stdin, for which the file size doesn't exist. And even when the file size information is available, the double-check is necessary to avoid a potential TOC/TOU (time-of-check to time-of-use) problem.

It is planned to improve this and offer to load the content truncated.

Windows improvements

I've fixed some compilation warnings and unit test failures on MS Windows, and done some packaging work, including contributing to MINGW-packages (part of MSYS2).

Other work in libgedit-gtksourceview

Various work on the completion framework, including some code simplifications.

Plus what can be called "gardening tasks": various code maintenance stuff.

gspell CI for tarballs

AsciiWolf and Jordan Petridis have contributed to gspell to add CI for tarballs. Thanks to them!

I Lived a Similar Trauma Rob Reiner's Family Faces & Shame on Trump

I posted the following on my Fediverse (via Mastodon) account. I'm reposting the whole seven posts here as written there, but I hope folks will take a look at that thread, as people are engaging in conversation over there that might be worth reading if what I have to say interests you. (The remainder of the post is the same that can be found in the Fediverse posts linked throughout.)

I suppose Fediverse isn't the place people are discussing Rob Reiner. But after 36 hours of deliberating whether to say anything, I feel compelled. This thread will be long, but I start w/ most important part:

It's an “open secret” in the FOSS community that in March 2017 my brother murdered our mother. About 3k ppl/year in USA have this experience, so it's a statistical reality that someone else in FOSS experienced similar. If so, you're welcome in my PMs to discuss if you need support… (1/7)

… Traumatic loss due to murder is different than losing your grandparent/parent to age-related ailments (& is even different than losing a young person to a disease like cancer). The “a fellow family member did it” brings permanent surrealism to your daily life. Nothing good in your life that comes later is ever all that good. I know from direct experience this is what Rob Reiner's family now faces. It's chaos; it divides families forever: dysfunctional family takes on a new “expert” level… (2/7)

…as one example: my family was immediately divided about punishment. Some of my mother's relatives wanted prosecution to seek death penalty. I knew that my brother was mentally ill enough that jail or prison *would* get him killed in a prison dispute eventually,so I met clandestinely w/my brother's public defender (during funeral planning!) to get him moved to a criminal mental health facility instead of a regular prison. If they read this, it'll first time my family will find out I did that…(3/7)

…Trump's political rise (for me) links up: 5 weeks into Trump's 1ˢᵗ term, my brother murdered my mother. My (then 33yr-old) brother was severely mentally ill from birth — yet escalated to murder only then. IMO, it wasn't coincidence. My brother left voicemail approximately 5 hours before the murder stating his intent to murder & described an elaborate political delusion as the impetus. ∃ unintended & dangerous consequences of inflammatory political rhetoric on the mentally ill!…(4/7)

…I'm compelled to speak publicly — for first time ≈10 yrs after the murder — precisely b/c of Trump's response.

Trump endorsed the idea that those who oppose him encourage their own murder from the mentally ill. Indeed, he said that those who oppose him are *themselves causing* mental illnesses in those around them, & that his political opponents should *expect* violence from their family members (who were apparently driven to mental illness from your opposition to Trump!)… (5/7)

…Trump's actual words:

Rob Reiner, tortured & struggling,but once…talented movie director & comedy star, has passed away, together w/ his wife…due to the anger he caused others through his massive, unyielding, & incurable affliction w/ a mind crippling disease known as TRUMP DERANGEMENT SYNDROME…He was known to have driven people CRAZY by his raging obsession of…Trump, w/ his obvious paranoia reaching new heights as [my] Administration surpassed all goals and expectations of greatness…
(6/7)

My family became ultra-pro-Trump after my mom's murder. My mom hated politics: she was annoyed *both* if I touted my social democratic politics & if my dad & his family stated their crypto-fascist views. Every death leaves a hole in a community's political fabric. 9+ years out, I'm ostracized from my family b/c I'm anti-Trump. Trump stated perhaps what my family felt but didn't say: those who don't support Trump are at fault when those who fail to support Trump are murdered. (7/7)

[ Finally, I want to also quote this one reply I also posted in the same thread: I ask everyone, now that I've stated this public, that I *know* you're going to want to search the Internet for it, & you will find a lot. Please, please, keep in mind that the Police Department & others basically lied to the public about some of the facts of the case. I seriously considered suing them for it, but ultimately it wasn't worth my time. But, please everyone ask me if you are curious about any of the truth of the details of the crime & its aftermath …

Hari Rana

@theevilskeleton

Please Fund My Continued Accessibility Work on GNOME!

Hey, I have been under distress lately due to personal circumstances that are outside my control. I cannot find a permanent job that allows me to function, I am not eligible for government benefits, my grant proposals to work on free and open-source projects got rejected, and paid internships are quite difficult to find, especially when many of them prioritize new contributors. Essentially, I have no stable monthly income that allows me to sustain myself.

Nowadays, I mostly volunteer to improve accessibility throughout GNOME apps, either by enhancing the user experience for people with disabilities or by enabling them to use the apps at all. I helped make most of GNOME Calendar accessible with a keyboard and screen reader, with additional ongoing effort involving merge requests !564 and !598 to make the month view accessible, all of which is an effort no company has ever contributed to, or would ever contribute to, financially. These merge requests require literally thousands of hours of research, development, and testing, enough to sustain me for several years if I were employed.

I would really appreciate any kind of donation, especially ones that happen periodically to increase my monthly income. These donations will allow me to sustain myself while working on accessibility throughout GNOME, essentially ‘crowdfunding’ development without doing it on behalf of the GNOME Foundation or another organization.

Donate on Liberapay

Support on Ko-fi

Sponsor on GitHub

Send via PayPal

Michael Catanzaro

@mcatanzaro

Significant Drag and Drop Vulnerability in WebKitGTK

WebKitGTK 2.50.3 contains a workaround for CVE-2025-13947, an issue that allows websites to exfiltrate files from your filesystem. If you’re using Epiphany or any other web browser based on WebKitGTK, then you should immediately update to 2.50.3.

Websites may attach file URLs to drag sources. When the drag source is dropped onto a drop target, the website can read the file data for its chosen files, without any restrictions. Oops. Suffice to say, this is not how drag and drop is supposed to work. Websites should not be able to choose for themselves which files to read from your filesystem; only the user is supposed to be able to make that choice, by dragging the file from an external application. That is, drag sources created by websites should not receive file access.

I failed to find the correct way to fix this bug in the two afternoons I allowed myself to work on this issue, so instead my overly-broad solution was to disable file access for all drags. With this workaround, the website will only receive the list of file URLs rather than the file contents.

Apple platforms are not affected by this issue.

Rewriting Cartridges

Gamepad support, collections, instant imports, and more!

Cartridges is, in my biased opinion, the best game launcher out there. To use it, you do not need to wait 2 minutes for a memory-hungry Electron app to start up before you can start looking for what you want to play. You don’t need to sign into anything. You don’t need to spend 20 minutes configuring it. You don’t need to sacrifice your app menu, filling it with low-resolution icons designed for Windows that don’t disappear after you uninstall a game. You install the app, click “Import”, and all your games from anywhere on your computer magically appear.

It was also the first app I ever wrote. From this, you can probably already guess that it is an unmaintainable mess. It’s both under- and over-engineered, it is full of bad practices, and most importantly, I don’t trust it. I’ve learned a lot since then. I’ve learned so much that if I were to write the app again, I would approach it completely differently. Since Cartridges is the preferred way to launch games for so many other people as well, I feel it is my duty as a maintainer to give it my best shot and do just that: rewrite the app from scratch.

Zoey, Jamie, and I have been working on this for the past two weeks, and we’ve made really good progress so far. Beyond stability improvements, the new base has allowed us to work on the following new features:

Gamepad Support

Support for controller navigation has been something I’ve attempted in the past but had to give up as it proved too challenging. That’s why I was overjoyed when Zoey stepped up to work on it. In the currently open pull request, you can already launch games and navigate many parts of the UI with a controller. You can donate to her on Ko-fi if you would like to support the feature’s development. Planned future enhancements include navigating menus, remapping, and button prompts.

Collections

Easily the most requested feature: a lot of people asked for a way to manually organize their games. I initially rejected the idea as I wanted Cartridges to remain a single-click game launcher, but softened up to it over time as more and more people requested it, since it’s an optional dimension that you can just ignore if you don’t use it. As such, I’m happy to say that Jamie has been working on categorization, with an initial implementation ready for review as of writing this. You can support her on Liberapay or GitHub Sponsors.

Instant Imports

I mentioned that Cartridges’ main selling point is being a single-click launcher. This is as good as it gets, right? Wrong: how about zero clicks?

The app has been reworked to be even more magical. Instead of pulling data into Cartridges from other apps at the request of the user, it will now read data directly from other apps without the need to keep games in sync manually. You will still be able to edit the details of any game, but only these edits will be saved, meaning that if any information on, let’s say, Steam gets updated, Cartridges will automatically reflect these changes.

The existing app has settings to import and remove games automatically, but this has been a band-aid solution, and it will be nice to finally do this properly, saving you time and storage space, and sparing you from conflicts.

To allow for this, I also changed the way the Steam source works to fetch all data from disk instead of making calls to Steam’s web API. This was the only source that relied on the network, so all imports should now be instant. Just install the app and all your games from anywhere on your computer appear. How cool is that? :3

And More

We have some ideas for the longer term involving installing games, launching more apps, and viewing more game data. There are no concrete plans for any of these, but it would be nice to turn Cartridges into a less clunky replacement for most of Steam Big Picture.

And of course, we’ve already made many quality of life improvements and bug fixes. Parts of the interface have been redesigned to work better and look nicer. You can expect many issues to be closed once the rewrite is stable. Speaking of…

Timeline

We would like to have feature parity with the existing app. The new app will be released under the same name, as if it were just a regular update, so no action will be required to get the new features.

We’re aiming to release the new version sometime next year; I’m afraid I can’t be more precise than that. It could take three more months, it could take 12. We all have our own lives and work on the app as a side project, so we’ll see how much time we can dedicate to it.

If you would like to keep up with development, you can watch open pull requests on Codeberg targeting the rewrite branch. You can also join the Cartridges Discord server.

Thank you again Jamie and Zoey for your efforts!

Jakub Steiner

@jimmac

Dithering

One of the new additions to the GNOME 49 wallpaper set is Dithered Sun by Tobias. It uses dithering not as a technical workaround for color banding, but as an artistic device.

Halftone app

Tobias initially planned to use Halftone — a great example of a GNOME app with a focused scope and a pleasantly streamlined experience. However, I suggested that a custom dithering method and finer control over color depth would help execute the idea better. A long time ago, Hans Peter Jensen responded to my request for arbitrary color-depth dithering in GIMP by writing a custom GEGL op.

Now, since the younger generation may be understandably intimidated by GIMP’s somewhat… vintage interface, I promised to write a short guide on how to process your images to get a nice ordered dither pattern without going overboard on reducing colors. And with only a bit of time passing since the amazing GUADEC in Brescia, I’m finally delivering on that promise. Better late than later.

GEGL dithering op

I’ve historically used the GEGL dithering operation to work around potential color banding on lower-quality displays. In Tobias’ wallpaper, though, the dithering is a core element of the artwork itself. While it can cause issues when scaling (filtering can introduce moiré patterns), there’s a real beauty to the structured patterns of Bayer dithering.

You will find the GEGL op in the Colors > Dither menu. The filter/op parameters don’t allow you to set the number of colors directly, only the per-channel color depth (in bits). For full-color dithers I tend to use 12-bit. I personally like the Bayer ordered dither, though there are plenty of algorithms to choose from, and depending on your artwork, another might suit you better. I usually save my preferred settings as a preset for easier recall next time (find Presets at the top of the dialog).

Happy dithering!