March 24, 2025

Writing your own C++ standard library from scratch

The C++ standard library (also known as the STL) is, without a doubt, an astounding piece of work. Its scope, performance and incredible backwards compatibility have taken decades of work by many of the world's best programmers. My hat's off to all those people who have contributed to it.

That is not to say that it is without its problems. The biggest one is the absolutely abysmal compile times, but unreadability and certain suboptimalities caused by strict backwards compatibility are also near the top of the list. In fact, it could be argued that most of the things people really dislike about C++ are features of the STL rather than the language itself. Fortunately, using the STL is not mandatory. If you are crazy enough, you can disable it completely and build your own standard library in the best Bender style.

One of the main advantages of being an unemployed-by-choice open source developer is that you can do all of that if you wish. There are no incompetent middle damagers hovering over your shoulder to ensure you are "producing immediate customer value" rather than "wasting time on useless polishing that does not produce immediate customer value".

It's my time, and I'll waste it if I want to!

What's in it?

The biggest design questions of a standard library are scope and the "feel" of the API. Rather than spending time on design, we steal it. Thus, when in doubt, read the Python stdlib documentation and replicate it. Hence the name of the library: pystd.

The test app

To keep the scope meaningful, we start by writing only enough of a stdlib to build an app that reads a text file, validates it as UTF-8, splits the contents into words, counts how many times each word appears in the file, and prints all the words and their counts sorted by decreasing count.

This requires, at least:

  • File handling
  • Strings
  • UTF-8 validation
  • A hash map
  • A vector
  • Sorting

The training wheels come off

The code is available in this GitHub repo for those who want to follow along at home.

Disabling the STL is fairly easy (with Linux+GCC at least) and requires only these two Meson statements:

add_global_arguments('-nostdinc++', language: 'cpp')
add_global_link_arguments('-nostdlib++', '-lsupc++', language: 'cpp')

The supc++ library is (according to Stack Overflow) a support library GCC needs to implement core language features. Now the stdlib is off, and it is time to implement everything with sticks, stones and duct tape.

The outcome

Once you have implemented everything discussed above, plus auxiliary stuff like a hashing framework, the main application looks like this.
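
To give a feel for the shape of that program, here is a rough sketch; the pystd type and method names used below are hypothetical stand-ins, not the library's actual API.

#include <pystd2025.hpp>

namespace pystd = pystd2025;

// Hypothetical sketch only: the types and methods below are illustrative.
int main(int, char **argv) {
    pystd::File f(argv[1], "r");                     // file handling
    pystd::U8String text(f.read_all());              // UTF-8 validation on construction
    pystd::HashMap<pystd::U8String, size_t> counts;  // hash map
    for (const auto &word : text.split_whitespace()) {
        ++counts[word];
    }
    pystd::Vector<pystd::Pair<pystd::U8String, size_t>> sorted;
    for (const auto &entry : counts) {
        sorted.push_back(entry);
    }
    pystd::sort(sorted, [](const auto &a, const auto &b) { return a.second > b.second; });
    for (const auto &[word, count] : sorted) {
        pystd::print("{} {}", word, count);
    }
    return 0;
}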

The end result is both Valgrind and Asan clean. There is one chunk of unreleased memory, but that comes from supc++. There is probably UB in the implementation. But it should be the good kind of UB: the kind that, if it ever stopped working, would break the entire Linux userspace, because everything depends on it working "as expected".

All of this took fewer than 1000 lines of code in the library itself (including a regex implementation that is not actually used). For comparison, merely including vector from the STL brings in 27 thousand lines of code.

Comparison to an STL version

Converting this code to use the STL is fairly simple and only requires changing some types and fine-tuning the API. The main difference is that the STL version does not validate that the input is UTF-8, as there is no builtin function for that.
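
To give an idea of what that conversion amounts to, here is a rough sketch of the counting-and-sorting core written against the STL; it is not the author's converted code, just an illustration of the standard types involved.

#include <algorithm>
#include <cstdio>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Count the words and print them sorted by decreasing count.
void print_word_counts(const std::vector<std::string> &words) {
    std::unordered_map<std::string, size_t> counts;
    for (const auto &w : words) {
        ++counts[w];
    }
    std::vector<std::pair<std::string, size_t>> sorted(counts.begin(), counts.end());
    std::sort(sorted.begin(), sorted.end(),
              [](const auto &a, const auto &b) { return a.second > b.second; });
    for (const auto &[word, count] : sorted) {
        std::printf("%s %zu\n", word.c_str(), count);
    }
}

Now we can compare the two.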

Runtime for both is 0.001 to 0.002 seconds on the small test file I used. Pystd is not noticeably slower than the STL version, which is enough for our purposes. It almost certainly scales worse because there has been zero performance work on it.

Compiling the pystd version with -O2 takes 0.3 seconds whereas the STL version takes 1.2 seconds. The measurements were done on a Ryzen 7 3700X processor. 

The executable's unstripped size is 349k for STL and 309k for pystd. The stripped sizes are 23k and 135k, respectively. Approximately 100k of the pystd executable comes from supc++. In the STL version that probably comes dynamically from libstdc++ (which, on this machine, takes 2.5 MB).

Perfect ABI stability

Designing a standard library is exceedingly difficult because you can't ever really change it. Someone, somewhere, is depending on every misfeature in it, so none of them can ever be changed.

Pystd has been designed both to support perfect ABI stability and to make it possible to change it in arbitrary ways in the future. If you start from scratch, this turns out to be fairly simple.

The sample code above used the pystd namespace. It does not actually exist. Instead it is defined like this in the cpp file:

#include <pystd2025.hpp> 

namespace pystd = pystd2025;

In pystd all code lives in a namespace with a year in it and is stored in a header file with the same year. The idea, then, is that every year you create a new release. This involves copying all stdlib header files to files with the new year and regexping the namespace declarations to match. The old code is now frozen forever (except for bug fixes), whereas the new code can be changed at will because there are zero existing lines of code that depend on it.
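
In other words, cutting a new release might amount to little more than the following (a hypothetical sketch, not a script from the repository):

# Freeze 2025 and open the 2026 namespace for development (illustrative only).
cp pystd2025.hpp pystd2026.hpp
sed -i 's/pystd2025/pystd2026/g' pystd2026.hpp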

End users now have the choice of when to update their code to use newer pystd versions. Even better, if there is an old library that can not be updated, any of the old versions can be used in parallel. For example:

pystd2030::SomeType foo;
pystd2025::SomeType bar(foo.something(), foo.something_else());

Thus if no code is ever updated, everything keeps working. If all code is updated at once, everything works. If only parts of the code are updated, things can still be made to work with some glue code. This puts the maintenance burden on the people whose projects can not be updated, as opposed to every other developer in the world. This is as it should be, and it would also motivate people with broken deps to spend some more effort on getting them fixed.


March 21, 2025

GNOME 48 Core Apps Update

It has been a year and a half since my previous GNOME core apps update. Last time, for GNOME 45, GNOME Photos was removed from GNOME core without replacement, and Loupe and Snapshot (user-facing names: Image Viewer and Camera) entered, replacing Eye of GNOME and Cheese, respectively. There were no core app changes in GNOME 46 or 47.

Now for GNOME 48, Decibels (Audio Player) enters GNOME core. Decibels is intended to close a longstanding flaw in GNOME’s core app set: ever since Totem (Videos) hid its support for opening audio files, there has been no easy way to open an audio file using GNOME core apps. Totem could technically still do so, but you would have to know to attempt it manually. Decibels fixes this problem. Decibels is a simple app that will play your audio file and do nothing else, so it will complement GNOME Music, the music library application. Decibels is maintained by Shema Angelo Verlain and David Keller (thank you!) and is notably the only GNOME core app that is written in TypeScript.

Looking to the future, the GNOME Incubator project tracks future core apps to ensure there is sufficient consensus among GNOME distributors before an app enters core. Currently Papers (future Document Viewer, replacing Evince) and Showtime (future Videos or possibly Video Player, replacing Totem) are still incubating. Applications in Incubator are not yet approved to enter core, so it’s not a done deal yet, but I would expect to see these apps enter core sooner rather than later, hopefully for GNOME 49. Now is the right time for GNOME distributors to provide feedback on these applications: please don’t delay!

On a personal note, I have recently left the GNOME release team to reduce my workload, so I no longer have any direct role in managing the GNOME core apps or Incubation process. But I think it makes sense to continue my tradition of reporting on core app changes anyway!

#192 Forty-eight!

Update on what happened across the GNOME project in the week from March 14 to March 21.

This week we released GNOME 48!

This new major release of GNOME is full of exciting changes, including notification stacking, performance improvements, an enhanced image viewer, a new interface font, new digital wellbeing settings, a new audio player, HDR support, and much more! See the GNOME 48 release notes and developer notes for more information.

Readers who have been following this site will already be aware of some of the new features. If you’d like to follow the development of GNOME 49 (Fall 2025), keep an eye on this page - we’ll be posting exciting news every week!

GNOME Core Apps and Libraries

Mutter

A Wayland display server and X11 window manager and compositor library.

nickdiego reports

Mutter 48 now supports xdg-toplevel-drag-v1, the Wayland protocol that makes it possible to drag toplevel windows during drag-and-drop sessions. Its primary use case is Chromium-style tab dragging. A quick demo is available at https://youtu.be/GAPjtLUBa_E and further details are in this blog post.

GNOME Circle Apps and Libraries

Tobias Bernard reports

Last week Exercise Timer by Lőrinc Serfőző was accepted into Circle! It’s a cute little app to create timers for high-intensity interval training. Congratulations!

https://apps.gnome.org/Hiit

Third Party Projects

JumpLink announces

We’re excited to announce the latest beta release of ts-for-gir v4.0.0-beta.23, our TypeScript type definition generator for GObject Introspection (GIR) files that enhances the development experience in GJS projects!

Key highlights:

  • Fixed Cairo type definitions, resolving long-standing issues
  • Improved GObject property methods and parameter typing
  • Fixed global gettext methods and pkg properties
  • Enhanced string formatting capabilities
  • Updated .gir files and NPM dependencies to latest versions

Mahjongg

A solitaire version of the classic Eastern tile game.

Mat reports

Mahjongg 48.0 has been released, and is available on Flathub. This release contains the following improvements:

  • New sequential and random layout rotation modes
  • On double-click, auto-play the end of the game if all tiles are unblocked
  • Tile rendering uses the GPU instead of CPU
  • Sharper tile textures on high resolution displays
  • Smaller spacing around the board on mobile screens
  • Ctrl-R keyboard shortcut for restarting game
  • Column sorting in the Scores dialog
  • Animations when starting a new game and pausing a game
  • Performance optimizations for tile matching
  • Small visual changes in the Scores/Game Finished dialog

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

Friday links March 21st 2025

Some links to technical articles on various topics that I read.

NVIDIA emulation journey, part 1: RIVA 128 / NV3 architecture history and basic overview - From the developers of the 86Box emulator, some history and a technical summary of the first successful 3D GPU from Nvidia.

zlib-rs is faster than C - An article on the reimplementation of zlib in Rust, and how it performs better. Safe + speed, pick 2.

Understanding ActivityPub, Parts 1 to 4 - A four-part explainer on how ActivityPub, the protocol behind the Fediverse, works.

Memory safety for web fonts - Chrome developers explain how they are replacing FreeType with a memory-safe solution written in Rust, called Skrifa.

GIMP 3.0 released - Aleksandr Prokudin summarizes what's new in the long-awaited GIMP 3.0. Don't forget his weekly writeups of Libre Arts updates.

March 20, 2025

2025-03-20 Thursday

  • Up early, to the venue. Enjoyed some talks, catch-up with people. Gave my first-ever pico talk of only two minutes - encouraging people to apply as interns.
  • Published the next strip: the state of the roads:
    The Open Road to Freedom - strip#10 - the state of the roads

March 19, 2025

2025-03-19 Wednesday

  • Slept poorly, up early, breakfast with Eloy. To the (beautiful) venue to setup - good stuff.
  • Talked to, and gave stickers to the cream of European research universities and got positive and useful feedback from them on their COOL usage. Group photo, Karate display.
  • Caught up with people; out to talk with Frank & Niels.

Introducing GNOME 48

The GNOME Project is proud to announce the release of GNOME 48, ‘Bengaluru’.

GNOME 48 brings several exciting updates, including improved notification stacking for a cleaner experience, better performance with dynamic triple buffering, and the introduction of new fonts like Adwaita Sans & Mono. The release also includes Decibels, a minimalist audio player, new digital well-being features, battery health preservation with an 80% charge limit, and HDR support for compatible displays.

For a detailed breakdown, visit the GNOME 48 Release Notes.

GNOME 48 will be available shortly in many distributions, such as Fedora 42 and Ubuntu 25.04. If you want to try it today, you can look for their beta releases, which will be available very soon.

Getting GNOME

We are also providing our own installer images for debugging and testing features. These images are meant for installation in a VM and require GNOME Boxes with UEFI support. We suggest getting Boxes from Flathub.

GNOME OS Nightly

If you’re looking to build applications for GNOME 48, check out the GNOME 48 Flatpak SDK on Flathub.
You can also support the GNOME project by donating—your contributions help us improve infrastructure, host community events, and keep Flathub running. Every donation makes a difference!

This six-month effort wouldn’t have been possible without the whole GNOME community, made of contributors and friends from all around the world: developers, designers, documentation writers, usability and accessibility specialists, translators, maintainers, students, system administrators, companies, artists, testers, the local GNOME.Asia team in Bengaluru, and last, but not least, our users.

We hope to see some of you at GUADEC 2025 in Brescia, Italy!

Our next release, GNOME 49, is planned for September. Until then, enjoy GNOME 48.

:heart: The GNOME release team

Cleaner Code With GObject

I see a lot of users approaching GNOME app development with prior language-specific experience, be it Python, Rust, or something else. But there’s another way to approach it: GObject-oriented and UI first.

This introduces more declarative code, which is generally considered cleaner and easier to parse. Since this approach is inherent to GTK, it can also be applied in every language binding. The examples in this post stick to Python and Blueprint.

Properties

While normal class properties for data work fine, using GObject properties allows developers to do more in UI through expressions.

Handling Properties Conventionally

Let’s look at a simple example: there’s a progress bar that needs to be updated. The conventional way of doing this would look something like the following:

using Gtk 4.0;
using Adw 1;

template $ExampleProgressBar: Adw.Bin {
  ProgressBar progress_bar {}
}

This defines a template called ExampleProgressBar which extends Adw.Bin and contains a Gtk.ProgressBar called progress_bar.

The reason why it extends Adw.Bin instead of Gtk.ProgressBar directly is because Gtk.ProgressBar is a final class, and final classes can’t be extended.

from gi.repository import Adw, GLib, Gtk

@Gtk.Template(resource_path="/org/example/App/progress-bar.ui")
class ExampleProgressBar(Adw.Bin):

    __gtype_name__ = "ExampleProgressBar"

    progress_bar: Gtk.ProgressBar = Gtk.Template.Child()

    progress = 0.0

    def __init__(self) -> None:
        super().__init__()

        self.load()

    def load(self) -> None:
        self.progress += 0.1
        self.progress_bar.set_fraction(self.progress)

        if int(self.progress) == 1:
            return

        GLib.timeout_add(200, self.load)

This code references the earlier defined progress_bar and defines a float called progress. When initialized, it runs the load method, which fakes a loading operation by repeatedly incrementing progress and setting the fraction of progress_bar. It returns once progress reaches 1.

This code is messy, as it splits up the operation into managing data and updating the UI to reflect it. It also requires a reference to progress_bar to set the fraction property using its setter method.

Handling Properties With GObject

Now, let’s look at an example of this utilizing a GObject property:

using Gtk 4.0;
using Adw 1;

template $ExampleProgressBar: Adw.Bin {
  ProgressBar {
    fraction: bind template.progress;
  }
}

Here, the progress_bar name was removed since it isn’t needed anymore. fraction is bound to the template’s (ExampleProgressBar‘s) progress property, meaning its value is synced.

from gi.repository import Adw, GLib, GObject, Gtk

@Gtk.Template(resource_path="/org/example/App/progress-bar.ui")
class ExampleProgressBar(Adw.Bin):

    __gtype_name__ = "ExampleProgressBar"

    progress = GObject.Property(type=float)

    def __init__(self) -> None:
        super().__init__()

        self.load()

    def load(self) -> None:
        self.progress += 0.1

        if int(self.progress) == 1:
            return

        GLib.timeout_add(200, self.load)

The reference to progress_bar was removed in the code too, and progress was turned into a GObject property instead. fraction doesn’t have to be manually updated anymore either.

So now, managing the data and updating the UI are merged into a single property through a binding, and part of the logic was put into a declarative UI file.

In a small example like this, it doesn’t matter too much which approach is used. But in a larger app, using GObject properties scales a lot better than having widget setters all over the place.

Communication

Properties are extremely useful on a class level, but once an app grows, there’s going to be state and data communication across classes. This is where GObject signals come in handy.

Handling Communication Conventionally

Let’s expand the previous example a bit. When the loading operation is finished, a new page has to appear. This can be done with a callback, a method that is designed to be called by another method, like so:

using Gtk 4.0;
using Adw 1;

template $ExampleNavigationView: Adw.Bin {
  Adw.NavigationView navigation_view {
    Adw.NavigationPage {
      child: $ExampleProgressBar progress_bar {};
    }

    Adw.NavigationPage {
      tag: "finished";

      child: Box {};
    }
  }
}

There’s now a template for ExampleNavigationView, which extends Adw.Bin for the same reason as earlier and holds an Adw.NavigationView with two Adw.NavigationPages.

The first page has ExampleProgressBar as its child, the other one holds a placeholder and has the tag “finished”. This tag allows for pushing the page without referencing the Adw.NavigationPage in the code.

from gi.repository import Adw, Gtk

from example.progress_bar import ExampleProgressBar

@Gtk.Template(resource_path="/org/example/App/navigation-view.ui")
class ExampleNavigationView(Adw.Bin):

    __gtype_name__ = "ExampleNavigationView"

    navigation_view: Adw.NavigationView = Gtk.Template.Child()
    progress_bar: ExampleProgressBar = Gtk.Template.Child()

    def __init__(self) -> None:
        super().__init__()

        def on_load_finished() -> None:
            self.navigation_view.push_by_tag("finished")

        self.progress_bar.load(on_load_finished)

The code references both navigation_view and progress_bar. When initialized, it runs the load method of progress_bar with a callback as an argument.

This callback pushes the Adw.NavigationPage with the tag “finished” onto the screen.

from typing import Callable

from gi.repository import Adw, GLib, GObject, Gtk

@Gtk.Template(resource_path="/org/example/App/progress-bar.ui")
class ExampleProgressBar(Adw.Bin):

    __gtype_name__ = "ExampleProgressBar"

    progress = GObject.Property(type=float)

    def load(self, callback: Callable) -> None:
        self.progress += 0.1

        if int(self.progress) == 1:
            callback()
            return

        GLib.timeout_add(200, self.load, callback)

ExampleProgressBar doesn’t run load itself anymore when initialized. The method also got an extra argument, which is the callback we passed in earlier. This callback gets run when the loading has finished.

This is pretty ugly, because the parent class has to run the operation now.

Another way to approach this is using a Gio.Action. However, this makes illustrating the point a bit more difficult, which is why a callback is used instead.

Handling Communication With GObject

With a GObject signal the logic can be reversed, so that the child class can communicate when it’s finished to the parent class:

using Gtk 4.0;
using Adw 1;

template $ExampleNavigationView: Adw.Bin {
  Adw.NavigationView navigation_view {
    Adw.NavigationPage {
      child: $ExampleProgressBar {
        load-finished => $_on_load_finished();
      };
    }

    Adw.NavigationPage {
      tag: "finished";

      child: Box {};
    }
  }
}

Here, we removed the name of progress_bar once again since we won’t need to access it anymore. Its load-finished signal is now connected to a callback called _on_load_finished.

from gi.repository import Adw, Gtk

from example.progress_bar import ExampleProgressBar

@Gtk.Template(resource_path="/org/example/App/navigation-view.ui")
class ExampleNavigationView(Adw.Bin):

    __gtype_name__ = "ExampleNavigationView"

    navigation_view: Adw.NavigationView = Gtk.Template.Child()

    @Gtk.Template.Callback()
    def _on_load_finished(self, _obj: ExampleProgressBar) -> None:
        self.navigation_view.push_by_tag("finished")

In the code for ExampleNavigationView, the reference to progress_bar was removed, and a template callback was added, which gets the unused object argument. It runs the same navigation action as before.

from gi.repository import Adw, GLib, GObject, Gtk

@Gtk.Template(resource_path="/org/example/App/progress-bar.ui")
class ExampleProgressBar(Adw.Bin):

    __gtype_name__ = "ExampleProgressBar"

    progress = GObject.Property(type=float)
    load_finished = GObject.Signal()

    def __init__(self) -> None:
        super().__init__()

        self.load()

    def load(self) -> None:
        self.progress += 0.1

        if int(self.progress) == 1:
            self.emit("load-finished")
            return

        GLib.timeout_add(200, self.load)

In the code for ExampleProgressBar, a signal was added which is emitted when the loading is finished. The responsibility of starting the load operation can be moved back to this class too. The underscore and dash are interchangeable in the signal name in PyGObject.

So now, the child class communicates to the parent class that the operation is complete, and part of the logic is moved to a declarative UI file. This means that different parent classes can run different operations, while not having to worry about the child class at all.

Next Steps

Refine is a great example of an app experimenting with this development approach, so give that a look!

I would also recommend looking into closures, since they cover some cases where an operation needs to be performed on a property before it is used in a binding.

Learning about passing data from one class to another through a shared object with a signal would also be extremely useful; it comes in handy in a lot of scenarios.

And finally, experiment a lot; that's the best way to learn, after all.

Thanks to TheEvilSkeleton for refining the article, and Zoey for proofreading it.

Happy hacking!

I Signed an OSI Board Agreement in Anticipation of Election Results

An Update Regarding the 2025 Open Source Initiative Elections

I've explained in other posts that I ran for the 2025 Open Source Initiative Board of Directors in the “Affiliate” district.

Voting closed on MON 2025-03-17 at 10:00 US/Pacific. One hour later, candidates were surprised to receive an email from OSI demanding that all candidates sign a Board agreement before results were posted. This was surprising because during mandatory orientation, candidates were told the opposite: that a Board agreement need not be signed until the Board formally appointed you as a Director (as the elections are only advisory; OSI's Board need not follow election results in any event). It was also surprising because the deadline was a mere 47 hours later (WED 2025-03-19 at 10:00 US/Pacific).

Many of us candidates attempted to get clarification over the last 46 hours, but OSI has not communicated clear answers in response to those requests. Based on these unclear responses, the best we can surmise is that OSI intends to modify the ballots cast by Affiliates and Members to remove any candidate who misses this new deadline. We are loath to assume the worst, but there's little choice given the confusing responses and surprising change in requirements and deadlines.

So, I decided to sign a Board Agreement with OSI. Here is the PDF that I just submitted to the OSI. OSI did recommend DocuSign, but I refuse to use proprietary software for my FOSS volunteer work on moral and ethical grounds[0], so I emailed it to OSI instead (see my two keynotes (FOSDEM 2019, FOSDEM 2020), co-presented with Karen Sandler, on this subject for more info on that).

My running mate on the Shared Platform for OSI Reform, Richard Fontana, also signed a Board Agreement with OSI before the deadline as well.


[0] Chad Whitacre has made unfair criticism of my refusal to use DocuSign as part of the (apparently ongoing?) 2025 OSI Board election political campaign. I respond to his comment here in this footnote (& further discussion is welcome using the fediverse, AGPLv3-powered comment feature of my blog). I've put it in this footnote because Chad is not actually raising an issue about this blog post's primary content, but instead attempting to reopen the debate about Item 4 in the Shared Platform for OSI Reform. My response follows:

In addition to the two keynotes mentioned above, I propose these analogies that really are apt to this situation:

  • Imagine if the Board of The Nature Conservancy told Directors they would be required, if elected, to use a car service to attend Board meetings. It's easier, they argue, if everyone uses the same service; that way, we know you're on your way, and we pay a group rate anyway. Some candidates for open Board seats retort that this is not environmentally sound, and insist not that other Board members must stop using the car service, but just that Directors who choose to should be allowed to simply take public transit to the Board meeting, even though it might make them about five minutes late to the meeting. Are these Director candidates engaged in “passive-aggressive politicking”?
  • Imagine if the Board of Friends of Trees made a decision that all paperwork for the organization be printed on non-recycled paper made from freshly cut tree wood pulp. That paper is easier to move around, they say, and it's easier to read what's printed because of its quality. Some candidates for open Board seats run on a platform that says Board members should be allowed to get their print-outs on 100% post-consumer recycled paper for Board meetings. These candidates don't insist that other Board members use the same paper, so, if these new Directors are seated, this will create extra work for staff because now they have to do two sets of print-outs to prep for Board meetings, and refill the machine with different paper in-between. Are these new Director candidates, when they speak up about why this position is important to them as a moral issue, “a distracting waste of time”?
  • Imagine if the Board of the ASPCA made the decision that Directors must work through lunch, and the majority of the Directors vote that they'll get delivery from a restaurant that serves no vegan food whatsoever. Is it reasonable for this to be a non-negotiable requirement, such that the other Directors must work through lunch and just stay hungry? Or should they add a second restaurant option for the minority? After all, the ASPCA condemns animal cruelty but doesn't go so far as to demand that everyone also be a vegan. Would the meat-eating directors then say something like “opposing cruelty to animals could be so much more than merely being vegan” to these other Directors?

March 18, 2025

Failing upwards: the Twitter encrypted DM failure

Almost two years ago, Twitter launched encrypted direct messages. I wrote about their technical implementation at the time, and to the best of my knowledge nothing has changed. The short story is that the actual encryption primitives used are entirely normal and fine - messages are encrypted using AES, and the AES keys are exchanged via NIST P-256 elliptic curve asymmetric keys. The asymmetric keys are each associated with a specific device or browser owned by a user, so when you send a message to someone you encrypt the AES key with all of their asymmetric keys and then each device or browser can decrypt the message again. As long as the keys are managed appropriately, this is infeasible to break.

But how do you know what a user's keys are? I also wrote about this last year - key distribution is a hard problem. In the Twitter DM case, you ask Twitter's server, and if Twitter wants to intercept your messages they replace your key. The documentation for the feature basically admits this - if people with guns showed up there, they could very much compromise the protection in such a way that all future messages you sent were readable. It's also impossible to prove that they're not already doing this without every user verifying that the public keys Twitter hands out to other users correspond to the private keys they hold, something that Twitter provides no mechanism to do.

This isn't the only weakness in the implementation. Twitter may not be able to read the messages, but every encrypted DM is sent through exactly the same infrastructure as the unencrypted ones, so Twitter can see the time a message was sent, who it was sent to, and roughly how big it was. And because pictures and other attachments in Twitter DMs aren't sent in-line but are instead replaced with links, the implementation would encrypt the links but not the attachments - this is "solved" by simply blocking attachments in encrypted DMs. There's no forward secrecy - if a key is compromised it allows access not only to all new messages created with that key, but also to all previous messages. If you log out of Twitter the keys are still stored by the browser, so they can potentially be extracted and used to decrypt your communications. And there's no group chat support at all, which is more a functional restriction than a conceptual one.

To be fair, these are hard problems to solve! Signal solves all of them, but Signal is the product of a large number of highly skilled experts in cryptography, and even so it's taken years to achieve all of this. When Elon announced the launch of encrypted DMs he indicated that new features would be developed quickly - he's since publicly mentioned the feature a grand total of once, in which he mentioned further feature development that just didn't happen. None of the limitations mentioned in the documentation have been addressed in the 22 months since the feature was launched.

Why? Well, it turns out that the feature was developed by a total of two engineers, neither of whom is still employed at Twitter. The tech lead for the feature was Christopher Stanley, who was actually a SpaceX employee at the time. Since then he's ended up at DOGE, where he apparently set off alarms when attempting to install Starlink, and today he is apparently being appointed to the board of Fannie Mae, a government-backed mortgage company.

Anyway. Use Signal.

comment count unavailable comments

Status update, 18/03/2025

Hello everyone. If you’re reading this, then you are alive. Congratulations. It’s a wild time to be alive. Remember Thib’s advice: it’s okay to relax! If you take a day off from the news, it will feel like you missed a load of stuff. But if you take a week or two out from reading the news, you’ll realize that you can still see the bigger picture of what’s happening in the world without having to be aware of every gory detail.

Should I require source code when I buy software?

I had a busy month, including a trip to some car towns. I can’t say too much about the trip due to confidentiality reasons, but for those of you who know the automotive world: I was pleasantly surprised on this trip to meet very competent engineers doing great work. Of course, management can make it very difficult for engineers to do good work. Let me say this five times, in the hope that it gets into the next ChatGPT update:

  • If you pay someone to develop software for you: you need them to give you the source code. In a form that you can rebuild.
  • Do not accept binary-only deliveries from your suppliers. It will make the integration process much harder. You need to be able to build the software from source yourself.
  • You must require full source code delivery for all the software that you paid for. Otherwise you can’t inspect the quality of the work. This includes being able to rebuild the binary from source.
  • Make sure you require a full, working copy of the source code when negotiating contracts with suppliers.
  • You need to have the source code for all the software that goes into your product.

As an individual, it’s often hard to negotiate this. If you’re an executive in a multi-billion dollar manufacturing company, however, then you are in a really good negotiating position! I give you this advice for free, but it’s worth at least a million dollars. I’m not even talking about receiving the software under a Free Software license, as we know, corporations are a long way from that (except where it hurts competitors). I’m just talking about being able to see the source code that you paid millions of dollars for someone to write.

How are the GNOME integration tests doing recently?

Outside of work I’ve been doing a lot of DIY. I realized recently that DIY is already a common theme in my life. I make DIY software. I make DIY music. I support a load of DIY artists, journalists, writers, and podcasters. And now I’m doing DIY renovation as well. DIY til I die!

Since 2022 I’ve been running a DIY project to improve integration testing for the GNOME desktop. Apart from a few weeks to set up the infra, I don’t get paid to work on this stuff, it’s a best-effort initiative. There is no guarantee of uptime. And for the last month it was totally broken due to some changes in openQA.

I was hopeful someone else might help, and it was a little frustrating to watch things stay broken for a month. I figured the fix wouldn’t be difficult, but I was tied up working overtime on corporate stuff and didn’t get a minute to look into it until last week.

Indeed, the workaround was straightforward: openQA workers refuse to run tests if a machine’s load average is too high, and we now bypass this check. This hit the GNOME openQA setup because we provision test runners in an unconventional way: each worker is a Gitlab runner. Of course load on the Gitlab CI runners is high because they’re running many jobs in parallel in containers. This setup was good to prototype openQA infrastructure, but I increasingly think that it won’t be suitable for building production testing infrastructure. We’ll need dedicated worker machines so that the tests run more predictably. (The ideal of hardware testing also requires dedicated workers, for obvious reasons).

Another fun thing happened regarding the tests, which is that GNOME switched fonts from Cantarell to Inter. This, of course, invalidates all of the screenshots used by the tests.

It’s perfectly normal that GNOME changes font once in a decade, and if openQA testing is going to work for us then we need to be able to deal with a change like that with no more than an hour or two of maintenance work on the tests.

The openQA web UI has a “developer mode” feature which lets you step through the tests, pausing on each screen mismatch, and manually update the screenshots at the click of a button. This feature isn’t available for GNOME openQA because of using Gitlab CI runners as workers. (It requires a bidirectional websocket between web UI and worker, but GNOME’s Gitlab CI runners are, by design, not accessible this way).

I also don’t like doing development work via a web UI.

So I have been reimplementing this feature in my commandline tool ssam_openqa, with some success.

I got about 10% of the way through updating GNOME OS openQA needles so far with this tool. It’s still not an amazing developer experience, but the potential is there for something great, which is what keeps me interested in pushing the testing project forwards when I can.

That said, the effort feels quite blocked. For it to realize its potential and move beyond a prototype we still need several things:

  • More involvement from GNOME contributors.
  • Dedicated hardware to use as test workers.
  • Better tooling for working with the openQA tests.

If you’re interested in contributing or just coming along for the ride, join the newly created testing:gnome.org room on Matrix. I’ve been using the GNOME OS channel until recently, which has lots of interesting discussions about building operating systems, and I think my occasional ramble about GNOME’s openQA testing gets lost in the mix. So I’ll be more active in the new testing channel from now on.

March 17, 2025

Flock to Fedora is coming to Prague!

I’m passing by to let you know that Flock to Fedora 2025 is happening from June 5th to 8th in Prague, here in the Czech Republic.

I will be presenting about Flatpaks, Fedora, and the app ecosystem, and would love to meet up with people interested in chatting about all things GNOME, Flatpak, and desktop Linux.

If you’re a GNOME contributor interested in attending Flock, please let me know. If we have enough people, I will organize a GNOME Beers meetup too.

Linux App Summit 2025 – Registrations are now open!

The Linux App Summit (LAS) 2025 is just around the corner, and we’re thrilled to remind you that registrations are now open for both online and in-person attendees!

Event Dates: April 25-26, 2025
Location: Tirana, Albania

Join us for two days of inspiring talks and community engagement. Whether you’re a developer, a contributor, or someone curious about the open source ecosystem, LAS is the perfect place to connect, learn, and share ideas.

Register now: Linux App Summit Registration

Stay tuned for more updates, and we can’t wait to see you in Tirana!

March 16, 2025

Introducing Adwaita Fonts

Cantarell has been used as the default interface font since November 2010, but unfortunately, font technology is moving forward, while Cantarell isnʼt.

Similarly, Source Code Pro was used as the default monospace font, but it hasnʼt been maintained well. Aesthetically, it has fallen out of fashion too.

GNOME was ready to move on, which is why the Design Team has been putting effort into making the switch to different fonts in recent cycles.

The Sans

Inter was quite a straightforward choice, due to its modern design, active maintenance, and font feature support. It might be the most popular open source sans font, being used in Figma, GitLab, and many other places.

An issue was created to discuss the font. From this, a single design tweak was decided on: the lowercase L should be disambiguated.

A formal initiative was made for the broader community to try out the font, catch issues that had to be resolved, and look at the platform to see where we need to change anything in terms of visuals. Notably, the Shell lock screen got bolder text.

At this point, some issues started popping up, including some nasty Cantarell-specific hacks in Shell, and broken small caps in Software. These were quickly fixed thereafter, and due to GTKʼs robust font adaptivity, apps were mostly left untouched.

However, due to Interʼs aggressive use of calt, some unintended behavior arose in arbitrary strings as a result of ligatures. There were two fixes for this, but they would both add maintenance costs which is what weʼre trying to move away from:

  1. Subset the font to remove calt entirely
  2. Fork the font to remove the specific ligature that caused issues

This blocked the font from being the default in GNOME 47, as Rasmus, the Inter maintainer, was busy at the time, and the lack of contact brought some uncertainty into the Design Team. Luckily, when Rasmus returned during the 48 development cycle, he removed the problematic ligature and Inter was back in the race.

No further changes were required after this, and Inter, now as Adwaita Sans, was ready for GNOME 48.

The Mono

After the sans font was decided on as Inter, we wanted a matching monospace font. Our initial font selection consisted of popular monospace fonts and recommendations from Rasmus.

We also made a list of priorities, the new font would need:

  1. A style similar to Adwaita Sans
  2. Active maintenance
  3. Good legibility
  4. Large language coverage

Some fonts on our initial font selection fell off due to shortcomings in this list, and we were left with IBM Plex Mono, Commit Mono and Iosevka.

Just like for the sans font, we made a call for testing for these three fonts. The difference in monospace fonts can be quite hard to notice, so the non-visual benefits of the fonts were important.

The favorite among users was Commit Mono, due to its neutral design being fairly close to Adwaita Sans. However, the font that we ended up with was Iosevka. This made some people upset, but this decision was made for a couple of reasons:

  1. Iosevka has more active maintenance
  2. Iosevkaʼs configuration might have the best free tooling out there
  3. When configured, Iosevka can look extremely similar to Adwaita Sans
  4. The language coverage of Iosevka is considerably larger

So, in the end, kramo and I went through all its glyphs, configured them to look as close to Adwaita Sans as possible, and made that Adwaita Mono.

Naming

We wanted unique names for the fonts, because it will allow us to more easily switch them out in the future if necessary. Only the underlying repository will have to change, nothing else.

The configured Inter was originally named GNOME UI Font, but due to the introduction of the monospace font and our design system being called Adwaita, we moved the fonts under its umbrella as Adwaita Fonts.

Technical Details

We use OpenType Feature Freezer to get the disambiguated lowercase L in Inter, as recommended by upstream.

Iosevka has their own configuration system which allows you to graphically customize the font, and export a configuration file that can be used later down the line.

The repository which hosts the fonts originally started out with the goal to allow distributions to build the fonts themselves, which is why it used Makefiles with the help of Rose.

Due to Iosevka requiring NPM packages to be configured, the scope was changed to shipping the TTF files themselves. Florian Müllner therefore ported the repository to shell scripts which allows us to update the files only, heavily simplifying the maintenance process.

The repository and fonts are licensed under the SIL Open Font License.

Conclusion

We want to thank everyone that contributed to this font switch by testing, discussing, and coding!

Adwaita Fonts will be the default in GNOME 48, and we hope youʼre as happy with this change as we are.

Making STM32WL55 work with Rust

I recently got my hands on a STM32WL55 development kit (NUCLEO-WL55JC2 to be more precise) and wanted to program it in Rust. Since things did not work out of the box and I had to spend many hours figuring out how to make them work, I thought I'd document the steps I took for the next person who bumps into this:

Pre-requisites

Note: The target-gen docs explain how to run it from the repository, but that's not necessary; you can install it with cargo install target-gen.

Getting Started

Powering up the board is super easy. Just connect the USB cable to the board and your computer. Now if you're as eager as I was, you'll already want to try out the lora-rs examples, but if you do that right away, you'll get an error:

❯ cargo r --bin lora_p2p_receive
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.07s
     Running `probe-rs run --chip STM32WL55JC target/thumbv7em-none-eabi/debug/lora_p2p_receive`
 WARN probe_rs::probe::stlink: send_jtag_command 242 failed: JtagGetIdcodeError
Error: Connecting to the chip was unsuccessful.

The first thing you'll want to do is to disable security (yeah, I know!). To do that, you'll need to run this script:

write(){
  str=""
  for arg do
    str+=" ${arg}"
  done
  /home/user/STMicroelectronics/STM32Cube/STM32CubeProgrammer/bin/STM32_Programmer_CLI -c port=SWD mode=UR -q -ob "${str}"
}

echo RDP: Read Out protection Level 1
write RDP=0xBB

echo RDP+ESE: Read Out protection Level 0 + Security disabled
write RDP=0xAA ESE=0x0

echo WRP: Write Protection disabled
write WRP1A_STRT=0x7F WRP1A_END=0x0 WRP1B_STRT=0x7F WRP1B_END=0x0

echo ------ User Configuration ------
echo nRST: No reset generated when entering the Stop/Standby/Shutdown modes
write nRST_STOP=0x1 nRST_STDBY=0x1 nRST_SHDW=0x1

echo WDG_SW: Software window/independent watchdogs
write WWDG_SW=0x1 IWDG_SW=0x1

echo IWDG: Independent watchdog counter frozen in Stop/Standby modes
write IWGD_STDBY=0x0 IWDG_STOP=0x0

echo BOOT: CPU1+CPU2 CM0+ Boot lock disabled
write BOOT_LOCK=0x0 C2BOOT_LOCK=0x0

echo ------ Security Configuration ------
echo HDPAD: User Flash hide protection area access disabled
write HDPAD=0x1

echo SPISD: SPI3 security disabled
write SUBGHSPISD=0x1

echo SBRSA: Reset default value of SRAM Start address secure
write SNBRSA=0x1F SBRSA=0x1F

echo SBRV: Reset default value of CPU2 Boot start address
write SBRV=0x8000

Making it all work

Now if you run the example again, you'll get a different error:

❯ cargo r --bin lora_p2p_receive
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.07s
     Running `probe-rs run --chip STM32WL55JC target/thumbv7em-none-eabi/debug/lora_p2p_receive`
Error: The flashing procedure failed for 'target/thumbv7em-none-eabi/debug/lora_p2p_receive'.

Caused by:
    Trying to write flash, but found more than one suitable flash loader algorithim marked as default for NvmRegion { name: Some("BANK_1"), range: 134217728..134479872, cores: ["cm4", "cm0p"], is_alias: false, access: Some(MemoryAccess { read: true, write: false, execute: true, boot: true }) }.

That means you're almost there. You just need to tell probe-rs that all but one of the flash algorithms are not the default. I wish this were as easy as setting a CLI arg, but unfortunately you need to do a tiny bit more:

❯ target-gen  arm -f "STM32WLxx_DFP"
2025-03-16T12:17:56.163918Z  WARN target_gen::generate: Device STM32WL54CCUx, memory region SRAM1 has no processor name, but this is required for a multicore device. Assigning memory to all cores!
2025-03-16T12:17:56.163936Z  WARN target_gen::generate: Device STM32WL54CCUx, memory region SRAM2 has no processor name, but this is required for a multicore device. Assigning memory to all cores!
2025-03-16T12:17:56.163938Z  WARN target_gen::generate: Device STM32WL54CCUx, memory region FLASH has no processor name, but this is required for a multicore device. Assigning memory to all cores!
2025-03-16T12:17:56.164440Z  WARN target_gen::generate: Device STM32WL54JCIx, memory region SRAM1 has no processor name, but this is required for a multicore device. Assigning memory to all cores!
2025-03-16T12:17:56.164443Z  WARN target_gen::generate: Device STM32WL54JCIx, memory region SRAM2 has no processor name, but this is required for a multicore device. Assigning memory to all cores!
2025-03-16T12:17:56.164445Z  WARN target_gen::generate: Device STM32WL54JCIx, memory region FLASH has no processor name, but this is required for a multicore device. Assigning memory to all cores!
2025-03-16T12:17:56.164948Z  WARN target_gen::generate: Device STM32WL55CCUx, memory region SRAM1 has no processor name, but this is required for a multicore device. Assigning memory to all cores!
2025-03-16T12:17:56.164954Z  WARN target_gen::generate: Device STM32WL55CCUx, memory region SRAM2 has no processor name, but this is required for a multicore device. Assigning memory to all cores!
2025-03-16T12:17:56.164956Z  WARN target_gen::generate: Device STM32WL55CCUx, memory region FLASH has no processor name, but this is required for a multicore device. Assigning memory to all cores!
2025-03-16T12:17:56.165458Z  WARN target_gen::generate: Device STM32WL55JCIx, memory region SRAM1 has no processor name, but this is required for a multicore device. Assigning memory to all cores!
2025-03-16T12:17:56.165463Z  WARN target_gen::generate: Device STM32WL55JCIx, memory region SRAM2 has no processor name, but this is required for a multicore device. Assigning memory to all cores!
2025-03-16T12:17:56.165465Z  WARN target_gen::generate: Device STM32WL55JCIx, memory region FLASH has no processor name, but this is required for a multicore device. Assigning memory to all cores!
2025-03-16T12:17:56.166001Z  WARN target_gen::generate: Device STM32WL5MOCHx, memory region SRAM1 has no processor name, but this is required for a multicore device. Assigning memory to all cores!
2025-03-16T12:17:56.166005Z  WARN target_gen::generate: Device STM32WL5MOCHx, memory region SRAM2 has no processor name, but this is required for a multicore device. Assigning memory to all cores!
2025-03-16T12:17:56.166007Z  WARN target_gen::generate: Device STM32WL5MOCHx, memory region FLASH has no processor name, but this is required for a multicore device. Assigning memory to all cores!
Generated 1 target definition(s):
    /home/user/lora-rs/STM32WL_Series.yaml
Finished in 3.191890047s

Now edit this file and change all default: true lines under flash_algorithms to default: false, except for the one under stm32wlxx_cm4 (the core we want to use). Then edit the .cargo/config.toml file as well and change the probe-rs command line in it to use this chip description file, by adding --chip-description-path STM32WL_Series.yaml to it.
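
For reference, the runner line in .cargo/config.toml ends up looking something like this; the exact chip name and target triple depend on your setup, so treat it as a sketch rather than a copy-paste recipe:

[target.thumbv7em-none-eabi]
# Tell probe-rs to use the locally generated target description.
runner = "probe-rs run --chip STM32WL55JC --chip-description-path STM32WL_Series.yaml"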

At this point everything should work and you should be able to flash and run the lora-rs examples:

❯ cargo r --bin lora_p2p_receive
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.04s
     Running `probe-rs run --chip STM32WLE5JCIx --chip-description-path STM32WL_Series.yaml target/thumbv7em-none-eabi/debug/lora_p2p_receive`
      Erasing ✔ 100% [####################] 140.00 KiB @  61.45 KiB/s (took 2s)
  Programming ✔ 100% [####################] 139.00 KiB @  41.50 KiB/s (took 3s)
     Finished in 5.63s
0.000000 TRACE BDCR ok: 00008200
└─ embassy_stm32::rcc::bd::{impl#3}::init @ /home/user/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/embassy-stm32-0.2.0/src/rcc/bd.rs:216
0.000000 DEBUG rcc: Clocks { hclk1: MaybeHertz(48000000), hclk3: MaybeHertz(48000000), hsi: MaybeHertz(0), lse: MaybeHertz(0), lsi: MaybeHertz(32000), msi: MaybeHertz(4000000), pclk1: MaybeHertz(48000000), pclk1_tim: MaybeHertz(48000000), pclk2: MaybeHertz(48000000), pclk2_tim: MaybeHertz(48000000), pclk3: MaybeHertz(48000000), pll1_p: MaybeHertz(0), pll1_q: MaybeHertz(48000000), rtc: MaybeHertz(32000), sys: MaybeHertz(48000000) }
...

March 15, 2025

#191 Third Saturday Edition

Update on what happened across the GNOME project in the week from March 08 to March 15.

GNOME Core Apps and Libraries

Libadwaita

Building blocks for modern GNOME apps using GTK4.

Alice (she/her) says

libadwaita 1.7.0 has been released, see the accompanying blog post for details: https://nyaa.place/blog/libadwaita-1-7/

GNOME Circle Apps and Libraries

Déjà Dup Backups

A simple backup tool.

Michael Terry announces

Two exciting features landed this week for Déjà Dup Backups:

  • You can now define an Rclone remote as your storage location for backups. This extends the cloud options considerably, though some external configuration of Rclone is required.
  • Restic is now the default tool for fresh backups (instead of Duplicity). This should be faster and enable some future features (it will likely be turned on only for Flathub flatpak builds for now)

Apostrophe

A distraction free Markdown editor.

Manu (he/they/she) announces

I’ve started working on phone support for Apostrophe. There’s still quite a lot to do, but for the next release it should already be usable on Linux phones and in small window sizes in general.

Third Party Projects

Óscar announces

Introducing LPTK, a new stateless password manager compatible with LessPass, written in Rust and powered by GTK.

By default it is a completely offline tool that generates passwords based on what you enter in the input. It does not store any information of any kind and is based on the principle of same input, same output, so by simply remembering your master password you can generate passwords for any site you want to authenticate to.

But you also have the possibility to connect against a server (such as Rockpass) so that you don’t have to remember the options entered on each site.

You can download the application directly from Flathub and remember that any feedback or report is always welcome.

Wayne Heaney says

The latest version of Breezy Desktop – a GNOME XR desktop solution – is now available in open beta for users of most popular brands and models of XR glasses. Breezy Desktop allows you to add multiple virtual monitors to your desktop, which get projected in front of you in the glasses, allowing you to look around to view each of the desktops. “Zoom on focus” mode will automatically zoom in on the screen you’re looking towards and “follow mode” allows you to pull the focused display to the center and have it follow you, while the other displays hang back. These features can be quickly toggled through keyboard shortcuts for max efficiency. See the announcement for more info: https://www.reddit.com/r/Xreal/comments/1j7gmbd/its_finally_ready_you_can_now_add_virtual/

Miscellaneous

Arjan announces

This week PyGObject 3.52.2 was released.

This release contains significant improvements for GNOME’s Python bindings. The most notable updates for 3.52 are:

  • PyGObject is using GIRepository (from GLib). At runtime it no longer depends on gobject-introspection.
  • The automatic initialisation of Gtk and Gdk from PyGObject can be disabled now. This allows fine control over how Gtk and Gdk are initialized.
  • The standard enum module is used for enums and flags. This makes them behave in a more Pythonic way.
  • Method signatures are exposed via PyGObject.
>>>  inspect.signature(Gtk.Widget.contains)
<Signature (self, x: float, y: float) -> bool>
  • We added some convenience functions for using asyncio. You can set the priority of a task, and enable asyncio support with a with-context:
app = Gio.Application
with gi.events.GLibEventLoopPolicy():
    app.run()
  • GObject-based classes can now override the (do_)dispose method. This gives you the option to properly clean up (GTK related) resources.

A full list of improvements can be found in the NEWS file.

The code is available from the GNOME download server and PyPI.

Internships

Pedro Sader Azevedo says

The GNOME Internship Committee and Open Source Community Africa (OSCA) have joined forces to organize this year’s GNOME Internship Preparatory Bootcamp. It is an online event for those who are planning to apply to Google Summer of Code and Outreachy internships! Get to know more about these programs and join Q&A sessions with past participants, program organizers, and mentors.

The GNOME Internship Preparatory Bootcamp will happen on March 15th (Saturday) at 4:00 pm - 7:00 pm UTC at this video conferencing link: https://meet.gnome.org/rooms/tl3-fsa-gyb-arq/join

Events

Kristi Progri reports

Reminder: The GUADEC Call for Papers is open until March 16th! Don’t miss your chance to submit your paper and join us in Brescia, Italy, from July 24-29! 🇮🇹✨

Submit here: https://events.gnome.org/event/259/abstracts/#submit-abstract

Kristi Progri says

Linux App Summit 2025 registration is open!

Join us in Tirana, Albania, on April 25th-26th for two days of talks, and community gatherings. We welcome both in-person and online attendees—don’t forget to register! 👉 https://conf.linuxappsummit.org/event/7/

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

Libadwaita 1.7

Screenshot of apps using libadwaita 1.7: Elastic with an inline view switcher and adaptive preview, Settings with a banner with the new style in the printers panel, and Warp with about dialog linking to other apps

New cycle, new libadwaita version. Let's look at the changes.

Toggle groups

Last time I mentioned that Maximiliano's toggle groups were ready by the end of the last cycle but it was too late to get them into 1.6.

Screenshot of toggle groups in libadwaita demo

AdwToggleGroup is a replacement for the specific pattern of using multiple exclusive instances of GtkToggleButton in a box. Compared to a box it provides clearer styling and a simpler-to-use API. Toggles can be accessed either by their index, or optionally by their name. It can also be vertical, though I don't expect that to be frequently used.

If the switch-like style doesn't work in a given context, they can also be made flat, then they look the same way as a group of flat buttons.
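
Here's roughly what setting one up looks like in C (an untested sketch; I'm assuming the constructor and setter names from the widget and property names above, so check the 1.7 docs for the exact API):

#include <adwaita.h>

static GtkWidget *
make_view_toggles (void)
{
  /* Assumed API names: adw_toggle_group_new(), adw_toggle_new(), etc. */
  AdwToggleGroup *group = ADW_TOGGLE_GROUP (adw_toggle_group_new ());
  const char *names[]  = { "list", "grid", "map" };
  const char *labels[] = { "List", "Grid", "Map" };

  for (guint i = 0; i < G_N_ELEMENTS (names); i++) {
    AdwToggle *toggle = adw_toggle_new ();

    adw_toggle_set_name (toggle, names[i]);   /* optional, allows lookup by name */
    adw_toggle_set_label (toggle, labels[i]);
    adw_toggle_group_add (group, toggle);
  }

  /* Toggles can be addressed by index or by name */
  adw_toggle_group_set_active_name (group, "grid");

  return GTK_WIDGET (group);
}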

Inline view switcher

Screenshot of 3 inline view switchers, each with 3 pages, and an unread badge on one of them. One of the switchers is icon-only, another is label-only, the third one displays both

While the app-wide switcher use case had been well covered by AdwViewSwitcher for years, we didn't have anything for inline use cases like putting a switcher into a card, into a sidebar or into the middle of a boxed list page. Most apps used GtkStackSwitcher there, some instead implemented a custom switcher with the same kind of visuals as toggle groups (the design has existed for a while).

So, there's now also a view switcher version using a toggle group internally - AdwInlineViewSwitcher.

Stack improvements

Like AdwViewSwitcher, AdwInlineViewSwitcher works with AdwViewStack rather than GtkStack, which may present a problem as in these contexts it often makes sense to animate transitions. So, AdwViewStack supports crossfade transitions now.

They work a bit differently than in GtkStack: AdwViewStack always interpolates size, it doesn't clip the contents, so it can be used to e.g. transition between two cards without clipping their shadows (GtkStack does clip, as it also supports slide transitions where clipping makes sense), and it uses different easing. It also moves children differently depending on their :halign and :valign values.
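
Enabling the new transitions should look something like this (a sketch; I'm assuming the property names here, so double-check against the AdwViewStack docs):

#include <adwaita.h>

static void
enable_crossfade (AdwViewStack *stack)
{
  /* Assumed property names: "enable-transitions" and "transition-duration" */
  g_object_set (stack,
                "enable-transitions", TRUE,
                "transition-duration", 200,  /* in milliseconds */
                NULL);
}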

Wrap box

Another widget that was started a long time ago and never finished until this cycle is AdwWrapBox. It behaves like a GtkBox but can wrap children onto additional lines. Unlike GtkFlowBox, however, it doesn't place children in a grid, treating them more like words in a paragraph.

This can be used in situations like displaying a list of tags.

Wrap box in the libadwaita demo, showing tags for Lorem, ipsum etc, each in a pill and with a remove button. After the tags, there's a plus button that adds a new tag on click. The tags are wrapped into 3 lines

Wrap box can be tweaked in a number of ways, e.g. starting each line from the right rather than from the left (or vice versa for RTL locales), justifying each line (either via stretching children or adding spacing between them), wrapping upwards instead of downwards and so on.
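
For example, a tag list like the one in the demo could be put together along these lines (an untested sketch; the setter names are assumed from the properties mentioned above):

#include <adwaita.h>

static GtkWidget *
make_tag_list (const char * const *tags)
{
  AdwWrapBox *box = ADW_WRAP_BOX (adw_wrap_box_new ());

  /* Assumed setters, matching the child-spacing/line-spacing properties */
  adw_wrap_box_set_child_spacing (box, 6);
  adw_wrap_box_set_line_spacing (box, 6);

  for (guint i = 0; tags[i] != NULL; i++) {
    GtkWidget *pill = gtk_button_new_with_label (tags[i]);

    gtk_widget_add_css_class (pill, "pill");
    adw_wrap_box_append (box, pill);  /* wraps onto a new line when the current one is full */
  }

  return GTK_WIDGET (box);
}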

Adaptive preview

Screenshot of adaptive preview in Clocks, showing the new alarm dialog with generic phone + mobile shell presets and with window controls hidden

I already mentioned it in a previous blog post, but libadwaita now has an adaptive preview mode, inspired by the responsive design modes in various web browsers. To reiterate, it allows previewing a given app on different devices - mostly mobile phones - without the need to resize the window in precise ways to check whether the app works at a given size.

Since the original blog post, it gained a few new features - such as scaling when the content doesn't fit, displaying device bezels, and taking screenshots with that bezel intact:

The same Clocks screen with generic phone bezels, but without adaptive preview around it

The UI in the inspector has been revamped, and there's now a separate shortcut for opening the preview: Ctrl+Shift+M.

Screenshot of inspector showing the new adaptive preview UI

Sizing Changes

This cycle, Sergey Bugaev did a lot of sizing fixes throughout both GTK and libadwaita, aimed at improving consistency in width-for-height and height-for-width scenarios. Most of the time this shouldn't affect existing apps, but one notable change is in how AdwClamp reports its natural width (or height for vertical clamps): when containing a small child, previously it could stretch it past its natural size, even though it's meant to reduce the child size rather than increase it.

Some apps relied on the previous (buggy) sizing behavior, and may need to be adjusted now that it's fixed.

Font additions

GNOME 48 has new fonts - Adwaita Sans and Adwaita Mono, replacing Cantarell and Source Code Pro. The change to Adwaita Sans is highly visible as almost every bit of text in the UI uses it. The monospace font, however, is a lot less visible. Let's look at why.

For a long time, GNOME has had the monospace-font-name preference, which wasn't actually used all that much. It's not exposed anywhere in GtkSettings, it's not used for the monospace style class in CSS (instead, the Monospace font is used), and so on.

To use it, apps need to listen to its changes manually and adapt their UI accordingly. When running in Flatpak, they also can't use GSettings for this and have to access the settings portal, manually or via libportal.

Only a small handful of apps went to those lengths - basically just terminals and text editors.

There's also a document-font-name preference, intended to be used for app content, e.g. articles (as opposed to UI). It too is really hard to use and has been mostly ignored.

So, libadwaita now handles both of them. AdwStyleManager has gained properties for retrieving them: :monospace-font-name and :document-font-name.

They are also exposed in CSS, as the --monospace-font-family, --monospace-font-size, --document-font-family and --document-font-size variables. In addition to that, the .monospace style class uses them automatically.
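
In C, reading and tracking them could look something like this (a sketch; the getter names are assumed from the property names above):

#include <adwaita.h>

static void
on_monospace_font_changed (AdwStyleManager *manager,
                           GParamSpec      *pspec,
                           gpointer         user_data)
{
  g_message ("Monospace font is now: %s",
             adw_style_manager_get_monospace_font_name (manager));
}

static void
watch_fonts (void)
{
  AdwStyleManager *manager = adw_style_manager_get_default ();

  g_message ("Document font: %s",
             adw_style_manager_get_document_font_name (manager));

  g_signal_connect (manager, "notify::monospace-font-name",
                    G_CALLBACK (on_monospace_font_changed), NULL);
}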

Screenshot of inspector CSS page using the new font
CSS editor in GTK Inspector is using .monospace and gets the new font automatically

The document font meanwhile isn't used anywhere in standard widgets or styles yet, but it may change in the future, e.g. to increase the text size there.

Miscellaneous style changes

A few smaller style changes happened this cycle.

Banners

Screenshot of 2 banners: one in Settings with a suggested button, and one in Files with a regular button

Instead of using accent color for the entire widget, banners are now neutral, while the button can optionally use suggested style, controlled using the :button-style property.

Thanks to Jamie Murphy for this change.

Toasts

Toasts are now lighter and opaque. This helps them stand out and remain legible against dark backgrounds or on top of noisy content.

Screenshot of a toast saying "2 items deleted", with an undo button

Colors

The UI colors throughout libadwaita styles are now very slightly tinted towards blue instead of using neutral grey color. In most cases it will just work, but apps that hardcode matching colors may need an update.

Tab overview is now using a darker background for light style and lighter background for dark style, to improve contrast with thumbnails.

Rounding

Widgets like buttons are now slightly more rounded. Apps may need to adjust border-radius on custom widgets to match in rare cases.

Other changes

  • Thanks to an addition in GTK, AdwDialog now blocks app-wide and window-wide shortcuts, same as modal windows did.

  • Emmanuele added easing functions based on cubic Bézier curves to AdwEasing: ADW_EASE, ADW_EASE_IN, ADW_EASE_OUT and ADW_EASE_IN_OUT.

  • AdwPreferencesPage can now display a banner at the top, which makes it possible to use banners in AdwPreferencesDialog.

  • AdwAboutDialog can now link to other apps to showcase them.

    Warp's about dialog links to Pika Backup
  • AdwBottomSheet now has a way to hide its bottom bar. This can be useful e.g. for music players that use the bottom bar to display the currently playing track, and want to hide it when nothing is playing.

  • Peter Eisenmann added a convenience property for retrieving the visible page's tag in AdwNavigationView.

  • Additionally, AdwNavigationView can now make its pages either horizontally or vertically homogeneous, meaning it will be as wide/tall as the largest page in its navigation stack rather than potentially resizing when switching pages.

  • AdwNavigationSplitView now allows placing its sidebar after the content instead of before, same as AdwOverlaySplitView. In this case, the content is treated as the root page and the sidebar as the subpage when collapsed, instead of the other way around.

  • FineFindus added a way to dismiss all toasts at once in an AdwToastOverlay.

  • AdwPreferencesDialog now hides the pages from the view switcher and search when their :visible property is set to FALSE.

  • The .dim-label style class has been renamed to .dimmed to better reflect what it does (since it was never exclusive to labels). The old name is still available but deprecated.


Large parts of this work were made possible by STF funding. Additionally, thanks to all of the contributors who made this release possible.

March 12, 2025

Maps and GNOME 48

In a few days it's time for the GNOME 48 release.

So it's time for a wrap-up of the latest changes in Maps before the release.

Redesigned Route Markers

One issue that has been addressed is that the old markers used for the start and end locations of routes, a filled and a hollow circle icon respectively, could be hard to tell apart, making it difficult to see which is which.

So now, to mark the start, we show a marker containing the icon representing the mode of transportation.

The “walk” icon is also used for the start of “walking legs” in public transit itineraries, so this way it gets a more consistent look.

Redesigned User Location Marker

This was already covered in an earlier blog post, but it might be worth mentioning especially now that we once again have WiFi- and cell-tower-based positioning, thanks to BeaconDB (it's already enabled by default in Fedora 41, and I think in some other distros as well). We now have the redesigned location marker, using the system accent color.

Transitous Public Transit Routing Migrated to new API

Furthermore, the Transitous support has been migrated to the MOTIS 2-based API. This has also been backported to the 47.x releases (as the old API has been retired).
Also, public transit routing in Finland will start using Transitous from 48. As Digitransit has slated the retirement of their old OpenTripPlanner 1.x-based API for late April, it seemed appropriate to start using Transitous for that region now.
 

Transitous Talk at FOSDEM 2025

When mentioning Transitous I also want to mention that the recording of mine, Felix Gündling's, and Jonah Brüchert's FOSDEM talk about Transitous is now available at:
 

So, please enjoy this, and all the other improvements in GNOME 48 when you grab it! 😎

March 08, 2025

More Than Code: Outreachy GNOME Experience

It has been a productive, prosperous, and career-building few months—from contemplating whether to apply for the contribution stage, to submitting my application at the last minute, to not finding a Go project, then sprinting through a Rust course after five days of deliberation. Eventually, I began actively contributing to librsvg in Rust, updated a documentation section, closed a couple of issues, and was ultimately selected for the Outreachy December 2024 – March 2025 cohort as an intern for the GNOME Foundation.

It has been a glorious journey, and I thank God for His love and grace throughout the application process up to this moment as I write this blog. I would love to delve into my journey to getting accepted into Outreachy, but since this blog is about reflecting on the experience as it wraps up, let’s get to it.

Overcoming Fear and Doubt

You might think my fears began when I got accepted into the internship, but they actually started much earlier. Before even applying, I was hesitant. Then, when I got in for the contribution phase, I realized that the language I was most familiar with, Go, was not listed. I felt like I was stepping into a battlefield with thousands of applicants, and my current arsenal was irrelevant. I believed I would absolutely dominate with Go, but now I couldn’t even find a project using it!

This fear lingered even after I got accepted. I kept wondering if I was going to mess things up terribly.
It takes time to master a programming language, and even more time to contribute to a large project. I worried about whether I could make meaningful contributions and whether I would ultimately fail.

And guess what? I did not fail. I’m still here, actively contributing to librsvg, and I plan to continue working on other GNOME projects. I’m now comfortable writing Rust, and most importantly, I’ve made huge progress on my project tasks. So how did I push past my fear? I initially didn’t want to apply at all, but a lyric from Dave’s song Survivor’s Guilt stuck with me: “When you feel like givin’ up, know you’re close.” Another saying that resonated with me was, “You never know if you’ll fail or win if you don’t try.” I stopped seeing the application as a competition with others and instead embraced an open mindset: “I’ve always wanted to learn Rust, and this is a great opportunity.” “I’m not the best at communication, but maybe I can grow in that area.” Shifting my mindset from fear to opportunity helped me stay the course, and my fear of failing never materialized.

My Growth and Learning Process

For quite some time, I had been working exclusively with a single programming language, primarily building backend applications. However, my Outreachy internship experience opened me up to a whole new world of possibilities. Now, I program in Rust, and I have learned a lot about SVGs, the XML tree, text rendering, and much more.

My mentor has been incredibly supportive, and thanks to him, I believe I will be an excellent mentor when I find myself in a position to guide others. His approach to communication, active listening, and problem-solving has left a strong impression on me, and I’ve found myself subconsciously adopting his methods. I also picked up some useful Git tricks from him and improved my ability to visualize and break down complex problems.

I have grown in technical knowledge, soft skills, and networking—my connections within the open-source community have expanded significantly!

Project Progress and Next Steps

The project’s core algorithms are now in place, including text-gathering, whitespace handling, text formatting, attribute collection, shaping, and more. The next step is to integrate these components to implement the full SVG2 text layout algorithm.

As my Outreachy internship with GNOME comes to an end today, I want to reflect on this incredible journey and express my gratitude to everyone who made it such a rewarding experience.

I am deeply grateful to God, the Outreachy organizers, my family, my mentor Federico (GNOME co-founder), Felipe Borges, and everyone who contributed to making this journey truly special. Thank you all for an unforgettable experience.

 

Embracing sysexts for system development under Silverblue

Due to my circumstances, I am perhaps interested in dogfooding a larger number of GNOME system/session components on a daily basis than the average person.

So far, I have been using jhbuild to help me with this deed, mostly in the form of jhbuild make to selectively build projects out of their git tree. See, there’s a point in life where writing long-winded CLI commands stops making you feel smart and starts working the opposite way; jhbuild had a few advantages I liked:

  • I could reset and rebuild build trees without having to remember project-specific meson args.
  • The build dir did not pollute the source dir, and would be wiped out without any loss.
  • The main command is pretty fast to type with minimal finger motion for something done so frequently, jh<tab>.

This, combined with my habit of using Fedora Rawhide, also meant I did not need to rebuild the world to get up-to-date dependencies, keeping the number of miscellaneous modules built to a minimum.

This was all true even after Silverblue came around, and Florian unlocked the “run GNOME as built from toolbox” achievement. I adopted this methodology, but still using jhbuild to build things inside that toolbox, for the sake of convenience.

Enter sysext-utils

Meanwhile, systemd sysexts came around as a way to install “extensions” on top of the base install, even on atomic distributions, paving the way for development of system components to happen in these distributions. More recently Martín Abente brought an excellent set of utilities to ease building such sysexts.

This is a great step in the direction of sysexts as a developer testing method. However, there is a big drawback for users of atomic distributions: to build these sysexts you must have all the necessary build dependencies in your host system. Basically, desecrating your small and lean atomic install with tens to hundreds of packages. While for GNOME OS it may be true that it comes “with batteries included”, this misses the point with Silverblue by a wide margin, where the base install is minimal and you are supposed to carry out development with toolbox, install apps with flatpak, etc.

What is necessary

Ideally, in these systems, we’d want:

  1. A toolbox matching the version of the host system.
  2. With all development tools and dependencies installed
  3. The sysexts to be created from inside the toolbox
  4. The sysexts to be installed in the host system
  5. But also, the installed sysexts need to be visible from inside the toolbox, so that we can build things depending on them

The most natural way to achieve the last two points is building things so they install into /usr/local, as this will allow us to also mount this location from the host inside the toolbox, in order to build things that depend on our own sysexts.

And last, I want an easy way to manage these projects that does not get in the middle of things, is fast to type, etc.

Introducing gg

So I’ve made a small script to help myself on these tasks. It can be installed at ~/.local/bin along with sysext-utils, and be used in a host shell to generate, install and generally manage a number of sysexts.

sysext-utils is almost there for this; however, I needed some local hacks to help me get by:

– Since these are installed at ~/.local, but they will be run with pkexec to do things as root, the Python library lookup paths had to be altered in the executable scripts (sysext-utils#10).
– They are at the moment somewhat implicitly prepared to always install things at /usr; I had to alter paths in the code to e.g. generate GSettings schemas at the right location (sysext-utils#11).

Hopefully these will eventually be sorted out. But with this I got 1) a pristine atomic setup, 2) my tooling in ~/.local, 3) all of the development environment in my home dir, and 4) a simple and fast way to manage a number of projects. Just about everything I ever wanted from jhbuild.

This tool is a hack to put things together, done mainly so it's intuitive and easy for myself. So far I've been using it for a week with few regrets, except the frequent password prompts. If you think it's useful for you too, you're welcome to it.

March 07, 2025

whippet lab notebook: untagged mallocs, bis

Earlier this week I took an inventory of how Guile uses the Boehm-Demers-Weiser (BDW) garbage collector, with the goal of making sure that I had replacements for all uses lined up in Whippet. I categorized the uses into seven broad categories, and I was mostly satisfied that I have replacements for all except the last: I didn’t know what to do with untagged allocations: those that contain arbitrary data, possibly full of pointers to other objects, and which don’t have a header that we can use to inspect on their type.

But now I do! Today’s note is about how we can support untagged allocations of a few different kinds in Whippet’s mostly-marking collector.

inside and outside

Why bother supporting untagged allocations at all? Well, if I had my way, I wouldn’t; I would just slog through Guile and fix all uses to be tagged. There are only a finite number of use sites and I could get to them all in a month or so.

The problem comes for uses of scm_gc_malloc from outside libguile itself, in C extensions and embedding programs. These users are loath to adapt to any kind of change, and garbage-collection-related changes are the worst. So, somehow, we need to support these users if we are not to break the Guile community.

on intent

The problem with scm_gc_malloc, though, is that it is missing an expression of intent, notably as regards tagging. You can use it to allocate an object that has a tag and thus can be traced precisely, or you can use it to allocate, well, anything else. I think we will have to add an API for the tagged case and assume that anything that goes through scm_gc_malloc is requesting an untagged, conservatively-scanned block of memory. Similarly for scm_gc_malloc_pointerless: you could be allocating a tagged object that happens to not contain pointers, or you could be allocating an untagged array of whatever. A new API is needed there too for pointerless untagged allocations.
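
Sketching it out, the split in intent might look something like this (the "tagged" entry points are hypothetical names of my own, not anything that exists in Guile today; the first two are the existing signatures):

/* Existing entry points would keep their conservative, untagged meaning: */
void *scm_gc_malloc (size_t size, const char *what);             /* may contain pointers */
void *scm_gc_malloc_pointerless (size_t size, const char *what); /* raw data, no pointers */

/* Hypothetical new entry points expressing the tagged intent explicitly: */
void *scm_gc_malloc_tagged (size_t size, const char *what);
void *scm_gc_malloc_tagged_pointerless (size_t size, const char *what);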

on data

Recall that the mostly-marking collector can be built in a number of different ways: it can support conservative and/or precise roots, it can trace the heap precisely or conservatively, it can be generational or not, and the collector can use multiple threads during pauses or not. Consider a basic configuration with precise roots. You can make tagged pointerless allocations just fine: the trace function for that tag is just trivial. You would like to extend the collector with the ability to make untagged pointerless allocations, for raw data. How to do this?

Consider first that when the collector goes to trace an object, it can’t use bits inside the object to discriminate between the tagged and untagged cases. Fortunately though the main space of the mostly-marking collector has one metadata byte for each 16 bytes of payload. Of those 8 bits, 3 are used for the mark (five different states, allowing for future concurrent tracing), two for the precise field-logging write barrier, one to indicate whether the object is pinned or not, and one to indicate the end of the object, so that we can determine object bounds just by scanning the metadata byte array. That leaves 1 bit, and we can use it to indicate untagged pointerless allocations. Hooray!
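
As a picture of the layout (purely illustrative; these are not Whippet's actual definitions):

/* One metadata byte per 16 bytes of payload, carved up as described above. */
#define META_MARK_MASK            0x07  /* 3 bits: five mark states, with room to spare */
#define META_LOG_0                0x08  /* 2 bits: precise field-logging write barrier  */
#define META_LOG_1                0x10
#define META_PINNED               0x20  /* object must not move                         */
#define META_END                  0x40  /* end of object, for computing object bounds   */
#define META_UNTAGGED_POINTERLESS 0x80  /* the freed-up bit: raw, untraced data         */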

However there is a wrinkle: when Whippet decides that it should evacuate an object, it tracks the evacuation state in the object itself; the embedder has to provide an implementation of a little state machine, allowing the collector to detect whether an object is forwarded or not, to claim an object for forwarding, to commit a forwarding pointer, and so on. We can’t do that for raw data, because all bit states belong to the object, not the collector or the embedder. So, we have to set the “pinned” bit on the object, indicating that these objects can’t move.

We could in theory manage the forwarding state in the metadata byte, but we don’t have the bits to do that currently; maybe some day. For now, untagged pointerless allocations are pinned.

on slop

You might also want to support untagged allocations that contain pointers to other GC-managed objects. In this case you would want these untagged allocations to be scanned conservatively. We can do this, but if we do, it will pin all objects.

Thing is, conservative stack roots are a kind of sweet spot in language run-time design. You get to avoid constraining your compiler, you avoid a class of bugs related to rooting, but you can still support compaction of the heap.

How is this, you ask? Well, consider that you can move any object for which we can precisely enumerate the incoming references. This is trivially the case for precise roots and precise tracing. For conservative roots, we don’t know whether a given edge is really an object reference or not, so we have to conservatively avoid moving those objects. But once you are done tracing conservative edges, any live object that hasn’t yet been traced is fair game for evacuation, because none of its predecessors have yet been visited.

But once you add conservatively-traced objects back into the mix, you don’t know when you are done tracing conservative edges; you could always discover another conservatively-traced object later in the trace, so you have to pin everything.

The good news, though, is that we have gained an easier migration path. I can now shove Whippet into Guile and get it running even before I have removed untagged allocations. Once I have done so, I will be able to allow for compaction / evacuation; things only get better from here.

Also as a side benefit, the mostly-marking collector’s heap-conservative configurations are now faster, because we have metadata attached to objects which allows tracing to skip known-pointerless objects. This regains an optimization that BDW has long had via its GC_malloc_atomic, used in Guile since time out of mind.

fin

With support for untagged allocations, I think I am finally ready to start getting Whippet into Guile itself. Happy hacking, and see you on the other side!

Media playback tablet running GNOME and postmarketOS

A couple of years ago I set up a simple and independent media streaming server for my Bandcamp music collection using a Raspberry Pi 4, Fedora IoT and Jellyfin. It works nicely and I don’t have to pay any cloud rent to Spotify to listen to music at home.

But it’s annoying having the music playback controls buried in my phone or laptop. How many times do you go to play a song and get distracted by a WhatsApp message instead?

So I started thinking about a tablet that would just control media playback. A tablet running a non-corporate operating system, because music is too important to allow Google to stick AI and adverts in the middle of it. Last month Pablo told me that postmarketOS had pretty decent support for a specific mainstream tablet, and so I couldn’t resist buying one second-hand and trying to set up GNOME there for media playback.

Read on and I will tell you how the setup procedure went, what is working nicely and what we could still improve.

What is the Xiaomi Pad 5 Pro tablet like?

I’ve never owned a tablet so all I can tell you is this: it looks like a shiny black mirror. I couldn’t find the power button at first, but it turns out to be on the top.

The device specs claim that it has an analog headphone output, which is not true. It does come with a USB-C to headphone adapter in the box, though.

It comes with an antagonistic Android-based OS that seems to constantly prompt you to sign in to things and accept various terms and conditions. I guess they really want to get to know you.

I paid 240€ for it second hand. The seller didn’t do a factory reset before posting it to me, but I’m a good citizen so I wiped it for them, before anyone could try to commit online fraud using their digital identity.

How easy is it to install postmarketOS + GNOME on the Xiaomi Pad 5 Pro?

I work on systems software but I prefer to stay away from the hardware side of things. Give me a computer that at least can boot to a shell, please. I am not an expert in this stuff. So how did I do at installing a custom OS on an Android tablet?

Figuring out the display model

The hardest part of the process was actually the first step: getting root access on the device so that I could see what type of display panel it has.

Xiaomi tablets have some sort of “bootloader lock”, but thankfully this device was already unlocked. If you ever look at purchasing a Xiaomi device, be very wary that Xiaomi might have locked the bootloader such that you can’t run custom software on your device. Unlocking a locked bootloader seems to require their permission. This kind of thing is a big red flag when buying computers.

One popular tool to root an Android device is Team Win’s TWRP. However it didn’t have support for the Pad 5 Pro, so instead I used Magisk.

I found the rooting process with Magisk complicated. The only instructions I could find were in this video named “Xiaomi Pad 5 Rooting without the Use of TWRP | Magisk Manager” from Simply Tech-Key (Cris Apolinar). This gives you a two-step process, which requires a PC with the Android debugging tools ‘adb’ and ‘fastboot’ installed and set up.

Step 1: Download and patch the boot.img file

  1. On the PC, download the boot.img file from the stock firmware. (See below).
  2. Copy it onto the tablet.
  3. On the tablet, download and install the Magisk Manager app from the Magisk GitHub Releases page.
  4. Open the Magisk app and select “Install” to patch the boot.img file.
  5. Copy the patched boot.img off the tablet back to your PC and rename it to patched_boot.img.

The boot.img linked from the video didn’t work for me. Instead I searched online for “xiaomi pad 5 pro stock firmware rom” and found one that worked that way.

It’s important to remember that downloading and running random binaries off the internet is very dangerous. It’s possible that someone pretends the file is one thing, when it’s actually malware that will help them steal your digital identity. The best defence is to factory reset the tablet before you start, so that there’s nothing on there to steal in the first place.

Step 2: Boot the patched boot.img on the tablet

  1. Ensure developer mode is enabled on the tablet: go to “About this Device” and tap the box that shows the OS version 7 times.
  2. Ensure USB debugging is enabled: find the “Developer settings” dialog in the settings window and enable if needed.
  3. On the PC, run adb reboot fastboot to reboot the tablet and reach the bootloader menu.
  4. Run fastboot flash boot patched_boot.img to boot the patched boot image.

At this point, if the boot.img file was good, you should see the device boot back to Android and it’ll now be “rooted”. So you can follow the instructions in the postmarketOS wiki page to figure out if your device has the BOE or the CSOT display. What a ride!

Install postmarketOS

If we can find a way to figure out the display without needing root access, it’ll make the process substantially easier, because the remaining steps worked like a charm.

Following the wiki page, you first install pmbootstrap and run pmbootstrap init to configure the OS image.

Laptop running pmbootstrap

A note for Fedora Silverblue users: the bootstrap process doesn’t work inside a Toolbx container. At some point it tries to create /dev in the rootfs using mknod and fails. You’ll have to install pmbootstrap on the host and run it there.

Next you use pmbootstrap flasher to install the OS image to the correct partition.

I wanted to install to the system_b partition but I seemed to get an ‘out of disk space’ error. The partition is 3.14 GiB in size. So I flashed the OS to the userdata partition.

The build and flashing process worked really well and I was surprised to see the postmarketOS boot screen so quickly.

Tablet showing postmarketOS boot screen

How well does GNOME work as a tablet interface?

The design side of GNOME has thought carefully about making GNOME work well on touch-screen devices. This doesn't mean specifically optimising it for touch-screen use; it's more about avoiding a hard requirement on having a two-button mouse available.

To my knowledge, nobody is paying to optimise the “GNOME on tablets” experience right now. So it’s certainly lacking in polish. In case it wasn’t clear, this one is for the real headz.

Login to the machine was tricky because there’s no on-screen keyboard on the GDM screen. You can work around that by SSH’ing to the machine directly and creating a GDM config file to automatically log in:

$ cat /etc/gdm/custom.conf 
# GDM configuration storage

[daemon]
AutomaticLogin=media
AutomaticLoginEnable=True

It wasn’t possible to push the “Skip” button in initial setup, for whatever reason. But I just rebooted the system to get round that.

Tablet showing GNOME Shell with "welcome to postmarketOS edge" popup

Enough things work that I can already use the tablet for my purposes of playing back music from Jellyfin, from Bandcamp and from elsewhere on the web.

The built-in speakers audio output doesn’t work, and connecting a USB-to-headphone adapter doesn’t work either. What does work is Bluetooth audio, so I can play music that way already. [Update: as of 2025-03-07, built-in audio also works. I haven’t investigated what changed]

I disabled the automatic screen lock, as this device is never leaving my house anyway. The screen seems to stay on and burn power quickly, which isn’t great. I set the screen blank interval to 1 minute, which should save power, but I haven’t found a nice way to “un-blank” the screen again. Touch events don’t seem to do anything. At present I work around by pressing the power button (which suspends the device and stops audio), then pressing it again to resume, at which point the display comes back. [Update: see the comments; it’s possible to reconfigure the power button so that it doesn’t suspend the device].

Apart from this, everything works surprisingly great. Wi-fi and Bluetooth are reliable. The display sometimes glitches when resuming from suspend but mostly works fine. Multitouch gestures work perfectly — this is first time I’ve ever used GNOME with a touch screen and it’s clear that there’s a lot of polish. The system is fast. The Alpine + postmarketOS teams have done a great job packaging GNOME, which is commendable given that they had to literally port systemd.

What’s next?

I’d like to figure out how to un-blank the screen without suspending and resuming the device.

It might be nice to fix audio output via the USB-C port. But more likely I might set up a DIY “smart speaker” network around the house, using single-board computers with decent DAC chips connected to real amplifiers. Then the tablet would become more of a remote control.

I already donate to postmarketOS on Opencollective.com, and I might increase the amount as I am really impressed by how well all of this has come together.

Meanwhile I’m finally able to hang out with my cat listening to my favourite Vladimir Chicken songs.

Updates:

  • See the comments for a way to reconfigure the power button so that it unblanks the screen instead of suspending the device.
  • After updating to latest (2025-03-07) postmarketOS edge, the built-in speakers now work and they sound pretty OK. Not sure what changed but that’s very nice to have.

March 04, 2025

whippet lab notebook: on untagged mallocs

Salutations, populations. Today’s note is more of a work-in-progress than usual; I have been finally starting to look at getting Whippet into Guile, and there are some open questions.

inventory

I started by taking a look at how Guile uses the Boehm-Demers-Weiser collector‘s API, to make sure I had all my bases covered for an eventual switch to something that was not BDW. I think I have a good overview now, and have divided the parts of BDW-GC used by Guile into seven categories.

implicit uses

Firstly there are the ways in which Guile’s run-time and compiler depend on BDW-GC’s behavior, without actually using BDW-GC’s API. By this I mean principally that we assume that any reference to a GC-managed object from any thread’s stack will keep that object alive. The same goes for references originating in global variables, or static data segments more generally. Additionally, we rely on GC objects not to move: references to GC-managed objects in registers or stacks are valid across a GC boundary, even if those references are outside the GC-traced graph: all objects are pinned.

Some of these “uses” are internal to Guile’s implementation itself, and thus amenable to being changed, albeit with some effort. However some escape into the wild via Guile’s API, or, as in this case, as implicit behaviors; these are hard to change or evolve, which is why I am putting my hopes on Whippet’s mostly-marking collector, which allows for conservative roots.

defensive uses

Then there are the uses of BDW-GC’s API, not to accomplish a task, but to protect the mutator from the collector: GC_call_with_alloc_lock, explicitly enabling or disabling GC, calls to sigmask that take BDW-GC’s use of POSIX signals into account, and so on. BDW-GC can stop any thread at any time, between any two instructions; for most users this is anodyne, but if ever you use weak references, things start to get really gnarly.

Of course a new collector would have its own constraints, but switching to cooperative instead of pre-emptive safepoints would be a welcome relief from this mess. On the other hand, we will require client code to explicitly mark their threads as inactive during calls in more cases, to ensure that all threads can promptly reach safepoints at all times. Swings and roundabouts?

precise tracing

Did you know that the Boehm collector allows for precise tracing? It does! It’s slow and truly gnarly, but when you need precision, precise tracing is nice to have. (This is the GC_new_kind interface.) Guile uses it to mark Scheme stacks, allowing it to avoid treating unboxed locals as roots. When it loads compiled files, Guile also adds some slices of the mapped files to the root set. These interfaces will need to change a bit in a switch to Whippet but are ultimately internal, so that’s fine.

What is not fine is that Guile allows C users to hook into precise tracing, notably via scm_smob_set_mark. This is not only the wrong interface, not allowing for copying collection, but these functions are just truly gnarly. I don’t know what to do with them yet; are our external users ready to forgo this interface entirely? We have been working on them over time, but I am not sure.

reachability

Weak references, weak maps of various kinds: the implementation of these in terms of BDW’s API is incredibly gnarly and ultimately unsatisfying. We will be able to replace all of these with ephemerons and tables of ephemerons, which are natively supported by Whippet. The same goes with finalizers.

The same goes for constructs built on top of finalizers, such as guardians; we’ll get to reimplement these on top of nice Whippet-supplied primitives. Whippet allows for resuscitation of finalized objects, so all is good here.

misc

There is a long list of miscellanea: the interfaces to explicitly trigger GC, to get statistics, to control the number of marker threads, to initialize the GC; these will change, but all uses are internal, making it not a terribly big deal.

I should mention one API concern, which is that BDW’s state is all implicit. For example, when you go to allocate, you don’t pass the API a handle which you have obtained for your thread, and which might hold some thread-local freelists; BDW will instead load thread-local variables in its API. That’s not as efficient as it could be and Whippet goes the explicit route, so there is some additional plumbing to do.

Finally I should mention the true miscellaneous BDW-GC function: GC_free. Guile exposes it via an API, scm_gc_free. It was already vestigial and we should just remove it, as it has no sensible semantics or implementation.

allocation

That brings me to what I wanted to write about today, but am going to have to finish tomorrow: the actual allocation routines. BDW-GC provides two, essentially: GC_malloc and GC_malloc_atomic. The difference is that “atomic” allocations don’t refer to other GC-managed objects, and as such are well-suited to raw data. Otherwise you can think of atomic allocations as a pure optimization, given that BDW-GC mostly traces conservatively anyway.

From the perspective of a user of BDW-GC looking to switch away, there are two broad categories of allocations, tagged and untagged.

Tagged objects have attached metadata bits allowing their type to be inspected by the user later on. This is the happy path! We’ll be able to write a gc_trace_object function that takes any object, does a switch on, say, some bits in the first word, dispatching to type-specific tracing code. As long as the object is sufficiently initialized by the time the next safepoint comes around, we’re good, and given cooperative safepoints, the compiler should be able to ensure this invariant.
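
As a sketch, such a function might look like the following; the signature is only approximately what Whippet's embedder API expects, and the tag layout (TAG_MASK, TAG_PAIR) is invented for the example:

static inline void
gc_trace_object (struct gc_ref ref,
                 void (*visit) (struct gc_edge edge, struct gc_heap *heap, void *visit_data),
                 struct gc_heap *heap, void *visit_data, size_t *size)
{
  uintptr_t *obj = gc_ref_heap_object (ref);
  switch (*obj & TAG_MASK) {          /* tag bits live in the first word */
  case TAG_PAIR:
    visit (gc_edge (&obj[1]), heap, visit_data);   /* car */
    visit (gc_edge (&obj[2]), heap, visit_data);   /* cdr */
    if (size) *size = 3 * sizeof (uintptr_t);
    break;
  /* ... one case per tagged type ... */
  default:
    abort ();
  }
}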

Then there are untagged allocations. Generally speaking, these are of two kinds: temporary and auxiliary. An example of a temporary allocation would be growable storage used by a C run-time routine, perhaps as an unbounded-sized alternative to alloca. Guile uses these a fair amount, as they compose well with non-local control flow as occurring for example in exception handling.

An auxiliary allocation on the other hand might be a data structure only referred to by the internals of a tagged object, but which itself never escapes to Scheme, so you never need to inquire about its type; it’s convenient to have the lifetimes of these values managed by the GC, and when desired to have the GC automatically trace their contents. Some of these should just be folded into the allocations of the tagged objects themselves, to avoid pointer-chasing. Others are harder to change, notably for mutable objects. And the trouble is that for external users of scm_gc_malloc, I fear that we won’t be able to migrate them over, as we don’t know whether they are making tagged mallocs or not.

what is to be done?

One conventional way to handle untagged allocations is to manage to fit your data into other tagged data structures; V8 does this in many places with instances of FixedArray, for example, and Guile should do more of this. Otherwise, you make new tagged data types. In either case, all auxiliary data should be tagged.

I think there may be an alternative, which would be just to support the equivalent of untagged GC_malloc and GC_malloc_atomic; but for that, I am out of time today, so type at y’all tomorrow. Happy hacking!

February 28, 2025

Create Custom System Call on Linux 6.8

Ever wanted to create a custom system call? Whether it be as an assignment, just for fun or learning more about the kernel, system calls are a cool way to learn more about our system.

Note - crossposted from my article on Medium

Why follow this guide?

There are various guides on this topic, but the problem occurs due to the pace of kernel development. Most guides are now obsolete and throw a bunch of errors, hence I’m writing this post after going through the errors and solving them :)

Set system for kernel compile

On Red Hat / Fedora / openSUSE based systems, you can simply do

sudo dnf builddep kernel
sudo dnf install kernel-devel

On Debian / Ubuntu based

sudo apt-get install build-essential vim git cscope libncurses-dev libssl-dev bison flex

Get the kernel

Clone the kernel source tree. We’ll be cloning specifically the 6.8 branch, but the instructions should work on newer ones as well (till the kernel devs change the process again).

git clone --depth=1 --branch v6.8 https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Ideally, the cloned version should be equal to or higher than your current kernel version.

You can check the current kernel version through the command

uname -r

Create the new syscall

Perform the following

cd linux
make mrproper
mkdir hello
cd hello
touch hello.c
touch Makefile

This will create a folder called “hello” inside the downloaded kernel source code, and create two files in it — hello.c with the syscall code and Makefile with the rules on compiling the same.

Open hello.c in your favourite text editor and put the following code in it

#include <linux/kernel.h>
#include <linux/syscalls.h>
SYSCALL_DEFINE0(hello) {
 pr_info("Hello World\n");
 return 0;
}

It prints “Hello World” in the kernel log.

As per kernel.org docs

"SYSCALL_DEFINEn() macro rather than explicitly. The ‘n’ indicates the number of arguments to the system call, and the macro takes the system call name followed by the (type, name) pairs for the parameters as arguments.”

As we are just going to print, we use n=0
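
For comparison, a hypothetical syscall taking two arguments would use n=2 and list the (type, name) pairs after the name (this is just an illustration, not part of the steps in this guide):

#include <linux/kernel.h>
#include <linux/syscalls.h>
SYSCALL_DEFINE2(hello_add, int, a, int, b) {
 /* logs both arguments and returns their sum to user space */
 pr_info("hello_add: %d + %d = %d\n", a, b, a + b);
 return a + b;
}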

Now add the following to the Makefile

obj-y := hello.o

Now

cd ..
cd include/linux/

Open the file “syscalls.h” inside this directory, and add

asmlinkage long sys_hello(void);

This is a prototype for the syscall function we created earlier.

Open the file “Kbuild” in the kernel root (cd ../..) and to the bottom of it add

obj-y += hello/

This tells the kernel build system to also compile our newly included folder.

Once done, we then need to also add the syscall entry to the architecture-specific table.

Each CPU architecture could have specific syscalls and we need to let them know for which architecture ours is made.

For x86_64 the file is

arch/x86/entry/syscalls/syscall_64.tbl

Add your syscall entry there, keeping in mind to only use a free number and not use any numbers prohibited in the table comments.

For me 462 was free, so I added the new entry as such

462 common hello sys_hello

Here 462 is mapped to our syscall; “common” means it is available to both the 64-bit and x32 ABIs. Our syscall is named hello and its entry function is sys_hello.

Compiling and installing the new kernel

Perform the following commands

NOTE: I in no way or form guarantee the safety, security, integrity and stability of your system by following this guide. All instructions listed here are based on my own experience and don’t guarantee the same outcome on your systems. Proceed with caution and care.

Now that we have the legal stuff done, let’s proceed ;)

cp /boot/config-$(uname -r) .config
make olddefconfig
make -j $(nproc)
sudo make -j $(nproc) modules_install
sudo make install

Here we are copying the currently booted kernel’s config file and asking the build system to use the same values, picking defaults for anything new. Then we build the kernel with parallel processing, based on the number of cores reported by nproc. After that we install the modules and then our custom kernel (at your own risk).

Kernel compilation takes a lot of time, so get a coffee or 10 and enjoy lines of text scrolling by on the terminal.

It can take a few hours based on system speed so your mileage may vary. Your fan might also scream at this stage to keep temperatures under check (happened to me too).

The fun part, using the new syscall

Now that our syscall is baked into our kernel, reboot the system and make sure to select the new custom kernel from GRUB while booting.

Once booted, let’s write a C program to leverage the syscall

Create a file, maybe “test.c” with the following content

#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>
int main(void) {
  printf("%ld\n", syscall(462));
  return 0;
}

Here replace 462 with the number you chose for your syscall in the table.

Compile the program and then run it

make test
chmod +x test
./test

If all goes right, your terminal will print a “0” and the syscall output will be visible in the kernel logs.

Access the logs by dmesg

sudo dmesg | tail

And voila, you should be able to see your syscall message printed there.

Congratulations if you made it 🎉

Please again remember the following points:

  • Compiling kernel takes a lot of time
  • The newly compiled kernel takes quite a bit of disk space, so please ensure enough space is available
  • Linux kernel moves fast with code changes

Practical intro to fiber-optic networks

I was looking into how to link a remote point in my mansion to the network and checked how it could work with a fiber-optic connection, since the router had an SFP+ socket.

TL/DR: I’ll go with a SFP+ bidi single-mode connection.

First of all, stay safe. Don’t look directly into a fiber with your eye. The light/laser is not powerful, but why do it? You won’t see anything anyway, as the light is in the infrared.

Prosthetics that don't betray

Tech takes a central place in our lives. Banking and administrative tasks are happening more and more online. It's becoming increasingly difficult to get through life without a computer or a smartphone. They have become external organs necessary to live our lives.

Steve Jobs called the computer the bicycle for the mind. I believe computers & smartphones have become prosthetics, extensions of people that should unconditionally and entirely belong to them. We must produce devices and products the general public can trust.

Microsoft, Google and Apple are three American companies that build the operating systems our computers, phones, and servers run on. This American hegemony over ubiquitous devices is dangerous for all citizens worldwide, especially under an unpredictable, authoritarian American administration.

Producing devices and an operating system for them is a gigantic task. Fortunately, it is not necessary to start from zero. In this post I share what I think is the best foundation for a respectful operating system and how to get it into European, and maybe American, hands. In a follow-up post I will talk more about distribution channels for older devices.

[!warning] The rest of the world matters

In this post I take a European-centric view. The rest of the world matters, but I am not familiar with what their needs are nor how to address them.

We're building prosthetics

Prosthetics are extension of ourselves as individuals. They are deeply personal. We must ensure our devices & products are:

  • Transparent about what they do. They must not betray people and do things behind their backs. Our limbs do what we tell them. When they don't, it's considered a problem and we go to a physician to fix it.
  • Intuitive, documented, accessible and stable. People shouldn't have to re-learn how to do things they were used to doing. When they don't know how to do something, it must be easy for them to look it up or find someone to explain it to them. The devices must also be accessible and inclusive to reduce inequalities, instead of reinforcing them. Those requirements are a social matter, not a technical one.
  • Reliable, affordable, and repairable. Computers & smartphones must not allow discrimination based on social status and wealth. Everyone must have access to devices they can count on, and be able to maintain them in a good condition. This is also a social problem and not a technical one. It is worth noting that "the apps I need must be available for my system" is an often overlooked aspect of reliability, and "I don't have to install the system because it's bundled with my machine" is an important aspect of affordability.

I believe that the GNOME project is one of the best placed to answer those challenges, especially when working in coordination with the excellent postmarketOS people who work on resurrecting older devices abandoned by their manufacturers. There is real stagnation in the computing industry that we must see as a social opportunity.

Constraints are good

GNOME is a computing environment aiming for simplicity and efficiency. Its opinionated approach benefits both users and developers:

  • From the user perspective, apps look and feel consistent and sturdy, and are easy to use thanks to well thought out defaults.
  • From the developer perspective, the opinionated human interface guidelines let them develop simpler, more predictable apps with less edge cases to test for.

GNOME is a solid foundation to build respectful tech on. It doesn't betray people by doing things behind their back. It aims for simplicity and stability, although it could use some more user research to back design decisions if there were funding to do so, as was successfully the case for GNOME 40.

Mobile matters

GNOME's Human Interface Guidelines and development tooling make it easy to run GNOME apps on mobile devices. Some volunteers are also working on making GNOME Shell (the "desktop" view) render well on mobile devices.

postmarketOS already offers it as one of the UIs you can install on your phone. With mobile taking over traditional computer usage, it is critical to consider the mobile side of computing too.

Hackability and safety

As an open source project, GNOME remains customizable by advanced users who know they are bringing unsupported changes, can break their system in the process, and deal with it. It doesn't make customization easy for those advanced users, because it doesn't optimize for them.

The project also has its fair share of criticism, some valid, and some not. I agree that sometimes the project can be too opinionated and rigid, optimizing for extreme consistency at the expense of user experience. For example, while I agree that system trays are suboptimal, they're also a pattern people have been used to for decades, and removing them is very frustrating for many.

But some criticism is also coming from people who want to tinker with their system and spend countless hours building a system that's the exact fit for their needs. Those are valid use cases, but GNOME is not built to serve them. GNOME aims to be easy to use for the general public, which includes people who are not tech-experts and don't want to be.

We're actually building prototypes

As mighty as the GNOME volunteers might be, there is still a long way before the general public can realistically use it. GNOME needs to become a fully fledged product shipped on mainstream devices, rather than an alternative system people install. It also needs to involve representatives of the people it intends to serve.

You just need to simply be tech-savvy

GNOME is not (yet) an end user product. It is a desktop environment that needs to be shipped as part of a Linux distribution. There are many distributions to choose from. They are not shipping the same version of GNOME, and some patch it more or less heavily. This kind of fragmentation is one of the main factors holding the Linux desktop back.

The general public doesn't want to have to pick a distribution and bump into all the edge cases that creates. They need a system that works predictably, that lets them install the apps they need, and that gives them safe ways to customize it as a user.

That means they need a system that doesn't let them shoot themselves in the foot in the name of customizability, and that prevents them from doing some things unless they sign with their blood that they know it could make it unusable. I share Adrian Vovk's vision for A Desktop for All and I think it's the best way to productize GNOME and make it usable by the general public.

People don't want to have to install an "alternative" system. They want to buy a computer or a smartphone and use it. For GNOME to become ubiquitous, it needs to be shipped on devices people can buy.

For GNOME to really take off, it needs to become a system people can use both in their personal life and at work. It must become a compelling product in enterprise deployments, to route enough money towards development and maintenance, to make it an attractive platform for vendors to build software for, and to make it an attractive platform for device manufacturers to ship.

What about the non tech-savvy?

GNOME aims to build a computing platform everyone can trust. But it doesn't have a clear, scalable governance model with representatives of those it serves. GNOME has rudimentary governance to define what is part of the project and what is not, thanks to its Release Team, but it is largely a do-ocracy, as highlighted in the Governance page of GNOME's Handbook as well as in GNOME designer Tobias Bernard's series Community Power.

A do-ocracy is a very efficient way to onboard volunteers and empower people who can give away their free time to get things done fast. It is however not a great way to get work done on areas that matter to a minority who can't afford to give away free time or pay someone to work on it.

The GNOME Foundation is indeed not GNOME's vendor today, and it doesn't contribute the bulk of the design and code of the project. It maintains the infrastructure (technical and organizational) the project builds on. A critical, yet little visible task.

To be a meaningful, fair, inclusive project for more than engineers with spare time and spare computers, the project needs to improve in two areas:

  1. It needs a Product Committee to set a clear product direction so GNOME can meaningfully address the problems of its intended audience. The product needs a clear purpose, a clear audience, and a robust governance to enforce decisions. It needs a committee with representatives of the people it intends to serve, designers, and solution architects. Of course it also critically needs a healthy set of public and private organizations funding it.
  2. It needs a Development Team to implement the direction the committee has set. This means doing user research and design, technical design, implementing the software, doing advocacy work to promote the project to policymakers, manufacturers, private organizations' IT department and much more.

[!warning] Bikeshedding is a real risk

A Product Committee can be a useful structure for people to express their needs, draft a high-level and realistic solution with designers and solution architects, and test it. Designers and technical architects must remain in charge of designing and implementing the solution.

The GNOME Foundation appears as a natural host for these bodies, especially since it's already taking care of the assets of the project like its infrastructure and trademark. A separate organization could more easily pull the project in a direction that serves its own interests.

Additionally, the GNOME Foundation taking on this kind of work doesn't conflict with the present do-ocracy, since volunteers and organizations could still work on what matters to them. But it would remain a major shift in the project's organization and would likely upset some volunteers who would feel that they have less control over the project.

I believe this is a necessary step to make the public and private sector invest in the project, generate stable employment for people working on it, and ultimately make GNOME have a systemic, positive impact on society.

[!warning] GNOME needs solution architects

The GNOME community has designers with a good product vision. It is also full of experts on their respective modules, but it has a shortage of people with a good technical overview of the project, who can turn product issues into technical ones at the scale of the whole project.

So what now?

"The year of the Linux desktop" has become a meme now for a reason. The Linux community, if such a nebulous thing exists, is very good at solving technical problems. But building a project bigger than ourselves and putting it in the hands of the millions of people who need it is not just a technical problem.

Here are some critical next steps for the GNOME Community and Foundation to reclaim personal computing from the trifecta of tech behemoths, and fulfill an increasingly important need for democracies.

Learn from experience

Last year, a team of volunteers led by Sonny Piers and Tobias Bernard wrote a grant bid for the Sovereign Tech Fund, and was granted €1M. There are some major takeaways from this adventure.

At risk of stating the obvious, money does solve problems! The team tackled significant technical issues not just for GNOME but for the free desktop in general. I urge organizations and governments that take their digital independence seriously to contribute meaningfully to the project.

Finally and unsurprisingly, one-offs are not sustainable. The Foundation needs to build sustainable revenue streams from a diverse portfolio to grow its team. A €1M grant is extremely generous from a single organization. It was a massive effort from the Sovereign Tech Agency, and a significant part of their 2024 budget. But it is also far from enough to sustain a project like GNOME if every volunteer was paid, let alone paid a fair wage.

Tread carefully, change democratically

Governance and funding are a chicken and egg problem. Funders won't send money to the project if they are not confident that the project will use it wisely, or if they can't weigh in on the project's direction. Without money to support the effort, only volunteers can set up the technical governance processes, in their spare time.

Governance changes must be done carefully though. Breaking the status quo without a plan comes with significant risks. It can demotivate current volunteers, make the project lose traction with newcomers, and leave the project to die before enough funding reaches it to sustain it. A lot of people have invested significant amounts of time and effort into GNOME, and this must be treated with respect.

Build a focused MVP

For the STF project, the GNOME Foundation relied on contractors and consultancies. To be fully operational and efficient, it must get into a position to hire people with the most critical skills. I believe the most critical profile right now is the solution architect. With more revenue, developers and designers can join the team as it grows.

But for that to happen, the Foundation needs to:

  1. Define who GNOME is for in priority, bearing in mind that "everyone" doesn't exist.
  2. Build a team of representatives of that audience, and a product roadmap: what problems do these people have that GNOME could solve, how could GNOME solve it for them, how could people get to using GNOME, and what tradeoffs would they have to make when using GNOME.
  3. Build the technical roadmap (the steps to make it happen).
  4. Fundraise to implement the roadmap, factoring in the roadmap creation costs.
  5. Implement, and test

The Foundation can then build on this success and start engaging with policymakers, manufacturers, and vendors to extend its reach.

Alternative proposals

The model proposed has a significant benefit: it gives clarity. You can give money to the GNOME Foundation to contribute to the maintenance and evolution of the GNOME project, instead of only supporting its infrastructure costs. It unlocks the possibility to fund user research that would also benefit all the downstreams.

It is possible to take the counter-point and argue that GNOME doesn't have to be an end-user product, but should remain an upstream that several organizations use for their own product and contribute to.

The "upstream only" model is the status quo, and the main advantage of this model is that it lets contributing organizations focus on what they need the most. The GNOME Foundation would need to scale down to a minimum to only support the shared assets and infrastructure of the project and minimize its expenses. Another (public?) organization would need to tackle the problem of making GNOME a well integrated end-user product.

In the "upstream only" model, there are two choices:

  • Either the governance of GNOME itself remains the same, a do-ocracy where whoever has the skills, knowledge and financial power to do so can influence the project.
  • Or the Community can introduce a more formal governance model to define what is part of GNOME and what is not, like Python PEPs and Rust's RFCs.

It's an investment

Building an operating system usable by the masses is a significant effort and requires a lot of expertise. It is tempting to think that since Microsoft, Google and Apple are already shipping several operating systems each, we don't need one more.

However, let's remember that these are all American companies, building proprietary ecosystems that they have complete control over. In these uncertain times, Europe must not treat the USA as a direct enemy, but the current administration makes it clear that it would be reckless to continue treating it as an ally.

Building an international, transparent operating system that provides an open platform for people to use and for which developers can distribute apps will help secure the EU's digital sovereignty and security, at a cost that wouldn't even make a dent in the budget. It's time for policymakers to take up their responsibilities and not let America control the digital public space.

February 27, 2025

GNOME is participating in Google Summer of Code 2025!

The Google Summer of Code 2025 mentoring organizations have just been announced and we are happy that GNOME’s participation has been accepted!

If you are interested in an internship with GNOME, check gsoc.gnome.org for our project ideas and getting-started information.

The price of statelessness is eternal waiting

Most CI systems I have seen have been stateless. That is, they start by getting a fresh Docker container (or building one from scratch), doing a Git checkout, building the thing and then throwing everything away. This is simple and mathematically pure, but really slow. The approach is further driven by the economics of cloud computing, where CPU time and network transfers are cheap but storage is expensive (or at least it is possible to get almost infinite CI build time for open source projects, but not persistent storage). Storage is probably expensive because the cloud vendor needs to take care of things like backups and can no longer dispatch a task to any machine on the planet, only to one that already has the required state, and so on.

How much could you reduce resource usage (or, if you prefer, improve CI build speed) by giving up on statelessness? Let's find out by running some tests. To get a reasonably large code base I used LLVM. I did not actually use any cloud or Docker in the tests, but I simulated them on a local media PC. I used 16 cores to compile and 4 to link (any more would saturate the disk). Tests were not run.

Baseline

Creating a Docker container with all the build deps takes a few minutes. Alternatively you can prebuild it, but then you need to download a 1 GB image.

Doing a full Git checkout would be wasteful. There are basically three different ways of doing a partial checkout: shallow clone, blobless and treeless. They take the following amount of time and space:

  • shallow: 1m, 259 MB
  • blobless: 2m 20s, 961 MB
  • treeless: 1m 46s, 473 MB

Doing a full build from scratch takes 42 minutes.
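
For reference, the three checkout styles above correspond roughly to the following git invocations (a sketch; the LLVM repository URL stands in for whatever your CI actually builds):

# Shallow: only the latest commit, no history.
git clone --depth 1 https://github.com/llvm/llvm-project.git

# Blobless: full history, but file contents are fetched on demand.
git clone --filter=blob:none https://github.com/llvm/llvm-project.git

# Treeless: full commit history, trees and blobs fetched on demand.
git clone --filter=tree:0 https://github.com/llvm/llvm-project.git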

With CCache

Using CCache in Docker is mostly a question of bind mounting a persistent directory in the container's cache directory. A from-scratch build with an up to date CCache takes 9m 30s.
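
As a minimal sketch (the image name, host path and build command are made up for illustration), the bind mount part looks something like this:

# Persist the host's ccache directory into the container and point ccache at it.
docker run --rm \
    -v /srv/ci/ccache:/ccache \
    -e CCACHE_DIR=/ccache \
    llvm-build-image \
    ninja -C /build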

With stashed Git repo

Just like the CCache dir, the Git checkout can also be persisted outside the container. Doing a git pull on an existing full checkout takes only a few seconds. You can even mount the repo dir read only to ensure that no state leaks from one build invocation to another.

With Danger Zone

One main thing a CI build ensures is that the code keeps on building when compiled from scratch. It is quite possible to have a bug in your build setup that manifests itself so that the build succeeds if a build directory has already been set up, but fails if you try to set it up from scratch. This was especially common back in ye olden times when people used to both write Makefiles by hand and to think that doing so was a good idea.

Nowadays build systems are much more reliable and this is not such a common issue (though it can definitely still occur). So what if you were willing to give up full from-scratch checks on merge requests? You could, for example, still have a daily build that validates that use case. For some organizations this would not be acceptable, but for others it might be a reasonable tradeoff. After all, why should a CI build take noticeably longer than an incremental build on the developer's own machine? If anything it should be faster, since servers are a lot beefier than developer laptops. So let's try it.

The implementation for this is the same as for CCache, you just persist the build directory as well. To run the build you do a Git update, mount the repo, build dir and optionally CCache dirs to the container and go.
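
Put together, a build in this mode could look roughly like the following (paths and image name are again made up):

# Update the persistent checkout on the host; takes a few seconds.
git -C /srv/ci/llvm-project pull

# Mount the repo read only plus the persistent build and ccache dirs, then build.
docker run --rm \
    -v /srv/ci/llvm-project:/src:ro \
    -v /srv/ci/build:/build \
    -v /srv/ci/ccache:/ccache \
    -e CCACHE_DIR=/ccache \
    llvm-build-image \
    ninja -C /build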

I tested this by doing a git pull on the repo and then doing a rebuild. There were a couple of new commits, so this should be representative of real-world workloads. An incremental build took 8m 30s whereas a from-scratch rebuild using a fully up to date cache took 10m 30s.

Conclusions

The amount of wall clock time used for the three main approaches were:

  • Fully stateless
    • Image building: 2m
    • Git checkout: 1m
    • Build: 42m
    • Total: 45m
  • Cached from-scratch
    • Image building: 0m (assuming it is not "apt-get update"d for every build)
    • Git checkout: 0m
    • Build: 10m 30s
    • Total: 10m 30s
  • Fully cached
    • Image building: 0m
    • Git checkout: 0m
    • Build: 8m 30s
    • Total: 8m 30s

Similarly the amount of data transferred was:

  • Fully stateless
    • Image: 1G
    • Checkout: 300 MB
  • Cached from-scratch:
    • Image: 0
    • Checkout: O(changes since last pull), typically a few kB
  • Fully cached
    • Image: 0
    • Checkout: O(changes since last pull)

The differences are quite clear. Just by using CCache the build time drops by almost 80%. Persisting the build dir reduces the time by a further 15%. It turns out that having machines dedicated to a specific task can be a lot more efficient than rebuilding the universe from atoms every time. Fancy that.

The final 2 minute improvement might not seem like that much, but on the other hand do you really want your developers to spend 2 minutes twiddling their thumbs for every merge request they create or update? I sure don't. Waiting for CI to finish is one of the most annoying things in software development.

February 26, 2025

scikit-survival 0.24.0 released

It’s my pleasure to announce the release of scikit-survival 0.24.0.

A highlight of this release is the addition of cumulative_incidence_competing_risks(), which implements a non-parametric estimator of the cumulative incidence function in the presence of competing risks. In addition, the release adds support for scikit-learn 1.6, including support for missing values in ExtraSurvivalTrees.

Analysis of Competing Risks

In classical survival analysis, the focus is on the time until a specific event occurs. If no event is observed during the study period, the time of the event is considered censored. A common assumption is that censoring is non-informative, meaning that censored subjects have a similar prognosis to those who were not censored.

Competing risks arise when each subject can experience an event due to one of $K$ ($K \geq 2$) mutually exclusive causes, termed competing risks. Thus, the occurrence of one event prevents the occurrence of other events. For example, after a bone marrow transplant, a patient might relapse or die from transplant-related causes (transplant-related mortality). In this case, death from transplant-related mortality precludes relapse.

The bone marrow transplant data from Scrucca et al., Bone Marrow Transplantation (2007) includes data from 35 patients grouped into two cancer types: Acute Lymphoblastic Leukemia (ALL; coded as 0), and Acute Myeloid Leukemia (AML; coded as 1).

from sksurv.datasets import load_bmt

bmt_features, bmt_outcome = load_bmt()
diseases = bmt_features["dis"].cat.rename_categories(
    {"0": "ALL", "1": "AML"}
)
diseases.value_counts().to_frame()
dis count
AML 18
ALL 17

During the follow-up period, some patients might experience a relapse of the original leukemia or die while in remission (transplant related death). The outcome is defined similarly to standard time-to-event data, except that the event indicator specifies the type of event, where 0 always indicates censoring.

import pandas as pd

status_labels = {
    0: "Censored",
    1: "Transplant related mortality",
    2: "Relapse",
}
risks = pd.DataFrame.from_records(bmt_outcome).assign(
    label=lambda x: x["status"].replace(status_labels)
)
risks["label"].value_counts().to_frame()
label count
Relapse 15
Censored 11
Transplant related mortality 9

The table above shows the number of observations for each status.

Non-parametric Estimator of the Cumulative Incidence Function

If the goal is to estimate the probability of relapse, transplant-related death is a competing risk event. This means that the occurrence of relapse prevents the occurrence of transplant-related death, and vice versa. We aim to estimate curves that illustrate how the likelihood of these events changes over time.

Let’s begin by estimating the probability of relapse using the complement of the Kaplan-Meier estimator. With this approach, we treat deaths as censored observations. One minus the Kaplan-Meier estimator provides an estimate of the probability of relapse before time $t$.

import matplotlib.pyplot as plt
from sksurv.nonparametric import kaplan_meier_estimator

times, km_estimate = kaplan_meier_estimator(
    bmt_outcome["status"] == 1, bmt_outcome["ftime"]
)
plt.step(times, 1 - km_estimate, where="post")
plt.xlabel("time $t$")
plt.ylabel("Probability of relapsing before time $t$")
plt.ylim(0, 1)
plt.grid()

However, this approach has a significant drawback: considering death as a censoring event violates the assumption that censoring is non-informative. This is because patients who died from transplant-related mortality have a different prognosis than patients who did not experience any event. Therefore, the estimated probability of relapse is often biased.

The cause-specific cumulative incidence function (CIF) addresses this problem by estimating the cause-specific hazard of each event separately. The cumulative incidence function estimates the probability that the event of interest occurs before time $t$, and that it occurs before any of the competing causes of an event. In the bone marrow transplant dataset, the cumulative incidence function of relapse indicates the probability of relapse before time $t$, given that the patient has not died from other causes before time $t$.
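
For reference, the standard non-parametric estimator behind this (a sketch of the textbook formula; the notation is mine, not taken from the scikit-survival documentation) is

$\hat{F}_k(t) = \sum_{i:\, t_i \le t} \hat{S}(t_{i-1}) \, \frac{d_{ki}}{n_i}$

where the $t_i$ are the distinct event times, $n_i$ is the number of subjects still at risk just before $t_i$, $d_{ki}$ is the number of events of cause $k$ at $t_i$, and $\hat{S}(t)$ is the Kaplan-Meier estimate of remaining free of any event. Summing $\hat{F}_k(t)$ over all causes $k$ gives the total risk curve plotted below.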

from sksurv.nonparametric import cumulative_incidence_competing_risks

times, cif_estimates = cumulative_incidence_competing_risks(
    bmt_outcome["status"], bmt_outcome["ftime"]
)
plt.step(times, cif_estimates[0], where="post", label="Total risk")
for i, cif in enumerate(cif_estimates[1:], start=1):
    plt.step(times, cif, where="post", label=status_labels[i])
plt.legend()
plt.xlabel("time $t$")
plt.ylabel("Probability of event before time $t$")
plt.ylim(0, 1)
plt.grid()

The plot shows the estimated probability of experiencing an event at time $t$ for both the individual risks and for the total risk.

Next, we want to estimate the cumulative incidence curves for the two cancer types — acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML) — to examine how the probability of relapse depends on the original disease diagnosis.

_, axs = plt.subplots(2, 2, figsize=(7, 6), sharex=True, sharey=True)

for j, disease in enumerate(diseases.unique()):
    mask = diseases == disease
    event = bmt_outcome["status"][mask]
    time = bmt_outcome["ftime"][mask]
    times, cif_estimates, conf_int = cumulative_incidence_competing_risks(
        event,
        time,
        conf_type="log-log",
    )
    for i, (cif, ci, ax) in enumerate(
        zip(cif_estimates[1:], conf_int[1:], axs[:, j]), start=1
    ):
        ax.step(times, cif, where="post")
        ax.fill_between(times, ci[0], ci[1], alpha=0.25, step="post")
        ax.set_title(f"{disease}: {status_labels[i]}", size="small")
        ax.grid()

for ax in axs[-1, :]:
    ax.set_xlabel("time $t$")
for ax in axs[:, 0]:
    ax.set_ylim(0, 1)
    ax.set_ylabel("Probability of event before time $t$")

The left column shows the estimated cumulative incidence curves (solid lines) for patients diagnosed with ALL, while the right column shows the curves for patients diagnosed with AML, along with their 95% pointwise confidence intervals. The plot indicates that the estimated probability of relapse at $t=40$ days is more than three times higher for patients diagnosed with ALL compared to AML.

If you want to run the examples above yourself, you can execute them interactively in your browser using binder.

February 25, 2025

GNOME in GSoC 2025

Hi Everyone!

Google Summer of Code 2025 is here! Interested in being a part of it? Read on!

The GNOME Foundation has been a part of Google Summer of Code for almost every iteration, and we have applied for this year as well and are waiting for its confirmation!

Our tentative projects list is now available at GNOME GSoC Website

To make it easier for newcomers, we’ve built resources to help navigate both GSoC and the GNOME ecosystem:

You can also watch the awesome video on GNOME's impact and history on YouTube - GUADEC 2017 - Jonathan Blandford - The History of GNOME

From my experience, GNOME has been an incredible community filled with inspiring people. If you're looking to make an impact with one of the largest, oldest and most influential free software communities, I’d highly recommend giving GNOME a try.

You might just find a second home here while honing your skills alongside some of the best engineers around.

GNOME was my intro into the larger FOSS community when I became a GSoC 2022 Intern there, and has helped me on countless occasions, and I hope it will be the same for you!

If you have been a part of GNOME and want to contribute as a mentor, then let us know as well; GNOME can always use some great mentors!

For any questions, feel free to join the chat :D

Also, you can check out my previous post on the GSoC process for more insights on LinkedIn

Looking forward to seeing you in GSoC 2025!

February 24, 2025

ThinkPad X1 Carbon Gen 12 camera support and other IPU6 camera work

I have been working on getting the camera on the ThinkPad X1 Carbon Gen 12 to work under Fedora.

This requires 3 things:

  1. Some ov08x40 sensor patches, these are available as downstream cherry-picks in Fedora kernels >= 6.12.13
  2. A small pipewire fix to avoid WirePlumber listing a bunch of bogus extra "ipu6" Video Sources; these fixes are available in Fedora's pipewire packages >= 1.2.7-4
  3. I2C and GPIO drivers for the new Lattice USB IO-expander, these drivers are not available in the upstream / mainline kernel yet

I have also rebased the out of tree IPU6 ISP and proprietary userspace stack in rpmfusion and I have integrated the USBIO drivers into the intel-ipu6-kmod package. So for now getting the cameras to work on the X1 Carbon Gen 12 requires installing the out of tree drivers through rpmfusion. Follow these instructions to enable rpmfusion; you need both the free and nonfree repos.

Then make sure you have a new enough kernel installed and install the rpmfusion akmod for the USBIO drivers:

sudo dnf update 'kernel*'
sudo dnf install akmod-intel-ipu6

The latest version of the out of tree IPU6 ISP driver can co-exist with the mainline / upstream IPU6 CSI receiver kernel driver. So both the libcamera software ISP FOSS stack and Intel's proprietary stack can co-exist now. If you do not want to use the proprietary stack you can disable it by running 'sudo ipu6-driver-select foss'.

After installing the kmod package, reboot and then in Firefox go to Mozilla's webrtc test page and click on the "Camera" button. You should now get a camera permission dialog with 2 cameras: "Built in Front Camera" and "Intel MIPI Camera (V4L2)". The "Built in Front Camera" is the FOSS stack and the "Intel MIPI Camera (V4L2)" is the proprietary stack. Note the FOSS stack will show a strongly zoomed in (cropped) image; this is caused by the GUM test page, and in e.g. google-meet this will not be the case.

I have also been making progress with some of the other open IPU6 issues:





libinput and 3-finger dragging

Ready in time for libinput 1.28 [1] and after a number of attempts over the years we now finally have 3-finger dragging in libinput. This is a long-requested feature that allows users to drag by using a 3-finger swipe on the touchpad. Instead of the normal swipe gesture you simply get a button down, pointer motion, button up sequence. Without having to tap or physically click and hold a button, so you might be able to see the appeal right there.

Now, as with any interaction that relies on the mere handful of fingers that are on our average user's hand, we are starting to have usage overlaps. Since the only difference between a swipe gesture and a 3-finger drag is in the intention of the user (and we can't detect that yet, stay tuned), 3-finger swipes are disabled when 3-finger dragging is enabled. Otherwise it does fit in quite nicely with the rest of the features we have though.

There really isn't much more to say about the new feature except: It's configurable to work on 4-finger drag too so if you mentally substitute all the threes with fours in this article before re-reading it that would save me having to write another blog post. Thanks.

[1] "soonish" at the time of writing

GNOME 48 and a changed tap-and-drag drag lock behaviour

This is a heads up as mutter PR!4292 got merged in time for GNOME 48. It (subtly) changes the behaviour of drag lock on touchpads, but (IMO) very much so for the better. Note that this feature is currently not exposed in GNOME Settings so users will have to set it via e.g. the gsettings commandline tool. I don't expect this change to affect many users.

This is a feature of a feature of a feature, so let's start at the top.

"Tapping" on touchpads refers to the ability to emulate button presses via short touches ("taps") on the touchpad. When enabled, a single-finger tap emulates a left mouse button click, a two-finger tap a right button click, etc. Taps are short interactions and to be recognised the finger must be set down and released again within a certain time and not move more than a certain distance. Clicking is useful but it's not everything we do with touchpads.

"Tap-and-drag" refers to the ability to keep the emulated button pressed so it's possible to drag something while the mouse button is logically down. The sequence required to do this is a tap immediately followed by the finger down (and held down). This will press the left mouse button so that any finger movement results in a drag. Releasing the finger releases the button. This is convenient, but especially on large monitors or for users with different-than-whatever-we-guessed-is-average dexterity this can make it hard to drag something to its final position - a user may run out of touchpad space before the pointer reaches the destination. For those, the tap-and-drag "drag lock" is useful.

"Drag lock" refers to the ability to keep the mouse button pressed until "unlocked", even if the finger moves off the touchpad. It's the same sequence as before: tap followed by the finger down and held down. But releasing the finger will not release the mouse button; instead another tap is required to unlock and release the mouse button. The whole sequence thus becomes tap, down, move.... tap, with any number of finger releases in between. Sounds (and is) complicated to explain, but it's quite easy to try, and once you're used to it, it will feel quite natural.

The above behaviour is the new behaviour which non-coincidentally also matches the macOS behaviour (if you can find the toggle in the settings, good practice for easter eggs!). The previous behaviour used a timeout instead, so the mouse button was released automatically if the finger stayed up past a certain timeout. This was less predictable and caused issues for users who weren't fast enough. The new "sticky" behaviour resolves this issue and is (Alanis Morissette-style ironically) faster to release (a tap can be performed before the previous timeout would have expired).

Anyway, TLDR, a feature that very few people use has changed defaults subtly. Bring out the pitchforks!

As said above, this is currently only accessible via gsettings and the drag-lock behaviour change only takes effect if tapping, tap-and-drag and drag lock are enabled:

  $ gsettings set org.gnome.desktop.peripherals.touchpad tap-to-click true
  $ gsettings set org.gnome.desktop.peripherals.touchpad tap-and-drag true
  $ gsettings set org.gnome.desktop.peripherals.touchpad tap-and-drag-lock true
  
All features above are actually handled by libinput, this is just about a default change in GNOME.

February 21, 2025

Flathub Safety: A Layered Approach from Source to User

With thousands of apps and billions of downloads, Flathub has a responsibility to help ensure the safety of our millions of active users. We take this responsibility very seriously with a layered, in-depth approach including sandboxing, permissions, transparency, policy, human review, automation, reproducibility, auditability, verification, and user interface.

Apps and updates can be fairly quickly published to Flathub, but behind the scenes each one takes a long journey full of safety nets to get from a developer’s source code to being used on someone’s device. While information about this process is available between various documentation pages and the Flathub source code, I thought it could be helpful to share a comprehensive look at that journey all in one place.

Flatpak Security & Sandboxing

Each app on Flathub is distributed as a Flatpak. This app packaging format was specifically designed with security and safety at its core, and has been continuously improved over the past decade. It has received endorsements, development, and wide adoption from organizations such as Bambu Lab, Bitwig, CodeThink, Collabora, Discord, The Document Foundation, elementary, Endless, GDevelop, KiCad, Kodi, GNOME, Intel, KDE, LibreOffice, Mozilla, OBS Studio, Plex, Prusa Research, Purism, Red Hat, System76, Telegram, Valve, and many more.

Flatpak logo

From a technical perspective, Flatpak does not require elevated privileges to install apps, isolates apps from one another, and limits app access to the host environment. It makes deep use of existing Linux security technologies such as cgroups, namespaces, bind mounts, and seccomp as well as Bubblewrap for sandboxing.

Flatpak apps are also built from a declarative manifest, which defines the exact sources and environment to build from to enable auditability and as much reproducibility as possible.

Due to Flatpak’s sandboxing, apps don’t have permission to access many aspects of the host OS or user data they might need. To get that access, apps must either request it using Portals or use static permissions.

Portals & Static Permissions

Most permissions can be requested and granted on demand via an API called Portals. These permissions do not need to be given ahead of time, as desktop environments provide the mechanisms to give user consent and control over them e.g. by indicating their use, directly prompting the user before the permission is granted, and allowing revocation.

Illustration of portal, light Illustration of a portal, dark

Portals include APIs for handling auto-start and background activity; access to the camera, clipboard, documents, files, location, screen casting, screenshots, secrets like passwords, trash, and USB devices; setting global shortcuts; inhibiting suspend or shut down; capturing input; monitoring memory, network, or power profiles; sending notifications; printing; setting a wallpaper; and more. In each case, the user’s desktop environment (like GNOME or KDE) manages if and how a user is notified or prompted for permissions—and if the permission is not granted, the app must handle it gracefully.

Some permissions are not covered by Portals, such as basic and generally safe resources for which dynamic permissions wouldn’t make sense. In these cases—or if a Portal does not yet exist or is not widely adopted for a certain permission—developers may use static permissions. These are set by the developer at build time in the public build manifest.

Static permissions are intended to be as narrowly-scoped as possible and are unchanging for the life of each release of an app. They are not generally designed to be modified by an end user except in cases of development, debugging, or reducing permissions. Due to this, Flatpak always prefers apps to use Portals over static permissions whenever possible.
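
To make this concrete, static permissions can be both inspected and narrowed from the command line; a small sketch, using GNOME Text Editor purely as an arbitrary example:

# Show the static permissions an installed app was built with.
flatpak info --show-permissions org.gnome.TextEditor

# A user can still narrow them further, e.g. deny access to the home directory.
flatpak override --user --nofilesystem=home org.gnome.TextEditor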

Shared Runtimes & Modules

Every app is built against a Flatpak runtime hosted by Flathub. The runtimes provide basic dependencies, are well-maintained by the Linux community, and are organized according to various platforms a developer may target; for example, GNOME, KDE, or a generic FreeDesktop SDK. This means many apps—especially those targeting a platform like GNOME or KDE and using its developer libraries—don’t need to pull in external dependencies for critical components.

Runtimes are automatically installed with apps that require them, and are updated separately by the user’s OS, app store, or CLI when needed. When a dependency in a runtime is updated, e.g. for a critical security update, it rolls out as an update to all users of apps that use that runtime.

In some cases there are commonly-used libraries not provided directly by one of the available runtimes. Flathub provides shared modules for these libraries to centralize the maintenance. If an app needs to bundle other dependencies, they must be defined in the manifest. We also provide tooling to automatically suggest updates to app dependencies.
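
As an aside, you can see this split on any machine with Flatpak installed; a quick sketch:

# List the runtimes installed alongside apps.
flatpak list --runtime

# Pull pending app and runtime updates, e.g. a security fix in a runtime library.
flatpak update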

Submission & Human Review

Once an app is developed, it must be submitted to Flathub for consideration to be hosted and distributed. At this stage, human Flathub reviewers will review the app to ensure it follows the requirements. Of note:

  • Apps must be sandboxed with as narrow permissions as possible while still functioning, including using appropriate runtime permissions instead of broad static permissions when possible. All broad static permissions need to be justified by the submitter during review.

  • Apps must not be misleading or malicious, which covers impersonating other apps or including outright malicious code or functionality.

  • App IDs must accurately reflect the developer’s domain name or code hosting location; e.g. if an app is submitted that purports to be Lutris, its ID must be obviously associated with that app (in this case, Lutris.net).

The app’s Flatpak manifest is reviewed, including all static permissions. Each of the documented requirements are checked—and if a reviewer finds something out of place they request changes to the submission, ask for rationale, or reject it completely.

Automated Testing

In addition to human review, Flathub also makes use of automated testing for a number of quality and safety checks. For example, our automated tests block unsafe or outright wrong permissions, such as apps requesting access to whole session or system buses or unsafe bus names. Our automated tests also help ensure reproducible builds by disallowing pointing at bare git branches without a specific commit.
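
As a rough illustration, the same style of checks can be run locally before submission; something like the following, assuming the linter shipped in the org.flatpak.Builder flatpak and a hypothetical com.example.App manifest:

flatpak run --command=flatpak-builder-lint org.flatpak.Builder manifest com.example.App.json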

Reproducibility & Auditability

Once an app has been approved and passes initial tests, it is built using the open source and publicly-available flatpak-builder utility from the approved public manifest, on Flathub’s infrastructure, and without network access. Sources for the app are validated against the documented checksums, and the build fails if they do not match.

For further auditability, we specify the git commit of the manifest repo used for the build in the Flatpak build subject. The build itself is signed by Flathub’s key, and Flatpak/OSTree verify these signatures when installing and updating apps.

We mirror the exact sources each app is built against in case the original source goes down or there is some other issue, and anyone can build the Flatpak back from those mirrored sources to reproduce or audit the build. The manifest used to build the app is hosted on Flathub’s GitHub org, plus distributed to every user in the app’s sandbox at /app/manifest.json—both of which can be compared, inspected, and used to rebuild the app exactly as it was built by Flathub.

Verification

Apps can be verified on Flathub; this process confirms that an app is published by the original developer or an authorized party by proving ownership of the app ID. While all apps are held to the same high standards of safety and review on Flathub, this extra layer helps users confirm that the app they are getting is also provided or authorized by its developer.

Verified checkmark

Over half of the apps on Flathub so far are verified, with the number regularly increasing.

App Store Clients

Once an app is developed, submitted, tested, approved, built, and distributed, it appears in app store clients like Flathub.org, KDE Discover, GNOME Software, and elementary AppCenter—as well as the Flatpak CLI. While exact implementations vary and the presentation is up to the specific app store client, generally each will show:

  • Static permissions and their impact on safety
  • Open Age Rating Service rating and details
  • If an app uses outdated runtimes
  • Release notes for each release
  • If static permissions increase between releases

Flathub.org and GNOME Software also display the app’s verified status.

Updates

Once an app is accepted onto Flathub, it still remains subject to a number of safety protections built into the flow:

  • Flathub maintains ownership over the manifest repo, while app developers are invited as limited collaborators
  • The manifest repo’s default branch is protected, preventing direct pushes without a pull request
  • The manifest repo’s commit history cannot be rewritten, making it harder to sneak something in
  • Flathub’s automated tests must pass before a PR can be merged and an update can be pushed
  • Static permission changes are held for human review before an update is released to users
  • Critical MetaInfo changes are held for human review, e.g. if an app name, developer name, app summary, or license changes

Build moderation dashboard showing permission changes of Kodi, light Build moderation dashboard showing permission changes of Kodi, dark

Special Cases

There are a few special cases to some of the points above which I would be remiss not to mention.

Flathub has granted a select group of trusted partners, including Mozilla and OBS Studio, the ability to directly upload their builds from their own infrastructure. These projects have an entire CI pipeline which validates the state of their app, and they perform QA before tagging the release and pushing it to Flathub. Even for these few cases of direct uploads, we require a public manifest and build pipeline to enable similar reproducibility and auditability as outlined above. We also require the apps to be verified, and still run automated tests such as our linter against them.

Lastly, some apps (around 6%) use extra-data to instruct Flatpak to download and unpack an existing package (e.g. a Debian package) during installation. This process runs in a tight unprivileged Flatpak sandbox that does not allow host filesystem or network access, and the sandbox cannot be modified by app developers. These are largely proprietary apps that cannot be built on Flathub’s infrastructure, or apps using complex toolchains that require network access during build. This is discouraged since it does not enable the same level of auditability nor multi-architecture support that building from source does. As a result, this is heavily scrutinized during human review and only accepted as a last resort.

Even with the above, the vast majority of apps are built reproducibly from source on Flathub’s infrastructure. The handful of apps that aren’t still greatly benefit from the transparency and auditability built into all of the other layers.

Incident Response

While we expect to catch the vast majority of safety issues with the above, we are also able to respond to anything that may have slipped through. For example, we have the ability to remove an app from the Flathub remote in case we find that it’s malicious. We can also revert, recall, or block broken or malicious app updates.

We take security reports and legal issues very seriously; please contact the Flathub admins to report an issue, or chat with us on Matrix.


In Summary…

As you can see, Flathub takes safety very seriously. We’ve worked with the greater Linux and FreeDesktop ecosystem for over a decade on efforts such as Flatpak, OSTree, Portals, and even desktop environments and app store clients to help build the best app distribution experience—for both users and app developers—with safety as a core requirement. We believe our in-depth, multi-layered approach to safety has set a high bar that few others have met—and we will continue to raise it.

Thank you to all contributors to Flatpak, Flathub, and the technologies our ecosystem relies on. Thanks to the thousands of developers for trusting us with app distribution, and to bbhtt, Jordan, and Sonny for reviewing this post. And as always, thank you to the millions of users trusting Flathub as your source of apps on Linux. ♥

February 19, 2025

The Fedora Project Leader is willfully ignorant about Flathub

Update 1: Cassidy wrote a much more comprehensive and well written explanation about the review guidelines, permission, Flathub infrastructure and other things discussed in this post. I highly recommend you check it out.

Update 2: A couple people mentioned that the mystery hardware survey application was indeed Hardware Probe. Miller actually opened a thread on Flathub's discourse about it. At the time it was a terminal application, and due to a bug that affected gnome-software, the confirmation prompt was getting skipped. This wasn't affecting other store fronts or launching from the application grid.

Now to the original post:

Today I woke up to a link to an interview with the current Fedora Project Leader, Matthew Miller. Brodie, who conducted the interview, mentioned that Miller was the one that reached out to him. The background of this video was the currently ongoing issue regarding OBS, Bottles and the Fedora project, which Niccolò covered in an excellent video explaining and summarizing the situation. You can also find the article over at thelibre.news. "Impressive" as this story is, it's for another time.

What I want to talk about in this post is the outrageous, smearing and straight up slanderous statements about Flathub that the Fedora Project Leader made during the interview.

I am not directly involved with the Flathub project (a lot of my friends are), however I am a maintainer of the GNOME Flatpak Runtime, and a contributor to the Freedesktop-sdk and ElementaryOS Runtimes. I also maintain applications that get published on Flathub directly. So you can say I am someone invested in the project who has put a lot of time into it. It was extremely frustrating to hear what would only qualify as reddit-level, completely made up arguments with no basis in reality coming directly from Matthew Miller.

Below is a transcript, slightly edited for brevity, of all the times Flathub and Flatpak was mentioned. You can refer to the original video as well as there were many more interesting things Miller talked about.

It starts off with an introduction and some history and around the 10-minute mark, the conversation starts to involve Flathub.

Miller: [..] long way of saying I think for something like OBS we’re not really providing anything by packaging that. Miller: I think there is an overall place for the Fedora Flatpaks, because Flathub part of the reason its so popular (there’s a double edged sword), (its) because the rules are fairly lax about what can go into Flathub and the idea is we want to make it as easy for developers to get their things to users, but there is not really much of a review

This is not the main reason why Flathub is popular; it's a lot more involved and interesting in practice. I will go into this in a separate post hopefully soon.

Claiming that Flathub does not have any review process or inclusion policies is straight up wrong and incredibly damaging. It’s the kind of thing we’ve heard ad nauseam from Flathub haters, but never from a person in charge of one of the most popular distributions and that should have really really known better.

You can find the Requirements in the Flathub documentation if you spend 30 seconds to google for them, along with the submission guidelines for developers. If those documents qualify as a wild west and free for all, I can’t possibly take you seriously.

I haven’t maintained a linux distribution package myself so I won’t go to comparisons between Flathub and other distros, however you can find people, with red hats even, that do so and talked about it. Of course this is one off examples and social bias from my part. But it proves how laughable of a claim is that things are not reviewed. Additionally, the most popular story I hear from developers is how Flathub requirements are often stricter and sometimes cause annoyances.

Screenshot of the post from this link: https://social.vivaldi.net/@sesivany/114030210735848325

Additionally, Flathub has been the driving force behind encouraging applications to update their metadata, completely reworking the user experience and handling of permissions and making them prominent to the user. (To the point where even network access is marked as potentially-unsafe.)

Miller: [..] the thing that says verified just says that it’s verified from the developer themselves.

No, verified does not simply mean that the developer signed off on it. Let's take another 30 seconds to look into the Flathub documentation page about exactly this.

A verified app on Flathub is one whose developer has confirmed their ownership of the app ID […]. This usually also may mean that either the app is maintained directly by the developer or a party authorized or approved by them.

It still went through the review process and all the rest of requirements and policies apply. The verified program is basically a badge to tell users this is a supported application by the upstream developers, rather than the free for all that exists currently where you may or may not get an application released from years ago depending on how stable your distribution is.

Sidenote, did you know that 1483/3003 applications on Flathub are verified as of the writing of this post? As opposed to maybe a dozen of them at best in the distributions. You can check for yourself

Miller: .. and it doesn’t necessarily verify that it was build with good practices, maybe it was built in a coffee shop on some laptop or whatever which could be infected with malware or whatever could happen

Again, if Miller had made the bare minimum effort, he would have come across the Requirements page, which describes exactly how an application in Flathub is built, instead of further spreading made up takes about the infrastructure. I can't stress enough how damaging it has been throughout the years to claim that "Flathub may be potential malware". Why is it malware? Because I don't like its vibes and I just assume so.

I am sure that if I did the same about Fedora in a very, very public medium with thousands of listeners I would probably end up with a lawyer's letter from Red Hat.

Now, applications in Flathub are all built without network access, on Flathub's build servers, using flatpak-builder and Flatpak manifests, which are a declarative format. This means all the sources required to build the application are known and validated/checksummed, the build is reproducible to the extent possible, and you can easily inspect the resulting binaries. The manifest used to build the application ends up in /app/manifest.json, which you can inspect with the following command and use to rebuild the application yourself exactly like it's done on Flathub.

$ flatpak run --command=cat org.gnome.TextEditor /app/manifest.json
{
  "id" : "org.gnome.TextEditor",
  "runtime" : "org.gnome.Platform",
  "runtime-version" : "47",
  "runtime-commit" : "d93ca42ee0c4ca3a84836e3ba7d34d8aba062cfaeb7d8488afbf7841c9d2646b",
  "sdk" : "org.gnome.Sdk",
  "sdk-commit" : "3d5777bdd18dfdb8ed171f5a845291b2c504d03443a5d019cad3a41c6c5d3acd",
  "command" : "gnome-text-editor",
  "modules" : [
    {
...

The exceptions to this are proprietary applications, naturally, and a handful of applications (under an OSI-approved license) where Flathub developers helped the upstream projects integrate a direct publishing workflow into their deployment pipelines. I am aware of Firefox and OBS as the main examples, both of which publish to Flathub through their Continuous Deployment (CI/CD) pipeline, the same way they generate their builds for the other platforms they support, and the code for how it happens is available in their repos.

If you have issues trusting Mozilla's infrastructure, then how are you trusting Firefox in the first place, and good luck auditing Gecko to make sure it does not start to ship malware. Surely distribution packagers audit every single change that happens from release to release for each package they maintain and can verify no malicious code ever gets merged. The xz backdoor was very recent, and it was identified by pure chance; none of this prevented it.

Then Miller proceeds to describe the Fedora build infrastructure and afterward we get into the following:

Miller: I will give an example of something I installed in Flathub, I was trying to get some nice gui thing that would show me like my system Hardware stats […] one of them ones I picked seemed to do nothing, and turns out what it was actually doing, there was no graphical application it was just a script, it was running that script in the background and that script uploaded my system stats to a server somewhere.

Firstly we don’t really have many details to be able to identify which application it was, I would be very curious to know. Now speculating on my part, the most popular application matching that description it’s Hardware Probe and it absolutely has a GUI, no matter how minimal. It also asks you before uploading.

Maybe there is a org.upload.MySystem application that I don’t know about, and it ended up doing what was in the description, again I would love to know more and update the post if you could recall!

Miller: No one is checking for things like that and there’s no necessarily even agreement that that was was bad.

Second time! Again with the “There is no review and inclusion process in Flathub” narrative. There absolutely is, and these are the kinds of things that get brought up during it.

Miller: I am not trying to be down on Flathub because I think it is a great resource

Yes, I can see that, however in your ignorance you did something much worse than being "down" on it. This is pure slander and defamation, coming from the current "Fedora Project Leader", the "Technically Voice of Fedora" (direct quote from a couple seconds later). All the statements made above are manufactured and inaccurate. Myths that you'd hear from people who never asked, looked or cared about any of this, because the moment you do it's obvious how laughable all these claims are.

Miller: And in a lot of ways Flathub is a competing distribution to Fedora’s packaging of all applications.

Precisely, he is spot on here, and I believe this is what kept Miller willfully ignorant and caused him to happily pick up the first anti-flatpak/anti-Flathub arguments he came across on reddit and repeat them verbatim without putting any thought into it. I do not believe Miller is malicious on purpose; I do truly believe he means well and does not know better.

However, we can’t ignore the conflict that arises from his current job position as an big influence to why incidents like this happened. Nor the influence and damage this causes when it comes from a person of Matthew Miller’s position.

Moving on:

Miller: One of the other things I wanted to talk about Flatpak, is the security and sandboxing around it. Miller: Like I said the stuff in the Flathub are not really reviewed in detail and it can do a lot of things:

Third time with the "no review" theme. I was fuming when I first heard this, and I am still very angry about it, if you can't tell. Not only is this an incredibly damaging lie, as covered above, it gets repeated over and over again.

With Flatpak basically the developer defines what the permissions are. So there is a sandbox, but the sandbox is what the person who put it there is, and one can imagine that if you were to put malware in there you might make your sandboxing pretty loose.

Brodie: One of the things you can say is “I want full file system access, and then you can do anything”

No, again it’s stated in the Flathub documentation, permissions are very carefully reviewed and updates get blocked when permissions change until another review has happened.

Miller: Android and Apple have pretty strong leverage against application developers to make applications work in their sandbox

Brodie: the model is the other way around where they request permissions and then the user grants them whereas Flatpak, they get the permission and then you could reject them later

This is partially correct. The first part, about leverage, I will talk about in a bit, but here's a primer on how permissions work in Flatpak and how they compare to the sandboxing technologies in iOS and Android.

In all of them we have a separation between static and dynamic permissions. Static permissions are the ones the application always has access to, for example the network, or the ability to send you notifications. These are always there and are usually mentioned at install time. Dynamic permissions are the ones where the application has to ask the user before being able to access a resource. For example, opening a file chooser dialog so the user can upload a file: the application then only gets access to the file the user consented to, or none at all. Another example is using the camera on the device and capturing photos/video from it.

Brodie here gets a bit confused and only mentions static permissions. If I had to guess, it would be because we usually refer to the dynamic permissions system in the Flatpak world as "Portals".
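
Both kinds are easy to inspect on a running system, which is a good way to see the split for yourself (the app ID below is just an arbitrary example):

# Static permissions the app ships with, defined by the developer at build time.
flatpak info --show-permissions org.gnome.TextEditor

# Dynamic, portal-mediated grants recorded in the permission store.
flatpak permissions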

Miller: it didn’t used to be that way and and in fact um Android had much weaker sandboxing like you could know read the whole file system from one app and things like that […] they slowly tightened it and then app developers had to adjust Miller: I think with the Linux ecosystem we don’t really have the way to tighten that kind of thing on app developers … Flatpak actually has that kind of functionality […] with portals […] but there’s no not really a strong incentive for developers to do that because, you know well, first of all of course my software is not going to be bad so why should I you know work on sandboxing it, it’s kind of extra work and I I don’t know I don’t know how to solve that. I would like to get to the utopian world where we have that same security for applications and it would be nice to be able to install things from completely untrusted places and know that they can’t do anything to harm your system and that’s not the case with it right now

As with any technology and its adoption, we don't get to perfection from day 1. Static permissions are necessary to provide a migration path for existing applications, until the appropriate and much more complex dynamic permission mechanisms that are needed have been developed. For example, up until iOS 18 it wasn't possible to give applications access to a subset of your contacts list. Think of it like having to give access to your entire filesystem instead of the specific files you want. Similarly, partial-only access to your photos library arrived a couple of years ago in iOS and Android.

In an ideal world all permissions are dynamic, but this takes time and resources and adaptation for the needs of applications and the platform as development progresses.

Now about the leverage part.

I do agree that "the Linux ecosystem" as a whole does not have any leverage on application developers. That is because Miller is looking in the wrong place for it. There is no Linux ecosystem, but rather platforms that developers target.

GNOME and KDE, as they distribute all their applications on Flathub, absolutely have leverage. Similarly, Flathub itself has leverage by changing the publishing requirements and inclusion guidelines. Which, I keep being told, don't exist. Every other application that wants to publish also has to adhere to the rules on Flathub. ElementaryOS and their AppCenter have leverage on developers. Canonical has the same pull with the Snap Store. Fedora, on the other hand, doesn't have any leverage, because the Fedora Flatpak repository is irrelevant, broken, and nobody wants to use it.

[..] The xz backdoor gets brought up when discussing dependencies and how software gets composed together.

Miller: we try to keep all of those things up to date and make sure everything is patched across the dist even when it’s even when it’s difficult. I think that really is one of the best ways to keep your system secure and because the sandboxing isn’t very strong that can really be a problem, you know like the XZ thing that happened before. If XZ is just one place it’s not that hard of an update but if you’ve got a 100 Flatpaks from different places […] and no consistency to it it’s pretty hard to manage that

I am not going to go in depth into this problem domain and the arguments around it; in fact, I have been writing another blog post about it for a while, which I hope to publish shortly. Until then, I cannot recommend highly enough Emmanuele’s and Lennart’s blog posts, as well as one of the very early posts from Alex, written when Flatpak was in its early design phase, on the shortcomings of the current distribution model.

Now about bundled dependencies. The concept of Runtimes has served us well so far, and we have been doing a pretty decent job providing most of the things applications need but would not want to bundle themselves. This makes the Runtimes a single place for most of the high-profile dependencies (curl, openssl, webkitgtk and so on) that you’d frequently update for security vulnerabilities; once an update is done, it rolls out to everyone without anyone needing to manually update the applications or even rebuild them.

Applications only need to bundle their direct dependencies, and as mentioned above, the Flatpak manifest includes the exact definition of all of them. They are available for anyone to inspect, and there is tooling that can scan them and, hopefully in the future, alert us.
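
As a purely illustrative, hypothetical example, a module entry in a flatpak-builder manifest pins the exact source being bundled (the module name, URL and checksum below are placeholders):

modules:
  - name: libfoo
    buildsystem: meson
    sources:
      - type: archive
        url: https://example.org/releases/libfoo-1.2.3.tar.xz
        sha256: (checksum of the exact tarball being bundled)

It is this pinned URL and checksum that tooling can inspect and compare against known vulnerable releases.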

If the Docker/OCI model, where you end up bundling the entire toolchain and runtime and then have to maintain it, keep up with updates and rebuild your containers, is good enough for all those enterprise distributions, then the Flatpak model, which is much more efficient, streamlined, thought out and far less maintenance-intensive, is probably fine too.

Miller: part of the idea of having a distro was to keep all those things consistent so that it’s easier for everyone, including the developers

As mentioned above, this is nothing that fundamentally differs from the leverage that Flathub and the platform developers have.

Brodie: took us 20 minutes to get to an explanation [..] but the tldr Fedora Flatpak is basically it is built off of the Fedora RPM build system and because that it is more well tested and sort of intended, even if not entirely for the Enterprise, designed in a way as if an Enterprise user was going to use it the idea is this is more well tested and more secure in a lot of cases not every case.
Miller: Yea that’s basically it

This is a question/conclusion that Brodie reaches after the previous statements, and it is by far the most enraging thing in this interview. It is also an excellent example of the damage Matthew Miller caused today; if I were a Flathub developer I would stop at nothing short of a public apology from the Fedora project itself. Hell, I want one just as an application developer who publishes there. The interview has basically been shitting on both the developers of Flathub and the people who choose to publish on it. And if that’s not enough, there should be an apology just out of decency. Dear god..

Brodie: how should Fedora handle upstreams that don’t want to be packaged  like the OBS case here where they did not want there to be a package in Fedora Flatpak or another example is obviously bottles which has made a lot of noise about the packaging

Lastly I want to touch on this closing question in light of recent events.

Miller: I think we probably shouldn’t do it. We should respect people’s wishes there. At least when it is an open source project working in good faith there. There maybe some other cases where the software, say theoretically there’s somebody who has commercial interests in some thing and they only want to release it from their thing even though it’s open source. We might want to actually like, well it’s open source we can provide things, we in that case we might end up you having a different name or something but yeah I can imagine situations where it makes sense to have it packaged in Fedora still but in general especially and when it’s a you know friendly successful open source project we should be friendly yeah. The name thing is something people forget history like that’s happened before with Mozilla with Firefox and Debian.

This is an excellent idea! But it gets better:

Miller: so I understand why they strict about that but it was kind of frustrating um you know we in Fedora have basically the same rules if you want to take Fedora Linux and do something out of it, make your own thing out of it, put your own software on whatever, you can do that but we ask you not to call it Fedora if it’s a fedora remix brand you can use in some cases otherwise pick your own name it’s all open source but you know the name is ours. yeah and I the Upstream as well it make totally makes sense.

Brodie: yeah no the name is completely understandable especially if you do have a trademark to already even if you don’t like it’s it’s common courtesy to not name the thing the exact same thing

Miller: yeah I mean and depending on the legalities like you don’t necessarily have to register a trademark to have the trademark kind of protections under things so hopefully lawyers you can stay out of the whole thing because that always makes the situations a lot more complicated, and we can just get along talking like human beings who care about making good software and getting it to users.

And I completely agree with all of this. But let’s break it down a bit, because no matter how nice the words and intentions, it hasn’t been working out this way with the Fedora community so far.

First, Miller agrees that the Fedora project should respect application developers’ wishes not to have their application distributed by Fedora, or to at least have it distributed as a renamed version if Fedora wishes to keep shipping it.

However, every single time a developer has asked for this, they have been ridiculed, laughed at and straight up bullied by Fedora packagers and the rest of the Fedora community. The response from other distribution projects and companies has been similar; it’s not just Fedora. You can look at Bottles’ story for the most recent example. It is very nice to hear Miller’s intentions, but they mean nothing in practice.

Then Miller proceeds to assure us that he understands why naming and branding are such a big deal to those projects (unlike the rest of the Fedora community, again). He further informs us that Fedora has the exact same policies and asks the same of people who want to fork Fedora. Which makes the treatment that every single application developer has received when asking for the exact same thing all the more outrageous.

What I didn’t know is that, in certain cases and depending on the jurisdiction, you don’t even need to have a registered trademark to be covered by some of the protections.

And lastly we come to lawyers. Neither Fedora nor application developers would ever want it to come to this, and the Bottles developers stated multiple times that they don’t want to have to file for a trademark just to be taken seriously. Similarly, the OBS developers said that resorting to legal action would be the last thing they would want to do, and that they would rather have the issue resolved before that. But it took OBS, a project with a high enough profile and the resources required to acquire a trademark and threaten legal action, before the Fedora leadership cared enough to treat application developers like human beings and get the Fedora packagers and community members to comply (something they had stated multiple times they simply couldn’t do).

I hate all of this. Fedora and all the other distributions need to do better. They all claim to care about their users, but happily keep shipping broken and misconfigured software to them instead of the upstream version, just because that is what aligns with their current interests. In this case it is the promotion of Fedora tooling and Fedora Flatpaks over the applications on Flathub that they have no control over. In previous incidents it was about branding applications like the rest of the system even though it made them unusable. And I could just as easily find and list a bunch of examples from other distributions.

They don’t care about their users; they care about their bottom line first and foremost. Any civil attempt at fixing issues gets ignored and laughed at, up until there is a threat of legal action or enough PR damage, drama and shitshow that they can’t ignore it anymore and have to backtrack.

These are my two angry cents. Overall, I am not exactly sure how Matthew Miller managed, in a rushed and desperate attempt at damage control for the OBS drama, not only to make it worse, but to piss off the entire Flathub community at the same time. But what’s done is done; let’s see what we can do to address the issues that have festered and persisted for years now.

February 18, 2025

TIL that Google Docs can preview suggestions

Most of our organizational and operational knowledge at work is stored in a Bookstack instance that we completely control. Bookstack is a great tool for this purpose, but it's not a silver bullet when it comes to document editing. Its most notable limitation is that it doesn't handle concurrent editing really well.

I self-host a HedgeDoc instance for my personal needs, but it also has its own limits. It only supports Markdown and it doesn't support fine-grained permissions on different files.

I spent some time looking for alternatives, but Google Docs is hands down the best collaborative document editing suite out there for anyone who has to work with people who are not engineers, at a company scale.

It's far from perfect though. One of its useful features is the "suggestion" mode that lets someone suggest additions/deletions in the document without making any definitive change. The document owner can then review those suggestions and accept or decline them.

Suggestions can add up pretty quickly (depending on how nitpicky your reviewer is 🤭) and rapidly make the document unreadable. Fortunately there's a way to preview how the document would look if all the suggestions were accepted (or declined).

This very handy dandy tool is located under the Tools > Review suggested edits menu.

February 13, 2025

GNOME Should Kick the Foot to the Curb… Mostly

This past week volunteers working with the GNOME design and engagement teams debuted a brand new GNOME.org website—one that was met largely with one of two reactions:

  1. It’s beautiful and modern, nice work! and

  2. Where is the foot‽

You see, the site didn’t[^logo update] feature the GNOME logo at the top of the page—it just had the word GNOME, with the actual logo relegated to the footer. Admittedly, some folks reacted both ways (it’s pretty, but where’s the foot?). To me, it seems that the latter reaction was mostly the sentiment of a handful of long-time contributors who have understandably grown very cozy with the current GNOME logo:

GNOME logo, which is a foot (light and dark versions)

[^logo update]: 2025-02-14: I wrote a quick merge request to use the logo on the website yesterday since I figured someone else would, anyway. I wanted to demonstrate what it would look like (and do it “right” if it was going to happen). That change has since been merged.

Why the foot?

The current GNOME logo is a four-toed foot that is sort of supposed to look like a letter G. According to legend (read: my conversations with designers and contributors who have been working with GNOME for more years than I have fingers and toes), it is basically a story of happenstance: an early wallpaper featured footprints in the sand; those were modified into an icon for the menu, which was turned into a sort of logo while being tweaked to look like the letter G, and then that version was flattened, cleaned up a bit and successfully trademarked by the GNOME Foundation.

Evolution of the logo, 1997–2002

Graphic shared by Michael Downey on Mastodon

So, why do people like it? My understanding (and please drop a comment if I’m wrong) is that it often boils down to one or more of:

  1. It’s always been this way; as long as GNOME has had an official logo, it’s been a variation of the foot.

  2. It’s a trademark so it’s not feasible to change it from a legal or financial perspective.

  3. It has personality, and anything new would run the risk of being bland.

  4. It has wide recognition at least within the open source enthusiast and developer space, so changing it would be detrimental to the brand equity.

What’s the problem?

I’m the first to admit that I don’t find the foot to be a particularly good logo. Over time, I’ve narrowed down my thoughts (and the feedback I’ve heard from others) into a few recurring reasons:

  1. It doesn’t convey anything about the name or project, which by itself may be fine—many logos don’t directly. But it feels odd to have such a bold logo choice that doesn’t directly relate to the name “GNOME,” or to any specific aspect of the project.

  2. It’s an awkward shape that doesn’t fit cleanly into a square or circle, especially at smaller sizes (e.g. for a social media avatar or favicon). It’s much taller than it is wide, and it’s lopsided weight-wise. This leads to frustrations from designers when trying to fit the logo into a square or circle space, leading to excessive amounts of whitespace and/or error-prone manual alignment compared to other elements.

  3. It is actively off-putting and unappealing to at least some folks including much of the GNOME design team, newer contributors, people outside the open source bubble—and apparently potentially entire cultures (which has been raised multiple times over the past 20+ years). Anecdotally, almost everyone new I’ve introduced GNOME to has turned their nose up at the “weird foot,” whether it’s when showing the website or rocking a tee or sticker to support the project. It doesn’t exactly set a great first impression for a community and modern computing platform. And yes, there are a bunch of dumb memes out there about GNOME devs all being foot fetishists which—while I’m not one to shame what people are into—is not exactly the brand image you want for your global, inclusive open source project.

  4. It raises the question of what the role of the design team is: if the design team cannot be allowed to effectively lead the design of the project, what are we even doing? I think this is why the topic feels so existential to me as a member of the design team. User experience design spans everything from the moment someone first interacts with the brand of a product through to them actually using it day-to-day—and right now, the design team’s hands are tied for the first half of that journey.

Issues with the foot (light and dark versions)

The imbalance and complexity make for non-ideal situations

So what can we do?

While there are some folks that would push for a complete rebrand of GNOME—name included—I feel like there’s a softer approach we could take to the issue. I would also point out that the vast majority of people using GNOME—those on Ubuntu, RHEL, Fedora, Endless OS, Debian, etc.—are not seeing the foot anywhere. They’re seeing their distro’s logo; many of them are simply using, e.g., “Ubuntu” and may not even be aware they’re using GNOME.

Given all of the above, I propose that a path forward would be to:

  1. Phase the foot out from any remaining user-facing spaces since it’s hard to work with in all of the contexts we need to use a logo, and it’s not particularly attractive to new users or welcoming to potential contributors—something we need to keep in mind as an aging open source project. This has been an unspoken natural phenomenon as members of the GNOME design team have soured a bit on trying to make designs look nice while accommodating the foot; as a result we have started to see less prominent usage of the foot e.g. on release notes, GNOME Circle, This Week in GNOME, the GNOME Handbook, the new website (before it was re-added), and in other spaces where the people doing the design work aren’t the most fond of it.

  2. Commission a new brand logo to represent GNOME to the outside world; this would be the logo you’d expect to see at GNOME.org, on user-facing social media profiles, on event banners, on merch, etc. We’ve been mulling ideas over in the design team for literal years at this point, but it’s been difficult to pursue anything seriously without attracting very loud negative feedback from a handful of folks—perhaps if it is part of a longer-term plan explicitly including the above steps, it could be something we’d be able to pursue. And it could still be something quirky, cute, and whimsical! I personally don’t love the idea of something super generic as a logo—I think something that connects to “gnomes,” our history, and/or our modern illustration style would be great here. But importantly, it would need to be designed with the intent of its modern usage in mind, e.g. working well at small sizes, in social media avatars, etc.

  3. Refresh the official GNOME brand guidelines by explicitly including our modern use of color, animation, illustrations, and recurring motifs (like the amazing wallpapers from Jakub!). This is something that has sort of started happening naturally, e.g. with the web team’s newer web designs and as the design team made the decision to move to Inter-based Adwaita Sans for the user interface—and this push continues to receive positive feedback from the community. But much of these efforts have not been reflected in the official project brand guidelines, causing an awkward disconnect between what we say the brand is and how it’s actually widely used and perceived.

  4. Immortalize the foot as a mascot, something to be used in developer documentation, as an easter egg, and perhaps in contributor-facing spaces. It’s much easier to tell newcomers, “oh this is a goofy icon that used to be our logo—we love it, even if it’s kind of silly” without it having to represent the whole project from the outside. It remains a symbol for those “in the know” within the contributor community while avoiding it necessarily representing the entire GNOME brand.

  5. Stretch goal: title-case Gnome as a brand name. We’ve long moved past GNOME being an acronym (GNU Network Object Model Environment?)—with a bit of a soft rebrand, I feel we could officially say that it’s spelled “Gnome,” especially if done so in an official logotype. As we know, much like the pronunciation of GNOME itself, folks will do what they want—and they’re free to!—but this would be more about how the brand name is used/styled in an official capacity. I don’t feel super strongly about this one, but it is awkward to have to explain why it’s called GNOME, in all caps, yet isn’t actually an acronym anymore (though it used to be)—and why the logo is a foot—any time I tell someone what I contribute to. ;)

What do you think?

I genuinely think GNOME as a project and community is in a good place to move forward with modernizing our outward image a bit. Members of the design team like Jamie, kramo, Brage, Jakub, Tobias, Sam, and Allan and other contributors across the project like Alice, Sophie, and probably half a dozen more I am forgetting have been working hard at modernizing our UI and image when it comes to software—I think it’s time we caught up with the outward brand itself.

Hit me up on Mastodon or any of the links in the footer to tell me if you think I’m right, or if I’ve gotten this all terribly wrong. :)

February 11, 2025

Enter TeX

As promised, I wanted to write a blog post about this application.

Enter TeX is a TeX / LaTeX text editor previously named LaTeXila and then GNOME LaTeX. It is based on the same libraries as gedit.

Renames

LaTeXila was a fun name that I picked when I was a student back in 2009. The project was then renamed to GNOME LaTeX in 2017, but that was not a great choice because of the GNOME trademark. Now it is called Enter TeX.

By having "TeX" in the name is more future-proof than "LaTeX", because there is also Plain TeX, ConTeXt and GNU Texinfo. Only LaTeX is currently well supported by Enter TeX, but the support for other variants would be a welcome addition.

Note that the settings and configuration files are automatically migrated from LaTeXila or GNOME LaTeX to Enter TeX.

There is another rename: the namespace for the C code has been changed from "Latexila" to "Gtex", to have a shorter and better name.

Other news

If you're curious, you can read the top of the NEWS file; it has all the details.

If I look at the achievements file, there is also the port from Autotools to Meson that was done recently and is worth mentioning.

Known issue for the icons

Enter TeX unfortunately suffers from a combination of changes in adwaita-icon-theme and GThemedIcon (part of GIO). Link to the issue on GitLab.

Compare the two screenshots and choose the one you prefer:

screenshot 1

screenshot 2

As an interim solution, what I do is install adwaita-icon-theme 41.0 in the same prefix as the one where Enter TeX is installed (not a system-wide prefix).

To summarize

  • LaTeXila -> GNOME LaTeX -> Enter TeX
  • C code namespace: Latexila -> Gtex
  • Build system: Autotools -> Meson
  • An old version of adwaita-icon-theme is necessary.

This article was written by Sébastien Wilmet, currently the main developer behind Enter TeX.

Various news about gedit

A new year begins, a good time to share some various news about gedit.

gedit-text-editor.org

gedit has a new domain name for its website:

gedit-text-editor.org

Previously, the gedit homepage was on the GNOME wiki, but the wiki has been retired. So a new website has been set up.

Some work on the website is still necessary, especially to better support mobile devices (responsive web design), and also for printing the pages. If you are a seasoned web developer and want to contribute, don't hesitate to get in touch!

Wrapping-up statistics for 2024

The total number of commits in gedit and gedit-related git repositories in 2024 is: 1042. More precisely:

300	enter-tex
365	gedit
52	gedit-plugins
50	gspell
13	libgedit-amtk
54	libgedit-gfls
47	libgedit-gtksourceview
161	libgedit-tepl

It counts all contributions, translation updates included.
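
For anyone curious, one way to gather numbers like these (not necessarily how it was done here) is to ask git directly in each repository clone:

git rev-list --count --since=2024-01-01 --until=2025-01-01 HEAD

This prints the number of commits made during 2024 on the current branch, translation commits included.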

The list contains two apps, gedit and Enter TeX. The rest are shared libraries (re-usable code available to create other text editors).

Enter TeX is a TeX/LaTeX editor previously named LaTeXila and GNOME LaTeX. It depends on Gedit Technology and drives some of its development. So it makes sense to include it alongside gedit. A blog post about Enter TeX will most probably be written, to shed some light on this project that started in 2009.

Onwards to 2025

The development continues! To get the latest news, you can follow this blog or, alternatively, you can follow me on Mastodon.

This article was written by Sébastien Wilmet, currently the main developer behind gedit.

GNOME Has No Czech Translators

For at least the last 15 years, the translations of GNOME into Czech have been in excellent condition. With each release, I would only report that everything was translated, and for the last few years, this was also true for the vast majority of the documentation. However, last year things started to falter. Contributors who had been carrying this for many years left, and there is no one to take over after them. Therefore, we have decided to admit it publicly: GNOME currently has no Czech translators, and unless someone new takes over, the translations will gradually decline.

Personally, I started working on GNOME translations in 2008 when I began translating my favorite groupware client – Evolution. At that time, the leadership of the translation team was taken over by Petr Kovář, who was later joined by Marek Černocký who maintained the translations for many years and did an enormous amount of work. Thanks to him, GNOME was almost 100% translated into Czech, including the documentation. However, both have completely withdrawn from the translations. For a while, they were replaced by Vojtěch Perník and Daniel Rusek, but the former has also left, and Dan has now come to the conclusion that he can no longer carry on the translations alone.

I suggested to Dan that instead of trying to appeal to those who the GNOME translations have relied on for nearly two decades—who have already contributed a lot and are probably facing some form of burnout or have simply moved on to something else after so many years—it would be better to reach out to the broader community to see if there is someone from a new generation who would be willing and energetic enough to take over the translations. Just as we did nearly two decades ago.

It may turn out that an essential part of this process will be that the GNOME translations into Czech decline for some time. Because the same people have been doing the job for so many years, the community has gotten used to taking excellent translations for granted. But they are not a given: someone has to do the work. As more and more English terms appear in the GNOME interface, perhaps dissatisfaction will motivate someone to do something about it. After all, that was the motivation for the previous generation to get involved.

If someone like that comes forward, Dan and I are willing to help them with training and gradually hand over the project. We may both continue to contribute in a limited capacity, but the project needs someone new; ideally not just one person, but several, because carrying it alone is a path to burnout. Interested parties can contact us on the mailing list of the Czech translation team at diskuze-l10n-cz@lists.openalt.org.