February 14, 2025

tracepoints: gnarly but worth it

Hey all, quick post today to mention that I added tracing support to the Whippet GC library. If the support library for LTTng is available when Whippet is compiled, Whippet embedders can visualize the GC process. Like this!

Screenshot of perfetto showing a generational PCC trace

Click above for a full-scale screenshot of the Perfetto trace explorer processing the nboyer microbenchmark with the parallel copying collector on a 2.5x heap. Of course no image will have all the information; the nice thing about trace visualizers like Perfetto is that you can zoom in to sub-microsecond spans to see exactly what is happening, and you get nice mouseovers and clicky-clickies. Fun times!

on adding tracepoints

Adding tracepoints to a library is not too hard in the end. You need to pull in the lttng-ust library, which has a pkg-config file. You need to declare your tracepoints in one of your header files. Then you have a minimal C file that includes the header, to generate the code needed to emit tracepoints.

Annoyingly, this header file you write needs to be in one of the -I directories; it can’t just be in the source directory, because lttng includes it seven times (!!) using computed includes (!!!), and because the LTTng header file that does all the computed including isn’t in your directory, GCC won’t find it. It’s pretty ugly. Ugliest part, I would say. But, grit your teeth, because it’s worth it.
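
Concretely, the moving parts look something like this. This is a minimal sketch using the classic lttng-ust macro names (recent lttng-ust versions also provide LTTNG_UST_-prefixed equivalents); the provider name "whippet" and the event shown are illustrative, not necessarily what Whippet actually defines:

/* gc-tracepoints.h: declares the tracepoint provider and its events. */
#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER whippet

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "gc-tracepoints.h"

#if !defined(GC_TRACEPOINTS_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define GC_TRACEPOINTS_H

#include <lttng/tracepoint.h>

/* One event with a single integer payload field. */
TRACEPOINT_EVENT(whippet, minor_gc_begin,
                 TP_ARGS(size_t, heap_size),
                 TP_FIELDS(ctf_integer(size_t, heap_size, heap_size)))

#endif /* GC_TRACEPOINTS_H */

#include <lttng/tracepoint-event.h>

/* gc-tracepoints.c: the minimal C file that generates the probe code. */
#define TRACEPOINT_CREATE_PROBES
#define TRACEPOINT_DEFINE
#include "gc-tracepoints.h"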

Finally you pepper your source with tracepoints, which you probably wrap in some macro so that you don’t have to require LTTng, and so you can switch to other tracepoint libraries, and so on.
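
For example, a thin wrapper along these lines keeps LTTng optional at build time; GC_TRACEPOINT and HAVE_LTTNG are illustrative names, not Whippet’s actual ones:

#ifdef HAVE_LTTNG
#include "gc-tracepoints.h"
#define GC_TRACEPOINT(...) tracepoint(whippet, __VA_ARGS__)
#else
/* Compiles away to nothing when LTTng support is not built in. */
#define GC_TRACEPOINT(...) do {} while (0)
#endif

/* At a call site in the collector: */
GC_TRACEPOINT(minor_gc_begin, heap_size);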

using the thing

I wrote up a little guide for Whippet users about how to actually get traces. It’s not as easy as perf record, which I think is an error. Another ugly point. Buck up, though, you are so close to graphs!

By which I mean, so close to having to write a Python script to make graphs! Because LTTng writes its logs in so-called Common Trace Format, which as you might guess is not very common. I have a colleague who swears by it, that for him it is the lowest-overhead system, and indeed in my case it has no measurable overhead when trace data is not being collected, but his group uses custom scripts to convert the CTF data that he collects to... GTKWave (?!?!?!!).

In my case I wanted to use Perfetto’s UI, so I found a script to convert from CTF to the JSON-based tracing format that Chrome profiling used to use. But, it uses an old version of Babeltrace that wasn’t available on my system, so I had to write a new script (!!?!?!?!!), probably the most Python I have written in the last 20 years.

is it worth it?

Yes. God I love blinkenlights. As long as it’s low-maintenance going forward, I am satisfied with the tradeoffs. Even the fact that I had to write a script to process the logs isn’t so bad, because it let me get nice nested events, which most stock tracing tools don’t allow you to do.

I fixed a small performance bug because of it – a worker thread was spinning waiting for a pool to terminate instead of helping out. A win, and one that would never have shown up in a sampling profiler. I suspect that as I add more tracepoints, more bugs will be found and fixed.

fin

I think the only thing that would be better is if tracepoints were a part of Linux system ABIs – that there would be header files to emit tracepoint metadata in all binaries, that you wouldn’t have to link to any library, and the actual tracing tools would be intermediated by that ABI in such a way that you wouldn’t depend on those tools at build-time or distribution-time. But until then, I will take what I can get. Happy tracing!

#187 Triple Buffered Notifications

Update on what happened across the GNOME project in the week from February 07 to February 14.

GNOME Core Apps and Libraries

Mutter

A Wayland display server and X11 window manager and compositor library.

Georges Stavracas (feaneron) announces

Today, just in time for this edition of This Week in GNOME and after 5 years, more than a thousand review comments, and multiple massive refactorings and rewrites, the legendary merge request mutter!1441 was merged.

This merge request introduces an additional render buffer when Mutter is not able to keep up with the frames.

The technique commonly known as dynamic triple buffering can help in situations where the total time to generate a frame - including CPU and GPU work - is longer than one refresh cycle. This improves the concurrency capabilities of Mutter by letting the compositor start working on the next frame as early as possible, even when the previous frame isn’t displayed.

In practice, this kind of situation can happen with sudden bursts of activity in the compositor, for example when the GNOME Shell overview is opened after a period of low activity.

This should improve the perceived smoothness of GNOME, with fewer skipped frames and more fluid animations.

GNOME Shell

Core system user interface for things like launching apps, switching windows, system search, and more.

Julian Sparber (Away till Jan 7th) reports

The long awaited notification grouping was merged this week into GNOME Shell, just in time for GNOME 48. This was a huge effort by multiple parties, especially by Florian Müllner, who spent countless hours reviewing code changes. This is probably one of the most visible features added to GNOME thanks to the STF grant.

GNOME Contacts

Keep and organize your contacts information.

Adrien Plazas announces

Contacts received some small last minute changes right in time for GNOME 48:

  • its contact editor’s spacing has been overhauled to match other GNOME apps,
  • its birthday editing row and dialog got redesigned to not only look better but work better on mobile as well.

GNOME Circle Apps and Libraries

Tobias Bernard announces

This week Drum Machine was accepted into Circle! It’s a delightful little app to play with drum patterns and prototype track ideas. Congratulations!

Third Party Projects

Krafting - Vincent reports

SemantiK got two releases last week: 1.4.0 and 1.5.0. They both bring new improvements, code refactoring, more translation work (thanks to @johnpetersa19 for the Brazilian Portuguese translation), and a revamped language selector!

The next big step would be to create more Language Packs. If you want to help with that, feel free to contact me via Matrix!

Krafting - Vincent reports

Also, last week, I’ve been hard at work fixing bugs throughout all my apps and making them fully responsive on small screens, so they’re perfect for Mobile Linux! 🎉📱

Hex Colordle got some bug fixes and small improvements to the message shown when you lose.

Playlifin Voyager and PedantiK got some UI tweaks and bug fixes.

Reddy got some better image scaling, making it way better on small screens, as well as some library version bumps.

Gir.Core

Gir.Core is a project which aims to provide C# bindings for different GObject based libraries.

Marcel Tiede says

GirCore version 0.6.2 was released. It features support for .NET 9 and modernizes the internal binding code, resulting in better garbage collector integration and the removal of reflection-based code. As a result there are several breaking changes. A new beginner-friendly tutorial was contributed and can be found on the homepage. Please see the release notes for more details.

Fractal

Matrix messaging app for GNOME written in Rust.

Kévin Commaille announces

Due to a couple of unfortunate but important regressions in Fractal 10, we are releasing Fractal 10.1 so our users don’t have to wait too long for them to be addressed. This minor version fixes the following issues:

  • Some rooms were stuck in an unread state, even after reading them or marking them as read.
  • Joining or creating a room would crash the app.

This version is available right now on Flathub.

If you want to help us avoid regressions like that in the future, you could use Fractal Nightly! Or even better, you could pick up one of our issues and become part of the solution (rather than the problem).

Events

Kristi Progri announces

GUADEC 2025 Call for Papers is officially open! Submit your paper by March 16th via this link: https://events.gnome.org/event/259/abstracts/#submit-abstract

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

February 13, 2025

GNOME Should Kick the Foot to the Curb… Mostly

This past week volunteers working with the GNOME design and engagement teams debuted a brand new GNOME.org website—one that was met largely with one of two reactions:

  1. It’s beautiful and modern, nice work! and

  2. What have you done with the foot? You have ruined GNOME!

You see, the site doesn’t feature the GNOME logo at the top of the page—it just has the word GNOME, with the actual logo relegated to the footer. It’s pretty safe to say (from my own observations, at least) that the second reaction was mostly the sentiment of a handful of long-time contributors who had grown very cozy with the current GNOME logo:

GNOME Logo, which is a foot (light and dark versions)

What’s the problem?

It’s a four-toed foot that is sort of supposed to look like a letter G. According to legend (read: my conversations with designers and contributors who have been working with GNOME for more years than I have fingers and toes), the logo is basically a story of happenstance: an early wallpaper featured footprints in the sand, that was modified into an icon for the menu, that was turned into a sort of logo while being modified to look like the letter G, and then that version was cleaned up a bit and successfully trademarked by the GNOME Foundation.

Evolution of the logo, 1997–2002

Graphic shared by Michael Downey on Mastodon

If I’m being honest, it’s not a particularly good logo: it doesn’t convey anything about the project (which by itself is fine… many logos don’t directly!), it’s an awkward shape that doesn’t fit cleanly into a square or circle, especially at smaller sizes (e.g. for a social media avatar or favicon), and—I’ll be completely honest—way too many people have turned their nose up at the weird foot sticker on my laptop. It doesn’t exactly set a great first impression for a community and modern computing platform.

Issues with the foot (light and dark versions)

The imbalance and complexity make for non-ideal situations

But… it’s cozy! Folks who have been around in the community for longer than me have clearly gotten used to the foot, and appreciate it for being goofy, whimsical, and unserious. It has some amount of brand recognition, at least within the open source space.

So what can we do?

While there are some folks that would push for a complete rebrand of GNOME—name included, I feel like there’s a softer approach we could take to the issue. In some ways, it has been an unspoken natural phenomenon as members of the GNOME design team have soured a bit on trying to make designs look nice while accommodating a funky four-toed G-ish foot.

I propose that we:

  1. Phase the foot out from user-facing spaces since it’s hard to work with in all of the contexts we need to use a logo, and it’s not particularly attractive to new users or contributors—something we need to keep in mind as an aging open source project. We’ve started to see less prominent usage of the foot anyway, e.g. on release notes, the new website, GNOME Circle, This Week in GNOME, GNOME Handbook, and in other spaces where contributors aren’t the most fond of it.

  2. Continue to modernize the GNOME brand by leaning into our use of color, animation, illustrations, and recurring motifs (like the amazing wallpapers from Jakub!). Again this is something that has sort of started happening naturally, e.g. with the web team’s newer web designs and as the design team made the decision to move to Inter-based Adwaita Sans for the user interface. And this push continues to receive positive feedback—it’s just not yet officially reflected as well as it could be in the brand guidelines.

  3. Immortalize the foot as a mascot, something to be used as an easter egg, in developer documentation, and in contributor-facing spaces. It’s a bit easier to tell newcomers, “oh this is a fun icon that used to be our logo—we love it, even if it’s kind of goofy” without it having to represent our entire brand from the outside. It remains a symbol for the contributor community while avoiding it necessarily representing the entire GNOME brand.

  4. Commission a new logo to represent GNOME to the outside world; this would be the logo you’d expect to see at GNOME.org, on user-facing social media profiles, on event banners, etc. We’ve been mulling ideas over in the design team for literal years at this point, but it’s been difficult to pursue anything seriously without attracting very loud negative feedback from a handful of folks—perhaps if it is part of a longer-term plan explicitly including the above steps, it could be something we’d be able to pursue. And it could still be something quirky, cute, and whimsical! I personally don’t love the idea of something as generic as a circle as a logo—I think something that connects to “GNOME” or our history would be great here.

  5. Stretch goal: title-case Gnome as a brand name. We’ve long moved past GNOME being an acronym (GNU Network Object Model Environment?)—with a bit of a soft rebrand, I feel we could officially say that it’s spelled “Gnome,” especially if done so in an official logotype. As we know, much like the pronunciation of GNOME itself, folks will do what they want—and they’re free to!—but this would be more about how the brand name is used in an official capacity. I don’t feel super strongly about this one, but it is awkward to have to explain why it’s all-caps, why it’s called GNOME, and why the logo is a foot any time I tell someone what I contribute to. ;)

What do you think?

I promise I’m not trying to start flame wars here; I genuinely think GNOME as a project and community is in a good place to move forward with modernizing our image a bit. Members of the design team like Jamie, kramo, Jakub, Tobias, Sam, and Allan and contributors across the project like Alice, Sophie, and probably half a dozen more I am forgetting have been working hard at modernizing our UI and image when it comes to software—I think it’s time we caught up with the brand itself.

Hit me up on Mastodon or any of the links in the footer to tell me if you think I’m right, or if I’ve gotten this all terribly wrong. :)

February 12, 2025

2025-02-12 Wednesday

February 11, 2025

2025-02-11 Tuesday

  • Planning call, sync with Karen, Andras & partners. Out for a run in the afternoon with J. - lovely. Worked late trying to dig back through piled-up E-mail.

Enter TeX

As promised, I wanted to write a blog post about this application.

Enter TeX is a TeX / LaTeX text editor previously named LaTeXila and then GNOME LaTeX. It is based on the same libraries as gedit.

Renames

LaTeXila was a fun name that I picked up when I was a student back in 2009. Then the project was renamed to GNOME LaTeX in 2017 but it was not a great choice because of the GNOME trademark. Now it is called Enter TeX.

Having "TeX" in the name is more future-proof than "LaTeX", because there is also Plain TeX, ConTeXt and GNU Texinfo. Only LaTeX is currently well supported by Enter TeX, but support for other variants would be a welcome addition.

Note that the settings and configuration files are automatically migrated from LaTeXila or GNOME LaTeX to Enter TeX.

There is another rename: the namespace for the C code has been changed from "Latexila" to "Gtex", to have a shorter and better name.

Other news

If you're curious, you can read the top of the NEWS file; it has all the details.

If I look at the achievements file, there is also the port from Autotools to Meson that was done recently and is worth mentioning.

Known issue for the icons

Enter TeX unfortunately suffers from a combination of changes in adwaita-icon-theme and GThemedIcon (part of GIO). Link to the issue on GitLab.

Compare the two screenshots and choose the one you prefer:

screenshot 1

screenshot 2

As an interim solution, what I do is to install adwaita-icon-theme 41.0 in the same prefix as where Enter TeX is installed (not a system-wide prefix).

To summarize

  • LaTeXila -> GNOME LaTeX -> Enter TeX
  • C code namespace: Latexila -> Gtex
  • Build system: Autotools -> Meson
  • An old version of adwaita-icon-theme is necessary.

This article was written by Sébastien Wilmet, currently the main developer behind Enter TeX.

Various news about gedit

A new year begins, a good time to share some various news about gedit.

gedit-text-editor.org

gedit has a new domain name for its website:

gedit-text-editor.org

Previously, the gedit homepage was on the GNOME wiki, but the wiki has been retired. So a new website has been set up.

Some work on the website is still necessary, especially to better support mobile devices (responsive web design), and also for printing the pages. If you are a seasoned web developer and want to contribute, don't hesitate to get in touch!

Wrapping-up statistics for 2024

The total number of commits in gedit and gedit-related git repositories in 2024 is: 1042. More precisely:

300	enter-tex
365	gedit
52	gedit-plugins
50	gspell
13	libgedit-amtk
54	libgedit-gfls
47	libgedit-gtksourceview
161	libgedit-tepl

It counts all contributions, translation updates included.

The list contains two apps, gedit and Enter TeX. The rest are shared libraries (re-usable code available to create other text editors).

Enter TeX is a TeX/LaTeX editor previously named LaTeXila and GNOME LaTeX. It depends on Gedit Technology and drives some of its development. So it makes sense to include it alongside gedit. A blog post about Enter TeX will most probably be written, to shed some light on this project that started in 2009.

Onwards to 2025

The development continues! To get the latest news, you can follow this blog or, alternatively, you can follow me on Mastodon.

This article was written by Sébastien Wilmet, currently the main developer behind gedit.

GNOME Has No Czech Translators

For at least the last 15 years, the translations of GNOME into Czech have been in excellent condition. With each release, I would only report that everything was translated, and for the last few years, this was also true of the vast majority of the documentation. However, last year things started to falter. Contributors who had been carrying this for many years left, and there is no one to take over after them. Therefore, we have decided to admit it publicly: GNOME currently has no Czech translators, and unless someone new takes over, the translations will gradually decline.

Personally, I started working on GNOME translations in 2008 when I began translating my favorite groupware client – Evolution. At that time, the leadership of the translation team was taken over by Petr Kovář, who was later joined by Marek Černocký who maintained the translations for many years and did an enormous amount of work. Thanks to him, GNOME was almost 100% translated into Czech, including the documentation. However, both have completely withdrawn from the translations. For a while, they were replaced by Vojtěch Perník and Daniel Rusek, but the former has also left, and Dan has now come to the conclusion that he can no longer carry on the translations alone.

I suggested to Dan that instead of trying to appeal to those who the GNOME translations have relied on for nearly two decades—who have already contributed a lot and are probably facing some form of burnout or have simply moved on to something else after so many years—it would be better to reach out to the broader community to see if there is someone from a new generation who would be willing and energetic enough to take over the translations. Just as we did nearly two decades ago.

It may turn out that an essential part of this process will be that the GNOME translations into Czech will decline for some time. Because the same people have been doing the job for so many years, the community has gotten used to taking excellent translations for granted. But it is not a given. Someone has to do the work. As more and more English terms appear in the GNOME interface, perhaps dissatisfaction will motivate someone to do something about it. After all, that was the motivation for the previous generation to get involved.

If someone like that comes forward, Dan and I are willing to help them with training and gradually hand over the project. We may both continue to contribute in a limited capacity, but the project needs someone new, ideally not just one person, but several, because carrying it alone is a path to burnout. Interested parties can contact us in the mailing list of the Czech translation team at diskuze-l10n-cz@lists.openalt.org.

February 10, 2025

whippet at fosdem

Hey all, the video of my FOSDEM talk on Whippet is up:

Slides here, if that’s your thing.

I ended the talk with some puzzling results around generational collection, which prompted yesterday’s post. I don’t have a firm answer yet. Or rather, perhaps for the splay benchmark, it is to be expected that a generational GC is not great; but there are other benchmarks that also show suboptimal throughput in generational configurations. Surely it is some tuning issue; I’ll be looking into it.

Happy hacking!

C++ compiler daemon testing tool

In an earlier blog post I wrote about a potential way of speeding up C++ compilations (or any language that has a big up-front cost). The basic idea is to have a process that reads in all stdlib header code and is then suspended. Compilations are done by sending the actual source file + flags to this process, which then forks and resumes compilation. Basically this is a way to persist the state of the compiler without writing (or executing) a single line of serialization code.

The obvious follow up question is what is the speedup of this scheme. That is difficult to say without actually implementing the system. There are way too many variables and uncertainties to make any sort of reasonable estimate.

So I implemented it. 

Not in an actual compiler, heavens no, I don't have the chops for that. Instead I implemented a completely fake compiler that does the same steps a real compiler would need to take. It spawns the daemon process. It creates a Unix domain socket. It communicates with the running daemon. It produces output files. The main difference is that it does not actually do any compilation, instead it just sleeps to simulate work. Since all sleep durations are parameters, it is easy to test the "real" effect of various schemes.

The code is in this GH repo.
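
For illustration only (this is not the code from the repo above), the fork-on-request pattern at the heart of the scheme is roughly this, with the socket path and sleep durations as stand-in values:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <signal.h>
#include <sys/socket.h>
#include <sys/un.h>

#define SOCKET_PATH "/tmp/fake-compiler.sock"

static void expensive_setup(void) {
    /* Stand-in for parsing all the stdlib headers; in the real scheme this
     * state lives in the daemon's memory and is inherited by every forked
     * child for free. */
    sleep(4);
}

static void compile_request(int fd) {
    char request[4096] = {0};
    if (read(fd, request, sizeof(request) - 1) <= 0)
        return;
    sleep(1); /* stand-in for the per-file compilation work */
    dprintf(fd, "compiled: %s\n", request);
}

int main(void) {
    signal(SIGCHLD, SIG_IGN); /* let the kernel reap finished children */
    expensive_setup();

    int sock = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, SOCKET_PATH, sizeof(addr.sun_path) - 1);
    unlink(SOCKET_PATH);
    if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0 || listen(sock, 16) < 0) {
        perror("bind/listen");
        return 1;
    }

    for (;;) {
        int client = accept(sock, NULL, NULL);
        if (client < 0)
            continue;
        if (fork() == 0) {      /* child: "resume" from the warmed-up state */
            compile_request(client);
            close(client);
            _exit(0);
        }
        close(client);          /* parent: go back to waiting for requests */
    }
}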

The default durations were handwavy estimates based on past experience. In past measurements, stdlib includes take by far the largest chunk of the total compilation time. Thus I estimated that compilation without this scheme would take 5 seconds per file whereas compilations with it would take 1 second. If you disagree with these assumptions, feel free to run the test yourself with your own time estimates.

The end result was that on this laptop that has 22 cores a project with 100 source files took 26 seconds to compile without the daemon and 7 seconds with it. This means the daemon version finished in just a hair over 25% of a "regular" build.
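
A quick back-of-envelope check of those numbers (my own arithmetic, using the stated assumptions rather than further measurements): 100 files at 5 seconds each is 500 seconds of simulated work, which spread across 22 cores is roughly 23 seconds of ideal wall-clock time, close to the measured 26 seconds. With the daemon it is 100 × 1 = 100 seconds, or about 4.5 seconds ideal, with the remaining couple of seconds presumably going to daemon startup and per-file communication.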

Wouldn't you want your compilations to finish in a quarter of the time with zero code changes? I sure would.

(In reality the speedup is probably less than that. How much? No idea. Someone's got to implement that to find out.)

February 07, 2025

Fact-checking ChatGPT with Yocto Project release names

For fun, I decided to ask ChatGPT to "explain the themes in the Yocto Project release names". It went about as well as I expected, to be honest:

The Yocto Project follows a thematic naming convention for its release names. The themes have evolved over time:

So far so good.

Types of Birds (Older Releases) Early Yocto Project releases were named after different bird species. Examples include:

Incorrect, that was not one of the themes.

Dizzy (a reference to Dizzy Gillespie, but also sounds like a bird name)

A jazz musician that "sounds like a bird name"? One hell of a stretch, and not the reference.

Fido (a breed of pigeon)

From what I can tell, Fido is not a breed of pigeon. Also not the reference.

Jethro (a reference to Jethro Tull, but also resembles a bird-like theme)

Jethro Tull the rock band? Not the reference.

I do wonder why it decided the theme was types of birds, then made up one bird breed and listed two musicians that "sound like" bird names.

Continuing on...

Muppets Characters (Mid-Series) The Yocto Project later transitioned to naming releases after characters from The Muppets, a popular puppet show created by Jim Henson. Examples include:

No, it didn't.

Krogoth (a lesser-known Muppet)

Krogoth is so lesser-known that the Muppets Wiki doesn't mention it, so I think this is made-up.

Morty (another Muppet character)

According to the Muppets Wiki "Morty is Rick Moranis' agent. He is seen in Muppets Tonight episode 202 [in 1997]".

Considering how incorrect everything else is I actually think this is more likely to be a coincidence than an actual reference to reality. There's been a lot of characters over the decades of shows.

Rocko (potentially a reference to Rocko from Rocko’s Modern Life, though not strictly Muppet-related)

"not strictly"? A cartoon from the 1990s by some of the people that later gave us Spongebob Squarepants is not the reference for this release.

Sci-Fi and Fantasy References (Current Theme) The most recent trend in Yocto release names leans toward science fiction and fantasy references. Examples include:

Surprisingly, this is not the current theme.

Langdale (a reference to a location, possibly inspired by British landscapes, but also fits a fantasy tone)

Oh you're close, go on you can do it!

Mickledore (a name with a Tolkien-esque or fantasy feel)

I can only imagine why that is...

Nanbield (continuing the theme of mysterious, fantasy-like names)

I give up.

Remember people: verify everything that LLMs say, as they are first-rate bullshit artists.

#186 Media Parsing

Update on what happened across the GNOME project in the week from January 31 to February 07.

GNOME Core Apps and Libraries

Sam Thursfield says

The LocalSearch filesystem extractor is switching from GStreamer to ffmpeg / libav for media file parsing in the next major release. GStreamer served us well for almost 20 years in this role as a media parsing library, but it was designed for media playback and not fast metadata extraction. Its plugin support means that it behaves differently depending on which plugins are installed on a given system, which makes it impossible to fully test. The last few years saw the LocalSearch metadata extractor gain much better sandboxing, but due to the way GStreamer loads plugins, we had to poke several holes in the sandbox to make it work, and play a whack-a-mole game to blocklist any GStreamer plugins that wouldn’t work in the sandbox.

The new ffmpeg-based implementation is faster, and also safer due to tighter sandboxing. It supports all the media formats we need to parse, and in fact, on most systems GStreamer was already processing many filetypes using libav. Thanks to Carlos Garnacho for the merge request.

GJS

Use the GNOME platform libraries in your JavaScript programs. GJS powers GNOME Shell, Polari, GNOME Documents, and many other apps.

ptomato says

In GNOME 48.beta, the GJS interactive console will be asynchronous. You can, for example, create a window with a button, connect a signal handler, click the button, and the signal handler will run when the button is clicked. Previously, the signal handler wouldn’t run because it was blocked by the console waiting for input. (This doesn’t yet make await work in the interactive console, but it is a prerequisite.) Thanks to Evan Welsh for doing the thorough research here!

ptomato says

Also in GNOME 48.beta, we have an easier way to create GObject.Value thanks to Gary Li. Usually for C APIs that use GValue, GJS transparently substitutes native JS values. However, in some cases you need to use the GObject.Value wrapper in JS. Previously you would create an empty object, call init to set the type, and then store a value:

const value = new GObject.Value();
value.init(String);
value.set_string('a string');

Now you can just do it in one: new GObject.Value(String, 'a string');

GNOME Releases

Jeremy Bicha reports

The Debian GNOME team has announced that GNOME 48 will be included in Debian 13 “Trixie”. Debian 13 will be released later this year.

GNOME Circle Apps and Libraries

Shortwave

Internet radio player with over 30000 stations.

Felix reports

Shortwave 5.0 is now available, bringing background playback and completely revamped stream recording!

For more details, check out the blog post: https://blogs.gnome.org/haeckerfelix/2025/02/05/shortwave-5-0/

Third Party Projects

fabrialberio says

Pins (formerly PinApp) version 2.0 is now available! The new version is the result of a complete rewrite, switching from Python to C. This brings a lot of major and minor improvements, including a new grid view and support for autostart applications. Check out the app on Flathub.

petsoi says

Words! 0.3 is now available with support for multiple dictionaries and different word lengths! I’ve also added a German dictionary. If you’d like to see your language included, feel free to submit a word list.

Shell Extensions

Just Perfection reports

The GNOME Shell 48 extensions porting guide has been released! If you need any help with the port, you can ask us on the GNOME Extensions Matrix Channel.

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

February 06, 2025

Remote deploy to server using SSH and Gitlab

These are my notes for setting up a “deploy-to-remote-webserver” workflow with ssh and an sftp-only chroot, in case someone might find it useful.

Compiled from various bits and pieces around the internet.

Setting up SSH

  • Add a group: groupadd -r remote-deploy. This will add a system group
  • Create a folder for authorized keys for users in that group: mkdir -p /etc/ssh/authorized
  • Modify sshd_config
AuthorizedKeysFile /etc/ssh/authorized/%u .ssh/authorized_keys
 
Match Group remote-deploy
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
    PasswordAuthentication no

Setting up the chroot jail

It is important that the path up until the home folder is owned by root:root. Below that, create a folder that is owned by the user and any supplementary group you might need to access (e.g. www-data)

mkdir -p /path/to/chroot/jail/deploy-user
chown -R root:root /path/to/chroot/jail/deploy-user
chmod -R 755 /path/to/chroot/jail/deploy-user
mkdir /path/to/chroot/jail/deploy-user/public
chown deploy-user:www-data /path/to/chroot/jail/deploy-user/public
chmod 755 /path/to/chroot/jail/deploy-user/public

Setting up the individual user(s)

  • Add a user to the group created above: useradd -g remote-deploy -M -d /path/to/chroot/jail/deploy-user -s /usr/sbin/nologin deploy-user
  • Generate a new pair of keys and move the public key to the newly configured keystore
ssh-keygen -f deploy-user -t ed25519
mv deploy-user.pub /etc/ssh/authorized/deploy-user

Setting up the gitlab pipeline

  • Obtain the host key from the remote host. To be able to mask the variable in Gitlab later, it needs to be encoded: ssh-keyscan -q deploy-host.example.com 2>/dev/null | base64 -w0

  • Log into your gitlab instance, go to Settings->CI/CD->Variables

    • Click on “Add variable”

    • Set “Masked and Hidden” and “Protect variable”

    • Add a comment like “SSH host key for deploy host”

    • Set name to SSH_HOST_KEY

    • Paste output of the keyscan command

    • Create another variable with the same settings

    • Add a comment like “SSH private key for deploy user”

    • Set name to SSH_PRIVATE_KEY

    • Paste output of base64 -w0 deploy-user, where deploy-user is the private key generated above

  • Setting up ssh in the pipeline script

mkdir -p ~/.ssh
echo -n $SSH_HOST_KEY | base64 -d > ~/.ssh/known_hosts

# If necessary, start ssh-agent
eval `ssh-agent`
echo -n $SSH_PRIVATE_KEY | base64 -d | ssh-add -

February 05, 2025

PipeWire ♥ Sovereign Tech Agency

In my previous post, I alluded to an exciting development for PipeWire. I’m now thrilled to officially announce that Asymptotic will be undertaking several important tasks for the project, thanks to funding from the Sovereign Tech Fund (now part of the Sovereign Tech Agency).

Some of you might be familiar with the Sovereign Tech Fund from their funding for GNOME, GStreamer and systemd – they have been investing in foundational open source technology, supporting the digital commons in key areas, a mission closely aligned with our own.

We will be tackling three key areas of work.

ASHA hearing aid support

I wrote a bit about our efforts on this front. We have already completed the PipeWire support for single ASHA hearing aids, and are actively working on support for stereo pairs.

Improvements to GStreamer elements

We have been working through the GStreamer+PipeWire todo list, fixing bugs and making it easier to build audio and video streaming pipelines on top of PipeWire. A number of usability improvements have already landed, and more work on this front continues.

A Rust-based client library

While we have a pretty functional set of Rust bindings around the C-based libpipewire already, we will be creating a pure Rust implementation of a PipeWire client, and provide that via a C API as well.

There are a number of advantages to this: type and memory safety being foremost, but we can also leverage Rust macros to eliminate a lot of boilerplate (there are community efforts in this direction already that we may be able to build upon).

This is a large undertaking, and this funding will allow us to tackle a big chunk of it – we are excited, and deeply appreciative of the work the Sovereign Tech Agency is doing in supporting critical open source infrastructure.

Watch this space for more updates!

Keeping your system-wide configuration files intact after updating SteamOS

Introduction

If you use SteamOS and you like to install third-party tools or modify the system-wide configuration some of your changes might be lost after an OS update. Read on for details on why this happens and what to do about it.


As you all know SteamOS uses an immutable root filesystem and users are not expected to modify it because all changes are lost after an OS update.

However this does not include configuration files: the /etc directory is not part of the root filesystem itself. Instead, it’s a writable overlay and all modifications are actually stored under /var (together with all the usual contents that go in that filesystem such as logs, cached data, etc).

/etc contains important data that is specific to that particular machine like the configuration of known network connections, the password of the main user and the SSH keys. This configuration needs to be kept after an OS update so the system can keep working as expected. However the update process also needs to make sure that other changes to /etc don’t conflict with whatever is available in the new version of the OS, and there have been issues due to some modifications unexpectedly persisting after a system update.

SteamOS 3.6 introduced a new mechanism to decide what to keep after an OS update, and the system now keeps a list of configuration files that are allowed to be kept in the new version. The idea is that only the modifications that are known to be important for the correct operation of the system are applied, and everything else is discarded1.

However, many users want to be able to keep additional configuration files after an OS update, either because the changes are important for them or because those files are needed for some third-party tool that they have installed. Fortunately the system provides a way to do that, and users (or developers of third-party tools) can add a configuration file to /etc/atomic-update.conf.d, listing the additional files that need to be kept.

There is an example in /etc/atomic-update.conf.d/example-additional-keep-list.conf that shows what this configuration looks like.

Sample configuration file for the SteamOS updater

Developers who are targeting SteamOS can also use this same method to make sure that their configuration files survive OS updates. As an example of an actual third-party project that makes use of this mechanism you can have a look at the DeterminateSystems Nix installer:

https://github.com/DeterminateSystems/nix-installer/blob/v0.34.0/src/planner/steam_deck.rs#L273

As usual, if you encounter issues with this or any other part of the system you can check the SteamOS issue tracker. Enjoy!


  1. A copy is actually kept under /etc/previous to give the user the chance to recover files if necessary, and up to five previous snapshots are kept under /var/lib/steamos-atomupd/etc_backup ↩

Shortwave 5.0

You want background playback? You get background playback! Shortwave 5.0 is now available and finally continues playback when you close the window, resolving the “most popular” issue on GitLab!

Shortwave uses the new Flatpak background portal for this, which means that the current playback status is now also displayed in the “Background Apps” menu.

The recording feature has also been overhauled. I have addressed a lot of user feedback here, e.g. you can now choose between 3 different modes:

  • Save All Tracks: Automatically save all recorded tracks
  • Decide for Each Track: Temporarily record tracks and save only the ones you want
  • Record Nothing: Stations are played without recording

In addition to that the directory for saving recorded tracks can be customized, and users can now configure the minimum and maximum duration of recordings.

There is a new dialog window with additional details and options for current or past played tracks. For example, you no longer need to worry about forgetting to save your favorite track when the recording is finished – you can now mark tracks directly during playback so that they are automatically saved when the recording is completed.

You don’t even need to open Shortwave for this: thanks to the improved notifications, you can decide whether to save a new track as soon as it starts playing.

Of course the release also includes the usual number of bug fixes and improvements. For example, the volume can now be changed using keyboard shortcuts.

Enjoy!

Get it on Flathub

On the Go: Making it Easier to Find Linux Apps for Phones & Tablets

With apps made for different form factors, it can be hard to find what works for your specific device. For example, we know it can be a bit difficult to find great apps that are actually designed to be used on a mobile phone or tablet. To help solve this, we’re introducing a new collection: On the Go.

On the Go: Apps for your Linux phones and tablets

As the premier source of apps for Linux, Flathub serves a wide range of people across a huge variety of hardware: from ultra powerful developer workstations to thin and light tablets; from handheld gaming consoles to a growing number of mobile phones. Generally any app on Flathub will work on a desktop or laptop with a large display, keyboard, and mouse or trackpad. However, devices with only touch input and smaller screen sizes have more constraints.

Revealing the App Ecosystem

Using existing data and open standards, we’re now highlighting apps on Flathub that report as being designed to work on these mobile form factors. This new On the Go collection uses existing device support data submitted by app developers in their MetaInfo, the same spec that is used to build those app’s listings for Flathub and other app store clients. The collection is featured on the Flathub.org home page for all devices.

Foliate app adapting across a desktop, tablet, and phone

Many of these apps are adaptive across screen sizes and input methods; you might be surprised to know that your favorite app on your desktop will also work great on a Linux phone, tablet, or Steam Deck’s touch screen. We aim to help reveal just how rich and well-rounded the app ecosystem already is for these devices—and to give app developers another place for their apps to shine and be discovered.

Developers: It’s Up to You

As of this writing there are over 150 apps in the collection, but we expect there are cases where app developers have not provided the requisite device support data.

If you’re the creator of an app that should work well on mobile form factors but isn’t featured in the collection, take a minute to double-check the documentation and your own app’s MetaInfo to ensure it’s accurate. Device support data can also be used by native app store clients across form factors to determine what apps are displayed or how they are ranked, so it’s a good idea to ensure it’s up to date regardless of what your app supports.

February 04, 2025

The trials and tribulations of supporting CJK text in PDF

In the past I may have spoken critically of Truetype fonts and their usage in PDF files. Recently I have come to the conclusion that it may have been too harsh and that Truetype fonts are actually somewhat nice. Why? Because I have had to add support for CFF fonts to CapyPDF. This is a font format that comes from Adobe. It encodes textual PostScript drawing operations into binary bytecode. Wikipedia does not give dates, but it seems to have been developed in the late 80s - early 90s. The name CFF is an abbreviation for "complicated font format".

Double-checks notes.

Compact font format. Yes, that is what I meant to write. Most people reading this have probably not ever even seen a CFF file so you might be asking why is supporting CFF fonts even a thing nowadays? It's all quite simple. Many of the Truetype (and especially OpenType) fonts you see are not actually Truetype fonts. Instead they are Transfontners, glyphs in disguise. It is entirely valid to have a Truetype font that is merely an envelope holding a CFF font. As an example the Noto CJK fonts are like this. Aggregation of different formats is common in font files, and is the main reason OpenType fonts have like four different and mutually incompatible ways of specifying color emoji. None of the participating entities were willing to accept anyone else's format so the end result was to add all of them. If you want Asian language support, you have to dive into the bowels of the CFF rabid hole.

As most people probably do not have sufficient historical perspective, let's start by listing out some major computer science achievements that definitely existed when CFF was being designed.

  • File format magic numbers
  • Archive formats that specify both the offset and size of the elements within
  • Archive formats that afford access to their data in O(number of items in the archive) rather than O(number of bytes in the file)
  • Data compression
CFF chooses to not do any of this nonsense. It also does not believe in consistent offset types. Sometimes the offsets within data objects refer to other objects by their order in the index they are in. Sometimes they refer to number of bytes from the beginning of the file. Sometimes they refer to number of bytes from the beginning of the object the offset data is written in. Sometimes it refers to something else. One of the downsides of this is that while some of the data is neatly organized into index structures with specified offsets, a lot of it is just free floating in the file and needs the equivalent of three pointer dereferences to access.

Said offsets are stored with a variable-width encoding, so the number of bytes an offset occupies depends on the value being encoded.

This makes writing subset CFF font files a pain. In order to write an offset value at some location X, you first must serialize everything up to that point to know where the value would be written. To know the value to write you have to serialize the entire font up to the point where that data is stored. Typically the data comes later in the file than its offset location. You know what that means? Yes, storing all these index locations and hotpatching them afterwards once you find out where the actual data pointed to ended up in. Be sure to compute your patching locations correctly lest you end up in lengthy debugging sessions where your subset font files do not render correctly. In fairness all of the incorrect writes were within the data array and thus 100% memory safe, and, really, isn't that the only thing that actually matters?
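
The record-and-patch approach looks roughly like this; a sketch of the general technique rather than CapyPDF's actual code, and simplified to fixed 4-byte big-endian offsets instead of CFF's variable widths:

#include <stdint.h>
#include <stddef.h>

enum { MAX_OUT = 1 << 20, MAX_FIXUPS = 1024, MAX_OBJECTS = 256 };

static uint8_t out[MAX_OUT];
static size_t out_size;

static struct { size_t at; int object; } fixups[MAX_FIXUPS];
static size_t num_fixups;
static size_t object_pos[MAX_OBJECTS]; /* final byte position of each object */

/* Reserve room for an offset whose value is not yet known and remember
 * where it needs to be written. */
static void write_offset_placeholder(int object) {
    fixups[num_fixups].at = out_size;
    fixups[num_fixups].object = object;
    num_fixups++;
    out_size += 4;
}

/* After everything has been serialized and object_pos[] is known,
 * go back and hotpatch every placeholder. */
static void patch_offsets(void) {
    for (size_t i = 0; i < num_fixups; i++) {
        uint32_t v = (uint32_t)object_pos[fixups[i].object];
        uint8_t *p = out + fixups[i].at;
        p[0] = (uint8_t)(v >> 24);
        p[1] = (uint8_t)(v >> 16);
        p[2] = (uint8_t)(v >> 8);
        p[3] = (uint8_t)v;
    }
}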

One of the main data structures in a CFF file is a font dictionary stored in, as the docs say, "key-value pairs". This is not true. The "key-value dictionary" is neither key-value nor is it a dictionary. The entries must come in a specific order (sometimes) so it is not a dictionary. The entries are not stored as key-value pairs but as value-key pairs. The more accurate description of "value-key somewhat ordered array" does lack some punch so it is understandable that they went with common terminology. The backwards ordering of elements to some people confusion bring might, but it perfect sense makes, as the designers of the format a long history with PostScript had. Unknown is whether some of them German were.

Anyhow, after staring directly into the morass of madness for a sufficient amount of time the following picture emerges.

Final words

The CFF specification document contains data needed to decipher CFF data streams in nice tabular format, which is easy to convert to an enum. Trying it fails with an error message saying that the file has prohibited copypasting. This is a bit rich coming from Adobe, whose current stance seems to be that they can take any document opened with their apps and use it for AI training. I'd like to conclude this blog post by sending the following message to the (assumed) middle manager who made the decision that publicly available specification documents should prohibit copypasting:

YOU GO IN THE CORNER AND THINK ABOUT WHAT YOU HAVE DONE! AND DON'T EVEN THINK ABOUT COMING BACK UNTIL YOU ARE READY TO APOLOGIZE TO EVERYONE FOR YOUR ACTIONS!

February 03, 2025

A RightsStatements ontology

RightsStatements logotype

At LaOficina we are currently working on a project to digitize family photographs and one of the challenges is the correct traceability of intellectual property. In this case we have encountered the difficulty of knowing the exact conditions of the received material, a situation that is not new and which is already addressed by the RightsStatements vocabulary, which includes 12 terms that are used, among others, by the Europeana community. Therefore, it is obvious that we need to add this vocabulary to our Wikibase Suite instance. By the way, as an exercise, I have taken the opportunity to compose it from scratch as an independent OWL ontology. It is very simple, but probably it has some conceptual flaws. If it is useful to someone, please use it without restrictions: righstatements-ontology.ttl

If you find something wrong, please reach out to me.

Looking ahead at 2025 and Fedora Workstation and jobs on offer!

So as we are a little bit into the new year, I hope everybody had a great break and a good start to 2025. Personally I had a blast having gotten the kids an air hockey table as a Yuletide present :). Anyway, I wanted to put this blog post together to talk about what we are looking at for the new year and to let you all know that we are hiring.

Artificial Intelligence
One big item on our list for the year is looking at ways Fedora Workstation can make use of artificial intelligence. Thanks to IBM's Granite effort we now have an AI engine that is available under proper open source licensing terms and which can be extended for many different use cases. Also the IBM Granite team has an aggressive plan for releasing updated versions of Granite, incorporating new features of special interest to developers, like making Granite a great engine to power IDEs and similar tools. We have been brainstorming various ideas in the team for how we can make use of AI to provide improved or new features to users of GNOME and Fedora Workstation. This includes making sure Fedora Workstation users have access to great tools like RamaLama, that we make sure setting up accelerated AI inside Toolbx is simple, that we offer a good Code Assistant based on Granite and that we come up with other cool integration points.

Wayland
The Wayland community had some challenges last year with frustrations boiling over a few times due to new protocol development taking a long time. Some of it was simply the challenge of finding enough people across multiple projects having the time to follow up and help review while other parts are genuine disagreements of what kind of things should be Wayland protocols or not. That said I think that problem has been somewhat resolved with a general understanding now that we have the ‘ext’ namespace for a reason, to allow people to have a space to review and make protocols without an expectation that they will be universally implemented. This allows for protocols of interest only to a subset of the community going into ‘ext’ and thus allowing protocols that might not be of interest to GNOME and KDE for instance to still have a place to live.

The other more practical problem is that of having people available to help review protocols or providing reference implementations. In a space like Wayland where you need multiple people from multiple different projects it can be hard at times to get enough people involved at any given time to move things forward, as different projects have different priorities and of course the developers involved might be busy elsewhere. One thing we have done to try to help out there is to set up a small internal team, led by Jonas Ådahl, to discuss in-progress Wayland protocols and assign people the responsibility to follow up on those protocols we have an interest in. This has been helpful both as a way for us to develop internal consensus on the best way forward, but also I think our contribution upstream has become more efficient due to this.

All that said I also believe Wayland protocols will fade a bit into the background going forward. We are currently at the last stage of a community ‘ramp up’ on Wayland and thus there is a lot of focus on it, but once we are over that phase we will probably see what we saw with X.org extensions over time, that most of the time new extensions are so niche that 95% of the community don’t pay attention or care. There will always be some new technology creating the need for important new protocols, but those are likely to come along at a relatively slow cadence.

High Dynamic Range

HDR support in GNOME Control Center

HDR support in GNOME Control Center

As for concrete Wayland protocols the single biggest thing for us for a long while now has of course been the HDR support for Linux. And it was great to see the HDR protocol get merged just before the holidays. I also want to give a shout out to Xaver Hugl from the KWin project. As we were working to ramp up HDR support in both GNOME Shell and GTK+ we ended up working with Xaver and using Kwin for testing especially the GTK+ implementation. Xaver was very friendly and collaborative and I think HDR support in both GNOME and KDE is more solid thanks to that collaboration, so thank you Xaver!

Talking about concrete progress on HDR support Jonas Adahl submitted merge requests for HDR UI controls for GNOME Control Center. This means you will be able to configure the use of HDR on your system in the next Fedora Workstation release.

PipeWire
I have been sharing a lot of cool PipeWire news here in the last couple of years, but things might slow down a little as we go forward just because all the major features are basically working well now. The PulseAudio support is working well and we get very few bug reports now against it. The reports we are getting from the pro-audio community are that PipeWire works just as well as or better than JACK for most people in terms of for instance latency, and when we do see issues with pro-audio it tends to be more often caused by driver issues triggered by PipeWire trying to use the device in ways that JACK didn’t. We have been resolving those by adding more and more options to hardcode certain options in PipeWire, so that just as with JACK you can force PipeWire to not try things the driver has problems with. Of course fixing the drivers would be the best outcome, but for some of these pro-audio cards they are so niche that it is hard to find developers who want to work on them or who have hardware to test with.

We are still maturing the video support although even that is getting very solid now. The screen capture support is considered fully mature, but the camera support is still a bit of a work in progress, partially because we are going through a generational change in the camera landscape with UVC cameras being supplanted by MIPI cameras. Resolving that generational change isn’t just on PipeWire of course, but it does make for a more volatile landscape to mature something in. Of course an advantage here is that applications using PipeWire can easily switch between V4L2 UVC cameras and libcamera MIPI cameras, thus helping users have a smooth experience through this transition period.
But even with the challenges posed by this we are moving rapidly forward with Firefox PipeWire camera support being on by default in Fedora now, Chrome coming along quickly and OBS Studio having PipeWire support for some time already. And last but not least SDL3 is now out with PipeWire camera support.

MIPI camera support
Hans de Goede, Milan Zamazal and Kate Hsuan keep working on making sure MIPI cameras work under Linux. MIPI cameras are a step forward in terms of technical capabilities, but at the moment a bit of a step backward in terms of open source as a lot of vendors believe they have ‘secret sauce’ in the MIPI camera stacks. Our work focuses mostly on getting the Intel MIPI stack fully working under Linux with the Lattice MIPI aggregator being the biggest hurdle currently for some laptops. Luckily Alan Stern, the USB kernel maintainer, is looking at this now as he got the hardware himself.

Flatpak
Some major improvements to the Flatpak stack have happened recently with the USB portal merged upstream. The USB portal came out of the Sovereign fund funding for GNOME and it gives us a more secure way to give sandboxed applications access to your USB devices. In a somewhat related note we are still working on making system daemons installable through Flatpak, with the use case being applications that have a system daemon to communicate with a specific piece of hardware for example (usually through USB). Christian Hergert got this on his todo list, but we are at the moment waiting for Lennart Poettering to merge some pre-requisite work into systemd that we want to base this on.

Accessibility
We are putting in a lot of effort towards accessibility these days. This includes working on portals and Wayland extensions to help facilitate accessibility, working on the ORCA screen reader and its dependencies to ensure it works great under Wayland. Working on GTK4 to ensure we got top notch accessibility support in the toolkit and more.

GNOME Software
Last year Milan Crha landed the support for signing the NVIDIA driver for use on secure boot. The main feature Milan is looking at now is getting support for DNF5 into GNOME Software. Doing this will resolve one of the longest standing annoyances we had, which is that the dnf command line and GNOME Software would maintain two separate package caches. Once the DNF5 transition is done that should be a thing of the past and thus less risk of disk space being wasted on an extra set of cached packages.

Firefox
Martin Stransky and Jan Horak have been working hard at making Firefox ready for the future, with a lot of work going into making sure it supports the portals needed to function as a flatpak and by bringing HDR support to Firefox. In fact Martin just got his HDR patches for Firefox merged this week. So with the PipeWire camera support, Flatpak support and HDR support in place, Firefox will be ready for the future.

We are hiring! looking for 2 talented developers to join the Red Hat desktop team
We are hiring! So we got 2 job openings on the Red Hat desktop team! So if you are interested in joining us in pushing the boundaries of desktop linux forward please take a look and apply. For these 2 positions we are open to remote workers across the globe and while the job ads list specific seniorities we are somewhat flexible on that front too for the right candidate. So be sure to check out the two job listings and get your application in! If you ever wanted to work fulltime on GNOME and related technologies this is your chance.

freedoms-for-who, revisited briefly

In talking with someone about “preferred form for modification” over the weekend at FOSDEM, the FSF (now sort-of-OSI?) four freedoms came up. They’re not bad, but they’re extremely developer-focused in their “use case”. This is not a new observation, of course, but the changed technical and social context of AI seems to be bringing to the fore that different users have different variations on these values and on why open is so important to them.

Here’s a very quick, rough cut of what I proposed instead as key freedoms, and who those matter for. These are not exclusive (eg in many cases developers also care about replication; businesses obviously frequently care about modification, etc.) but compared to the original four freedoms, convey that different stakeholders have different needs—all of which have been served, but not explicitly called out as metrics, by the FOSS movement over the years.

  • modification (foremost, for developers): We like to play with things, and often can make them better in the process. Enough said.
  • replication (foremost, for scientists): It’s not really science if you can’t replicate it, and you can’t replicate it if it isn’t really yours. High overlap with modification and transparency, but something other constituencies can often live without.
  • transparency (foremost, for governments): you can’t effectively regulate what you can’t understand, and it’s never OK for something that needs to be regulated to be opaque. (Obviously we allow this all the time but as we’re all reminded this week that leads to all kinds of malignancies.)
  • cost of re-use (foremost, for business): This is perhaps the least unique, but it’s important to remember that statistically businesses very rarely modify open source. They pick and choose what they use, and compose them in almost entirely unique ways, but that’s a property of architectural design and cost, not of ability to modify.

These of course get you to mostly the same place as the traditional four freedoms. But importantly for the discussion of “open in AI”, replication and transparency require a focus on data, while for many businesses (and certainly for most developers) weights are sufficient and may well be preferred in many/most cases.

One could imagine other use cases and priorities, of course. But I wanted to bang this out quick and there was a nice symmetry to having four. Leave more on the various socials :)

Linux App Summit 2025 – Call for Papers Open Now

It’s that time again, and we are in our 7th year doing this conference (if you include LAS GNOME).

This year, LAS will be held in Tirana, Albania, and we are going all out to make it the best conference representing apps on Linux.

For those who don’t know or have not heard of Linux App Summit, the idea is to have desktops work together to help enable application developers to build apps on the Linux platform. It’s a parallel effort to the Flathub and Snapstore app stores.

LAS is positioned to promote third-party developers, to inform the ecosystem about advances on the desktop, and to give developers, designers, and contributors working on the desktops a place to meet each other and discuss how to move the platform forward as a community.

LAS’s success depends on all of you. If you’re passionate about making Linux a viable alternative to proprietary platforms, then we need your help! Linux enables local control of your technology that you can adapt to your needs, building local ecosystems that enable a local economy. A community-driven platform will protect your privacy, safety, and security without bowing to shareholders or politicians. This is the place to tell us about it!

So, I ask all of you to attend LAS and help drive our numbers up. Have a great idea that you want to share with this ecosystem of developers? Implemented something on a phone, in an automobile, or somewhere else? Have a great concept? Want to update all of us on what the next version of your app is going to do?

Through LAS, we can find out: What is missing? What do we need to do to move forward? What are the trends we should be looking at?

Feel free to reach out to our team and we’ll be happy to answer any questions at info@linuxappsummit.org.

You can submit a talk at https://linuxappsummit.org/cfp. You can register for the conference at https://linuxappsummit.org/register.

Can’t attend or give your talk in person? Not to worry! LAS is a hybrid conference and you can attend remotely, even though we would love to meet you in person.

Finally, we are looking for sponsors. If you know of a company that would make a great sponsor, please reach out. If you’re interested in sponsoring LAS, you can email us at sponsors@linuxappsummit.org. For more info, see https://linuxappsummit.org/sponsor.

 

 

What's next for Flathub build infrastructure

There is a storm coming and we are re-architecting our build infrastructure.

Buildbot has never been designed to do what Flathub needs: taking arbitrary inputs like application IDs and dynamically creating new pipelines. However, there's no misuse one cannot achieve if something is configured in A Real Programming Language, and so, back in 2019, Alex Larsson piled up a bunch of hacks so we could have not only dynamic configuration based on the Flathub organization on GitHub, but also some custom views displaying the latest builds.

Fast-forward to 2025: these hacks no longer work with the latest release of Buildbot, leaving our soft fork stuck on a version from 2021. For whatever reason, updating the GitHub CI status stopped working reliably, allowing people to merge untested code changes. It also requires periodic restarts because it grinds to a halt for no particular reason, dropping new builds in the meantime. The worst offense: I never liked it.

Interlude: Equinix Metal née Packet has been sponsoring our heavy-lifting servers doing actual building for the past 5 years. Unfortunately, they are shutting down, meaning we need to move out by the end of April 2025.

This perfect storm means we effectively need to re-architect the build infrastructure from scratch.

webhook-proxy

Let's start from improving Buildbot reliability where it falls short. While GitHub shows delivery status of webhook events, and even provides a button to trigger another attempt, there's no public API exposing this data.

As there's no way I'm fixing Buildbot itself, I decided we need a middleman which will take care of redelivering events whenever Buildbot's webhook endpoint responds with a non-200 status.

webhook-proxy is a simple Python service which accepts payloads from GitHub, marks new pull requests as pending CI, then forwards the events unchanged to Buildbot. Should Buildbot have a hiccup, it will open a circuit breaker and retry with exponential backoff until it eventually succeeds.
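
To make the shape of it concrete, here is a minimal sketch of such a relay in Python. This is not the actual webhook-proxy code: the endpoint URL, limits, and the use of Flask and requests are assumptions for illustration, and a real service would retry in the background rather than blocking the request handler.

# Illustrative sketch of a GitHub-to-Buildbot webhook relay with retries.
# Endpoint URL, limits and libraries here are assumptions, not the real code.
import time
import requests
from flask import Flask, Response, request

app = Flask(__name__)
BUILDBOT_HOOK = "http://buildbot.internal/change_hook/github"  # hypothetical URL
MAX_ATTEMPTS = 8

@app.route("/webhook", methods=["POST"])
def relay():
    payload = request.get_data()
    headers = {
        "Content-Type": request.headers.get("Content-Type", "application/json"),
        "X-GitHub-Event": request.headers.get("X-GitHub-Event", ""),
        "X-Hub-Signature-256": request.headers.get("X-Hub-Signature-256", ""),
    }
    delay = 1
    for _ in range(MAX_ATTEMPTS):
        try:
            # Forward the event unchanged; anything other than 200 counts as a failure.
            resp = requests.post(BUILDBOT_HOOK, data=payload, headers=headers, timeout=10)
            if resp.status_code == 200:
                return Response(status=200)
        except requests.RequestException:
            pass  # treat network errors like a non-200 response
        time.sleep(delay)
        delay *= 2  # exponential backoff between redelivery attempts
    return Response(status=502)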

It ain't much, but it's honest work.

justpak

An important piece of the future Buildbot replacement is being agnostic of its CI/CD implementation. The idea is to provide only a business-logic service, while the actual building is delegated elsewhere. As such, it requires some relatively generic tool to execute tasks, instead of maintaining two or more pipeline definitions for whatever is the CI/CD solution of the decade.

After evaluating modern make replacements, I settled on just. It seems simple enough for executing external commands, and has a nifty way of defining recipes with a specific shebang from the get-go, so it's all a single file even when using a mix of Python and Bash.

I copied all build steps done by the Buildbot workers to specific recipes to have a single entry point. Then I started doing back flips with GitHub Actions to see if this is a viable way in the first place; turns out it is, although not without some hurdles.

While GitHub Actions has a native way of executing all steps inside a Docker container, the host running said container is full of bloat and barely has any free disk space. bbhtt suggested how to remove unneeded files, which also meant each build step is prefixed with docker run, as we want to re-use the existing flatpak-builder-lint image instead of meddling with Ubuntu.

Then I went to GNOME GitLab to implement an identical pipeline, because I have no mouth and I must write YAML. GitLab has its own set of quirks, but after configuring its runners to stop dropping job output and kindly asking GitLab to stop unconditionally killing jobs whose logs exceeded a certain size, we've got the answer: it will blend!

Application ID        Buildbot    GitHub Actions     GNOME GitLab
Vim                   4m 29s      3m 03s             1m 46s
Fractal               28m 29s     31m 09s            23m 37s
QGIS                  123m 00s    198m 00s           124m 43s
Ungoogled Chromium    75m 28s     timed out at 6h    177m 29s

Apps on the smaller side can be safely built on GitHub Actions, but anything larger, like Chromium, LibreOffice or the KDE runtime, will need some special treatment by being routed to GNOME GitLab. It's still not as fast as the existing infrastructure, but the existing infrastructure will no longer exist in a quarter, so there's nothing to complain about here. As both GitHub Actions and GitLab CI allow triggering workflows through an API, it will also be easy to integrate with an external system.

The work-in-progress repo can be found here.

I was initially panicking about replacing the aarch64 runners, but it turns out GitHub has been providing those since September 2024. Build times are in line with the table above, meaning we only need to worry about outliers. I have submitted Flathub to the Works on Arm program, and if it gets accepted, we will handle aarch64 similarly to x86_64.

The rest of the owl

It's all cool and dandy, but the crucial part is still missing: a service encapsulating the business logic. Buildbot handles starting new builds, figuring out where they should go (Is it a test build? Is it a beta build?), retrying failed builds and managing publishing, all of that with a fancy UI.

There's no punchline here: I'm only starting the legwork to figure out which language and framework to use. If you would like to see something specific implemented, don't hesitate to request it here.

February 02, 2025

From Intern To Impact: Building A Future As An Engineer

As my Outreachy internship with GNOME concludes, I’m reflecting on the journey, the effort, the progress, and, most importantly, the future I envision in tech.

The past few months as an intern have been both challenging and incredibly rewarding. From quickly learning a new programming language to meet project demands, to embracing test-driven development and tackling progressively complex tasks, every experience has been a stepping stone. Along the way, I’ve honed my collaboration and communication skills, expanded my professional network, and developed a deep appreciation for the power of community and open source.

This Outreachy internship has been a pivotal experience, solidifying these values and teaching me the importance of embracing challenges and continuously improving my technical and interpersonal skills, preparing me for the next stage of my engineering career. The supportive environment I found in the GNOME community has been instrumental to my growth. I’m incredibly grateful for my mentor, Federico, who exemplified what true mentorship should be. He showed me the importance of collaborative spirit, genuine understanding of team members, and even taking time for laughter – all of which made transitioning to a new environment seamless and comfortable. His guidance fostered open communication, ensuring seamless synchronization and accessibility. Just before writing this, I had a call with Federico, Felipe (the GNOME Internship Coordinator, an awesome person!), and Aryan to discuss my career post-internship.

While the career advice was invaluable, what truly stood out was their collaborative willingness to support my growth. This dedication to fostering progress is something I deeply admire and will strive to make a core part of my own engineering culture.

My journey from intern to engineer has been significantly shaped by the power of community, and I’m now ready to push myself further, take on new challenges, and build a solid, impactful, and reputable career in technology.

Skills

I possess a strong foundation in several key technologies essential for software and infrastructure engineering.

My primary languages are Golang and Rust, allowing me to build high-performance and reliable systems. I also have experience with Python. I’m a quick learner and eager to expand my skillset further.

Career Goals

My ultimate career aspiration is to secure a role that challenges me to grow as an engineer while contributing to impactful and innovative projects. I am particularly drawn to:

    • Cultivating a culture of creativity and structured development while optimizing myself to become the best engineer I can be—just like my Outreachy experience.
    • Developing and sustaining critical infrastructure that powers large-scale, globally utilized systems, ensuring reliability, security, and seamless operation.
    • Exploring opportunities at MANGA or other big tech companies to work on complex systems, bridging software engineering, security, and infrastructure.
Motivations

While the challenge of growth is my primary motivation, the financial stability offered by these roles is also important, enabling me to further invest in my personal and professional development.

Relocation is a significant draw, offering the opportunity to experience different cultures, gain new perspectives and immerse myself in a global engineering community.

As an introverted and private person, I see this as a chance to push beyond my comfort zone, engage with a diverse range of collaborators, and build meaningful connections.

Job Search

I am actively seeking software engineering, infrastructure, and site reliability roles. I am particularly interested in opportunities at large tech companies, where I can contribute to complex systems and further develop my expertise in Golang and Rust after my Outreachy internship with GNOME concludes in March 2025.

Exploring the Opportunities

I’m eager to explore software engineering, open source, infrastructure, and site reliability roles. If your team is seeking someone with my skills and experience, I’d welcome a conversation. Connect with me via email or LinkedIn.

I’m excited about the future and ready to take the next step in my career. With the foundation I’ve built during this internship, I’m confident in my ability to make a meaningful impact in the tech industry

February 01, 2025

What’s new in GTK, winter 2025 edition

We just had a GTK hackfest at FOSDEM. A good time for an update on what’s new and exciting in GTK, with an eye towards 4.18.

GTK hackfest 2025

Requirements

You can no longer call gdk_display_get_default() or gdk_display_open() before gtk_init(). This was causing problems due to incomplete initialization, so we made it fail with a (hopefully clear) error message. If you are affected by this, the usual fix is to just call gtk_init() as early as possible.
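
In practice that means moving initialization up front, along these lines (a minimal sketch using the PyGObject bindings; the C equivalent is the same idea):

import gi
gi.require_version("Gtk", "4.0")
gi.require_version("Gdk", "4.0")
from gi.repository import Gdk, Gtk

# Initialize GTK before touching any display API; as of 4.18, asking for the
# default display (or opening one) before gtk_init() fails with an error.
Gtk.init()

display = Gdk.Display.get_default()
print(display)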

On Windows, we have a hard requirement on Windows 10 now. All older versions are long unsupported, and having to deal with a maze of ifdefs and unavailable APIs makes development harder than it should be. Dropping support for very old versions also simplifies the code down the stack, in Pango and GLib.

The same idea applies to macOS, where we now require macOS 10.15.

Spring cleaning

The old GL renderer has been removed. This may be unwelcome news for people stuck on very old drivers and hardware. But we will continue to make the new renderers work as well as possible on the hardware that they can support.

The X11 and Broadway backends have been deprecated, as a clear signal that we intend to remove them in GTK 5. In the meantime, they continue to be available. We have also deprecated GtkShortcutsWindow, since it needs a new design. The replacement will appear in libadwaita, hopefully next cycle.

It is worth reminding everybody that there is no need to act on deprecations until you are actively porting your app to the next major version of GTK, which is not on the horizon yet.

Incremental improvements

Widget layout and size allocation has received quite a bit of attention this cycle, with the goal of improving performance (by avoiding binary search as much as possible) and correctness. Nevertheless, these changes have some potential for breakage, so if you see wrong or suboptimal layouts in applications, please let us know.

GTK has had difficulties for a while getting its pointer sizes right with fractional scaling on Wayland, but this should all be solved in GTK 4.18. No more huge pointers. Fixing this also required changes on the mutter side.

New beginnings

Accessibility in GTK 4.18 is taking a major step forward with the new AccessKit backend, which gives us accessibility on Windows and macOS for the very first time. The at-spi backend is still the default on Linux, and has seen a number of improvements as well.

And, maybe the biggest news: We have an Android backend now. It is still experimental, so you should expect some rough edges and loose ends. For example, there is no GL renderer support yet. But it is exciting that you can just try gtk4-demo on your phone now, and have it mostly work.

Enjoy!

January 29, 2025

Integrating jj-fzf into Emacs

Introduction Built on jj and fzf, jj-fzf offers a text-based user interface (TUI) that simplifies complex versioning control operations like rebasing, squashing, and merging commits. This post will guide you through integrating jj-fzf into your Emacs workflow, allowing to switch between emacs and jj…

New Website

I finally got distracted enough to finish my website that has been saying "under construction" for over a year, since I set up this server for my Sharkey instance.

I've wanted to do this for a while - one, so that I actually have a home page, and two, so that I can move my blog here instead of using WordPress.

Setup

Initially I wanted to use a static generator like Hugo, but then I discovered that the web server I'm using (Caddy) can do templates. That's perfectly enough for a simple blog, so I don't actually need a separate generator. This very article is a markdown document, parsed and embedded into a nice-looking page and RSS feed using templates.

In addition, I get all the niceties I couldn't get before:

  • Using Markdown instead of HTML with WordPress-specific additions for e.g. image galleries.

  • Proper code listings with syntax highlighting (you'll have to view this on the original page though, not from Planet GNOME or your RSS reader):

    <property name="child">
      <object class="AdwToastOverlay" id="toast_overlay">
        <property name="child">
          <object class="AdwNavigationView" id="nav_view"/>
        </property>
      </object>
    </property>
    
  • Dark mode support, incl. for images (again, no clue if this works on Planet GNOME):

    Screenshot of Apostrophe with this text as Markdown on the left and a preview on the right
    Me writing this very post in Apostrophe
  • Just simple niceties like smaller monospace font - I do this a lot and I don't particularly like the way the WordPress theme I was using presents it.

  • Finally, while migrating my old posts I had an opportunity to update broken links (such as to the old documentation) and add missing alt text, since for quite a few images I had set the WordPress description instead of alt text and never noticed. If you really want the old version, it's still on the old website and each migrated article links to its original counterpart.

So yeah, so far this was fairly pleasant, I expected much worse. There are still a bunch of things I want to add (e.g. previewing images at full size on click), but it's not like my old blog had that either.

January 28, 2025

Pre-FOSDEM Maps wrap-up

As I've done in some previous years, I thought it would be appropriate to give a bit of a status update on the goings-on with regards to Maps before heading to this year's FOSDEM.


Refreshed Location Marker

One of the things that has landed since the December update is the new revamped location marker.


The marker now uses the system accent color, and sports a “torch” indicating the current heading (when known).


And the circle indicating approximate accuracy of the location now has an outer contour.

And on that note, I would also like to take the opportunity to mention the BeaconDB project (https://beacondb.net/), which has the goal of building a community-sourced wireless positioning database. It is compatible with the now-defunct Mozilla Location Service (MLS) and works as a drop-in replacement with GeoClue.

Improved Visuals for Public Transit Routes Lists

The “badges” showing line numbers/names for public transit journeys, and the markers shown on the map when selecting a trip, have been improved to avoid some odd label alignments and to give better-looking contours (with lower contrast against light or dark backgrounds). The labels are now drawn directly using GSK instead of piggy-backing on a GtkLabel and doing some Cairo drawing on top of that. One additional benefit is that this also gets rid of some of the remaining usages of the GdkPixbuf APIs (which will be gone in a future GTK 5).





 

Transitous Move to MOTIS 2

On the subject of transit, Transitous has now migrated to the new MOTIS 2 API. And consequently the support in Maps has been updated to use the new API (this is also backported to the stable 47.3 release).

The new API is easier to use, and more in line with the internal data types in Maps, so the code also became a bit simpler. With the new API we now also get the walking instructions directly from MOTIS instead of using GraphHopper to compute walking “legs”. This has made searching for routes in Maps quite a bit faster as well.


FOSDEM

And speaking of FOSDEM: Felix Gündling, Jonah Brüchert, and I will host a talk about Transitous in the “Railways and Open Transport” devroom (K.6.401) on Sunday at 16:30 CET.

https://fosdem.org/2025/schedule/event/fosdem-2025-4105-gnome-maps-meets-transitous-meets-motis/

So maybe see you at FOSDEM!

January 27, 2025

The GNOME LATAM 2024 recordings are now live!

In October 2024 some members of our community gathered in Medellín, Colombia, for another edition of GNOME Latam. Some of us joined remotely for a schedule packed with talks about GNOME and its ecosystem.

The talks, in Spanish and Portuguese, are now published on YouTube. Check it out!

Python 2

In 2020, the Python foundation declared Python 2 as not maintained anymore.

Python 2 is really old, not maintained and should not be used by anyone in any modern environment, but software is complex and python2 still exists in some modern Linux distributions like Tumbleweed.

This past week the request to delete Python 2 from Tumbleweed was created and is going through the staging process.

The main package keeping Python 2 around in Tumbleweed was Gimp 2, which doesn't depend directly on Python 2, but some of its plugins do. Now that we have Gimp 3 in Tumbleweed, we are finally able to remove it.

Python 2

The first version of Python 2 was released around 2000, so it's now 25 years old. Well, that's not quite true, because software is a living creature: as you may know, Python 2 kept growing during the following years with patch and minor releases until 2020, when the final release, 2.7.18, came out.

But even though it was maintained until 2020, it had been deprecated for a long time, so everyone "should" have had time to migrate to python 3.

Py3K

I started to write python code around the year 2006. I was bored during a summer internship in my third year of computer science, and I decided to learn something new. In the following months and years I heard a lot about the futuristic Python 3000, but I didn't worry too much until it was officially released and the migration started to be a thing.

If you have ever written python2 code you will know some of the main differences from python3:

  • print vs print()
  • raw_input() vs input()
  • unicode() vs str
  • ...

Some tools appeared to make it easier to migrate from python2 to python3, and it was even possible to have code compatible with both versions at the same time using the __future__ module.
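
For instance, a small module written to behave the same on both interpreters might start like this (a minimal illustration, not taken from any particular project):

# Behaves the same under Python 2.7 and Python 3 thanks to __future__ imports.
from __future__ import print_function, division, unicode_literals

print("1/2 =", 1 / 2)        # true division on both interpreters: 0.5
text = "hello"               # a unicode string on both, thanks to unicode_literals
print(type(text).__name__)   # 'unicode' on Python 2, 'str' on Python 3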

You should have heard about the six package, 2 * 3 = 6. Maybe the name should be five instead of six, because it was a Python "2 and 3" compatibility library.

Python in Linux command line

When python3 started to become the main python, there was some discussion about how to handle that in different Linux distributions. The /usr/bin/python binary was present and everyone expected it to be python2, so almost everyone decided to keep that relation forever and distribute python3 as /usr/bin/python3, so you can have both installed without conflicts and there's no confusion.

But python is an interpreted language, and if you have python code, you can't tell whether it's python2 or python3. The shebang line in executable python scripts should point to the correct interpreter, and that should be enough: #!/usr/bin/python3 will use the python3 interpreter and #!/usr/bin/python will use python2.

But this is not always true. Some distributions, like Archlinux, use python3 as /usr/bin/python, and if you create a virtualenv with python3, the python binary points to the python3 interpreter, so a shebang like #!/usr/bin/python could be valid for a python3 script.

In any case, the recommended and safest way is to always use the python3 binary, because that way it'll work correctly "everywhere".

Goodbye

It's time to say goodbye to python2; at least we can now remove it from Tumbleweed. It'll be around for some more time in Leap, but it's time to let it go.

January 25, 2025

JJ-FZF 0.25.0: Major New Features

The jj-fzf project has just seen a new release with version 0.25.0. This brings some new features, several smaller improvements, and some important changes to be aware of. For the uninitiated, jj-fzf is a feature-rich command-line tool that integrates jj and fzf, offering fast commit navigation with…

January 24, 2025

Time to write proposals for GSoC 2025 with GNOME!

It is that time of the year again when we start gathering ideas and mentors for Google Summer of Code.

@Mentors, please submit new proposals in our Project ideas GitLab repository before the end of January.

Proposals will be reviewed by the GNOME GSoC Admins and posted in https://gsoc.gnome.org/2025 when approved.

If you have any doubts, please don’t hesitate to contact the GNOME Internship Committee.

January 23, 2025

Extracting Texts And Elements From SVG2

Have you ever wondered how SVG files render complex text layouts with different styles and directions so seamlessly? At the core of this magic lies text layout algorithms—an essential component of SVG rendering that ensures text appears exactly as intended.

Text layout algorithms are vital for rendering SVGs that include styled or bidirectional text. However, before layout comes text extraction—the process of collecting and organizing text content and properties from the XML tree to enable accurate rendering.

The Extraction Process

SVGs, being XML-based formats, resemble a tree-like structure similar to HTML. To extract information programmatically, you navigate through nodes in this structure.

Each node in the XML tree holds critical details for implementing the SVG2 text layout algorithm, including:

    • Text content
    • Bidi-control properties (manage text directionality)
    • Styling attributes like font and spacing
Understanding Bidi-Control

Bidi-control refers to managing text direction (e.g., Left-to-Right or Right-to-Left) using special Unicode characters. This is crucial for accurately displaying mixed-direction text, such as combining English and Arabic.

A Basic Example
<text>
  foo
  <tspan>bar</tspan>
  baz
</text>

The diagram and code sample show the structure librsvg creates when it parses this XML tree.

Here, the <text> element has three children:

    1. A text node containing the characters “foo”.
    2. A <tspan> element with a single child text node containing “bar”.
    3. Another text node containing “baz”.

When traversed programmatically, the extracted text from this structure would be “foobarbaz”.

To extract text from the XML tree:

    1. Start traversing nodes from the <text> element.
    2. Continue through each child until the final closing tag.
    3. Concatenate character content into a single string.
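
To make the traversal concrete, here is a tiny illustrative sketch in Python using the standard library's ElementTree; it is purely for demonstration and is not librsvg's actual code (librsvg itself is implemented in Rust):

import xml.etree.ElementTree as ET

svg_snippet = "<text>foo<tspan>bar</tspan>baz</text>"

def extract_text(element):
    # Depth-first, in document order: the element's own text, then each
    # child's extracted text followed by the "tail" text after that child.
    parts = [element.text or ""]
    for child in element:
        parts.append(extract_text(child))
        parts.append(child.tail or "")
    return "".join(parts)

print(extract_text(ET.fromstring(svg_snippet)))  # prints: foobarbaz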

While this example seems straightforward, real-world SVG2 files introduce additional complexities, such as bidi-control and styling, which must be handled during text extraction.

Handling Complex SVG Trees

Real-world examples often involve more than just plain text nodes. Let’s examine a more complex XML tree that includes styling and bidi-control:

Example:

<text>
  "Hello"
  <tspan font-style="bold;">bold</tspan>
  <tspan direction="rtl" unicode-bidi="bidi-override">مرحبا</tspan>
  <tspan font-style="italic;">world</tspan>
</text>
Text extraction illustration (credit: Federico, my mentor)

In this example, the <text> element has four children:

    1. A text node containing “Hello”.
    2. A <tspan> element with font-style: bold, containing the text “bold”.
    3. A <tspan> element with bidi-control set to RTL (Right-To-Left), containing Arabic text “مرحبا”.
    4. Another <tspan> element with font-style: italic, containing “world”.

This structure introduces challenges, such as:

    • Styling: Managing diverse font styles (e.g., bold, italic).
    • Whitespace and Positioning: Handling spacing between nodes.
    • Bidirectional Control: Ensuring proper text flow for mixed-direction content.

Programmatically extracting text from such structures involves traversing nodes, identifying relevant attributes, and aggregating the text and bidi-control characters accurately.

Why Test-Driven Development Matters

One significant insight during development was the use of Test-Driven Development (TDD), thanks to my mentor Federico. Writing tests before implementation made it easier to visualize and address complex scenarios. This approach turned what initially seemed overwhelming into manageable steps, leading to robust and reliable solutions.

Conclusion

Text extraction is the foundational step in implementing the SVG2 text layout algorithm. By effectively handling complexities such as bidi-control and styling, we ensure that SVGs render text accurately and beautifully, regardless of direction or styling nuances.

If you’ve been following my articles and feel inspired to contribute to librsvg or open source projects, I’d love to hear from you! Drop a comment below to share your thoughts, ask questions, or offer insights. Your contributions—whether in the form of questions, ideas, or suggestions—are invaluable to both the development of librsvg and the ongoing discussion around SVG rendering. 😊

In my next article, we’ll explore how these extracted elements are processed and integrated into the text layout algorithm. Stay tuned—there’s so much more to uncover!

DIY 12V DC Power Supply

Let’s talk about our journey of creating something from scratch (almost?) for our Electronics I final project. It wasn’t groundbreaking like a full-blown multi-featured DC power supply, but it was a fulfilling learning experience.

Spoiler alert: mistakes were made, lessons were learned, and yes, we had fun.

Design and Calculations

Everything began with brainstorming and sketching out ideas. This was our chance to put all the knowledge from our lectures to the test—from diode operating regions to voltage regulation. It was exciting but also a bit daunting.

The first decision was our power supply's specifications. We aimed for a 12V output—a solid middle ground between complexity and functionality. Plus, the 5V option was already claimed by another group. For rectification, we chose a full-wave bridge rectifier due to its efficiency compared to the half-wave alternative.

Calculations? Oh yes, there were plenty! Transformers, diodes, capacitors, regulators—everything had to line up perfectly on paper before moving to reality.

We started at the output, aiming for a stable 12V. To achieve this, we selected the LM7812 voltage regulator. It was an obvious choice: simple, reliable, and readily available. With an input range of 14.5 to 27V, it could easily provide the 12V we needed.

Since the LM7812 can handle a maximum input voltage of 27V, a 12-0-12V transformer would have been perfect. However, only a 6-0-6V transformer was available, so we had to make do with that. As for the diodes, we used 1N4007 diodes, as they are readily available and can handle our desired specifications.

Assuming the provided input voltage for the regulator is 15.5V, which is also the output of the rectifier $ V_{\text{p(rec)}} $, the output voltage of the secondary side of the transformer $ V_{\text{p(sec)}} $ must be:

$$ V_{\text{p(sec)}} = V_{\text{p(rec)}} + 1.4V = 15.5V + 1.4V = 16.9V_{\text{pk}} $$

Note: The 1.4V was to account for the voltage drop across the diodes.

or in RMS,

$$ \frac{16.9V_{\text{pk}}}{\sqrt{2}} = 11.95V_{\text{rms}} $$

This is perfect for our 6-0-6V transformer's maximum output voltage of 12V RMS.

Using the formula for ripple factor,

$$ r = \frac{V_{\text{r(pp)}}}{V_{\text{dc}}} $$

$$ V_{\text{r(pp)}} = r \times V_{\text{dc}} $$

we can determine the value of the filter capacitor, given a ripple factor $ r $ of 3% or 0.03, and output DC voltage $ V_{\text{dc}} $ of 12V.

$$ V_{\text{r(pp)}} = \frac{V_{\text{p(rect)}}}{f \times R_\text{L} \times C} $$

$$ C = \frac{V_{\text{p(rect)}}}{f \times R_\text{L} \times V_{\text{r(pp)}}} = \frac{V_{\text{p(rect)}}}{f \times R_\text{L} \times r \times V_{\text{dc}}} $$

We also know that a typical frequency of the AC input is 60Hz and we have to multiply it by 2 to get the frequency of the full-wave rectified output.

$$ f = 2 \times 60Hz = 120Hz $$

Also, given the maximum load current of 50mA, we can calculate the assumed load resistance.

$$ R_{\text{L}} = \frac{V_{\text{dc}}}{I_{\text{L}}} = \frac{12V}{50mA} = 240\Omega $$

Substituting the values,

$$ C = \frac{15.5V}{120Hz \times 240\Omega \times 0.03 \times 12V} = 1495 \mu F \approx 1.5 mF $$
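
As a quick sanity check of the arithmetic above, here is a throwaway Python snippet (not part of the project, just re-doing the calculation):

# Re-derive the filter capacitor value from the numbers in the text.
V_p_rect = 15.5        # rectifier peak output, V
f = 2 * 60             # full-wave rectified ripple frequency, Hz
V_dc = 12.0            # regulated output, V
I_L = 0.050            # maximum load current, A
r = 0.03               # target ripple factor

R_L = V_dc / I_L                        # assumed load resistance: 240 ohm
C = V_p_rect / (f * R_L * r * V_dc)     # farads
print(R_L, round(C * 1e6))              # -> 240.0 1495 (microfarads)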

Here is the final schematic diagram of our design based on the calculations:

Schematic Diagram

Construction

Moving on, we had to put our design into action. This was where the real fun began. We had to source the components, breadboard the circuit, design the PCB, and 3D-print the enclosure.

Breadboarding

The breadboarding phase was a mix of excitement and confusion. We had to double-check every connection and component.

Circuit Overview

Breadboard Close-up

It was a tedious process, but the feeling when the 12V LED lit up? Priceless.

Initial Testing

PCB Design, Etching and Soldering

For the PCB design, we used EasyEDA. It was our first time using it, but it was surprisingly intuitive. We just had to first recreate the schematic diagram, then lay out the components and traces.

EasyEDA Schematic

Tracing the components on the PCB was a bit tricky, but we managed to get it done. It's like playing connect-the-dots, except no overlapping lines are allowed, since we only had a single-layer PCB.

PCB Tracing

At the end, it was satisfying to see the final design.

PCB Layout

We had to print it on a sticker paper, transfer it to the copper board, cut it, drill it, etch it, and solder the components. It was a long process, but the result was worth it.

PCB Soldered

Did we also mention that we soldered the regulator in reverse for the first time? Oops. But hey, we learned from it.

Custom Enclosure

To make our project stand out, we decided to 3D-print a custom enclosure. Designing it on SketchUp was surprisingly fun.

3D Model

It was also satisfying to see what was once a software model come to life as a physical object.

3D Printed

Testing

Testing day was a rollercoaster. Smoke-free? Check. Output voltage stable? Mostly.

Line Regulation Via Varying Input Voltage

For the first table, we varied the input voltage and measured the input voltage, the transformer output, the filter output, the regulator output, and the percent voltage regulation.

Trial No.   Input Voltage (Vrms)   Transformer Output (Vrms)   Filter Output (VDC)   Regulator Output (VDC)   % Voltage Regulation
1           213                    12.1                        13.58                 11.97                    5
2           214                    11.2                        13.82                 11.92                    5
3           215                    10.7                        13.73                 12.03                    10
4           216                    11.5                        13.80                 11.93                    10
5           217                    10.8                        13.26                 12.01                    9
6           218                    11.0                        13.59                 11.92                    9
7           220                    11.3                        13.74                 11.92                    2
8           222                    12.5                        13.61                 11.96                    2
9           224                    12.3                        13.57                 11.93                    10
10          226                    11.9                        13.88                 11.94                    10
Average     –                      11.53                       13.67                 11.953                   5.5

Note: The load resistor is a 22Ω resistor.

Table 1 Graph

Load Regulation Via Varying Load Resistance

For the second table, we varied the load resistance and measured the transformer output, the filter output, the regulator output, and the percent voltage regulation.

Trial No.   Load Resistance (Ω)   Transformer Output (Vrms)   Filter Output (VDC)   Regulator Output (VFL(DC))   % Voltage Regulation
1           220                   10.6                        11.96                 10.22                        16.4385
2           500                   10.7                        12.83                 11.43                        4.1120
3           1k                    11.1                        13.05                 11.46                        3.8394
4           2k                    11.1                        13.06                 11.48                        3.6585
5           5k                    10.6                        13.20                 11.49                        3.5683
6           6k                    10.9                        13.26                 11.78                        1.0187
7           10k                   11.2                        13.39                 11.85                        0.4219
8           11k                   11.3                        13.91                 11.87                        0.2527
9           20k                   11.3                        13.53                 11.89                        0.0841
10          22k                   11.1                        13.27                 11.90                        0
Average     –                     10.99                       13.15                 11.54                        3.3394

Note: The primary voltage applied to the transformer was 220V in RMS. The $ V_{\text{NL(DC)}} $ used in computing the % voltage regulation is 11.9 V.
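
For reference, the percent voltage regulation figures in this table are consistent with the standard load-regulation formula (e.g. for trial 1: $ (11.9 - 10.22)/10.22 \times 100\% \approx 16.44\% $):

$$ \%\text{VR} = \frac{V_{\text{NL(DC)}} - V_{\text{FL(DC)}}}{V_{\text{FL(DC)}}} \times 100\% $$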

Table 2 Graph

Data Interpretation

Looking at the tables, the LM7812 did a great job keeping the output mostly steady at 12V, even when we threw in some wild input voltage swings—what a champ! That said, when the load resistance became too low, it struggled a bit, showing the limits of our trusty (but modest) 6-0-6V transformer. On the other hand, our filtering capacitors stepped in like unsung heroes, keeping the ripples under control and giving us a smooth DC output.

Closing Words

This DC power supply project was a fantastic learning experience—it brought classroom concepts to life and gave us hands-on insight into circuit design and testing. While it performed well for what it is, it’s important to note that this design isn’t meant for serious, high-stakes applications. Think of it more as a stepping stone than a professional-grade benchmark.

Overall, we learned a lot about troubleshooting, design limitations, and real-world performance. With a bit more fine-tuning, this could even inspire more advanced builds down the line. For now, it’s a win for learning and the satisfaction of making something work (mostly) as planned!

Special thanks to our professor for guiding us and to my amazing groupmates—Roneline, Rhaniel, Peejay, Aaron, and Rohn—for making this experience enjoyable and productive (ask them?). Cheers to teamwork and lessons learned!

If you have any questions or feedback, feel free to leave a comment below. We’d love to hear your thoughts or critiques. Until next time, happy tinkering!

January 22, 2025

Crosswords 0.3.14

I released Crosswords-0.3.14 this week. This is a checkpoint release—there are a number of experimental features that are still under development. However, I wanted to get a stable release out before changing things too much. Download the apps on flathub! (game, editor)

Almost all the work this cycle happened in the editor. As a result, this is the first version of the editor that’s somewhat close to my vision and that I’m not embarrassed giving to a crossword constructor to use. If you use it, I’d love feedback as to how it went.

Read on for more details.

Libipuz

Libipuz got a version bump to 0.5.0. Changes include:

  • Adding GObject-Introspection support to the library. This meant a bunch of API changes to fix methods that were C-only. Along the way, I took the time to standardize and clean up the API.
  • Documenting the library. It’s about 80% done, and has some tutorials and examples. The API docs are here.
  • Validating both the docs and introspections. As mentioned last post, Philip implemented a nonogram app on top of libipuz in Typescript. This work gave me confidence in the overall API approach.
  • Porting libipuz to rust. I worked with GSoC student Pranjal and Federico on this. We got many of the leaf structures ported and have an overall approach to the main class hierarchy. Progress continues.

The main goal for libipuz in 2025 is to get a 1.0 version released and available, with some API guarantees.

Autofill

I have struggled to implement the autofill functionality for the past few years. The simple algorithm I wrote would fill out 1/3 of the board, and then get stuck. Unexpectedly, Sebastian showed up and spent a few months developing a better approach. His betterfill algorithm is able to fill full grids a good chunk of the time. It’s built around failing fast in the search tree, and some clever heuristics to force that to happen. You can read more about it at his site.

NOTE: filling an arbitrary grid is NP-hard. It’s very possible to have grids that can’t be easily solved in a reasonable time. But as a practical matter, solving — and failing to solve — is faster now.

I also fixed an annoying issue with the Grid editor. Previously, there were subtabs that would switch between the autofill and edit modes. Tabs in tabs are a bad interface, and I found it particularly clunky to use. However, it let me have different interaction modes with the grid. I talked with Scott a bit about it and he made an off-the-cuff suggestion of merging the tabs together and adding grid selection to the main edit tab. So far it’s working quite nicely, though a tad under-discoverable.

Word Definitions and Substrings

The major visible addition to the Clue phase is the definition tab. They’re pulled from Wiktionary, and included in a custom word-list stored with the editor. I decided on a local copy because Wiktionary doesn’t have an API for pulling definitions and I wanted to keep all operations fast. I’m able to look up and render the definitions extremely quickly.

New dictionary tab

I also made progress on a big design goal for the editor: the ability to work with substrings in the clue phase. For those who are unfamiliar with cryptic crosswords, answers are frequently broken down into substrings which each have their own subclues to indicate them. The idea is to show possibilities for these indicators to provide ideas for puzzle constructors.

Note: If you’re unfamiliar with cryptic clues, this video is a charming introduction to them.

It’s a little confusing to explain, so perhaps an example would help. In this video the answers to some cryptic clues are broken down into their parts. The tabs show how they could have been constructed.

Next steps?

  • Testing: I’m really happy with how the cryptic authoring features are coming together, but I’m not convinced it’s useful yet. I want to try writing a couple of crosswords to be sure.
  • Acrostic editor: We’re going to land Tanmay’s acrostic editor early in the cycle so we have maximum time to get it working
  • Nonogram player: There are a few API changes needed for nonograms
  • Word score: I had a few great conversations with Erin about scoring words — time for a design doc.
  • Game cleanup: I’m overdue for a cycle of cleaning up the game. I will go through the open bugs there and clean them up.

Thanks again to all supporters, translators, packagers, testers, and contributors!

January 21, 2025

Status update, 21/01/2025

Happy new year everyone!

As a new year’s resolution, I’ve decided to improve SEO for this blog, so from now on my posts will be in FAQ format.

What are Sam Thursfield’s favourite music releases of 2025?

Glad you asked. I posted my top 3 music releases here on Mastodon. (I also put them on Bluesky, because why not? If you’re curious, Christine Lemmer-Webber has a great technical comparison between Bluesky and the Fediverse).

Here is a Listenbrainz playlist with these and my favourites from previous years. There’s also a playlist on Spotify, but watch out for fake Spotify music. I read a great piece by Liz Pelly on how Spotify has created thousands of fake artists to avoid paying musicians fairly.

What has Sam Thursfield learned at work recently?

That’s quite a boring question, but ok. I used FastAPI for the first time. It’s pretty good.

And I have been learning the theory behind the C4 model, which I like more and more. The trick with the C4 model is that it doesn’t claim to solve your problems for you. It’s a tool to help you think in a more structured way so that you have to solve them yourself. More on that in a future post.

Should Jack Dorsey be allowed to speak at FOSDEM 2025?

Now that is a very interesting question!

FOSDEM is a “free and non-commercial” event, organised “by the community for the community”. The community, in this case, being free and open source software developers. It’s the largest event of its kind, and organising such a beast for little to no money for 25 years running is a huge achievement. We greatly appreciate the effort the organisers put in! I will be at FOSDEM ’25, talking about automated QA infrastructure, helping out at the GNOME booth, and wandering wherever fate leads me.

Jack Dorsey is a Silicon Valley billionaire, you might remember him from selling Twitter to Elon Musk, touting blockchains, and quitting the board of Bluesky because they added moderation features into the protocol. Many people rolled eyes at the announcement that he will be speaking at FOSDEM this year in a talk titled “Infusing Open Source Culture into Company DNA”.

Drew DeVault stepped forward to organise a protest against Dorsey speaking, announced under the heading “No Billionaires at FOSDEM“. More than one person I’ve spoken to is interested in joining. Other people I know think it doesn’t make sense to protest one keynote speaker out of the 1000s who have stepped on the stage over the years.

Protests are most effective when they clearly articulate what is being protested and what we want to change. The world in 2025 is a complex, messy place, though, one that is changing faster than I can keep up with. Here’s an attempt to think through why this is happening.

Firstly, the “Free and Open Source Software community” is a convenient fiction; in reality it is made up of many overlapping groups, with an interest in technology sometimes being the only thing we have in common. I can’t explain all the nuance here, but let’s look at one particular axis, which we could call pro-corporate vs. anti-corporate sentiment.

What I mean by corporate here is quite specific but if you’re alive and reading the news in 2025 you probably have some idea what I mean. A corporation is a legal abstraction which has some of the same rights as a human — it can own property, pay tax, employ people, and participate in legal action — while not actually being a human. A corporation can’t feel guilt, shame, love or empathy. A publicly traded corporation must make a profit — if it doesn’t, another corporation will eat it. (Credit goes to Charlie Stross for this metaphor :-). This leads to corporations that can behave like psychopaths, without being held accountable in the way that a human would. Quoting Alexander Biener:


Elites avoiding accountability is nothing new, but in the last three decades corporate avoidance has reached new lows. Nobody in the military-industrial complex went to jail for lying about weapons of mass destruction in Iraq. Nobody at BP went to jail for the Deepwater oil spill. No traders or bankers (outside of Iceland) were incarcerated for the 2008 financial crash. No one in the Sackler family was punished after Purdue Pharma peddled the death of half a million Americans.

I could post some more articles but I know you have your own experiences of interacting with corporations. Abstractions are useful, powerful and dangerous. Corporations allowed huge changes and improvements in technology and society to take place. They have significant power over our lives. And they prioritize making money over all the things we as individual humans might prioritize, such as fairness, friendliness, and fun.


On the pro-corporate end at FOSDEM, you’ll find people who encourage use of open source in order to share effort between companies, to foster collaboration between teams in different locations and in different organisations, to reduce costs, to share knowledge, and to exploit volunteer labour. When these people are at work, they might advocate publishing code as open source to increase trust in a product, or in the hope that it’ll be widely adopted and become ubiquitous, which may give them a business advantage. These people will use the term “open source” or “FOSS” a lot, they probably have well-paid jobs or businesses in the software industry.

Topics on the pro-corporate side this year include: making a commercial product better (example), complying with legal regulations (example) or consuming open source in corporate software (example)

On the anti-corporate end, you’ll find people whose motivations are not financial (although they may still have a well-paid job in the software industry). They may be motivated by certain values and ethics or an interest in things which aren’t profitable. Their actions are sometimes at odds with the aims of for-profit corporations, such as fighting planned obsolescence, ensuring you have the right to repair a device you bought, and the right to use it however you want even when the manufacturer tries to impose safeguards (sometimes even when you’re using it to break a law). They might publish software under restrictive licenses such as the GNU GPL3, aiming to share it with volunteers working in the open while preventing corporations from using their code to make a profit. They might describe what they do as Free Software rather than “open source”.

Talks on the anti-corporate side might include: avoiding proprietary software (example, example), fighting Apple’s app store monopoly (example), fighting “Big Tech” (example), sidestepping a manufacturer’s restrictions on how you can use your device (example), or the hyper-corporate dystopia depicted in Snow Crash (example).

These are two ends of a spectrum. Neither end is hugely radical. The pro-corporate talks discuss complying with regulations, not lobbying to remove them. The anti-corporate talks are not suggesting we go back to living as hunter-gatherers. And most topics discussed at FOSDEM are somewhere between these poles: technology in a personal context (example), in an educational context (example), history lessons (example).

Many talks are “purely technical”, which puts them in the centre of this spectrum. It’s fun to talk about technology for its own sake and it can help you forget about the messiness of the real world for a while, and even give the illusion that software is a purely abstract pursuit, separate from politics, separate from corporate power, and separate from the experience of being a human.

But it’s not. All the software that we discuss at FOSDEM is developed by humans, for humans. Otherwise we wouldn’t sit in a stuffy room to talk about it would we?

The coexistence of the corporate and the anti-corporate worlds at FOSDEM is part of its character. Few of us are exclusively at the anti-corporate end: we all work on laptops built by corporate workers in a factory in China, and most of us have regular corporate jobs. And few of us are entirely at the pro-corporate end: the core principle of FOSS is sharing code and ideas for free rather than for profit.

There are many “open source” events that welcome pro-corporate speakers but are hostile to anti-corporate talks. Events organised by the Linux Foundation rarely have talks about “fighting Big Tech”, and you need $700 in your pocket just to attend them. FOSDEM is one of the largest events where folk on the anti-corporate end of the axis are welcome.


Now let’s go back to the talk proposed by Manik Surtani and Jack Dorsey titled “Infusing Open Source Culture into Company DNA”. We can assume it’s towards the pro-corporate end of the spectrum. You can argue that a man with a billion dollars to his name has opportunities to speak which the anti-corporate side of the Free Software community can only dream of, so why give him a slot that could go to someone more deserving?

I have no idea how the main track and keynote speakers at FOSDEM are selected. One of the goals of the protest explained here is “to improve the transparency of the talk selection process, sponsorship terms, and conflict of interest policies, so protests like ours are not necessary in the future.”

I suspect there may be something more at work too. The world in 2025 is a tense place — we’re living through a climate crisis, combined with a housing crisis in many countries, several wars, a political shift to the far-right, and ever increasing inequality around the world. Corporations, more powerful than most governments, are best placed to help if they wanted, but we see very little news about that happening. Instead, they burn methane gas to power new datacenters and recommend we “mainline AI into the veins of the nation“.

None of this is uniquely Jack Dorsey’s fault, but as the first Silicon Valley billionaire to step on the stage of a conference with a strong anti-corporate presence, it may be that he has more to learn from us than we do from him. I hope that, as a long time advocate of free speech, he is willing to listen.

January 20, 2025

fwupd 2.0.4 and DBXUpdate-20241101

I’ve just tagged fwupd 2.0.4 — with lots of nice new features, and most importantly with new protocol support to allow applying the latest dbx security update.

The big change to the uefi-dbx plugin is the switch to an ISO date as a dbx version number for the Microsoft KEK.

The original trick of ‘count the number of Microsoft-owned hashes‘ worked really well, right up until Microsoft started removing hashes in the distributed signed dbx file. In 2023 we started ‘fixing up‘ the version based on the last-added checksum, to make the device report an artificially lower version than in reality. This fails with the latest DBXUpdate-20241101 update, where, frustratingly, more hashes were removed than added. We can’t allow fwupd to update to a version that’s lower than what we’ve got already, and it somewhat gave the counting-hashes idea the death blow.

Instead of trying to map the hash into a low-integer version, we now use the last-listed hash in the EFI signature list to map directly to an ISO date, e.g. 20250117. We’re providing the mapping in a local quirk file so that the offline machine still shows something sensible, but are mainly relying on the remote metadata from the LVFS that’s always up to date. There’s even more detail in the plugin README for the curious.
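
Conceptually, the lookup boils down to something like the following. This is only a hedged Python sketch of the idea, not fwupd's C implementation, and the checksums and dates in it are invented placeholders.

# Illustrative only: map the last-listed dbx checksum to an ISO-date version.
# fwupd's real logic lives in the uefi-dbx plugin (written in C); the values
# below are made up.
LOCAL_QUIRKS = {
    "placeholder-checksum-aaaa": "20230314",
    "placeholder-checksum-bbbb": "20241101",
}

def dbx_version(esl_checksums, lvfs_lookup=None):
    """Return the ISO-date version for an EFI signature list, preferring
    up-to-date LVFS metadata and falling back to the local quirk table so an
    offline machine still shows something sensible."""
    last = esl_checksums[-1]
    if lvfs_lookup is not None:
        version = lvfs_lookup(last)
        if version is not None:
            return version
    return LOCAL_QUIRKS.get(last)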

We also changed the update protocol from org.uefi.dbx to org.uefi.dbx2 to simplify the testing matrix — and because we never want version 371 upgrading to 20230314 automatically — as that would actually be a downgrade and difficult to explain.

If we see lots of dbx updates going out with 2.0.4 in the next few hours I’ll also backport the new protocol into 1_9_X for the soon-to-be-released 1.9.27 too.

January 15, 2025

non-profit social networks: benchmarking responsibilities and costs

I’m trying to blog quicker this year. I’m also sick with the flu. Forgive any mistakes caused by speed, brevity, or fever.

Monday brought two big announcements in the non-traditional (open? open-ish?) social network space, with Mastodon moving towards non-profit governance (asking for $5M in donations this year), and Free Our Feeds launching to do things around ATProto/Bluesky (asking for $30+M in donations).

It’s a little too early to fully understand what either group will do, and this post is not an endorsement of specifics of either group—people, strategies, etc.

Instead, I just want to say: they should be asking for millions.

There’s a lot of commentary like this one floating around:

I don’t mean this post as a critique of Jan or others. (I deliberately haven’t linked to the source, please don’t pile on Jan!) Their implicit question is very well-intentioned. People are used to very scrappy open source projects, so millions of dollars just feels wrong. But yes, millions is what this will take.

What could they do?

I saw a lot of comments this morning that boiled down to “well, people run Mastodon servers for free, what does anyone need millions for”? Putting aside that this ignores that any decently-sized Mastodon server has actual server costs (and great servers like botsin.space shut down regularly in part because of those), and treats the time and emotional trauma of moderation as free… what else could these orgs be doing?

Just off the top of my head:

  • Moderation, moderation, moderation, including:
    • moderation tools, which by all accounts are sorely needed in Masto and would need to be rebuilt from scratch by FoF. (Donate to IFTAS!)
    • multi-lingual and multi-cultural, so you avoid the Meta trap of having 80% of users outside the US/EU but 80% of moderation in the US/EU.
  • Jurisdictionally-distributed servers and staff
    • so that when US VP Musk comes after you, there’s still infrastructure and staff elsewhere
    • and lawyers for this scenario
  • Good governance
    • which, yes, again, lawyers, but also management, coordination, etc.
    • (the ongoing WordPress meltdown should be a great reminder that good governance is both important and not free)
  • Privacy compliance
    • Mention “GDPR compliance” and “Mastodon” in the same paragraph and lots of lawyers go pale; doing this well would be a fun project for a creative lawyer and motivated engineers, but a very time-consuming one.
    • Bluesky has similar challenges, which get even harder as soon as the network is meaningfully mirrored.

And all that’s just to provide the same level of service as today.

If you actually want to improve the software in any way, well, congratulations: that’s hard for any open source software, and it’s really hard when you are doing open source software with millions of users. You need product managers, UX designers, etc. And those aren’t free. You can get some people at a slight discount if you’re selling them on a vision (especially a pro-democracy, anti-harassment one), but in the long run you either need to pay near-market or you get hammered badly by turnover, lack of relevant experience, etc.

What could that cost, $10?

So with all that in mind, some benchmarks to help frame the discussion. Again, this is not to say that an ATProto- or ActivityPub-based service aimed at achieving Twitter or Instagram-levels of users should necessarily cost exactly this much, but it’s helpful to have some numbers for comparison.

  • Wikipedia: (source)
    • legal: $10.8M in 2023-2024 (and Wikipedia plays legal on easy mode in many respects relative to a social network—no DMs, deliberately factual content, sterling global brand)
    • hosting: $3.4M in 2023-2024 (that’s just hardware/bandwidth, doesn’t include operations personnel)
  • Python Package Index
    • $20M/year in bandwidth from Fastly in 2021 (source) (packages are big, but so is social media video, which is table stakes for a wide-reaching modern social network)
  • Twitter
    • operating expenses, not including staff, of around $2B/year in 2022 (source)
  • Signal
  • Content moderation
    • Hard to get useful information on this on a per-company basis without a lot more work than I want to do right now, but the overall market is in the billions (source).
    • Worth noting that lots of the people leaving Meta properties right now are doing so in part because tens of thousands of content moderators, paid unconscionably low wages, are not enough.

You can handwave all you want about how you don’t like a given non-profit CEO’s salary, or you think you could reduce hosting costs by self-hosting, or what have you. Or you can try pushing the high costs onto “volunteers”.

But the bottom line is that if you want there to be a large-scale social network, even “do it as cheap as humanly possible” means millions of dollars in costs borne by someone.

What this isn’t

This doesn’t mean “give the proposed new organizations a blank check”. As with any non-profit, there’s danger of over-paying execs, boards being too cozy with execs and not moving them on fast enough, etc. (Ask me about founder syndrome sometime!) Good governance is important.

This also doesn’t mean I endorse Bluesky’s VC funding; I understand why they feel they need money, but taking that money before the techno-social safeguards they say they want are in place is begging for problems. (And in fact it’s exactly because of that money that I think Free Our Feeds is intriguing—it potentially provides a non-VC source of money to build those safeguards.)

But we have to start with a realistic appraisal of the problem space. That is going to mean some high salaries to bring in talented people to devote themselves to tackling hard, long-term, often thankless problems, and lots of data storage and bandwidth.

And that means, yes, millions of dollars.

January 14, 2025

IPU6 camera support status update

The initial IPU6 camera support that landed in Fedora 41 only works on a limited set of laptops. The reason for this is that with MIPI cameras every different sensor and glue chip (such as an IO-expander) needs to be supported separately.

I have been working on making the camera work on more laptop models. After receiving and sending many emails and blog post comments about this, I have started filing Fedora bugzilla issues on a per-sensor and/or per-laptop-model basis to be able to properly keep track of all the work.

Currently the following issues are either being actively worked on or are being tracked to be fixed in the future.

Issues which have fixes pending (review) upstream:


Open issues with various states of progress:

See all the individual bugs for more details. I plan to post semi-regular status updates on this on my blog.

The above list of issues can also be found on my Fedora 42 change proposal tracking this work, and I intend to keep an updated complete list of all x86 MIPI camera issues (including closed ones) there.




Flatpak 1.16 is out!

Last week I published the Flatpak 1.16.0 release. This marks the beginning of the 1.16 stable series.

This release comes after more than two years since Flatpak 1.14, so it’s pretty packed with new features, bug fixes, and improvements. Let’s have a look at some of the highlights!

USB & Input Devices

Two new features are present in Flatpak 1.16 that improve the handling of devices:

  • The new input device permission
  • Support for USB listing

The first, while technically still a sandbox hole that should be treated with caution, allows some apps to replace --device=all with --device=input, which has a far smaller surface. This is interesting in particular for apps and games that use joysticks and controllers, as these are usually exported by the kernel under /dev/input.

The second is likely the biggest new feature of this Flatpak release! It allows Flatpak apps to list which USB devices they intend to use. This is stored as static metadata in the app, which is then used by XDG Desktop Portal to notify the app about plugs and unplugs, and eventually to ask the user for permission.

Using the USB portal, Flatpak apps are able to enumerate the USB devices that they have permission to list (and only those). Actually accessing these USB devices triggers a permission request where the user can allow or deny the app access to the device.

Finally, it is possible to forcefully override these USB permissions locally with the --usb and --nousb command-line arguments.

This should make the USB access story fairly complete. App stores like Flathub are able to review the USB permissions ahead of time, before the app is published, and see if they make sense. The portal usage prevents apps from accessing devices behind the user’s back. And users are able to control these permissions locally even further.
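To make the model a little more concrete, here is a purely conceptual Python sketch of the filtering idea: the app ships a static list of the devices it cares about, only matching plug/unplug events ever reach it, and actual access is still gated behind a user prompt. None of this is Flatpak’s real metadata format or the portal’s API; the device queries, vendor IDs, and function names are made up for illustration.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class UsbQuery:
        """A hypothetical 'device the app intends to use' declaration."""
        vendor_id: int
        product_id: int | None = None  # None means any product from this vendor

        def matches(self, vendor_id: int, product_id: int) -> bool:
            return self.vendor_id == vendor_id and (
                self.product_id is None or self.product_id == product_id
            )

    # Static list shipped with the (imaginary) app: "any device from vendor 0x046d".
    DECLARED_DEVICES = [UsbQuery(vendor_id=0x046D)]

    def on_device_plugged(vendor_id: int, product_id: int) -> None:
        """What a mediating service could do on hotplug: only devices matching
        the app's declared list are reported; everything else stays invisible."""
        if any(q.matches(vendor_id, product_id) for q in DECLARED_DEVICES):
            print(f"notify app about {vendor_id:04x}:{product_id:04x}")
            # Actually opening the device would still require an explicit
            # user permission prompt, mediated by the portal.

    on_device_plugged(0x046D, 0xC52B)  # reported to the app
    on_device_plugged(0x1D6B, 0x0003)  # silently ignored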

Better Wayland integration

Flatpak 1.16 brings a handful of new features and improvements that should deepen its integration with Wayland.

Flatpak now creates a private Wayland socket with the security-context-v1 extension if available. This allows the Wayland compositor to properly identify connections from sandboxed apps as belonging to the sandbox.

Specifically, with this protocol, Flatpak is able to securely tell the Wayland compositor (1) that the app is a Flatpak-sandboxed app, (2) the app’s immutable app id, and (3) the app’s instance id. None of these bits of information can be modified by the apps themselves.

With this information, compositors can implement unique policies and have tight control over security.

Accessibility

Flatpak already exposes enough of the accessibility stack for most apps to be able to report their accessible contents. However, not all apps are equal, and some require rather challenging setups with the accessibility stack.

One big example here is the WebKit web engine. It basically pushes Flatpak and portals to their limit, since each tab is a separate process. Until now, apps that use WebKit – such as GNOME Web and Newsflash – were not able to have the contents of the web pages properly exposed to the accessibility stack. That means things like screen readers wouldn’t work there, which is pretty disappointing.

Fortunately a lot of work was put into this front, and now Flatpak has all the pieces of the puzzle to make such apps accessible. These improvements also allow apps to detect when screen readers are active, and to optimize for that.

WebKit has already been adapted to use these new features when they’re available. I’ll be writing about this in more detail in a future series of blog posts.

Progress Reporting

When installing Flatpak apps through the command-line utility, you already get a fancy progress bar drawn with block characters. It looks nice and gets the job done.

However, terminals may support an OSC escape sequence for reporting progress. Christian Hergert wrote about it here. Christian also went ahead and added support for emitting the progress escape sequence in Flatpak. Here’s an example:

Screenshot of the terminal app Ptyxis with a progress bar

Unfortunately, right before the release, it was reported that this new feature was spamming some terminal emulators with notifications. These terminals (kitty and foot) have since been patched, but older LTS distributions probably won’t upgrade. That forced us to make it opt-in for now, through the FLATPAK_TTY_PROGRESS environment variable.

Ptyxis (the terminal app above) automatically sets this environment variable, so it should work out of the box. Users can set this variable in their session to enable the feature. For the next stable release (Flatpak 1.18), assuming terminals cooperate on supporting this feature, the plan is to enable it by default and use the variable for opting out.
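For the curious, here is a tiny Python sketch of what emitting such a progress sequence can look like. I’m assuming the ConEmu-style “OSC 9;4” sequence here, and gating it on the same FLATPAK_TTY_PROGRESS variable mentioned above; everything else (the loop, the helper name) is just illustration, not Flatpak’s actual code.

    import os
    import sys
    import time

    def set_progress(percent: int, state: int = 1) -> None:
        """Emit a ConEmu-style 'OSC 9;4' terminal progress sequence.
        state 1 = normal progress, state 0 = clear the indicator."""
        sys.stdout.write(f"\x1b]9;4;{state};{percent}\x07")
        sys.stdout.flush()

    # Only emit the sequence when the user opted in and stdout is a terminal,
    # mirroring how Flatpak 1.16 gates this behind FLATPAK_TTY_PROGRESS.
    if os.environ.get("FLATPAK_TTY_PROGRESS") and sys.stdout.isatty():
        for percent in range(0, 101, 10):
            set_progress(percent)
            time.sleep(0.1)
        set_progress(0, state=0)  # remove the progress indicator when done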

Honorable Mentions

I simply cannot overstate how many bugs were fixed in Flatpak in all these releases.

We had 13 unstable releases (the 1.15.X series) until we finally released 1.16 as a stable release. A variety of small memory leaks and build warnings were fixed.

The gssproxy socket is now shared with apps, acting as a portal of sorts for Kerberos authentication. This lets apps use Kerberos authentication without needing a sandbox hole.

Flatpak now tries to pick up languages from AccountsService, making it easier to configure extra languages.

Obsolete driver versions and other autopruned refs are now automatically removed, which should help keep things tight and clean and reduce the installed size.

If the timezone is set through the TZDIR environment variable, Flatpak now takes the timezone information from there. This should fix apps showing the wrong timezone on NixOS systems.

More environment variables are now documented in the man pages.

This is the first stable release of Flatpak that can only be built with Meson. Autotools served us honorably for the past decades, but it was time to move to something more modern, and Meson has been a great option for a long time now. Flatpak 1.16 only requires a fairly old version of Meson, which should make it easy to distribute on old LTS distributions.

Finally, the 1.10 and 1.12 series have now reached their end of life, and users and distributions are encouraged to upgrade to 1.16 as soon as possible. During this development cycle, four CVEs were found and fixed; all of these fixes were backported to the 1.14 series, but not all were backported to versions older than that. So if you’re using Flatpak 1.10 or 1.12, be aware that you’re doing so at your own risk.

Future

The next milestone for the platform is a stable XDG Desktop Portal release. This will ship with the aforementioned USB portal, as well as other niceties for apps. Once that’s done, and after a period of receiving bug reports and fixing them, we can start thinking about the next goals for these projects.

These are important parts of the platform, and are always in need of contributors. If you’re interested in helping out with development, issue management, coordination, developer outreach, and/or translations, please reach out to us in the project Matrix rooms.

Acknowledgements

Thanks to all contributors, volunteers, issue reporters, and translators that helped make this release a reality. In particular, I’d like to thank Simon McVittie for all the continuous maintenance, housekeeping, reviews, and coordination done on Flatpak and adjacent projects.