Milan Crha has been grinding through and fixing multiple smaller issues and papercuts in gnome-software over the last few weeks, adding polish to the upcoming 48.0 release.
Thanks to work by Om Thorat and Qiu Wenbo on the UI side, and many months of refactoring from all the maintainers, it is now possible to use different styles when highlighting text in PDF documents in Papers! This was one of the most requested features, and one we are very excited to share!
Phosh now detects captive Wi-Fi portals and shows a notification that, when activated, takes you to your favorite browser to log into the portal. When taking screenshots we now save thumbnails right away for file choosers and other apps to display. The compositor switched to wlroots 0.18, and things like debug log domains, damage tracking debugging, or touch point debugging can now be configured at runtime.
This week Keypunch was accepted into Circle! It’s an elegant little typing tutor to help you learn touch typing or improve your typing skills. Congratulations!
Following the major redesign of www.gnome.org last week, the developer portal at http://developer.gnome.org has also been refreshed with a new look. The update brings it in line with GNOME’s modern design while keeping key developer resources easily accessible.
Shell Extensions
Weather O’Clock
Display the current weather inside the pill next to the clock.
With thousands of apps and billions of downloads, Flathub has a responsibility to help ensure the safety of our millions of active users. We take this responsibility very seriously with a layered, in-depth approach including sandboxing, permissions, transparency, policy, human review, automation, reproducibility, auditability, verification, and user interface.
Apps and updates can be fairly quickly published to Flathub, but behind the scenes each one takes a long journey full of safety nets to get from a developer’s source code to being used on someone’s device. While information about this process is available between various documentation pages and the Flathub source code, I thought it could be helpful to share a comprehensive look at that journey all in one place.
Each app on Flathub is distributed as a Flatpak. This app packaging format was specifically designed with security and safety at its core, and has been continuously improved over the past decade. It has received endorsements, development, and wide adoption from organizations such as Bambu Lab, Bitwig, CodeThink, Collabora, Discord, The Document Foundation, elementary, Endless, GDevelop, KiCad, Kodi, GNOME, Intel, KDE, LibreOffice, Mozilla, OBS Studio, Plex, Prusa Research, Purism, Red Hat, System76, Telegram, Valve, and many more.
From a technical perspective, Flatpak does not require elevated privileges to install apps, isolates apps from one another, and limits app access to the host environment. It makes deep use of existing Linux security technologies such as cgroups, namespaces, bind mounts, and seccomp as well as Bubblewrap for sandboxing.
Flatpak apps are also built from a declarative manifest, which defines the exact sources and environment to build from to enable auditability and as much reproducibility as possible.
Due to Flatpak’s sandboxing, apps don’t have permission to access many aspects of the host OS or user data they might need. To get that access, apps must either request it using Portals or use static permissions.
Most permissions can be requested and granted on demand via an API
called Portals.
These permissions do not need to be given ahead of time, as desktop
environments provide the mechanisms to give user consent and control
over them, e.g. by indicating their use, directly prompting the user
before the permission is granted, and allowing revocation.
Portals include APIs for handling auto-start and background activity;
access to the camera, clipboard, documents, files, location, screen
casting, screenshots, secrets like passwords, trash, and USB devices;
setting global shortcuts; inhibiting suspend or shut down; capturing
input; monitoring memory, network, or power profiles; sending
notifications; printing; setting a wallpaper; and more. In each case,
the user’s desktop environment (like GNOME or KDE) manages if and how a
user is notified or prompted for permissions—and if the permission is
not granted, the app must handle it gracefully.
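To make the dynamic model concrete, here is a rough sketch of how a sandboxed app might ask for a file through the file chooser portal using libportal, one common client library for the portal APIs. The sketch is illustrative only: the exact function signatures and flags should be checked against the libportal documentation, and the surrounding application code (main loop, parent window) is assumed.

```c
#include <libportal/portal.h>

/* Called when the portal interaction finishes. */
static void
on_file_chosen (GObject *source, GAsyncResult *result, gpointer user_data)
{
  g_autoptr(GError) error = NULL;
  g_autoptr(GVariant) ret =
      xdp_portal_open_file_finish (XDP_PORTAL (source), result, &error);

  if (ret == NULL)
    {
      /* The user declined, or no portal backend is available; the app
       * must degrade gracefully rather than assume access. */
      g_message ("No file selected: %s", error->message);
      return;
    }

  /* 'ret' contains only the URIs the user explicitly granted; nothing
   * else on the host filesystem becomes visible to the sandbox. */
}

static void
choose_file (XdpParent *parent)
{
  g_autoptr(XdpPortal) portal = xdp_portal_new ();

  /* The user's desktop environment draws the dialog and decides what,
   * if anything, the sandboxed app gets to see. The async call keeps
   * its own reference to the portal object. */
  xdp_portal_open_file (portal, parent, "Open document",
                        NULL /* filters */, NULL /* current filter */,
                        NULL /* choices */, XDP_OPEN_FILE_FLAG_NONE,
                        NULL /* cancellable */, on_file_chosen, NULL);
}
```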
Some permissions are not covered by Portals, such as basic and generally safe
resources for which dynamic permissions wouldn’t make sense. In these cases—or
if a Portal does not yet exist or is not widely adopted for a certain
permission—developers may use static permissions.
These are set by the developer at build time in the public build manifest.
Static permissions are intended to be as narrowly-scoped as possible and are
unchanging for the life of each release of an app. They are not generally
designed to be modified by an end user except in cases of development,
debugging, or reducing permissions.
Due to this, Flatpak prefers that apps use Portals instead of static permissions whenever possible.
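For a concrete picture of where static permissions live, here is a hypothetical, minimal flatpak-builder manifest in its JSON form; the app ID, module name, and source URL are invented for illustration. The finish-args shown grant display access and network access and nothing else:

```json
{
  "app-id": "org.example.TextViewer",
  "runtime": "org.gnome.Platform",
  "runtime-version": "47",
  "sdk": "org.gnome.Sdk",
  "command": "text-viewer",
  "finish-args": [
    "--share=ipc",
    "--socket=fallback-x11",
    "--socket=wayland",
    "--share=network"
  ],
  "modules": [
    {
      "name": "text-viewer",
      "buildsystem": "meson",
      "sources": [
        {
          "type": "git",
          "url": "https://example.org/text-viewer.git",
          "commit": "0123456789abcdef0123456789abcdef01234567"
        }
      ]
    }
  ]
}
```

Note the pinned commit in the source definition: as described later in this post, Flathub's automated checks reject sources that point at a bare git branch without a specific commit.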
Every app is built against a Flatpak runtime hosted by Flathub. The runtimes provide basic dependencies, are well-maintained by the Linux community, and are organized according to various platforms a developer may target; for example, GNOME, KDE, or a generic FreeDesktop SDK. This means many apps—especially those targeting a platform like GNOME or KDE and using its developer libraries—don’t need to pull in external dependencies for critical components.
Runtimes are automatically installed with apps that require them, and are updated separately by the user’s OS, app store, or CLI when needed. When a dependency in a runtime is updated, e.g. for a critical security update, it rolls out as an update to all users of apps that use that runtime.
In some cases there are commonly-used libraries not provided directly by one of the available runtimes. Flathub provides shared modules for these libraries to centralize the maintenance. If an app needs to bundle other dependencies, they must be defined in the manifest. We also provide tooling to automatically suggest updates to app dependencies.
Once an app is developed, it must be submitted to Flathub for consideration to be hosted and distributed. At this stage, human Flathub reviewers will review the app to ensure it follows the requirements. Of note:
Apps must be sandboxed with as narrow permissions as possible while still functioning, including using appropriate runtime permissions instead of broad static permissions when possible. All broad
static permissions need to be justified by the submitter during review.
Apps must not be misleading or malicious, which covers impersonating other apps or including outright malicious code or functionality.
App IDs must accurately reflect the developer’s domain name or code hosting location; e.g. if an app is submitted that purports to be Lutris, its ID must be obviously associated with that app (in this case, the lutris.net domain, i.e. an ID such as net.lutris.Lutris).
The app’s Flatpak manifest is reviewed, including all static permissions. Each of the documented requirements are checked—and if a reviewer finds something out of place they request changes to the submission, ask for rationale, or reject it completely.
In addition to human review, Flathub also makes use of automated testing for a number of quality and safety checks. For example, our automated tests block unsafe or outright wrong permissions, such as apps requesting access to whole session or system buses or unsafe bus names. Our automated tests also help ensure reproducible builds by disallowing pointing at bare git branches without a specific commit.
Once an app has been approved and passes initial tests, it is built using the open source and publicly-available flatpak-builder utility from the approved public manifest, on Flathub’s infrastructure, and without network access. Sources for the app are validated against the documented checksums, and the build fails if they do not match.
For further auditability, we specify the git commit of the manifest repo used for the build in the Flatpak build subject. The build itself is signed by Flathub’s key, and Flatpak/OSTree verify these signatures when installing and updating apps.
We mirror the exact sources each app is built against in case the original source goes down or there is some other issue, and anyone can build the Flatpak back from those mirrored sources to reproduce or audit the build. The manifest used to build the app is hosted on Flathub’s GitHub org, plus distributed to every user in the app’s sandbox at /app/manifest.json—both of which can be compared, inspected, and used to rebuild the app exactly as it was built by Flathub.
Apps can be verified on Flathub; this process confirms that an app is published by the original developer or an authorized party by proving ownership of the app ID. While all apps are held to the same high standards of safety and review on Flathub, this extra layer helps users confirm that the app they are getting is also provided or authorized by its developer.
Over half of the apps on Flathub so far are verified, with the number regularly increasing.
Once an app is developed, submitted, tested, approved, built, and distributed, it appears in app store clients like Flathub.org, KDE Discover, GNOME Software, and elementary AppCenter—as well as the Flatpak CLI. While exact implementations vary and the presentation is up to the specific app store client, generally each will show:
Static permissions and their impact on safety
Open Age Rating Service rating and details
If an app uses outdated runtimes
Release notes for each release
If static permissions increase between releases
Flathub.org and GNOME Software also display the app’s verified status.
There are a few special cases to some of the points above which I would be remiss not to mention.
Flathub has granted a select group of trusted partners, including Mozilla and OBS Studio, the ability to directly upload their builds from their own infrastructure. These projects have an entire CI pipeline which validates the state of their app, and they perform QA before tagging the release and pushing it to Flathub. Even for these few cases of direct uploads, we require a public manifest and build pipeline to enable similar reproducibility and auditability as outlined above. We also require the apps to be verified, and still run automated tests such as our linter against them.
Lastly, some apps (around 6%) use extra-data to instruct Flatpak to download and unpack an existing package (e.g. a Debian package) during installation. This process runs in a tight unprivileged Flatpak sandbox that does not allow host filesystem or network access, and the sandbox cannot be modified by app developers. These are largely proprietary apps that cannot be built on Flathub’s infrastructure, or apps using complex toolchains that require network access during build. This is discouraged since it does not enable the same level of auditability nor multi-architecture support that building from source does. As a result, this is heavily scrutinized during human review and only accepted as a last resort.
Even with the above, the vast majority of apps are built reproducibly from
source on Flathub’s infrastructure. The handful of apps that aren’t still
greatly benefit from the transparency and auditability built into all of the
other layers.
While we expect to catch the vast majority of safety issues with the above, we are also able to respond to anything that may have slipped through. For example, we have the ability to remove an app from the Flathub remote in case we find that it’s malicious. We can also revert, recall, or block broken or malicious app updates.
As you can see, Flathub takes safety very seriously. We’ve worked with the greater Linux and FreeDesktop ecosystem for over a decade on efforts such as Flatpak, OSTree, Portals, and even desktop environments and app store clients to help build the best app distribution experience—for both users and app developers—with safety as a core requirement. We believe our in-depth, multi-layered approach to safety has set a high bar that few others have met—and we will continue to raise it.
Thank you to all contributors to Flatpak, Flathub, and the technologies
our ecosystem relies on. Thanks to the thousands of developers for
trusting us with app distribution, and to bbhtt, Jordan, and Sonny for
reviewing this post. And as always, thank you to the millions of users
trusting Flathub as your source of apps on Linux. ♥
Today I woke up to a link to an interview with the current Fedora Project Leader, Matthew Miller. Brodie, who conducted the interview, mentioned that Miller was the one who reached out to him. The background of this video is the currently ongoing issue regarding OBS, Bottles, and the Fedora project, which Niccolò explained and summarized in an excellent video. You can also find the article over at thelibre.news. “Impressive” as this story is, it’s for another time.
What I want to talk about in this post is the outrageous, smearing, and straight-up slanderous statements about Flathub that the Fedora Project Leader made during the interview.
I am not directly involved with the Flathub project (a lot of my friends are), but I am a maintainer of the GNOME Flatpak runtime and a contributor to the Freedesktop SDK and elementary OS runtimes. I also maintain applications that get published on Flathub directly. So you can say I am someone invested in the project who has put a lot of time into it. It was extremely frustrating to hear what would only qualify as reddit-level, completely made-up arguments with no basis in reality coming directly from Matthew Miller.
Below is a transcript, slightly edited for brevity, of all the times Flathub and Flatpak were mentioned. You can refer to the original video as well, as there were many more interesting things Miller talked about.
It starts off with an introduction and some history and around the 10-minute mark, the conversation starts to involve Flathub.
Miller: [..] long way of saying I think for something like OBS we’re not really providing anything by packaging that. Miller: I think there is an overall place for the Fedora Flatpaks, because Flathub part of the reason its so popular (there’s a double edged sword), (its) because the rules are fairly lax about what can go into Flathub and the idea is we want to make it as easy for developers to get their things to users, but there is not really much of a review
This is not the main reason why Flathub is popular; it’s a lot more involved and interesting in practice. I will go into this in a separate post, hopefully soon.
Claiming that Flathub does not have any review process or inclusion policies is straight up wrong and incredibly damaging. It’s the kind of thing we’ve heard ad nauseam from Flathub haters, but never from a person in charge of one of the most popular distributions, who should have really known better.
You can find the Requirements in the Flathub documentation if you spend 30 seconds googling for them, along with the submission guidelines for developers. If those documents qualify as a wild west and a free-for-all, I can’t possibly take you seriously.
I haven’t maintained a Linux distribution package myself, so I won’t make comparisons between Flathub and other distros; however, you can find people, with red hats even, who do and have talked about it. Of course these are one-off examples and social bias on my part, but they show how laughable the claim that things are not reviewed is. Additionally, the most popular story I hear from developers is how Flathub requirements are often stricter and sometimes cause annoyances.
Screenshot of the post from this link: https://social.vivaldi.net/@sesivany/114030210735848325
Additionally, Flathub has been the driving force behind encouraging applications to update their metadata, completely reworking the user experience and handling of permissions and making them prominent to the user (to the point where even network access is marked as potentially unsafe).
Miller: [..] the thing that says verified just says that it’s verified from the developer themselves.
No, verified does not just mean that the developer signed off on it. Let’s take another 30 seconds to look at the Flathub documentation page about exactly this.
A verified app on Flathub is one whose developer has confirmed their ownership of the app ID […]. This usually also may mean that either the app is maintained directly by the developer or a party authorized or approved by them.
It still went through the review process, and all the rest of the requirements and policies apply. The verified program is basically a badge to tell users that this is an application supported by the upstream developers, rather than the free-for-all that currently exists, where you may or may not get an application release from years ago depending on how stable your distribution is.
Sidenote: did you know that 1483/3003 applications on Flathub are verified as of the writing of this post? As opposed to maybe a dozen of them at best in the distributions. You can check for yourself.
Miller: .. and it doesn’t necessarily verify that it was build with good practices, maybe it was built in a coffee shop on some laptop or whatever which could be infected with malware or whatever could happen
Again, if Miller had made the bare minimum effort, he would have come across the Requirements page, which describes exactly how an application on Flathub is built, instead of further spreading made-up takes about the infrastructure. I can’t stress enough how damaging it has been throughout the years to claim that “Flathub may be potential malware”. Why is it malware? Because I don’t like its vibes and I just assume so.
I am sure that if I did the same about Fedora in a very, very public medium with thousands of listeners, I would probably end up with a lawyers’ letter from Red Hat.
Now, applications on Flathub are all built without network access, on Flathub’s build servers, using flatpak-builder and Flatpak manifests, which are a declarative format. This means all the sources required to build the application are known and validated/checksummed, the build is reproducible to the extent possible, and you can easily inspect the resulting binaries. The manifest used to build the application ends up in /app/manifest.json, which you can also inspect with the following command and use to rebuild the application yourself exactly as it’s done on Flathub.
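A common way to do that (with org.example.App standing in for a real app ID) is something along these lines:

```
flatpak run --command=cat org.example.App /app/manifest.json
```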
The exceptions to this are, naturally, proprietary applications, and a handful of applications (under an OSI-approved license) where Flathub developers helped the upstream projects integrate a direct publishing workflow into their deployment pipelines. I am aware of Firefox and OBS as the main examples, both of which publish to Flathub through their continuous deployment (CI/CD) pipelines, the same way they generate their builds for the other platforms they support, and the code for how it happens is available in their repos.
If you have issues trusting Mozilla’s infrastructure, then how are you trusting Firefox in the first place? And good luck auditing Gecko to make sure it does not start shipping malware. Surely distribution packagers audit every single change that happens from release to release for each package they maintain and can verify no malicious code ever gets merged. The xz backdoor was very recent; it was identified by pure chance, and none of this prevented it.
Then Miller proceeds to describe the Fedora build infrastructure, and afterward we get to the following:
Miller: I will give an example of something I installed in Flathub, I was trying to get some nice gui thing that would show me like my system Hardware stats […] one of them ones I picked seemed to do nothing, and turns out what it was actually doing, there was no graphical application it was just a script, it was running that script in the background and that script uploaded my system stats to a server somewhere.
Firstly, we don’t really have many details to be able to identify which application it was; I would be very curious to know. Speculating on my part, the most popular application matching that description is Hardware Probe, and it absolutely has a GUI, no matter how minimal. It also asks you before uploading.
Maybe there is an org.upload.MySystem application that I don’t know about that ended up doing what was in the description; again, I would love to know more and update the post if you can recall!
Miller: No one is checking for things like that and there’s no necessarily even agreement that that was was bad.
Second time! Again with the “There is no review and inclusion process in Flathub” narrative. There absolutely is, and these are the kinds of things that get brought up during it.
Miller: I am not trying to be down on Flathub because I think it is a great resource
Yes, I can see that; however, in your ignorance, you were something much worse than “down” on it. This is pure slander and defamation, coming from the current “Fedora Project Leader”, the “Technically Voice of Fedora” (direct quote from a couple of seconds later). All the statements made above are manufactured and inaccurate: myths you’d hear from people who never asked, looked, or cared about any of this, because the moment you do, it’s obvious how laughable all these claims are.
Miller: And in a lot of ways Flathub is a competing distribution to Fedora’s packaging of all applications.
Precisely, he is spot on here, and I believe this is what kept Miller willfully ignorant and caused him to happily pick up the first anti-flatpak/anti-flathub arguments he came across on Reddit and repeat them verbatim without putting any thought into it. I do not believe Miller is malicious on purpose; I truly believe he means well and does not know better.
However, we can’t ignore the conflict of interest arising from his current job position as a big influence on why incidents like this happen, nor the influence and damage this causes when it comes from a person in Matthew Miller’s position.
Moving on:
Miller: One of the other things I wanted to talk about Flatpak, is the security and sandboxing around it. Miller: Like I said the stuff in the Flathub are not really reviewed in detail and it can do a lot of things:
Third time with the no-review theme. I was fuming when I first heard this, and I am still very, very angry about it, if you can’t tell. Not only is this an incredibly damaging lie, as covered above, it gets repeated over and over again.
With Flatpak basically the developer defines what the permissions are. So there is a sandbox, but the sandbox is what the person who put it there is, and one can imagine that if you were to put malware in there you might make your sandboxing pretty loose.
Brodie: One of the things you can say is “I want full file system access, and then you can do anything”
No. Again, as stated in the Flathub documentation, permissions are very carefully reviewed, and updates get blocked when permissions change until another review has happened.
Miller: Android and Apple have pretty strong leverage against application developers to make applications work in their sandbox
Brodie: the model is the other way around where they request permissions and then the user grants them whereas Flatpak, they get the permission and then you could reject them later
This is partially correct. The first part, about leverage, I will talk about in a bit, but here’s a primer on how permissions work in Flatpak and how they compare to the sandboxing technologies in iOS and Android.
In all of them we have a separation between static and dynamic permissions. Static permissions are the ones the application always has access to, for example the network or the ability to send you notifications. These are always there and are usually mentioned at install time. Dynamic permissions are the ones where the application has to ask the user before being able to access a resource. For example, opening a file chooser dialog so the user can upload a file: the application only gets access to the file the user consented to, or to none at all. Another example is using the camera on the device and capturing photos/video with it.
Brodie here gets a bit confused and only mentions static permissions. If I had to guess, it would be because we usually refer to the dynamic permission system in the Flatpak world as “Portals”.
Miller: it didn’t used to be that way and and in fact um Android had much weaker sandboxing like you could know read the whole file system from one app and things like that […] they slowly tightened it and then app developers had to adjust Miller: I think with the Linux ecosystem we don’t really have the way to tighten that kind of thing on app developers … Flatpak actually has that kind of functionality […] with portals […] but there’s no not really a strong incentive for developers to do that because, you know well, first of all of course my software is not going to be bad so why should I you know work on sandboxing it, it’s kind of extra work and I I don’t know I don’t know how to solve that. I would like to get to the utopian world where we have that same security for applications and it would be nice to be able to install things from completely untrusted places and know that they can’t do anything to harm your system and that’s not the case with it right now
As with any technology and its adoption, we don’t get to perfection from day 1. Static permissions are necessary to provide a migration path for existing applications, and they remain necessary until the appropriate, much more complex dynamic permission mechanisms have been developed. For example, up until iOS 18 it wasn’t possible to give applications access to a subset of your contacts list. Think of it like having to give access to your entire filesystem instead of the specific files you want. Similarly, partial-only access to your photo library arrived only a couple of years ago in iOS and Android.
In an ideal world all permissions are dynamic, but this takes time and resources and adaptation for the needs of applications and the platform as development progresses.
Now about the leverage part.
I do agree that “the Linux ecosystem” as a whole does not have any leverage over application developers. That is because Miller is looking in the wrong place for it. There is no “Linux ecosystem”, but rather platforms that developers target.
GNOME and KDE, as they distribute all their applications on Flathub, absolutely have leverage. Similarly, Flathub itself has leverage by changing the publishing requirements and inclusion guidelines (which I keep being told don’t exist). Every other application that wants to publish also has to adhere to the rules on Flathub. elementary OS and their AppCenter have leverage over developers. Canonical has the same pull as well with the Snap Store. Fedora, on the other hand, doesn’t have any leverage, because the Fedora Flatpak repository is irrelevant, broken, and nobody wants to use it.
[..] The xz backdoor gets brought up when discussing dependencies and how software gets composed together.
Miller: we try to keep all of those things up to date and make sure everything is patched across the dist even when it’s even when it’s difficult. I think that really is one of the best ways to keep your system secure and because the sandboxing isn’t very strong that can really be a problem, you know like the XZ thing that happened before. If XZ is just one place it’s not that hard of an update but if you’ve got a 100 Flatpaks from different places […] and no consistency to it it’s pretty hard to manage that
I am not going to go in depth on this problem domain and the arguments over it. In fact, I have been writing another blog post about it for a while, which I hope to publish shortly. Until then, I cannot recommend highly enough Emmanuele’s and Lennart’s blog posts, as well as one of the very early posts from Alex, written when Flatpak was in its early design phase, on the shortcomings of the current distribution model.
Now, about bundled dependencies. The concept of runtimes has served us well so far, and we have been doing a pretty decent job providing most of the things applications need but would not want to bundle themselves. This makes the runtimes a single place for most of the high-profile dependencies (curl, openssl, webkitgtk, and so on) that you’d frequently update for security vulnerabilities, and once that’s done, the updates roll out to everyone without anyone needing to do anything manual to update the applications or even rebuild them.
Applications only need to bundle their direct dependencies, and as mentioned above, the Flatpak manifest includes the exact definition of all of them. They are available for anyone to inspect, and there’s tooling that can scan them and, hopefully in the future, alert us.
If the Docker/OCI model, where you end up bundling the entire toolchain and runtime and now have to maintain it, keep up with updates, and rebuild your containers, is good enough for all those enterprise distributions, then the Flatpak model, which is much more efficient, streamlined, and thought out, and much, much less maintenance-intensive, is probably fine.
Miller: part of the idea of having a distro was to keep all those things consistent so that it’s easier for everyone, including the developers
As mentioned above, this is nothing that fundamentally differs from the leverage that Flathub and the platform developers have.
Brodie: took us 20 minutes to get to an explanation [..] but the tldr Fedora Flatpak is basically it is built off of the Fedora RPM build system and because that it is more well tested and sort of intended, even if not entirely for the Enterprise, designed in a way as if an Enterprise user was going to use it the idea is this is more well tested and more secure in a lot of cases not every case.
Miller: Yea that’s basically it
This is a conclusion that Brodie reaches after the previous statements, and it is by far the most enraging thing in this interview. It is also an excellent example of the damage Matthew Miller caused today, and if I were a Flathub developer I would stop at nothing short of a public apology from the Fedora project itself. Hell, I want one just as an application developer who publishes there. The interview has basically been shitting on both the developers of Flathub and the people who choose to publish on it. And if that’s not enough, there should be an apology just out of decency. Dear god.
Brodie: how should Fedora handle upstreams that don’t want to be packaged like the OBS case here where they did not want there to be a package in Fedora Flatpak or another example is obviously bottles which has made a lot of noise about the packaging
Lastly I want to touch on this closing question in light of recent events.
Miller: I think we probably shouldn’t do it. We should respect people’s wishes there. At least when it is an open source project working in good faith there. There maybe some other cases where the software, say theoretically there’s somebody who has commercial interests in some thing and they only want to release it from their thing even though it’s open source. We might want to actually like, well it’s open source we can provide things, we in that case we might end up you having a different name or something but yeah I can imagine situations where it makes sense to have it packaged in Fedora still but in general especially and when it’s a you know friendly successful open source project we should be friendly yeah. The name thing is something people forget history like that’s happened before with Mozilla with Firefox and Debian.
This is an excellent idea! But it gets better:
Miller: so I understand why they strict about that but it was kind of frustrating um you know we in Fedora have basically the same rules if you want to take Fedora Linux and do something out of it, make your own thing out of it, put your own software on whatever, you can do that but we ask you not to call it Fedora if it’s a fedora remix brand you can use in some cases otherwise pick your own name it’s all open source but you know the name is ours. yeah and I the Upstream as well it make totally makes sense.
Brodie: yeah no the name is completely understandable especially if you do have a trademark to already even if you don’t like it’s it’s common courtesy to not name the thing the exact same thing
Miller: yeah I mean and depending on the legalities like you don’t necessarily have to register a trademark to have the trademark kind of protections under things so hopefully lawyers you can stay out of the whole thing because that always makes the situations a lot more complicated, and we can just get along talking like human beings who care about making good software and getting it to users.
And I completely agree with all of this, all of it. But let’s break it down a bit, because no matter how nice the words and intentions, it hasn’t been working out this way with the Fedora community so far.
First, Miller agrees that the Fedora project should respect application developers’ wishes not to have their applications distributed by Fedora, and should instead distribute a renamed version if Fedora wishes to keep shipping them.
However, every single time a developer has asked for this, they have been ridiculed, laughed at, and straight-up bullied by Fedora packagers and the rest of the Fedora community. The response from other distribution projects and companies has been similar; it’s not just Fedora. You can look at Bottles’ story for the most recent example. It is very nice to hear Miller’s intentions, but they mean nothing in practice.
Then Miller proceeds to assure us that he understands why naming and branding are such a big deal to those projects (unlike the rest of the Fedora community, again). He further informs us that Fedora has the exact same policies and asks of people who want to fork Fedora, which makes the treatment that every single application developer has received when asking for the exact same thing even more outrageous.
What I didn’t know is that in certain cases you don’t even need to have a trademark yet to be covered by some of the protections, depending on jurisdiction and all.
And lastly we come to lawyers. Neither Fedora nor application developers would ever want it to come to this, and the Bottles developers stated multiple times that they don’t want to have to file for a trademark just to be taken seriously. Similarly, the OBS developers said that resorting to legal action would be the last thing they would want to do, and that they would rather have the issue resolved before that. But it took OBS, a project with a high enough profile and the resources required to acquire a trademark and threaten legal action, before the Fedora leadership cared enough to treat application developers like human beings and get the Fedora packagers and community members to comply (something they had stated multiple times they simply couldn’t do).
I hate all of this. Fedora and all the other distributions need to do better. They all claim to care about their users, but happily keep shipping broken and misconfigured software to them over the upstream version, just because it’s what aligns with their current interests. In this case it is the promotion of Fedora tooling and Fedora Flatpaks over the applications on Flathub that they have no control over. In previous incidents it was about branding applications like the rest of the system even though it made them unusable. And I can find and list a bunch of examples from other distributions just as easily.
They don’t care about their users; they care about their bottom line first and foremost. Any civil attempt at fixing issues gets ignored and laughed at, up until there is a threat of legal action, or big enough PR damage, drama, and shitshow that they can’t ignore it anymore and have to backtrack.
These are my two angry cents. Overall, I am not exactly sure how Matthew Miller managed, in a rushed and desperate attempt at damage control for the OBS drama, not only to make it worse, but to piss off the entire Flathub community at the same time. But what’s done is done; let’s see what we can do to address the issues that have festered and persisted for years now.
Happily I have survived the intense storms of January 2025, and the biting February temperatures, and I’m here with another status update post.
I made it to FOSDEM 2025, which was its usual self, a unique gathering of people who care about making ethical software, sharing cool technology, and eating delicious waffles. In the end, Jack Dorsey didn’t do a talk (see last month’s post for why I don’t think he’d have fit in very well); the FOSDEM organisers did meet the main protest organiser and had what seems to be a useful discussion on what happened.
Upstream QA
I did do a talk, two in fact, one titled “How to push your testing upstream”, which you can watch on the FOSDEM website (or you can just read the slides — I’ll wait). I got some good feedback, but it didn’t spark many conversations, and I don’t get the impression that my dream will become a reality any time in the near future. I’ll keep chipping away at this interesting problem in the occasional moments of downtime that I can dedicate to it.
If you also think this is an interesting problem, then please take a look at the talk slides and tell me what you think. If this project is going to move beyond a prototype then it will require several people pushing.
Two people offered to help provide infrastructure where GNOME can run builds and test suites, which is much appreciated. I had hoped this would mean some dedicated machines for QA testing; however, GNOME’s Equinix-sponsored ARM build farm is disappearing (for the same reason as Freedesktop.org’s), so we now need new sponsorship to maintain support for ARM devices.
I still consider the openQA infrastructure “beta quality”, and it’ll remain that way until at least 3 people are committed to ongoing maintenance of the infrastructure. I’m still the only person doing that right now.
Currently all openQA testing in GNOME is broken, apparently because the GitLab runners are too overloaded to run tests.
The GNOME booth at FOSDEM
Huge round of applause to Maria and Carlos for making sure there was a GNOME booth this year. I spent some time helping out, and it seemed we had a very small pool of volunteers. Shout out as well to Pablo and to camelCaseNick and anyone else who I didn’t see.
The booth is an interesting place as it poses questions such as: Is GNOME interesting?
Why is GNOME interesting?
Besides selling hats and T-shirts, we had a laptop running GNOME OS courtesy of Abderrahim, and a phone running postmarketOS + GNOME thanks to Pablo. Many people were drawn straight to the phone.
It suggests to me that GNOME on mobile is very interesting at the moment, which makes sense as it’s something new and shiny and not yet working very well; while GNOME on the desktop is less interesting, which also makes sense as it’s solid and useful and is designed to “get out of the way”.
I gave another talk entitled “Automated testing for mobile images using GNOME” (link) based on Adrien’s investigation into mobile testing. I showed this slide, with the main open source mobile platforms that I’m aware of:
I asked how many people are using a “G” based phone and four or five folk in the audience raised their hands. More folk than I expected!
GNOME is a project that depends on volunteer effort, so we need to be conscious of what’s interesting. People only have so much energy to spend on things we don’t find interesting. Credit goes to everyone who has worked on making the platform better for mobile use cases!
A phone running GNOME is cool but there’s only so much you can do on a device with no media and no apps installed. To make booth demos more interesting in future I would propose that we curate some media that can be preinstalled on a device. Please let me know if you have stuff to share!
What is GNOME?
This is another question that the booth raises. Are we building an operating system? The new gnome.org website begins: “An independent computing platform for everyone”, which seems a nice way to explain it. Let’s see how it goes in practice next time I’m trying to tell someone what they’re looking at.
Hey all, quick post today to mention that I added tracing support to the
Whippet GC library. If the support
library for LTTng is available when Whippet is
compiled, Whippet embedders can visualize the GC process. Like this!
Click above for a full-scale screenshot of the
Perfetto trace explorer processing the
nboyer
microbenchmark
with the parallel copying collector on a 2.5x heap. Of course no image
will have all the information; the nice thing about trace visualizers
like Perfetto is that you can zoom in to sub-microsecond spans to see exactly
what is happening, and have nice mouseovers and clicky-clickies. Fun times!
Annoyingly, this header file you write needs to be in one of the -I directories;
it can’t be just in the source directory, because lttng includes
it seven times (!!) using computed
includes
(!!!), and because the LTTng header file that does all the computed
including isn’t in your directory, GCC won’t find it. It’s pretty ugly.
The ugliest part, I would say. But, grit your teeth, because it’s worth it.
Finally you pepper your source with tracepoints, which you probably wrap
in some
macro
so that you don’t have to require LTTng, and so you can switch to other
tracepoint libraries, and so on.
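For readers who haven’t used LTTng-UST, here is a rough sketch of what such a provider header and wrapper macro can look like. The provider name (whippet), event (gc_begin), file names, and the HAVE_LTTNG guard are invented for illustration and are not Whippet’s actual ones; the overall shape follows the standard lttng-ust tracepoint provider pattern.

```c
/* gc_tracepoints.h -- hypothetical tracepoint provider header; as noted
 * above, it has to live in an -I directory because lttng re-includes it
 * via computed includes. */
#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER whippet

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "gc_tracepoints.h"

#if !defined(GC_TRACEPOINTS_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define GC_TRACEPOINTS_H

#include <lttng/tracepoint.h>

/* One event: the start of a collection, recording which collection it is. */
TRACEPOINT_EVENT(
    whippet, gc_begin,
    TP_ARGS(unsigned long, count),
    TP_FIELDS(ctf_integer(unsigned long, count, count)))

#endif /* GC_TRACEPOINTS_H */

#include <lttng/tracepoint-event.h>
```

Exactly one translation unit instantiates the probes, and the rest of the source goes through a thin wrapper so LTTng stays optional:

```c
/* gc_tracepoints.c -- instantiate the probes once. */
#define TRACEPOINT_CREATE_PROBES
#define TRACEPOINT_DEFINE
#include "gc_tracepoints.h"
```

```c
/* Elsewhere in the collector: a wrapper macro that compiles away when
 * LTTng support is not built in. */
#ifdef HAVE_LTTNG
#include "gc_tracepoints.h"
#define GC_TRACE(event, ...) tracepoint(whippet, event, ##__VA_ARGS__)
#else
#define GC_TRACE(event, ...) do { } while (0)
#endif

static unsigned long collection_count;

static void
collect (void)
{
  /* At the start of a collection, emit the event. */
  GC_TRACE(gc_begin, ++collection_count);
  /* ... do the actual collection ... */
}
```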
By which I mean, so close to having to write a Python script to make
graphs! Because LTTng writes its logs in so-called Common Trace Format,
which as you might guess is not very common. I have a colleague who
swears by it, that for him it is the lowest-overhead system, and indeed
in my case it has no measurable overhead when trace data is not being
collected, but his group uses custom scripts to convert the CTF data
that he collects to... GTKWave
(?!?!?!!).
In my case I wanted to use Perfetto’s UI, so I found a
script
to convert from CTF to the JSON-based tracing format that Chrome
profiling used to
use.
But, it uses an old version of Babeltrace that wasn’t available on my
system, so I had to write a new
script
(!!?!?!?!!), probably the most Python I have written in the last 20
years.
is it worth it?
Yes. God I love blinkenlights. As long as it’s low-maintenance going
forward, I am satisfied with the tradeoffs. Even the fact that I had to
write a script to process the logs isn’t so bad, because it let me get
nice nested events, which most stock tracing tools don’t allow you to
do.
I think the only thing that would be better is if tracepoints were a
part of Linux system ABIs – that there would be header files to emit
tracepoint metadata in all binaries, that you wouldn’t have to link to
any library, and the actual tracing tools would be intermediated by that
ABI in such a way that you wouldn’t depend on those tools at build-time
or distribution-time. But until then, I will take what I can get.
Happy tracing!
Today, just in time for this edition of This Week in GNOME and after 5 years, more than a thousand review comments, and multiple massive refactorings and rewrites, the legendary merge request mutter!1441 was merged.
This merge request introduces an additional render buffer for when Mutter is not able to keep up with the frames.
The technique commonly known as dynamic triple buffering can help in situations where the total time to generate a frame - including CPU and GPU work - is longer than one refresh cycle. This improves the concurrency capabilities of Mutter by letting the compositor start working on the next frame as early as possible, even when the previous frame isn’t displayed.
In practice, this kind of situation can happen with a sudden burst of activity in the compositor, for example when the GNOME Shell overview is opened after a period of low activity.
This should improve the perceived smoothness of GNOME, with fewer skipped frames and more fluid animations.
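For intuition, here is a toy model of the buffering decision; it is not Mutter’s actual code, and the types and names below are invented. With two buffers a late frame leaves the compositor with nothing free to paint into, while a dynamically enabled third buffer lets it start on the next frame right away:

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct {
  bool in_flight;                /* queued to, or held by, the display */
} RenderBuffer;

typedef struct {
  RenderBuffer buffers[3];
  size_t       n_buffers;        /* 2 normally, 3 when running behind */
} SwapChain;

/* Pick a buffer to paint the next frame into, or NULL if every buffer
 * is still in flight and this frame has to be delayed. */
static RenderBuffer *
acquire_paint_buffer (SwapChain *chain)
{
  for (size_t i = 0; i < chain->n_buffers; i++)
    if (!chain->buffers[i].in_flight)
      return &chain->buffers[i];
  return NULL;
}

/* The "dynamic" part: only enable the extra buffer while recent frames
 * took longer than one refresh cycle, so no extra latency is paid when
 * the compositor keeps up. */
static void
update_buffer_count (SwapChain *chain, double frame_time_ms, double refresh_ms)
{
  chain->n_buffers = (frame_time_ms > refresh_ms) ? 3 : 2;
}
```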
GNOME Shell
Core system user interface for things like launching apps, switching windows, system search, and more.
The long-awaited notification grouping was merged this week into GNOME Shell, just in time for GNOME 48. This was a huge effort by multiple parties, especially by Florian Müllner, who spent countless hours reviewing code changes. This is probably one of the most visible features added to GNOME thanks to the STF grant.
SemantiK got two releases last week: 1.4.0 and 1.5.0. They both bring new improvements, code refactoring, more translation work (thanks to @johnpetersa19 for the Brazilian Portuguese translation), and a revamped language selector!
The next big step would be to create more language packs; if you want to help with that, feel free to contact me via Matrix!
Also, last week I’ve been hard at work fixing bugs throughout all my apps and making them fully responsive on small screens, making them perfect for Mobile Linux! 🎉📱
Hex Colordle got some bug fixes and small improvements to the message shown when you lose.
GirCore version 0.6.2 was released. It features support for .NET 9 and modernizes the internal binding code, resulting in better garbage collector integration and the removal of reflection-based code. As a result there are several breaking changes.
A new beginner-friendly tutorial was contributed and can be found on the homepage. Please see the release notes for more details.
Due to a couple of unfortunate but important regressions in Fractal 10, we are releasing Fractal 10.1 so our users don’t have to wait too long for them to be addressed. This minor version fixes the following issues:
Some rooms were stuck in an unread state, even after reading them or marking them as read.
If you want to help us avoid regressions like these in the future, you could use Fractal Nightly! Or even better, you could pick up one of our issues and become part of the solution.
This past week volunteers working with the GNOME design and engagement teams debuted a brand new GNOME.org website—one that was met largely with one of two reactions:
It’s beautiful and modern, nice work! and
Where is the foot‽
You see, the site didn’t[^logo update] feature the GNOME logo at the top of the page—it just had the word GNOME, with the actual logo relegated to the footer. Admittedly, some folks reacted both ways (it’s pretty, but where’s the foot?). To me, it seems that the latter reaction was mostly the sentiment of a handful of long-time contributors who have understandably grown very cozy with the current GNOME logo:
[^logo update]: 2024-02-14: I wrote a quick merge request to use the logo on the website yesterday since I figured someone else would, anyway. I wanted to demonstrate what it would look like (and do it “right” if it was going to happen). That change has since been merged.
Why the foot?
The current GNOME logo is a four-toed foot that is sort of supposed to look like a letter G. According to legend (read: my conversations with designers and contributors who have been working with GNOME for more years than I have fingers and toes), it is basically a story of happenstance: an early wallpaper featured footprints in the sand, that was modified into an icon for the menu, that was turned into a sort of logo while being modified to look like the letter G, and then that version was flattened and cleaned up a bit and successfully trademarked by the GNOME Foundation.
So, why do people like it? My understanding (and please drop a comment if I’m wrong) is that it often boils down to one or more of:
It’s always been this way; as long as GNOME has had an official logo, it’s been a variation of the foot.
It’s a trademark so it’s not feasible to change it from a legal or financial perspective.
It has personality, and anything new would run the risk of being bland.
It has wide recognition at least within the open source enthusiast and developer space, so changing it would be detrimental to the brand equity.
What’s the problem?
I’m the first to admit that I don’t find the foot to be a particularly good logo. Over time, I’ve narrowed down my thoughts (and the feedback I’ve heard from others) into a few recurring reasons:
It doesn’t convey anything about the name or project, which by itself may be fine—many logos don’t directly. But it feels odd to have such a bold logo choice that doesn’t directly relate to the name “GNOME,” or to any specific aspect of the project.
It’s an awkward shape that doesn’t fit cleanly into a square or circle, especially at smaller sizes (e.g. for a social media avatar or favicon). It’s much taller than it is wide, and it’s lopsided weight-wise. This leads to frustrations from designers when trying to fit the logo into a square or circle space, leading to excessive amounts of whitespace and/or error-prone manual alignment compared to other elements.
It is actively off-putting and unappealing to at least some folks, including much of the GNOME design team, newer contributors, people outside the open source bubble—and apparently potentially entire cultures (which has been raised multiple times over the past 20+ years). Anecdotally, almost everyone new I’ve introduced GNOME to has turned their nose up at the “weird foot,” whether it’s when showing the website or rocking a tee or sticker to support the project. It doesn’t exactly set a great first impression for a community and modern computing platform. And yes, there are a bunch of dumb memes out there about GNOME devs all being foot fetishists which—while I’m not one to shame what people are into—is not exactly the brand image you want for your global, inclusive open source project.
It raises the question of what the role of the design team is: if the design team cannot be allowed to effectively lead the design of the project, what are we even doing? I think this is why the topic feels so existential to me as a member of the design team. User experience design spans from the moment someone first interacts with the brand of a product through to them actually using it day-to-day—and right now, the design team’s hands are tied for the first half of that journey.
The imbalance and complexity make for non-ideal situations
So what can we do?
While there are some folks who would push for a complete rebrand of GNOME—name included—I feel like there’s a softer approach we could take to the issue. I would also point out that the vast majority of people using GNOME—those on Ubuntu, RHEL, Fedora, Endless OS, Debian, etc.—are not seeing the foot anywhere. They’re seeing their distro’s logo, and many are using e.g. “Ubuntu” and may not even be aware they’re using GNOME.
Given all of the above, I propose that a path forward would be to:
Phase the foot out from any remaining user-facing spaces since it’s hard to work with in all of the contexts we need to use a logo, and it’s not particularly attractive to new users or welcoming to potential contributors—something we need to keep in mind as an aging open source project. This has been an unspoken natural phenomenon as members of the GNOME design team have soured a bit on trying to make designs look nice while accommodating the foot; as a result we have started to see less prominent usage of the foot e.g. on release notes, GNOME Circle, This Week in GNOME, the GNOME Handbook, the new website (before it was re-added), and in other spaces where the people doing the design work aren’t the most fond of it.
Commission a new brand logo to represent GNOME to the outside world; this would be the logo you’d expect to see at GNOME.org, on user-facing social media profiles, on event banners, on merch, etc. We’ve been mulling ideas over in the design team for literal years at this point, but it’s been difficult to pursue anything seriously without attracting very loud negative feedback from a handful of folks—perhaps if it is part of a longer-term plan explicitly including the above steps, it could be something we’d be able to pursue. And it could still be something quirky, cute, and whimsical! I personally don’t love the idea of something super generic as a logo—I think something that connects to “gnomes,” our history, and/or our modern illustration style would be great here. But importantly, it would need to be designed with the intent of its modern usage in mind, e.g. working well at small sizes, in social media avatars, etc.
Refresh the official GNOME brand guidelines by explicitly including our modern use of color, animation, illustrations, and recurring motifs (like the amazing wallpapers from Jakub!). This is something that has sort of started happening naturally, e.g. with the web team’s newer web designs and as the design team made the decision to move to Inter-based Adwaita Sans for the user interface—and this push continues to receive positive feedback from the community. But much of these efforts have not been reflected in the official project brand guidelines, causing an awkward disconnect between what we say the brand is and how it’s actually widely used and perceived.
Immortalize the foot as a mascot, something to be used in developer documentation, as an easter egg, and perhaps in contributor-facing spaces. It’s much easier to tell newcomers, “oh this is a goofy icon that used to be our logo—we love it, even if it’s kind of silly” without it having to represent the whole project from the outside. It remains a symbol for those “in the know” within the contributor community while avoiding it necessarily representing the entire GNOME brand.
Stretch goal: title-case Gnome as a brand name. We’ve long moved past GNOME being an acronym (GNU Network Object Model Environment?)—with a bit of a soft rebrand, I feel we could officially say that it’s spelled “Gnome,” especially if done so in an official logotype. As we know, much like the pronunciation of GNOME itself, folks will do what they want—and they’re free to!—but this would be more about how the brand name is used/styled in an official capacity. I don’t feel super strongly about this one, but it is awkward to have to explain why it’s called GNOME and all caps but not actually an acronym but it used to be—and why the logo is a foot—any time I tell someone what I contribute to. ;)
What do you think?
I genuinely think GNOME as a project and community is in a good place to move forward with modernizing our outward image a bit. Members of the design team like Jamie, kramo, Brage, Jakub, Tobias, Sam, and Allan and other contributors across the project like Alice, Sophie, and probably half a dozen more I am forgetting have been working hard at modernizing our UI and image when it comes to software—I think it’s time we caught up with the outward brand itself.
Hit me up on Mastodon or any of the links in the footer to tell me if you think I’m right, or if I’ve gotten this all terribly wrong. :)
As promised, I wanted to write a blog post about this application.
Enter TeX is a
TeX /
LaTeX
text editor previously named LaTeXila and then GNOME LaTeX. It is based on the
same libraries as gedit.
Renames
LaTeXila was a fun name that I picked up when I was a student back in 2009.
Then the project was renamed to GNOME LaTeX in 2017 but it was not a great
choice because of the GNOME trademark. Now it is called Enter TeX.
By having "TeX" in the name is more future-proof than "LaTeX", because there
is also Plain TeX, ConTeXt and
GNU Texinfo. Only LaTeX is
currently well supported by Enter TeX, but the support for other variants
would be a welcome addition.
Note that the settings and configuration files are automatically migrated from
LaTeXila or GNOME LaTeX to Enter TeX.
There is another rename: the namespace for the C code has been changed from
"Latexila" to "Gtex", to have a shorter and better name.
Other news
If you're curious, you can read the top of the
NEWS file,
it has all the details.
If I look at the
achievements file,
there is also the port from Autotools to Meson that was done recently and is
worth mentioning.
Known issue for the icons
Enter TeX unfortunately suffers from a combination of changes in
adwaita-icon-theme and GThemedIcon (part of GIO). Link to the
issue on GitLab.
Compare the two screenshots and choose the one you prefer:
As an interim solution, what I do is to install adwaita-icon-theme 41.0 in the
same prefix as where Enter TeX is installed (not a system-wide prefix).
To summarize
LaTeXila -> GNOME LaTeX -> Enter TeX
C code namespace: Latexila -> Gtex
Build system: Autotools -> Meson
An old version of adwaita-icon-theme is necessary.
This article was written by Sébastien Wilmet, currently the main developer
behind Enter TeX.
Previously, the gedit homepage was
on the GNOME wiki,
but the wiki has been retired. So a new website has been set up.
Some work on the website is still necessary, especially to better support
mobile devices (responsive web design), and also for printing the pages. If
you are a seasoned web developer and want to contribute, don't hesitate to get
in touch!
Wrapping-up statistics for 2024
The total number of commits in gedit and gedit-related git repositories in
2024 is: 1042. More precisely:
It counts all contributions, translation updates included.
The list contains two apps, gedit and
Enter TeX.
The rest are shared libraries (re-usable code available to create other text
editors).
Enter TeX is a TeX/LaTeX editor previously named LaTeXila and GNOME LaTeX.
It depends on
Gedit Technology
and drives some of its development. So it makes sense to include it alongside
gedit. A blog post about Enter TeX will most probably be written, to shed some
light on this project that started in 2009.
Onwards to 2025
The development continues! To get the latest news, you can follow this blog
or, alternatively, you can
follow me on Mastodon.
This article was written by Sébastien Wilmet, currently the main developer
behind gedit.
For at least the last 15 years, the translations of GNOME into Czech have been in excellent condition. With each release, I would only report that everything was translated, and for the last few years this was also true for the vast majority of the documentation. However, last year things started to falter. Contributors who had been carrying this for many years left, and there is no one to take over after them. Therefore, we have decided to admit it publicly: GNOME currently has no Czech translators, and unless someone new takes over, the translations will gradually decline.
Personally, I started working on GNOME translations in 2008 when I began translating my favorite groupware client – Evolution. At that time, the leadership of the translation team was taken over by Petr Kovář, who was later joined by Marek Černocký who maintained the translations for many years and did an enormous amount of work. Thanks to him, GNOME was almost 100% translated into Czech, including the documentation. However, both have completely withdrawn from the translations. For a while, they were replaced by Vojtěch Perník and Daniel Rusek, but the former has also left, and Dan has now come to the conclusion that he can no longer carry on the translations alone.
I suggested to Dan that instead of trying to appeal to those who the GNOME translations have relied on for nearly two decades—who have already contributed a lot and are probably facing some form of burnout or have simply moved on to something else after so many years—it would be better to reach out to the broader community to see if there is someone from a new generation who would be willing and energetic enough to take over the translations. Just as we did nearly two decades ago.
It may turn out that an essential part of this process will be that the GNOME translations into Czech will decline for some time. Because the same people have been doing the job for so many years, the community has gotten used to taking excellent translations for granted. But it is not a given: someone has to do the work. As more and more English terms appear in the GNOME interface, perhaps dissatisfaction will motivate someone to do something about it. After all, that was the motivation for the previous generation to get involved.
If someone like that comes forward, Dan and I are willing to help them with training and gradually hand over the project. We may both continue to contribute in a limited capacity, but the project needs someone new, ideally not just one person, but several, because carrying it alone is a path to burnout. Interested parties can contact us in the mailing list of the Czech translation team at diskuze-l10n-cz@lists.openalt.org.
I ended the talk with some puzzling results around generational
collection, which prompted yesterday’s
post.
I don’t have a firm answer yet. Or rather, perhaps for the splay
benchmark, it is to be expected that a generational GC is not great; but
there are other benchmarks that also show suboptimal throughput in
generational configurations. Surely it is some tuning issue; I’ll be
looking into it.
In an earlier blog post I wrote about a potential way of speeding up C++ compilations (or any language that has a big up-front cost). The basic idea is to have a process that reads in all stdlib header code and is then suspended. Compilations are done by sending the actual source file + flags to this process, which then forks and resumes compilation. Basically this is a way to persist the state of the compiler without writing (or executing) a single line of serialization code.
The obvious follow up question is what is the speedup of this scheme. That is difficult to say without actually implementing the system. There are way too many variables and uncertainties to make any sort of reasonable estimate.
So I implemented it.
Not in an actual compiler, heavens no, I don't have the chops for that. Instead I implemented a completely fake compiler that does the same steps a real compiler would need to take. It spawns the daemon process. It creates a Unix domain socket. It communicates with the running daemon. It produces output files. The main difference is that it does not actually do any compilation, instead it just sleeps to simulate work. Since all sleep durations are parameters, it is easy to test the "real" effect of various schemes.
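To make the moving parts concrete, here is a minimal sketch of that kind of fake compiler in Python (the socket path, durations and function names are all made up for illustration; the real experiment is its own project):
import os
import signal
import socket
import time

SOCKET_PATH = "/tmp/fake-compiler.sock"   # hypothetical path
STDLIB_PARSE_TIME = 4.0                   # assumed one-time cost of "parsing" the stdlib
PER_FILE_TIME = 1.0                       # assumed remaining per-file cost

def daemon():
    time.sleep(STDLIB_PARSE_TIME)                   # pay the stdlib cost once, up front
    signal.signal(signal.SIGCHLD, signal.SIG_IGN)   # reap forked children automatically
    if os.path.exists(SOCKET_PATH):
        os.unlink(SOCKET_PATH)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(SOCKET_PATH)
    server.listen()
    while True:
        conn, _ = server.accept()
        request = conn.recv(4096)                   # "source file + flags"
        if os.fork() == 0:                          # child inherits the preloaded state for free
            time.sleep(PER_FILE_TIME)               # simulate the remaining compilation work
            conn.sendall(b"compiled: " + request)   # pretend to write the object file
            os._exit(0)
        conn.close()

def compile_one(source):
    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    client.connect(SOCKET_PATH)
    client.sendall(source.encode())
    client.recv(4096)                               # block until the "object file" is done
    client.close()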
The default durations were handwavy estimates based on past experience. In past measurements, stdlib includes take by far the largest chunk of the total compilation time. Thus I estimated that compilation without this scheme would take 5 seconds per file whereas compilations with it would take 1 second. If you disagree with these assumptions, feel free to run the test yourself with your own time estimates.
The end result was that on this laptop that has 22 cores a project with 100 source files took 26 seconds to compile without the daemon and 7 seconds with it. This means the daemon version finished in just a hair over 25% of a "regular" build.
Wouldn't you want your compilations to finish in a quarter of the time with zero code changes? I sure would.
(In reality the speedup is probably less than that. How much? No idea. Someone's got to implement that to find out.)
For fun, I decided to ask an LLM to "explain the themes in the Yocto Project release names". It went about as well as I expected, to be honest:
The Yocto Project follows a thematic naming convention for its release names. The themes have evolved over time:
So far so good.
Types of Birds (Older Releases)
Early Yocto Project releases were named after different bird species. Examples include:
Incorrect, that was not one of the themes.
Dizzy (a reference to Dizzy Gillespie, but also sounds like a bird name)
A jazz musician that "sounds like a bird name"? One hell of a stretch, and not the reference.
Fido (a breed of pigeon)
From what I can tell, Fido is not a breed of pigeon. Also not the reference.
Jethro (a reference to Jethro Tull, but also resembles a bird-like theme)
Jethro Tull the rock band? Not the reference.
I do wonder why it decided the theme was types of birds, then made up one bird breed and listed two musicians that "sound like" bird names.
Continuing on...
Muppets Characters (Mid-Series)
The Yocto Project later transitioned to naming releases after characters from The Muppets, a popular puppet show created by Jim Henson. Examples include:
No, it didn't.
Krogoth (a lesser-known Muppet)
Krogoth is so lesser-known that the Muppets Wiki doesn't mention it, so I think this is made-up.
Morty (another Muppet character)
According to the Muppets Wiki "Morty is Rick Moranis' agent. He is seen in Muppets Tonight episode 202 [in 1997]".
Considering how incorrect everything else is I actually think this is more likely to be a coincidence than an actual reference to reality. There's been a lot of characters over the decades of shows.
Rocko (potentially a reference to Rocko from Rocko’s Modern Life, though not strictly Muppet-related)
"not strictly"? A cartoon from the 1990s by some of the people that later gave us Spongebob Squarepants is not the reference for this release.
Sci-Fi and Fantasy References (Current Theme)
The most recent trend in Yocto release names leans toward science fiction and fantasy references. Examples include:
Surprisingly, this is not the current theme.
Langdale (a reference to a location, possibly inspired by British landscapes, but also fits a fantasy tone)
Oh you're close, go on you can do it!
Mickledore (a name with a Tolkien-esque or fantasy feel)
I can only imagine why that is...
Nanbield (continuing the theme of mysterious, fantasy-like names)
I give up.
Remember people: verify everything that LLMs say, as they are first-rate bullshit artists.
These are my notes for setting up a “deploy-to-remote-webserver” workflow with ssh and an
sftp-only chroot, in case someone might find it useful.
Compiled from various bits and pieces around the internet.
Setting up SSH
Add a group: groupadd -r remote-deploy. This will add a system group
Create a folder for authorized keys for users in that group: mkdir -p /etc/ssh/authorized
Modify sshd_config:
AuthorizedKeysFile /etc/ssh/authorized/%u .ssh/authorized_keys
Match Group remote-deploy
ChrootDirectory %h
ForceCommand internal-sftp
AllowTcpForwarding no
X11Forwarding no
PasswordAuthentication no
Setting up the chroot jail
It is important that the path up until the home folder is owned by root:root. Below that, create
a folder that is owned by the user and any supplementary group you might need to access (e.g. www-data)
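As a concrete sketch of that layout (the user name and paths are only examples, adjust to your setup):
useradd --no-create-home --home-dir /srv/deploy/deploy-user --gid remote-deploy --shell /usr/sbin/nologin deploy-user
mkdir -p /srv/deploy/deploy-user/htdocs
chown root:root /srv/deploy /srv/deploy/deploy-user
chmod 755 /srv/deploy /srv/deploy/deploy-user
chown deploy-user:www-data /srv/deploy/deploy-user/htdocs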
Obtain the host key from the remote host. To be able to make the variable masked in Gitlab later, it needs to be encoded: ssh-keyscan -q deploy-host.example.com 2>/dev/null | base64 -w0
Log into your gitlab instance, go to Settings->CI/CD->Variables
Click on “Add variable”
Set “Masked and Hidden” and “Protect variable”
Add a comment like “SSH host key for deploy host”
Set name to SSH_HOST_KEY
Paste output of the keyscan command
Create another variable with the same Settings
Add a comment like “SSH private key for deploy user”
Set name to SSH_PRIVATE_KEY
Paste output of base64 -w0 deploy-user, where deploy-user is the private key generated above
In my previous post, I alluded to an exciting development for PipeWire. I’m now thrilled to officially announce that Asymptotic will be undertaking several important tasks for the project, thanks to funding from the Sovereign Tech Fund (now part of the Sovereign Tech Agency).
Some of you might be familiar with the Sovereign Tech Fund from their funding for GNOME, GStreamer and systemd – they have been investing in foundational open source technology, supporting the digital commons in key areas, a mission closely aligned with our own.
We will be tackling three key areas of work.
ASHA hearing aid support
I wrote a bit about our efforts on this front. We have already completed the PipeWire support for single ASHA hearing aids, and are actively working on support for stereo pairs.
Improvements to GStreamer elements
We have been working through the GStreamer+PipeWire todo list, fixing bugs and making it easier to build audio and video streaming pipelines on top of PipeWire. A number of usability improvements have already landed, and more work on this front continues.
A Rust-based client library
While we have a pretty functional set of Rust bindings around the C-based libpipewire already, we will be creating a pure Rust implementation of a PipeWire client, and provide that via a C API as well.
There are a number of advantages to this: type and memory safety being foremost, but we can also leverage Rust macros to eliminate a lot of boilerplate (there are community efforts in this direction already that we may be able to build upon).
This is a large undertaking, and this funding will allow us to tackle a big chunk of it – we are excited, and deeply appreciative of the work the Sovereign Tech Agency is doing in supporting critical open source infrastructure.
If you use SteamOS and you like to install third-party tools or modify the system-wide configuration some of your changes might be lost after an OS update. Read on for details on why this happens and what to do about it.
As you all know SteamOS uses an immutable root filesystem and users are not expected to modify it because all changes are lost after an OS update.
However this does not include configuration files: the /etc directory is not part of the root filesystem itself. Instead, it’s a writable overlay and all modifications are actually stored under /var (together with all the usual contents that go in that filesystem such as logs, cached data, etc).
/etc contains important data that is specific to that particular machine like the configuration of known network connections, the password of the main user and the SSH keys. This configuration needs to be kept after an OS update so the system can keep working as expected. However the update process also needs to make sure that other changes to /etc don’t conflict with whatever is available in the new version of the OS, and there have been issues due to some modifications unexpectedly persisting after a system update.
SteamOS 3.6 introduced a new mechanism to decide what to keep after an OS update, and the system now keeps a list of configuration files that are allowed to be kept in the new version. The idea is that only the modifications that are known to be important for the correct operation of the system are applied, and everything else is discarded [1].
However, many users want to be able to keep additional configuration files after an OS update, either because the changes are important for them or because those files are needed for some third-party tool that they have installed. Fortunately the system provides a way to do that, and users (or developers of third-party tools) can add a configuration file to /etc/atomic-update.conf.d, listing the additional files that need to be kept.
There is an example in /etc/atomic-update.conf.d/example-additional-keep-list.conf that shows what this configuration looks like.
Sample configuration file for the SteamOS updater
Developers who are targeting SteamOS can also use this same method to make sure that their configuration files survive OS updates. As an example of an actual third-party project that makes use of this mechanism you can have a look at the DeterminateSystems Nix installer:
As usual, if you encounter issues with this or any other part of the system you can check the SteamOS issue tracker. Enjoy!
[1] A copy is actually kept under /etc/previous to give the user the chance to recover files if necessary, and up to five previous snapshots are kept under /var/lib/steamos-atomupd/etc_backup.
You want background playback? You get background playback! Shortwave 5.0 is now available and finally continues playback when you close the window, resolving the “most popular” issue on GitLab!
Shortwave uses the new Flatpak background portal for this, which means that the current playback status is now also displayed in the “Background Apps” menu.
The recording feature has also been overhauled. I have addressed a lot of user feedback here, e.g. you can now choose between 3 different modes:
Save All Tracks: Automatically save all recorded tracks
Decide for Each Track: Temporarily record tracks and save only the ones you want
Record Nothing: Stations are played without recording
In addition to that the directory for saving recorded tracks can be customized, and users can now configure the minimum and maximum duration of recordings.
There is a new dialog window with additional details and options for current or past played tracks. For example, you no longer need to worry about forgetting to save your favorite track when the recording is finished – you can now mark tracks directly during playback so that they are automatically saved when the recording is completed.
You don’t even need to open Shortwave for this, thanks to the improved notifications you can decide directly when a new track gets played whether you want to save it or not.
Of course the release also includes the usual number of bug fixes and improvements. For example, the volume can now be changed using the keyboard shortcut.
With apps made for different form factors, it can be hard to find what works for your specific device. For example, we know it can be a bit difficult to find great apps that are actually designed to be used on a mobile phone or tablet. To help solve this, we’re introducing a new collection: On the Go.
As the premier source of apps for Linux, Flathub serves a wide range of people across a huge variety of hardware: from ultra powerful developer workstations to thin and light tablets; from handheld gaming consoles to a growing number of mobile phones. Generally any app on Flathub will work on a desktop or laptop with a large display, keyboard, and mouse or trackpad. However, devices with only touch input and smaller screen sizes have more constraints.
Using existing data and open standards, we’re now highlighting apps on Flathub that report as being designed to work on these mobile form factors. This new On the Go collection uses existing device support data submitted by app developers in their MetaInfo, the same spec that is used to build those app’s listings for Flathub and other app store clients. The collection is featured on the Flathub.org home page for all devices.
Many of these apps are adaptive across screen sizes and input methods; you might be surprised to know that your favorite app on your desktop will also work great on a Linux phone, tablet, or Steam Deck’s touch screen. We aim to help reveal just how rich and well-rounded the app ecosystem already is for these devices—and to give app developers another place for their apps to shine and be discovered.
As of this writing there are over 150 apps in the collection, but we expect there are cases where app developers have not provided the requisite device support data.
If you’re the creator of an app that should work well on mobile form factors but isn’t featured in the collection, take a minute to double-check the documentation and your own app’s MetaInfo to ensure it’s accurate. Device support data can also be used by native app store clients across form factors to determine what apps are displayed or how they are ranked, so it’s a good idea to ensure it’s up to date regardless of what your app supports.
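For reference, this device support data lives in the app’s MetaInfo file and, per my reading of the AppStream spec, looks roughly like the following; which relations your app should declare depends on what it actually supports, so double-check against the documentation mentioned above:
<requires>
  <display_length compare="ge">360</display_length>
</requires>
<supports>
  <control>keyboard</control>
  <control>pointing</control>
  <control>touch</control>
</supports>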
In the past I may have spoken critically on Truetype fonts and their usage in PDF files. Recently I have come to the conclusion that it may have been too harsh and that Truetype fonts are actually somewhat nice. Why? Because I have had to add support for CFF fonts to CapyPDF. This is a font format that comes from Adobe. It encodes textual PostScript drawing operations into binary bytecode. Wikipedia does not give dates, but it seems to have been developed in the late 80s - early 90s. The name CFF is an abbreviation for "complicated font format".
Double-checks notes.
Compact font format. Yes, that is what I meant to write. Most people reading this have probably not ever even seen a CFF file so you might be asking why is supporting CFF fonts even a thing nowadays? It's all quite simple. Many of the Truetype (and especially OpenType) fonts you see are not actually Truetype fonts. Instead they are Transfontners, glyphs in disguise. It is entirely valid to have a Truetype font that is merely an envelope holding a CFF font. As an example the Noto CJK fonts are like this. Aggregation of different formats is common in font files, and is the main reason OpenType fonts have like four different and mutually incompatible ways of specifying color emoji. None of the participating entities were willing to accept anyone else's format so the end result was to add all of them. If you want Asian language support, you have to dive into the bowels of the CFF rabid hole.
As most people probably do not have sufficient historical perspective, let's start by listing out some major computer science achievements that definitely existed when CFF was being designed.
File format magic numbers
Archive formats that specify both the offset and size of the elements within
Archive formats that afford access to their data in O(number of items in the archive) rather than O(number of bytes in the file)
Data compression
CFF chooses to not do any of this nonsense. It also does not believe in consistent offset types. Sometimes the offsets within data objects refer to other objects by their order in the index they are in. Sometimes they refer to number of bytes from the beginning of the file. Sometimes they refer to number of bytes from the beginning of the object the offset data is written in. Sometimes it refers to something else. One of the downsides of this is that while some of the data is neatly organized into index structures with specified offsets, a lot of it is just free floating in the file and needs the equivalent of three pointer dereferences to access.
Said offsets are stored with a variable width encoding like so:
This makes writing subset CFF font files a pain. In order to write an offset value at some location X, you first must serialize everything up to that point to know where the value would be written. To know the value to write you have to serialize the entire font up to the point where that data is stored. Typically the data comes later in the file than its offset location. You know what that means? Yes, storing all these index locations and hotpatching them afterwards once you find out where the data being pointed to actually ended up. Be sure to compute your patching locations correctly lest you end up in lengthy debugging sessions where your subset font files do not render correctly. In fairness all of the incorrect writes were within the data array and thus 100% memory safe, and, really, isn't that the only thing that actually matters?
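The general shape of that hotpatching, as a generic sketch rather than CapyPDF's actual code, is to write a placeholder where each offset goes, remember the location, and patch it once the final position of the target is known:
data = bytearray()
patches = []   # (location in data, key of the object the offset should point to)
targets = {}   # key -> absolute position of the object once it has been written

def write_offset_placeholder(key):
    patches.append((len(data), key))
    data.extend(b"\x00\x00\x00\x00")   # fixed 4-byte offset, to keep the sketch simple

def mark_target(key):
    targets[key] = len(data)

# ... serialize the font, calling the two helpers at the appropriate spots ...

def apply_patches():
    for location, key in patches:
        data[location:location + 4] = targets[key].to_bytes(4, "big")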
One of the main data structures in a CFF file is a font dictionary stored in, as the docs say, "key-value pairs". This is not true. The "key-value dictionary" is neither key-value nor is it a dictionary. The entries must come in a specific order (sometimes) so it is not a dictionary. The entries are not stored as key-value pairs but as value-key pairs. The more accurate description of "value-key somewhat ordered array" does lack some punch so it is understandable that they went with common terminology. The backwards ordering of elements to some people confusion bring might, but it perfect sense makes, as the designers of the format a long history with PostScript had. Unknown is whether some of them German were.
Anyhow, after staring directly into the morass of madness for a sufficient amount of time the following picture emerges.
Final words
The CFF specification document contains the data needed to decipher CFF data streams in a nice tabular format, which would be easy to convert to an enum. Trying to copy it fails with an error message saying that the file has prohibited copypasting. This is a bit rich coming from Adobe, whose current stance seems to be that they can take any document opened with their apps and use it for AI training. I'd like to conclude this blog post by sending the following message to the (assumed) middle manager who made the decision that publicly available specification documents should prohibit copypasting:
YOU GO IN THE CORNER AND THINK ABOUT WHAT YOU HAVE DONE! AND DON'T EVEN THINK ABOUT COMING BACK UNTIL YOU ARE READY TO APOLOGIZE TO EVERYONE FOR YOUR ACTIONS!
At LaOficina we are currently working on a project to digitize family photographs and one of the challenges is the correct traceability of intellectual property. In this case we have encountered the difficulty of knowing the exact conditions of the received material, a situation that is not new and which is already addressed by the RightsStatements vocabulary, which includes 12 terms that are used, among others, by the Europeana community. Therefore, it is obvious that we need to add this vocabulary to our Wikibase Suite instance. By the way, as an exercise, I have taken the opportunity to compose it from scratch as an independent OWL ontology. It is very simple, but probably it has some conceptual flaws. If it is useful to someone, please use it without restrictions: righstatements-ontology.ttl
So as we are a little bit into the new year, I hope everybody had a great break and a good start to 2025. Personally I had a blast having gotten the kids an air hockey table as a Yuletide present :). Anyway, I wanted to put this blog post together talking about what we are looking at for the new year and to let you all know that we are hiring.
Artificial Intelligence
One big item on our list for the year is looking at ways Fedora Workstation can make use of artificial intelligence. Thanks to IBM's Granite effort we now have an AI engine that is available under proper open source licensing terms and which can be extended for many different usecases. Also the IBM Granite team has an aggressive plan for releasing updated versions of Granite, incorporating new features of special interest to developers, like making Granite a great engine to power IDEs and similar tools. We have been brainstorming various ideas in the team for how we can make use of AI to provide improved or new features to users of GNOME and Fedora Workstation. This includes making sure Fedora Workstation users have access to great tools like RamaLama, that we make sure setting up accelerated AI inside Toolbx is simple, that we offer a good Code Assistant based on Granite and that we come up with other cool integration points.
Wayland
The Wayland community had some challenges last year, with frustrations boiling over a few times due to new protocol development taking a long time. Some of it was simply the challenge of finding enough people across multiple projects with the time to follow up and help review, while other parts were genuine disagreements about what kind of things should be Wayland protocols or not. That said, I think that problem has been somewhat resolved, with a general understanding now that we have the ‘ext’ namespace for a reason: to allow people to have a space to review and make protocols without an expectation that they will be universally implemented. This allows protocols of interest only to a subset of the community to go into ‘ext’, and thus allows protocols that might not be of interest to GNOME and KDE, for instance, to still have a place to live.
The other, more practical problem is that of having people available to help review protocols or provide reference implementations. In a space like Wayland where you need multiple people from multiple different projects, it can be hard at times to get enough people involved at any given time to move things forward, as different projects have different priorities and of course the developers involved might be busy elsewhere. One thing we have done to try to help out there is to set up a small internal team, led by Jonas Ådahl, to discuss in-progress Wayland protocols and assign people the responsibility to follow up on those protocols we have an interest in. This has been helpful both as a way for us to develop internal consensus on the best way forward, but also I think our contribution upstream has become more efficient due to this.
All that said, I also believe Wayland protocols will fade a bit into the background going forward. We are currently at the last stage of a community ‘ramp up’ on Wayland and thus there is a lot of focus on it, but once we are over that phase we will probably see what we saw with X.org extensions over time: that most of the time new extensions are so niche that 95% of the community don't pay attention or care. There will always be some new technology creating the need for important new protocols, but those are likely to come along at a relatively slow cadence.
High Dynamic Range
HDR support in GNOME Control Center
As for concrete Wayland protocols, the single biggest thing for us for a long while now has of course been HDR support for Linux. And it was great to see the HDR protocol get merged just before the holidays. I also want to give a shout out to Xaver Hugl from the KWin project. As we were working to ramp up HDR support in both GNOME Shell and GTK+ we ended up working with Xaver and using KWin for testing, especially for the GTK+ implementation. Xaver was very friendly and collaborative and I think HDR support in both GNOME and KDE is more solid thanks to that collaboration, so thank you Xaver!
PipeWire
I have been sharing a lot of cool PipeWire news here in the last couple of years, but things might slow down a little as we go forward, just because all the major features are basically working well now. The PulseAudio support is working well and we get very few bug reports against it now. The reports we are getting from the pro-audio community are that PipeWire works just as well as or better than JACK for most people in terms of, for instance, latency, and when we do see issues with pro-audio it tends to be more often caused by driver issues triggered by PipeWire trying to use the device in ways that JACK didn't. We have been resolving those by adding more and more options to hardcode certain settings in PipeWire, so that just as with JACK you can force PipeWire to not try things the driver has problems with. Of course fixing the drivers would be the best outcome, but some of these pro-audio cards are so niche that it is hard to find developers who want to work on them or who have hardware to test with.
We are still maturing the video support, although even that is getting very solid now. The screen capture support is considered fully mature, but the camera support is still a bit of a work in progress, partially because we are going through a generational change in the camera landscape, with UVC cameras being supplanted by MIPI cameras. Resolving that generational change isn't just on PipeWire of course, but it does make for a more volatile landscape to mature something in. Of course an advantage here is that applications using PipeWire can easily switch between V4L2 UVC cameras and libcamera MIPI cameras, thus helping users have a smooth experience through this transition period.
But even with the challenges posed by this we are moving rapidly forward with Firefox PipeWire camera support being on by default in Fedora now, Chrome coming along quickly and OBS Studio having PipeWire support for some time already. And last but not least SDL3 is now out with PipeWire camera support.
MIPI camera support
Hans de Goede, Milan Zamazal and Kate Hsuan keep working on making sure MIPI cameras work under Linux. MIPI cameras are a step forward in terms of technical capabilities, but at the moment a bit of a step backward in terms of open source, as a lot of vendors believe they have ‘secret sauce’ in the MIPI camera stacks. Our work focuses mostly on getting the Intel MIPI stack fully working under Linux, with the Lattice MIPI aggregator being the biggest hurdle currently for some laptops. Luckily Alan Stern, the USB kernel maintainer, is looking at this now as he got the hardware himself.
Flatpak
Some major improvements to the Flatpak stack have happened recently, with the USB portal merged upstream. The USB portal came out of the Sovereign fund funding for GNOME and it gives us a more secure way to give sandboxed applications access to your USB devices. In a somewhat related note we are still working on making system daemons installable through Flatpak, with the usecase being applications that have a system daemon to communicate with a specific piece of hardware, for example (usually through USB). Christian Hergert has this on his todo list, but we are at the moment waiting for Lennart Poettering to merge some pre-requisite work into systemd that we want to base this on.
Accessibility
We are putting in a lot of effort towards accessibility these days. This includes working on portals and Wayland extensions to help facilitate accessibility, working on the ORCA screen reader and its dependencies to ensure it works great under Wayland, working on GTK4 to ensure we have top notch accessibility support in the toolkit, and more.
GNOME Software
Last year Milan Crha landed the support for signing the NVIDIA driver for use on secure boot. The main feature Milan is looking at now is getting support for DNF5 into GNOME Software. Doing this will resolve one of the longest standing annoyances we had, which is that the dnf command line and GNOME Software would maintain two separate package caches. Once the DNF5 transition is done that should be a thing of the past, and thus there is less risk of disk space being wasted on an extra set of cached packages.
Firefox
Martin Stransky and Jan Horak have been working hard at making Firefox ready for the future, with a lot of work going into making sure it supports the portals needed to function as a flatpak and by bringing HDR support to Firefox. In fact Martin just got his HDR patches for Firefox merged this week. So with the PipeWire camera support, Flatpak support and HDR support in place, Firefox will be ready for the future.
We are hiring! Looking for 2 talented developers to join the Red Hat desktop team
We have 2 job openings on the Red Hat desktop team! So if you are interested in joining us in pushing the boundaries of desktop Linux forward, please take a look and apply. For these 2 positions we are open to remote workers across the globe, and while the job ads list specific seniorities we are somewhat flexible on that front too for the right candidate. So be sure to check out the two job listings and get your application in! If you ever wanted to work full time on GNOME and related technologies, this is your chance.
In talking with someone about “preferred form for modification” over the weekend at FOSDEM, the FSF (now sort-of-OSI?) four freedoms came up. They’re not bad, but they’re extremely developer-focused in their “use case”. This is not a new observation, of course, but the changed technical and social context of AI seems to be bringing it to the fore that different users have different variations on the values and why open is so important to them.
Here’s a very quick, rough cut of what I proposed instead as key freedoms, and who those matter for. These are not exclusive (eg in many cases developers also care about replication; businesses obviously frequently care about modification, etc.) but compared to the original four freedoms, convey that different stakeholders have different needs—all of which have been served, but not explicitly called out as metrics, by the FOSS movement over the years.
modification (foremost, for developers): We like to play with things, and often can make them better in the process. Enough said.
replication (foremost, for scientists): It’s not really science if you can’t replicate it, and you can’t replicate it if it isn’t really yours. High overlap with modification and transparency, but something other constituencies can often live without.
transparency (foremost, for governments): you can’t effectively regulate what you can’t understand, and it’s never OK for something that needs to be regulated to be opaque. (Obviously we allow this all the time but as we’re all reminded this week that leads to all kinds of malignancies.)
cost of re-use (foremost, for business): This is perhaps the least unique, but it’s important to remember that statistically businesses very rarely modify open source. They pick and choose what they use, and compose them in almost entirely unique ways, but that’s a property of architectural design and cost, not of ability to modify.
These of course get you to mostly the same place as the traditional four freedoms. But importantly for the discussion of “open in AI”, replication and transparency require a focus on data, while for many businesses (and certainly for most developers) weights are sufficient and may well be preferred in many/most cases.
One could imagine other use cases and priorities, of course. But I wanted to bang this out quick and there was a nice symmetry to having four. Leave more on the various socials :)
It’s that time again, and we are in our 7th year doing this conference (if you include LAS GNOME).
This year, LAS will be held in Tirana, Albania, and we are going all out to make it the best conference representing apps on Linux.
For those who don’t know or have not heard of Linux App Summit, the idea is to have desktops work together to help enable application developers to build apps on the Linux platform. It’s a parallel effort to the Flathub and Snapstore app stores.
LAS is positioned to promote third-party developers, to inform the ecosystem about the advances on the desktop, and to give developers, designers, and contributors working on the desktops a chance to meet each other and discuss how to move the platform forward as a community.
LAS’s success depends on all of you. If you're passionate about making Linux a viable alternative to proprietary platforms, then we need your help! Linux enables local control of your technology that you can adapt to your needs, and it builds local ecosystems enabling a local economy. A community driven platform will protect your privacy, safety, and security without bowing to shareholders or to politicians. This is the place to tell us about it!
So, I ask all of you to attend LAS and help drive our numbers up. Have a great idea that you want to share with this ecosystem of developers? Implemented something on a phone, in an automobile, or something else? Have a great concept? Want to update all of us on what the next version of your app is going to do?
Through LAS, we can find out what is missing, what we need to do to move forward, and what trends we should be looking at.
Feel free to reach out to our team and we’ll be happy to answer any questions at info@linuxappsummit.org.
You can submit a talk at https://linuxappsummit.org/cfp. You can register for the conference at https://linuxappsummit.org/register.
Can’t come in person or give the talk in person? Not to worry! LAS is a hybrid conference and you can attend remotely, even though we would love to meet you in person.
Finally, we are looking for sponsors. If you know of a company who would make a great sponsor, please reach out. If you’re interested in sponsoring LAS, you can email us at sponsors@linuxappsummit.org. More info is at https://linuxappsummit.org/sponsor.
As my Outreachy internship with GNOME concludes, I’m reflecting on the journey, the effort, the progress, and, most importantly, the future I envision in tech.
The past few months as an intern have been both challenging and incredibly rewarding. From quickly learning a new programming language to meet project demands, to embracing test-driven development and tackling progressively complex tasks, every experience has been a stepping stone. Along the way, I’ve honed my collaboration and communication skills, expanded my professional network, and developed a deep appreciation for the power of community and open source.
This Outreachy internship has been a pivotal experience, solidifying these values and teaching me the importance of embracing challenges and continuously improving my technical and interpersonal skills, preparing me for the next stage of my engineering career. The supportive environment I found in the GNOME community has been instrumental to my growth. I’m incredibly grateful for my mentor, Federico, who exemplified what true mentorship should be. He showed me the importance of collaborative spirit, genuine understanding of team members, and even taking time for laughter – all of which made transitioning to a new environment seamless and comfortable. His guidance fostered open communication, ensuring seamless synchronization and accessibility. Just before writing this, I had a call with Federico, Felipe (the GNOME Internship Coordinator, an awesome person!), and Aryan to discuss my career post-internship.
While the career advice was invaluable, what truly stood out was their collaborative willingness to support my growth. This dedication to fostering progress is something I deeply admire and will strive to make a core part of my own engineering culture.
My journey from intern to engineer has been significantly shaped by the power of community, and I’m now ready to push myself further, take on new challenges, and build a solid, impactful, and reputable career in technology.
Skills
I possess a strong foundation in several key technologies essential for software and infrastructure engineering.
My primary languages are Golang and Rust, allowing me to build high-performance and reliable systems. I also have experience with Python. I’m a quick learner and eager to expand my skillset further.
Career Goals
My ultimate career aspiration is to secure a role that challenges me to grow as an engineer while contributing to impactful and innovative projects. I am particularly drawn to:
Cultivating a culture of creativity and structured development while optimizing myself to become the best engineer I can be—just like my Outreachy experience.
Developing and sustaining critical infrastructure that powers large-scale, globally utilized systems, ensuring reliability, security, and seamless operation.
Exploring opportunities at MANGA or other big tech companies to work on complex systems, bridging software engineering, security, and infrastructure.
Motivations
While the challenge of growth is my primary motivation, the financial stability offered by these roles is also important, enabling me to further invest in my personal and professional development.
Relocation is a significant draw, offering the opportunity to experience different cultures, gain new perspectives and immerse myself in a global engineering community.
As an introverted and private person, I see this as a chance to push beyond my comfort zone, engage with a diverse range of collaborators, and build meaningful connections.
Job Search
I am actively seeking software engineering, infrastructure, and site reliability roles. I am particularly interested in opportunities at large tech companies, where I can contribute to complex systems and further develop my expertise in Golang and Rust after my Outreachy internship with GNOME concludes in March 2025.
Exploring the Opportunities
I’m eager to explore software engineering, open source, infrastructure, and site reliability roles. If your team is seeking someone with my skills and experience, I’d welcome a conversation. Connect with me via email or LinkedIn.
I’m excited about the future and ready to take the next step in my career. With the foundation I’ve built during this internship, I’m confident in my ability to make a meaningful impact in the tech industry.
We just had a GTK hackfest at FOSDEM. A good time for an update on what's new and exciting in GTK, with an eye towards 4.18.
Requirements
You can no longer call gdk_display_get_default() or gdk_display_open() before gtk_init(). This was causing problems due to incomplete initialization, so we made it fail with a (hopefully clear) error message. If you are affected by this, the usual fix is to just call gtk_init() as early as possible.
On Windows, we have a hard requirement on Windows 10 now. All older versions are long unsupported, and having to deal with a maze of ifdefs and unavailable APIs makes development harder than it should be. Dropping support for very old versions also simplifies the code down the stack, in Pango and GLib.
The same idea applies to macOS, where we now require macOS 10.15.
Spring cleaning
The old GL renderer has been removed. This may be unwelcome news for people stuck on very old drivers and hardware. But we will continue to make the new renderers work as well as possible on the hardware that they can support.
The X11 and Broadway backends have been deprecated, as a clear signal that we intend to remove them in GTK 5. In the meantime, they continue to be available. We have also deprecated GtkShortcutsWindow, since it needs a new design. The replacement will appear in libadwaita, hopefully next cycle.
It is worth reminding everybody that there is no need to act on deprecations until you are actively porting your app to the next major version of GTK, which is not on the horizon yet.
Incremental improvements
Widget layout and size allocation has received quite a bit of attention this cycle, with the goal of improving performance (by avoiding binary search as much as possible) and correctness. Nevertheless, these changes have some potential for breakage, so if you see wrong or suboptimal layouts in applications, please let us know.
GTK has had difficulties for a while getting its pointer sizes right with fractional scaling on Wayland, but this should all be solved in GTK 4.18. No more huge pointers. Fixing this also required changes on the mutter side.
New beginnings
Accessibility in GTK 4.18 is taking a major step forward, with the new AccessKit backend, which gives us accessibility on Windows and macOS, for the very first time. The at-spi backend is still the default on Linux, and has seen a number of improvements as well.
And, maybe the biggest news: We have an Android backend now. It is still experimental, so you should expect some rough edges and loose ends. For example, there is no GL renderer support yet. But it is exciting that you can just try gtk4-demo on your phone now, and have it mostly work.
Introduction
Built on jj and fzf, jj-fzf offers a text-based user interface (TUI) that simplifies complex version control operations like rebasing, squashing, and merging commits. This post will guide you through integrating jj-fzf into your Emacs workflow, allowing you to switch between Emacs and jj…
I finally got distracted enough to finish my website that has been saying "under construction" for over a year, since I set up this server for my Sharkey instance.
I've wanted to do this for a while - one, so that I actually have a home page, and two, so that I can move my blog here instead of using WordPress.
Setup
Initially I wanted to use a static generator like Hugo, but then I discovered that the web server I'm using (Caddy) can do templates. That's perfectly enough for a simple blog, so I don't actually need a separate generator. This very article is a markdown document, parsed and embedded into a nice-looking page and RSS feed using templates.
In addition, I get all the niceties I couldn't get before:
Using Markdown instead of HTML with WordPress-specific additions for e.g. image galleries.
Proper code listings with syntax highlighting (you'll have to view this on the original page though, not from Planet GNOME or your RSS reader):
Just simple niceties like smaller monospace font - I do this a lot and I don't particularly like the way the WordPress theme I was using presents it.
Finally, while migrating my old posts I had an opportunity to update broken links (such as to the old documentation) and add missing alt text as in quite a few images I set WordPress description instead of alt text and never noticed. If you really want the old version, it's still on the old website and each migrated article links to its original counterpart.
So yeah, so far this was fairly pleasant, I expected much worse. There are still a bunch of things I want to add (e.g. previewing images at full size on click), but it's not like my old blog had that either.
As I've done some times previous years, I thought it would be appropriate to give a bit of a status update on goings on with regards to Maps before heading for this year's FOSDEM
Refreshed Location Marker
One of the things that landed since the December update are the new revamped location markers
The marker now uses the system accent color, and sports a “torch” indicating the current heading (when known).
And the circle indicating approximate accuracy of the location now has an outer contour.
And on these notes, I would also like to take the opportunity to mention the BeaconDB project (https://beacondb.net/), with the goal of building a community-sourced wireless positioning database. It is compatible with the now-defunct Mozilla Location Service (MLS) and works as a drop-in replacement with GeoClue.
Improved Visuals for Public Transit Routes Lists
The “badges” showing line numbers/names for public transit journeys, and the markers shown on the map when selecting a trip, have been improved to avoid some odd label alignments and to give better-looking contours (and better contrast against light or dark backgrounds). The labels are now drawn directly using GSK instead of piggy-backing on a GtkLabel and doing some Cairo drawing on top of that. One additional benefit here is that it also gets rid of some of the remaining usages of the GdkPixbuf APIs (which will be gone in a future GTK 5).
Transitous Move to MOTIS 2
On the subject of transit, Transitous has now migrated to the new MOTIS 2 API. And consequently the support in Maps has been updated to use the new API (this is also backported to the stable 47.3 release).
The new API is easier to use, and more in-line with the internal data types in Maps, so the code was also a bit simpler. Also now with the new API we get the walking instructions directly from MOTIS instead of using GraphHopper to compute walking “legs”. This has made searching for routes in Maps quite a bit faster as well.
FOSDEM
And when talking about FOSDEM, me, Felix Gündling, and Jonah Brüchert will host a talk about Transitous in the “Railways and Open Transport” devroom (K.6.401) on Sunday @ 16:30 CET
In October 2024 some members of our community gathered in Medellín, Colombia, for another edition of GNOME Latam. Some of us joined remotely for a schedule packed with talks about GNOME and its ecosystem.
Python 2 is really old, not maintained and should not be used by anyone in any modern environment, but software is complex and python2 still exists in some modern Linux distributions like Tumbleweed. This past week the request to delete Python 2 from Tumbleweed was created and is going through the staging process.
The main package keeping Python 2 around for Tumbleweed was Gimp 2, which doesn't depend directly on Python 2, but some of its plugins depend on it. Now that we have Gimp 3 in Tumbleweed, we are finally able to remove it.
Python 2
The first version of Python 2 was released around 2000, so it's now 25 years old. That's not quite the whole story, because software is a living creature, so as you may know, Python 2 grew during the following years with patch and minor releases until 2020, when the final release, 2.7.18, came out. But even though it was maintained until 2020, it was deprecated for a long time, so everyone "should" have had time to migrate to python 3.
Py3K
I started to write python code around the year 2006. I was bored during a summer internship in my third year of computer science, and I decided to learn something new. In the following months and years I heard a lot about the futuristic Python 3000, but I didn't worry too much until it was officially released and the migration started to be a thing.
If you have ever written python2 code you will know about some of the main differences from python3:
print vs print()
raw_input() vs input()
unicode() vs str
...
Some tools appeared to make it easier to migrate from python2 to python3, and it was even possible to have code compatible with both versions at the same time using the __future__ module.
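For example, a small script could be made to run unchanged on both interpreters with something like this (a trivial illustration, not taken from any real project):
from __future__ import print_function, division, unicode_literals

try:
    input = raw_input          # python2: expose raw_input under the python3 name
except NameError:
    pass                       # python3: input() already behaves like this

name = input("name? ")
print("hello", name)           # print as a function works on both
print(3 / 2)                   # 1.5 on both, thanks to the division import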
You should have heard about the six package, 2 * 3 = 6. Maybe the
name should be five instead of six, because it was a Python "2 and 3"
compatibility library.
Python in Linux command line
When python3 started to be the main python, there was some discussion about how to handle that in different Linux distributions. The /usr/bin/python binary was present and everyone expected that to be python2, so almost everyone decided to keep that relation forever and distribute python3 as /usr/bin/python3, so you can have both installed without conflicts and there's no confusion.
But python is an interpreted language, and if you have python code, you can't tell if it's python2 or python3. The shebang line in executable python scripts should point to the correct interpreter, and that should be enough: #!/usr/bin/python3 will use the python3 interpreter and #!/usr/bin/python will use python2.
But this is not always true: some distributions, like Archlinux, use python3 for /usr/bin/python, and if you create a virtualenv with python3, the python binary points to the python3 interpreter, so a shebang like #!/usr/bin/python could be valid for a python3 script.
In any case, the recommended and safest way is to always use the python3 binary, because that way it'll work correctly "everywhere".
Goodbye
It's time to say goodbye to python2; at least we can remove it now from Tumbleweed. It'll be around for some more time in Leap, but it's time to let it go.
The jj-fzf project has just seen a new release with version 0.25.0. This
brings some new features, several smaller improvements, and some important
changes to be aware of. For the uninitiated, jj-fzf is a feature-rich command-line
tool that integrates jj and fzf, offering fast commit navigation with…
Have you ever wondered how SVG files render complex text layouts with different styles and directions so seamlessly? At the core of this magic lies text layout algorithms—an essential component of SVG rendering that ensures text appears exactly as intended.
Text layout algorithms are vital for rendering SVGs that include styled or bidirectional text. However, before layout comes text extraction—the process of collecting and organizing text content and properties from the XML tree to enable accurate rendering.
The Extraction Process
SVGs, being XML-based formats, resemble a tree-like structure similar to HTML. To extract information programmatically, you navigate through nodes in this structure.
Each node in the XML tree holds critical details for implementing the SVG2 text layout algorithm, including:
Text content
Bidi-control properties (manage text directionality)
Styling attributes like font and spacing
Understanding Bidi-Control
Bidi-control refers to managing text direction (e.g., Left-to-Right or Right-to-Left) using special Unicode characters. This is crucial for accurately displaying mixed-direction text, such as combining English and Arabic.
A Basic Example
<text>
foo
<tspan>bar</tspan>
baz
</text>
The diagram and code sample show the structure librsvg creates when it parses this XML tree.
Here, the <text> element has three children:
A text node containing the characters “foo”.
A <tspan> element with a single child text node containing “bar”.
Another text node containing “baz”.
When traversed programmatically, the extracted text from this structure would be “foobarbaz”.
To extract text from the XML tree:
Start traversing nodes from the <text> element.
Continue through each child until the final closing tag.
Concatenate character content into a single string.
While this example seems straightforward, real-world SVG2 files introduce additional complexities, such as bidi-control and styling, which must be handled during text extraction.
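Before getting to those complexities, the simple case can be pictured outside of librsvg with a few lines of Python using the standard library's XML module (a rough illustration only; librsvg's Rust implementation has to handle much more than this):
import xml.etree.ElementTree as ET

def extract_text(element):
    # Depth-first traversal: the element's own text, then each child's text
    # (recursively), then the "tail" text that follows that child.
    parts = [element.text or ""]
    for child in element:
        parts.append(extract_text(child))
        parts.append(child.tail or "")
    return "".join(parts)

root = ET.fromstring("<text>foo<tspan>bar</tspan>baz</text>")
print(extract_text(root))   # prints "foobarbaz"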
Handling Complex SVG Trees
Real-world examples often involve more than just plain text nodes. Let’s examine a more complex XML tree that includes styling and bidi-control:
In this example, the <text> element has four children:
A text node containing “Hello”.
A <tspan> element with font-style: bold, containing the text “bold”.
A <tspan> element with bidi-control set to RTL (Right-To-Left), containing Arabic text “مرحبا”.
Another <tspan> element with font-style: italic, containing “world”.
This structure introduces challenges, such as:
Styling: Managing diverse font styles (e.g., bold, italic).
Whitespace and Positioning: Handling spacing between nodes.
Bidirectional Control: Ensuring proper text flow for mixed-direction content.
Programmatically extracting text from such structures involves traversing nodes, identifying relevant attributes, and aggregating the text and bidi-control characters accurately.
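One way to picture the bidi-control part of that aggregation (purely illustrative, not necessarily how librsvg represents it internally) is to wrap the right-to-left span in Unicode isolate characters while concatenating:
RLI = "\u2067"   # RIGHT-TO-LEFT ISOLATE
PDI = "\u2069"   # POP DIRECTIONAL ISOLATE

# The four children from the example above, with the Arabic tspan isolated
# so that it does not disturb the directionality of the surrounding text.
extracted = "Hello " + "bold " + RLI + "مرحبا" + PDI + " world"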
Why Test-Driven Development Matters
One significant insight during development was the use of Test-Driven Development (TDD), thanks to my mentor Federico. Writing tests before implementation made it easier to visualize and address complex scenarios. This approach turned what initially seemed overwhelming into manageable steps, leading to robust and reliable solutions.
Conclusion
Text extraction is the foundational step in implementing the SVG2 text layout algorithm. By effectively handling complexities such as bidi-control and styling, we ensure that SVGs render text accurately and beautifully, regardless of direction or styling nuances.
If you’ve been following my articles and feel inspired to contribute to librsvg or open source projects, I’d love to hear from you! Drop a comment below to share your thoughts, ask questions, or offer insights. Your contributions—whether in the form of questions, ideas, or suggestions—are invaluable to both the development of librsvg and the ongoing discussion around SVG rendering.
In my next article, we’ll explore how these extracted elements are processed and integrated into the text layout algorithm. Stay tuned—there’s so much more to uncover!
Let’s talk about our journey of creating something from scratch (almost?) for our Electronics I final project. It wasn’t groundbreaking like a full-blown multi-featured DC power supply, but it was a fulfilling learning experience.
Spoiler alert: mistakes were made, lessons were learned, and yes, we had fun.
Design and Calculations
Everything began with brainstorming and sketching out ideas. This was our chance to put all the knowledge from our lectures to the test—from diode operating regions to voltage regulation. It was exciting but also a bit daunting.
The first decision was our power supply's specifications. We aimed for a 12V output—a solid middle ground between complexity and functionality. Plus, the 5V option was already claimed by another group. For rectification, we chose a full-wave bridge rectifier due to its efficiency compared to the half-wave alternative.
Calculations? Oh yes, there were plenty! Transformers, diodes, capacitors, regulators—everything had to line up perfectly on paper before moving to reality.
We started at the output, aiming for a stable 12V. To achieve this, we selected the LM7812 voltage regulator. It was an obvious choice: simple, reliable, and readily available. With an input range of 14.5 to 27V, it could easily provide the 12V we needed.
Since the LM7812 can handle a maximum input voltage of 27V, a 12-0-12V transformer would have been perfect. However, only a 6-0-6V transformer was available, so we made do with that. As for the diodes, we used 1N4007s, as they are readily available and can handle our desired specifications.
Assuming the regulator’s input is 15.5V, which is also the peak output of the rectifier $ V_{\text{p(rec)}} $, the peak voltage at the secondary side of the transformer $ V_{\text{p(sec)}} $ must be about 1.4V higher to cover the two diode drops of the bridge, roughly 16.9V; the 6-0-6V transformer, used across its full 12V (rms) secondary, delivers about $ 12\sqrt{2} \approx 17 $ V, which fits. The filter capacitor needed for roughly 3% ripple into a 240Ω load with full-wave rectification (120Hz) then works out to:
$$ C = \frac{15.5\,\text{V}}{120\,\text{Hz} \times 240\,\Omega \times 0.03 \times 12\,\text{V}} = 1495\,\mu\text{F} \approx 1.5\,\text{mF} $$
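For context, this sizing follows from the standard full-wave ripple approximation, assuming (as the numbers suggest) that the ripple factor here is taken as the peak-to-peak ripple relative to the DC output, with $ f = 120\,\text{Hz} $, $ R_L = 240\,\Omega $, $ r = 0.03 $, and $ V_{\text{DC}} = 12\,\text{V} $:
$$ V_{\text{r(pp)}} \approx \frac{V_{\text{p(rec)}}}{f R_L C}, \qquad r \approx \frac{V_{\text{r(pp)}}}{V_{\text{DC}}} \;\Rightarrow\; C \approx \frac{V_{\text{p(rec)}}}{f R_L \, r \, V_{\text{DC}}} $$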
Here is the final schematic diagram of our design based on the calculations:
Construction
Moving on, we had to put our design into action. This was where the real fun began. We had to source the components, breadboard the circuit, design the PCB, and 3D-print the enclosure.
Breadboarding
The breadboarding phase was a mix of excitement and confusion. We had to double-check every connection and component.
It was a tedious process, but the feeling when the 12V LED lit up? Priceless.
PCB Design, Etching and Soldering
For the PCB design, we used EasyEDA. It was our first time using it, but it was surprisingly intuitive. We first had to recreate the schematic diagram, then lay out the components and traces.
Tracing the components on the PCB was a bit tricky, but we managed to get it done. It was like playing connect-the-dots, except no overlapping lines are allowed, since we only had a single-layer PCB.
In the end, it was satisfying to see the final design.
We had to print it on sticker paper, transfer it to the copper board, cut it, drill it, etch it, and solder the components. It was a long process, but the result was worth it.
Did we also mention that we soldered the regulator in backwards on our first attempt? Oops. But hey, we learned from it.
Custom Enclosure
To make our project stand out, we decided to 3D-print a custom enclosure. Designing it on SketchUp was surprisingly fun.
It was also satisfying to see what was once just a software model come to life as a physical object.
Testing
Testing day was a rollercoaster. Smoke-free? Check. Output voltage stable? Mostly.
Line Regulation Via Varying Input Voltage
For the first table, we varied the input voltage and measured the actual input voltage, the transformer output, the filter output, the regulator output, and the percent voltage regulation.
| Trial No. | Input Voltage ($ V_{\text{rms}} $) | Transformer Output ($ V_{\text{rms}} $) | Filter Output ($ V_{\text{DC}} $) | Regulator Output ($ V_{\text{DC}} $) | % Voltage Regulation |
| --- | --- | --- | --- | --- | --- |
| 1 | 213 | 12.1 | 13.58 | 11.97 | 5 |
| 2 | 214 | 11.2 | 13.82 | 11.92 | 5 |
| 3 | 215 | 10.7 | 13.73 | 12.03 | 10 |
| 4 | 216 | 11.5 | 13.80 | 11.93 | 10 |
| 5 | 217 | 10.8 | 13.26 | 12.01 | 9 |
| 6 | 218 | 11.0 | 13.59 | 11.92 | 9 |
| 7 | 220 | 11.3 | 13.74 | 11.92 | 2 |
| 8 | 222 | 12.5 | 13.61 | 11.96 | 2 |
| 9 | 224 | 12.3 | 13.57 | 11.93 | 10 |
| 10 | 226 | 11.9 | 13.88 | 11.94 | 10 |
| Average | - | 11.53 | 13.67 | 11.953 | 5.5 |
Note: The load resistor is a 22Ω resistor.
Load Regulation Via Varying Load Resistance
For the second table, we varied the load resistance (with the input held at 220V rms) and measured the transformer output, the filter output, the regulator output, and the percent voltage regulation.
| Trial No. | Load Resistance ($ \Omega $) | Transformer Output ($ V_{\text{rms}} $) | Filter Output ($ V_{\text{DC}} $) | Regulator Output ($ V_{\text{FL(DC)}} $) | % Voltage Regulation |
| --- | --- | --- | --- | --- | --- |
| 1 | 220 | 10.6 | 11.96 | 10.22 | 16.4385 |
| 2 | 500 | 10.7 | 12.83 | 11.43 | 4.1120 |
| 3 | 1k | 11.1 | 13.05 | 11.46 | 3.8394 |
| 4 | 2k | 11.1 | 13.06 | 11.48 | 3.6585 |
| 5 | 5k | 10.6 | 13.20 | 11.49 | 3.5683 |
| 6 | 6k | 10.9 | 13.26 | 11.78 | 1.0187 |
| 7 | 10k | 11.2 | 13.39 | 11.85 | 0.4219 |
| 8 | 11k | 11.3 | 13.91 | 11.87 | 0.2527 |
| 9 | 20k | 11.3 | 13.53 | 11.89 | 0.0841 |
| 10 | 22k | 11.1 | 13.27 | 11.90 | 0 |
| Average | - | 10.99 | 13.15 | 11.54 | 3.3394 |
Note: The primary voltage applied to the transformer was 220V in RMS. The $ V_{\text{NL(DC)}} $ used in computing the % voltage regulation is 11.9 V.
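For reference, the percent voltage regulation in this table follows the usual definition, which reproduces the tabulated values (e.g. trial 1: $ (11.9 - 10.22)/10.22 \approx 16.44\% $):
$$ \%VR = \frac{V_{\text{NL(DC)}} - V_{\text{FL(DC)}}}{V_{\text{FL(DC)}}} \times 100\% $$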
Data Interpretation
Looking at the tables, the LM7812 did a great job keeping the output mostly steady at 12V, even when we threw in some wild input voltage swings—what a champ! That said, when the load resistance became too low, it struggled a bit, showing the limits of our trusty (but modest) 6-0-6V transformer. On the other hand, our filtering capacitors stepped in like unsung heroes, keeping the ripples under control and giving us a smooth DC output.
Closing Words
This DC power supply project was a fantastic learning experience—it brought classroom concepts to life and gave us hands-on insight into circuit design and testing. While it performed well for what it is, it’s important to note that this design isn’t meant for serious, high-stakes applications. Think of it more as a stepping stone than a professional-grade benchmark.
Overall, we learned a lot about troubleshooting, design limitations, and real-world performance. With a bit more fine-tuning, this could even inspire more advanced builds down the line. For now, it’s a win for learning and the satisfaction of making something work (mostly) as planned!
Special thanks to our professor for guiding us and to my amazing groupmates—Roneline, Rhaniel, Peejay, Aaron, and Rohn—for making this experience enjoyable and productive (ask them?). Cheers to teamwork and lessons learned!
If you have any questions or feedback, feel free to leave a comment below. We’d love to hear your thoughts or critiques. Until next time, happy tinkering!
I released Crosswords-0.3.14 this week. This is a checkpoint release—there are a number of experimental features that are still under development. However, I wanted to get a stable release out before changing things too much. Download the apps on flathub! (game, editor)
Almost all the work this cycle happened in the editor. As a result, this is the first version of the editor that’s somewhat close to my vision and that I’m not embarrassed giving to a crossword constructor to use. If you use it, I’d love feedback as to how it went.
Read on for more details.
Libipuz
Libipuz got a version bump to 0.5.0. Changes include:
Adding GObject-Introspection support to the library. This meant a bunch of API changes to fix methods that were C-only. Along the way, I took the time to standardize and clean up the API.
Documenting the library. It’s about 80% done, and has some tutorials and examples. The API docs are here.
Validating both the docs and the introspection data. As mentioned in my last post, Philip implemented a nonogram app on top of libipuz in TypeScript. This work gave me confidence in the overall API approach.
Porting libipuz to Rust. I worked with GSoC student Pranjal and Federico on this. We got many of the leaf structures ported and have an overall approach to the main class hierarchy. Progress continues.
The main goal for libipuz in 2025 is to get a 1.0 version released and available, with some API guarantees.
Autofill
I have struggled to implement the autofill functionality for the past few years. The simple algorithm I wrote would fill out 1/3 of the board, and then get stuck. Unexpectedly, Sebastian showed up and spent a few months developing a better approach. His betterfill algorithm is able to fill full grids a good chunk of the time. It’s built around failing fast in the search tree, and some clever heuristics to force that to happen. You can read more about it at his site.
NOTE: filling an arbitrary grid is NP-hard. It’s very possible to have grids that can’t be easily solved in a reasonable time. But as a practical matter, solving — and failing to solve — is faster now.
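For a rough intuition of the fail-fast idea, here is a toy Rust sketch: always branch on the most constrained open slot, and abandon a branch as soon as any open slot has zero candidates left. The Slot and Grid types and the word-list handling are invented for illustration; this is not the betterfill implementation.

```rust
// Toy sketch of fail-fast backtracking fill; not the actual betterfill code.
#[derive(Clone)]
struct Slot {
    cells: Vec<usize>, // indices into the grid's character array
}

struct Grid {
    chars: Vec<Option<char>>, // None = unfilled cell
    slots: Vec<Slot>,
}

fn fits(grid: &Grid, slot: &Slot, word: &str) -> bool {
    slot.cells.len() == word.chars().count()
        && slot
            .cells
            .iter()
            .zip(word.chars())
            .all(|(&i, c)| grid.chars[i].map_or(true, |g| g == c))
}

fn candidates<'a>(grid: &Grid, slot: &Slot, words: &'a [&'a str]) -> Vec<&'a str> {
    words.iter().filter(|&&w| fits(grid, slot, w)).copied().collect()
}

fn is_open(grid: &Grid, slot: &Slot) -> bool {
    slot.cells.iter().any(|&i| grid.chars[i].is_none())
}

fn has_dead_end(grid: &Grid, words: &[&str]) -> bool {
    grid.slots
        .iter()
        .any(|s| is_open(grid, s) && candidates(grid, s, words).is_empty())
}

fn fill(grid: &mut Grid, words: &[&str]) -> bool {
    // Pick the open slot with the fewest candidates (most constrained first).
    let mut best: Option<(usize, usize)> = None; // (candidate count, slot index)
    for idx in 0..grid.slots.len() {
        let slot = &grid.slots[idx];
        if !is_open(grid, slot) {
            continue; // already filled by crossings (not re-validated here)
        }
        let n = candidates(grid, slot, words).len();
        if best.map_or(true, |(bn, _)| n < bn) {
            best = Some((n, idx));
        }
    }
    let Some((_, slot_idx)) = best else {
        return true; // every cell is filled
    };

    let slot = grid.slots[slot_idx].clone();
    for word in candidates(grid, &slot, words) {
        let saved = grid.chars.clone();
        for (&i, c) in slot.cells.iter().zip(word.chars()) {
            grid.chars[i] = Some(c);
        }
        // Fail fast: only recurse if no open slot has been starved of options.
        if !has_dead_end(grid, words) && fill(grid, words) {
            return true;
        }
        grid.chars = saved; // backtrack
    }
    false
}

fn main() {
    // Tiny 2x2 demo grid: cells 0 1 / 2 3, two across and two down slots.
    let mut grid = Grid {
        chars: vec![None; 4],
        slots: vec![
            Slot { cells: vec![0, 1] }, // row 1 across
            Slot { cells: vec![2, 3] }, // row 2 across
            Slot { cells: vec![0, 2] }, // column 1 down
            Slot { cells: vec![1, 3] }, // column 2 down
        ],
    };
    let words = ["on", "no"];
    assert!(fill(&mut grid, &words));
    let filled: String = grid.chars.iter().map(|c| c.unwrap()).collect();
    println!("{filled}"); // "onno" -> ON / NO
}
```

In a real filler, candidate counting would be backed by indexed word lists rather than linear scans, and fully crossed-in slots would also be validated; the point here is only the prune-early shape of the search.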
I also fixed an annoying issue with the Grid editor. Previously, there were subtabs that would switch between the autofill and edit modes. Tabs in tabs are a bad interface, and I found it particularly clunky to use. However, it let me have different interaction modes with the grid. I talked with Scott a bit about it and he made an off-the-cuff suggestion of merging the tabs together and adding grid selection to the main edit tab. So far it’s working quite nicely, though a tad under-discoverable.
Word Definitions and Substrings
The major visible addition to the clue phase is the definition tab. Definitions are pulled from Wiktionary and included in a custom word list stored with the editor. I decided on a local copy because Wiktionary doesn’t have an API for pulling definitions and I wanted to keep all operations fast. I’m able to look up and render the definitions extremely quickly.
I also made progress on a big design goal for the editor: the ability to work with substrings in the clue phase. For those who are unfamiliar with cryptic crosswords, answers are frequently broken down into substrings which each have their own subclues to indicate them. The idea is to show possibilities for these indicators to provide ideas for puzzle constructors.
It’s a little confusing to explain, so perhaps an example would help. In this video the answers to some cryptic clues are broken down into their parts. The tabs show how they could have been constructed.
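To make the substring idea concrete, here is a tiny illustrative sketch (not the editor’s actual code) that enumerates the two-part splits of an answer, the kind of charade breakdown a cryptic clue might indicate (e.g. CARPET = CAR + PET):

```rust
// Illustrative only: enumerate the ways an answer splits into two
// contiguous substrings. The real editor handles richer breakdowns.
fn two_part_splits(answer: &str) -> Vec<(String, String)> {
    let chars: Vec<char> = answer.chars().collect();
    (1..chars.len())
        .map(|i| {
            (
                chars[..i].iter().collect::<String>(),
                chars[i..].iter().collect::<String>(),
            )
        })
        .collect()
}

fn main() {
    for (a, b) in two_part_splits("CARPET") {
        println!("{a} + {b}");
    }
}
```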
Next steps?
Testing: I’m really happy with how the cryptic authoring features are coming together, but I’m not convinced it’s useful yet. I want to try writing a couple of crosswords to be sure.
Acrostic editor: We’re going to land Tanmay’s acrostic editor early in the cycle so we have maximum time to get it working.
Nonogram player: There are a few API changes needed for nonograms.
Word score: I had a few great conversations with Erin about scoring words — time for a design doc.
Game cleanup: I’m overdue for a cycle of cleaning up the game. I will go through the open bugs there and clean them up.
Thanks again to all supporters, translators, packagers, testers, and contributors!
As a new year’s resolution, I’ve decided to improve SEO for this blog, so from now on my posts will be in FAQ format.
What are Sam Thursfield’s favourite music releases of 2025?
Glad you asked. I posted my top 3 music releases here on Mastodon. (I also put them on Bluesky, because why not? If you’re curious, Christine Lemmer-Webber has a great technical comparison between Bluesky and the Fediverse).
That’s quite a boring question, but ok. I used FastAPI for the first time. It’s pretty good.
And I have been learning the theory behind the C4 model, which I like more and more. The trick with the C4 model is that it doesn’t claim to solve your problems for you. It’s a tool to help you think in a more structured way so that you have to solve them yourself. More on that in a future post.
Should Jack Dorsey be allowed to speak at FOSDEM 2025?
Now that is a very interesting question!
FOSDEM is a “free and non-commercial” event, organised “by the community for the community”, the community in this case being free and open source software developers. It’s the largest event of its kind, and organising such a beast for little to no money, 25 years running, is a huge achievement. We greatly appreciate the effort the organisers put in! I will be at FOSDEM ’25, talking about automated QA infrastructure, helping out at the GNOME booth, and wandering wherever fate leads me.
Jack Dorsey is a Silicon Valley billionaire; you might remember him from selling Twitter to Elon Musk, touting blockchains, and quitting the board of Bluesky because they added moderation features to the protocol. Many people rolled their eyes at the announcement that he will be speaking at FOSDEM this year in a talk titled “Infusing Open Source Culture into Company DNA”.
Drew DeVault stepped forward to organise a protest against Dorsey speaking, announced under the heading “No Billionaires at FOSDEM”. More than one person I’ve spoken to is interested in joining. Other people I know think it doesn’t make sense to protest one keynote speaker out of the 1000s who have stepped on the stage over the years.
Protests are most effective when they clearly articulate what is being protested and what we want to change. The world in 2025 is a complex, messy place, though, and it is changing faster than I can keep up with. Here’s an attempt to think through why this is happening.
Firstly, the “Free and Open Source Software community” is a convenient fiction: in reality it is made up of many overlapping groups, with an interest in technology sometimes being the only thing we have in common. I can’t explain all of the nuance here, but let’s look at one particular axis, which we could call pro-corporate vs. anti-corporate sentiment.
What I mean by corporate here is quite specific but if you’re alive and reading the news in 2025 you probably have some idea what I mean. A corporation is a legal abstraction which has some of the same rights as a human — it can own property, pay tax, employ people, and participate in legal action — while not actually being a human. A corporation can’t feel guilt, shame, love or empathy. A publicly traded corporation must make a profit — if it doesn’t, another corporation will eat it. (Credit goes to Charlie Stross for this metaphor :-). This leads to corporations that can behave like psychopaths, without being held accountable in the way that a human would. Quoting Alexander Biener:
Elites avoiding accountability is nothing new, but in the last three decades corporate avoidance has reached new lows. Nobody in the military-industrial complex went to jail for lying about weapons of mass destruction in Iraq. Nobody at BP went to jail for the Deepwater oil spill. No traders or bankers (outside of Iceland) were incarcerated for the 2008 financial crash. No one in the Sackler family was punished after Purdue Pharma peddled the death of half a million Americans.
I could post some more articles but I know you have your own experiences of interacting with corporations. Abstractions are useful, powerful and dangerous. Corporations allowed huge changes and improvements in technology and society to take place. They have significant power over our lives. And they prioritize making money over all the things we as individual humans might prioritize, such as fairness, friendliness, and fun.
On the pro-corporate end at FOSDEM, you’ll find people who encourage use of open source in order to share effort between companies, to foster collaboration between teams in different locations and in different organisations, to reduce costs, to share knowledge, and to exploit volunteer labour. When these people are at work, they might advocate publishing code as open source to increase trust in a product, or in the hope that it’ll be widely adopted and become ubiquitous, which may give them a business advantage. These people will use the term “open source” or “FOSS” a lot; they probably have well-paid jobs or businesses in the software industry.
Topics on the pro-corporate side this year include: making a commercial product better (example), complying with legal regulations (example), or consuming open source in corporate software (example).
On the anti-corporate end, you’ll find people whose motivations are not financial (although they may still have a well-paid job in the software industry). They may be motivated by certain values and ethics or an interest in things which aren’t profitable. Their actions are sometimes at odds with the aims of for-profit corporations, such as fighting planned obsolescence, ensuring you have the right to repair a device you bought, and the right to use it however you want even when the manufacturer tries to impose safeguards (sometimes even when you’re using it to break a law). They might publish software under restrictive licenses such as the GNU GPL3, aiming to share it with volunteers working in the open while preventing corporations from using their code to make a profit. They might describe what they do as Free Software rather than “open source”.
Talks on the anti-corporate side might include: avoiding proprietary software (example, example), fighting Apple’s app store monopoly (example), fighting “Big Tech” (example), sidestepping a manufacturer’s restrictions on how you can use your device (example), or the hyper-corporate dystopia depicted in Snow Crash (example).
These are two ends of a spectrum. Neither end is hugely radical. The pro-corporate talks discuss complying with regulations, not lobbying to remove them. The anti-corporate talks are not suggesting we go back to living as hunter-gatherers. And most topics discussed at FOSDEM are somewhere between these poles: technology in a personal context (example), in an educational context (example), history lessons (example).
Many talks are “purely technical”, which puts them in the centre of this spectrum. It’s fun to talk about technology for its own sake and it can help you forget about the messiness of the real world for a while, and even give the illusion that software is a purely abstract pursuit, separate from politics, separate from corporate power, and separate from the experience of being a human.
But it’s not. All the software that we discuss at FOSDEM is developed by humans, for humans. Otherwise we wouldn’t sit in a stuffy room to talk about it, would we?
The coexistence of the corporate and the anti-corporate worlds at FOSDEM is part of its character. Few of us are exclusively at the anti-corporate end: we all work on laptops built by corporate workers in a factory in China, and most of us have regular corporate jobs. And few of us are entirely at the pro-corporate end: the core principle of FOSS is sharing code and ideas for free rather than for profit.
There are many “open source” events that welcome pro-corporate speakers, but are hostile to anti-corporate talks. Events organised by the Linux Foundation rarely have talks about “fighting Big Tech”, and you need $700 in your pocket just to attend them. FOSDEM is one of the largest events where folk on the anti-corporate end of the axis are welcome.
Now let’s go back to the talk proposed by Manik Surtani and Jack Dorsey titled “Infusing Open Source Culture into Company DNA”. We can assume it’s towards the pro-corporate end of the spectrum. You can argue that a man with a billion dollars to his name has opportunities to speak which the anti-corporate side of the Free Software community can only dream of, so why give him a slot that could go to someone more deserving?
I have no idea how the main track and keynote speakers at FOSDEM are selected. One of the goals of the protest explained here is “to improve the transparency of the talk selection process, sponsorship terms, and conflict of interest policies, so protests like ours are not necessary in the future.”
I suspect there may be something more at work too. The world in 2025 is a tense place — we’re living through a climate crisis, combined with a housing crisis in many countries, several wars, a political shift to the far-right, and ever increasing inequality around the world. Corporations, more powerful than most governments, are best placed to help if they wanted to, but we see very little news about that happening. Instead, they burn methane gas to power new datacenters and recommend we “mainline AI into the veins of the nation”.
None of this is uniquely Jack Dorsey’s fault, but as the first Silicon Valley billionaire to step on the stage of a conference with a strong anti-corporate presence, it may be that he has more to learn from us than we do from him. I hope that, as a long time advocate of free speech, he is willing to listen.
I’ve just tagged fwupd 2.0.4 — with lots of nice new features, and most importantly with new protocol support to allow applying the latest dbx security update.
The big change to the uefi-dbx plugin is the switch to an ISO date as a dbx version number for the Microsoft KEK.
The original trick of ‘counting the number of Microsoft-owned hashes’ worked really well, right up until Microsoft started removing hashes from the distributed signed dbx file. In 2023 we started ‘fixing up’ the version based on the last-added checksum, to make the device report an artificially lower version than in reality. This fails with the latest DBXUpdate-20241101 update, where, frustratingly, more hashes were removed than added. We can’t allow fwupd to update to a version that’s lower than what we’ve got already, and that more or less dealt the hash-counting idea its death blow.
Instead of trying to map the hash into a low-integer version, we now use the last-listed hash in the EFI signature list to map directly to an ISO date, e.g. 20250117. We’re providing the mapping in a local quirk file so that the offline machine still shows something sensible, but are mainly relying on the remote metadata from the LVFS that’s always up to date. There’s even more detail in the plugin README for the curious.
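As a rough sketch of the lookup described above (the function, table names, and hash value below are invented for illustration; this is not fwupd’s code), the idea is a simple map from the last-listed hash to an ISO-date version, preferring the always-current remote metadata and falling back to the shipped quirk data when offline:

```rust
use std::collections::HashMap;

// Illustrative sketch only: map the last-listed dbx hash to an ISO-date
// version string, preferring up-to-date remote metadata and falling back
// to a locally shipped quirk table for offline machines.
fn dbx_version(
    last_hash: &str,
    remote_metadata: &HashMap<String, String>, // e.g. fetched from the LVFS
    local_quirks: &HashMap<String, String>,    // shipped with the package
) -> Option<String> {
    remote_metadata
        .get(last_hash)
        .or_else(|| local_quirks.get(last_hash))
        .cloned()
}

fn main() {
    // Hypothetical hash value, for illustration only.
    let mut quirks = HashMap::new();
    quirks.insert("80b4d9...".to_string(), "20250117".to_string());

    let remote = HashMap::new(); // imagine we are offline

    match dbx_version("80b4d9...", &remote, &quirks) {
        Some(v) => println!("dbx version: {v}"),
        None => println!("unknown dbx revision"),
    }
}
```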
We also changed the update protocol from org.uefi.dbx to org.uefi.dbx2 to simplify the testing matrix — and because we never want version 371 upgrading to 20230314 automatically — as that would actually be a downgrade and difficult to explain.
If we see lots of dbx updates going out with 2.0.4 in the next few hours I’ll also backport the new protocol into 1_9_X for the soon-to-be-released 1.9.27 too.
I’m trying to blog quicker this year. I’m also sick with the flu. Forgive any mistakes caused by speed, brevity, or fever.
Monday brought two big announcements in the non-traditional (open? open-ish?) social network space, with Mastodon moving towards non-profit governance (asking for $5M in donations this year), and Free Our Feeds launching to do things around ATProto/Bluesky (asking for $30+M in donations).
It’s a little too early to fully understand what either group will do, and this post is not an endorsement of specifics of either group—people, strategies, etc.
Instead, I just want to say: they should be asking for millions.
There’s a lot of commentary like this one floating around:
I don’t mean this post as a critique of Jan or others. (I deliberately haven’t linked to the source, please don’t pile on Jan!) Their implicit question is very well-intentioned. People are used to very scrappy open source projects, so millions of dollars just feels wrong. But yes, millions is what this will take.
What could they do?
I saw a lot of comments this morning that boiled down to “well, people run Mastodon servers for free, what does anyone need millions for”? Putting aside that this ignores the real server costs any decently-sized Mastodon server has (great servers like botsin.space shut down regularly in part because of those), and that it treats the time and emotional trauma of moderation as free… what else could these orgs be doing?
Just off the top of my head:
Moderation, moderation, moderation, including:
moderation tools, which by all accounts are badly needed in Masto and would need to be rebuilt from scratch by FoF. (Donate to IFTAS!)
multi-lingual and multi-cultural, so you avoid the Meta trap of having 80% of users outside the US/EU but 80% of moderation in the US/EU.
Jurisdictionally-distributed servers and staff
so that when US VP Musk comes after you, there’s still infrastructure and staff elsewhere
and lawyers for this scenario
Good governance
which, yes, again, lawyers, but also management, coordination, etc.
(the ongoing WordPress meltdown should be a great reminder that good governance is both important and not free)
Privacy compliance
Mention “GDPR compliance” and “Mastodon” in the same paragraph and lots of lawyers go pale; doing this well would be a fun project for a creative lawyer and motivated engineers, but a very time-consuming one.
Bluesky has similar challenges, which get even harder as soon as meaningfully mirrored.
And all that’s just to have the same level of service as currently.
If you actually want to improve the software in any way, well, congratulations: that’s hard for any open source software, and it’s really hard when you are doing open source software with millions of users. You need product managers, UX designers, etc. And those aren’t free. You can get some people at a slight discount if you’re selling them on a vision (especially a pro-democracy, anti-harassment one), but in the long run you either need to pay near-market or you get hammered badly by turnover, lack of relevant experience, etc.
What could that cost, $10?
So with all that in mind, some benchmarks to help frame the discussion. Again, this is not to say that an ATProto- or ActivityPub-based service aimed at achieving Twitter or Instagram-levels of users should necessarily cost exactly this much, but it’s helpful to have some numbers for comparison.
Wikimedia Foundation
legal: $10.8M in 2023-2024 (and Wikipedia plays legal on easy mode in many respects relative to a social network—no DMs, deliberately factual content, sterling global brand)
hosting: $3.4M in 2023-2024 (that’s just hardware/bandwidth, doesn’t include operations personnel)
Python Package Index
$20M/year in bandwidth from Fastly in 2021 (source) (packages are big, but so is social media video, which is table stakes for a wide-reaching modern social network)
Twitter
operating expenses, not including staff, of around $2B/year in 2022 (source)
Content moderation
Hard to get useful information on this on a per-company basis without a lot more work than I want to do right now, but the overall market is in the billions (source).
Worth noting that lots of the people leaving Meta properties right now are doing so in part because tens of thousands of content moderators, paid unconscionably low wages, are not enough.
You can handwave all you want about how you don’t like a given non-profit CEO’s salary, or how you think you could reduce hosting costs by self-hosting, or what have you. Or you can push the high costs onto “volunteers”.
But the bottom line is that if you want there to be a large-scale social network, even “do it as cheaply as humanly possible” means millions in costs borne by someone.
What this isn’t
This doesn’t mean “give the proposed new organizations a blank check”. As with any non-profit, there’s danger of over-paying execs, boards being too cozy with execs and not moving them on fast enough, etc. (Ask me about founder syndrome sometime!) Good governance is important.
This also doesn’t mean I endorse Bluesky’s VC funding; I understand why they feel they need money, but taking that money before the techno-social safeguards they say they want are in place is begging for problems. (And in fact it’s exactly because of that money that I think Free Our Feeds is intriguing—it potentially provides a non-VC source of money to build those safeguards.)
But we have to start with a realistic appraisal of the problem space. That is going to mean some high salaries to bring in talented people to devote themselves to tackling hard, long-term, often thankless problems, and lots of data storage and bandwidth.