- Up early, breakfast, Eurostar home; started catching up on E-mail.
February 06, 2024
I attended FOSDEM last weekend and had the pleasure to participate in the Flathub / Flatpak BOF on Saturday. A lot of the session was used up by an extensive discussion about the merits (or not) of allowing direct uploads versus building everything centrally on Flathub’s infrastructure, and related concerns such as automated security/dependency scanning.
My original motivation behind the idea was essentially two things. The first was to offer a simpler way forward for applications that use language-specific build tools that resolve and retrieve their own dependencies from the internet. Flathub doesn’t allow network access during builds, and so a lot of manual work and additional tooling is currently needed (see the Python and Electron Flatpak guides). And the second was to offer a perhaps more familiar flow to developers from other platforms, who would just build something and then run another command to upload it to the store, without having to learn the syntax of a new build tool. There were many valid concerns raised in the room, and I think on reflection that this is still worth doing, but it might not be as valuable a way forward for Flathub as I had initially hoped.
Of course, for a proprietary application where Flathub never sees the source or where it’s built, whether that binary is uploaded to us or downloaded by us doesn’t change much. But for a FLOSS application, a direct upload driven by the developer causes a regression on a number of fronts. We’re not getting too hung up on the “malicious developer inserts evil code in the binary” case because Flathub already works on the model of verifying the developer and the user makes a decision to trust that app – we don’t review the source after all. But we do lose other things, such as our infrastructure building on multiple architectures, and visibility on whether the build environment or upload credentials have been compromised unbeknownst to the developer.
There is now a manual review process for when apps change their metadata such as name, icon, license and permissions – which would apply to any direct uploads as well. It was suggested that if only heavily sandboxed apps (eg no direct filesystem access without proper use of portals) were permitted to make direct uploads, the impact of such concerns might be somewhat mitigated by the sandboxing.
However, it was also pointed out that my go-to example of “Electron app developers can upload to Flathub with one command” was also a bit of a fiction. At present, none of them would pass that stricter sandboxing requirement. Almost all Electron apps run old versions of Chromium with less complete portal support, needing sandbox escapes to function correctly, and Electron (and Chromium’s) sandboxing still needs additional tooling/downstream patching to run inside a Flatpak. Buh-boh.
I think for established projects who already ship their own binaries from their own centralised/trusted infrastructure, and for developers who have understandable sensitivities about binary integrity, such as encryption, password or financial tools, it’s a definite improvement that we’re able to set up direct uploads with such projects with less manual work. There are already quite a few applications – including verified ones – where the build recipe simply fetches a binary built elsewhere and unpacks it, and if this is already done centrally by the developer, repeating the exercise on Flathub’s server adds little value.
However, for the individual developer experience, I think we need to zoom out a bit and think about how to improve this from a tools and infrastructure perspective as we grow Flathub, and as we seek to raise funds from different sources for these improvements. I took notes on everything that was mentioned as a tooling limitation during the BOF, along with a few ideas about how we could improve things, and hope to share these soon as part of an RFP/RFI (Request For Proposals/Request For Information) process. We don’t have funding yet, but if we have some prospective collaborators to help refine the scope and estimate the cost/effort, we can use this to go and pursue funding opportunities.
February 05, 2024
- Up lateish, off to the LibreOffice hack-fest, caught up with Caolan & Skyler. Plugged away at a few fun hacking pieces, reviewed a patch or two, fun. Out for dinner with the board, staff, MC at chez Leon; up late, bid fond goodbyes to those I could find.
In fwupd 1.9.12 and earlier we had the following auto-quit behavior: Auto-quit on idle after 2 hours, unless:
- Any thunderbolt controller, thunderbolt retimer or synaptics-mst devices exist.
These devices are both super slow to query, and querying uses battery power, as you have to power on various hungry things and then power them down again just to read the current firmware version.
In 1.9.13, due to be released in a few days’ time, we now auto-quit after 5 minutes, unless:
- Any thunderbolt controller, thunderbolt retimer or synaptics-mst devices exist.
- Any D-Bus client (that used or is using fwupd) is still alive, which includes gnome-software if it’s running in the background of the GNOME session
- The daemon took more than 500ms to start – on the logic that it’s okay to wait 0.5 seconds on the CLI to get results to a query, but we don’t want to be waiting tens of seconds to check for updates on deeply nested USB hub devices.
The tl;dr is that most laptop and desktop machines have Thunderbolt or MST devices, and so they already had fwupd running all the time before, and continue to have it running all the time now. Trading 3.3MB of memory and an extra process for instant queries on a machine with GBs of memory is probably worthwhile. For embedded machines like IoT devices, and for containers (that are using fwupd to update things like the dbx), fwupd was probably starting and then quitting after 2h before, and now fwupd is only going to be alive for 5 minutes before quitting.
If any of the thresholds (500 ms) or timeouts (5 mins) are offensive to you then it’s all configurable; see man fwupd.conf for details.
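For example, a minimal sketch of what this could look like in /etc/fwupd/fwupd.conf (treat the section and key name as my assumption; verify against man fwupd.conf for your version):
[fwupd]
# assumed key: keep the daemon alive for 10 minutes of idle instead of 5
IdleTimeout=600
Comments welcome.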
February 04, 2024
A quick update on things happening to me in the Wikimedia Movement in the last few weeks:
- We asked for a WM Rapid Grant for a new project we have conceptualized at LaOficina: SMALL GLAM SLAM Pilot 1, to lower the entry barriers to software and practices adoption for very small GLAM organizations.
- Also, I have been granted a scholarship to attend the Wikimedia Hackathon 2024 event next May in Tallinn.
- And, as part of my travel connection, I plan to attend the more informal GLAM + Commons + AI sauna in Helsinki. So, another Wikimedia GLAM overdose after the excellent GLAM Wiki 2023 meeting in Montevideo.
I’ve drafted the main tasks for the hackathon. They are very big in scope but they will be adjusted with the results of Pilot 1. Here they are:
- SMALL GLAM SLAM Pilot 1: MediaWiki/Wikibase curated lists
- SMALL GLAM SLAM Pilot 1: curated lists of ontologies and data models
- SMALL GLAM SLAM Pilot 1: Essence modeling with Wikibase experiment
Lots of fun to come!
Namaste Everyone!
Hi everyone, so it has been a while since the successful completion of GNOME Asia Summit 2023, but well, when you have back-to-back exams, it becomes hard to write up.
Last year GNOME Asia happened in Kathmandu, Nepal, from December 1-3. Like GNOME Asia 2022, it was an amazing experience, to say the least.
Nepal and India have really close cultural ties, which made it feel like we were at home without being there: the friendly people, the scenic beauty, the superpower of using Hindi in situations where English wasn't recognised, visiting our holiest sites; Nepal offered us every bit of amazingness we could have asked for.
If I get to talk about Nepal, it can be a really big separate blog post, but this is about GNOME Asia, so let's proceed in that regard.
Day 0 (Because indexing starts with 0 ;))
Before departing from India, I used my trimmer, you know, to look good. Unfortunately, while listening to music I got so lost in it that I didn't secure the adjustable comb properly and trimmed at 0.5mm :D Lost my beard just before the trip, yipeeee.

We landed in Nepal on 29th November, and after some struggle with getting a cab and taking lots of photos, we went on to our hotel. The hotel was really nice, so thanks GNOME for that :D

We freshened up a bit and went sightseeing, exploring Thamel, and ate momos (Still like Uttar Pradesh (North Indian state) ones more ;)) We also bought some really needed and mandatory fridge magnets, because if you can't flex it, did it really happen?

Day 1
Where do I even start lol, it was probably the most jam-packed day. Waking up in the morning was hard, to say the least, but I conquered it. After freshening up I went for breakfast and had the pleasure of meeting Rosanna, Kristi, Anisa, Fenris, Jipmang, Matthias and others I had met at the previous two conferences.
It was nice catching up again.
We briefly discussed the next GNOME Asia and UbuCon Asia with stuffed mouths, because that is important as well. Fun fact: we won the bid to host UbuCon Asia 2024 in India and are hoping to have a joint event with GNOME Asia as well.
After being the last to finish up (eating slowly and enjoying fully :P) we proceeded to the venue of the conference. There I met Aaditya Singh (local team lead). We had conversed a few times online and it was great to meet in person. After looting some swag, we went to the conference rooms.

Holly (the new GNOME ED) started the conference and then we proceeded to the keynote by Justin. It was just awesome!!!! Probably something I'll recommend my peers watch. Then we had a talk by Federico and Rosanna, again awesome, and then we had the best talk of the conference, mine!!

After draining myself with the presentation and the brief Q&A (if the talk felt a bit short and fast-paced, time was the issue) I resumed my role as an attendee, judging others ;)

After attending many more awesome talks, we got to witness the Fedora release party, and even when we were literally in power-saving mode, with a 5% charge, the party acted like a charger (yeah... not the best at metaphors).

Due to the cake delay, we had a nice intro session; it acted like an icebreaker and helped everyone get to know each other well. After the party I don't know what we did tbh, most probably crashed at the hotel hehe. Phew, that was a lot for one day. Fedora and GNOME, you are cool ;)
Day 2
We had to miss the keynote on the second day to visit the Pashupatinath temple in the morning, when it is quiet and peaceful. It is one of our holiest sites, and if you visit Kathmandu and don't go there (for Hindus), you are missing out on the most peaceful place. Those memories still give me chills.

But, I did watch the keynote in its entirety later on, and I have to say, the only other time I watched a video this long, it was of Linus Torvalds haha. It was next-level and full of insights. I just wish I could have attended it. I had the pleasure of talking with Hempal sir multiple times, and he was one of the best personalities.
Then I had the pleasure of listening to amazing talks, including ones by Federico, Nasah, Khairul, and others. PS: from my experience of GUADEC and GNOME Asia, Anisa's would have been great as well. I presented a talk on accessibility at GUADEC, but the one by Federico taught me so much, which also makes the point that we need better documentation in this area, as it was painful to find those resources.
We then visited Thamel again, because why not, bought some Pashmina shawls from the hotel, and called it a day!
Day 3
Day 3 was a social visit, which means the day when you become poor after buying too much stuff, tired after walking too much, and amazed after exploring the beauty of a place like Nepal.
The best part was explaining our god Shiva and the Shivling to a curious Rosanna and Fenris. It was a moment where we shared our cultures and exchanged knowledge. I have to say, it had been quite a long time since I felt this spiritual, but it was a nice change.
We also ate a traditional Newari lunch, where I also tried Tongba, a native alcoholic drink made from fermented millet, boiled milk and herbs. It was the second time I had touched alcohol in my life, and only after getting assurance that the alcohol content was very low :)

It was some really nice food, and the Tongba was also surprisingly good. There, Asmit and I also did some mischief with fire, which was, well... childish and fun :)
Don't know what happened to me there, but it was probably the most I have ever networked, maybe because after two conferences I became more comfortable and familiar with the community at large.

Day 3 was more of an experience so I don't think there is much more to be said for that.
Day 4

Visited monasteries and the Boudhanath Stupa, and then departed back home.
End

To end, I want to thank the GNOME Foundation for sponsoring my visit and giving me the opportunity to witness the awesome talks in person and bond with the community.
Btw, looking forward to meeting many of them again at UbuCon Asia 2024 India and also hopefully GNOME Asia 2024 :D

February 03, 2024
As we often do, a few members of the GTK team and the wider GNOME community came together for a two-day hackfest before FOSDEM.
This year, we were aiming to make progress on the topics of accessibility and input. Here is a quick summary of what we’ve achieved.
Accessibility
- We agreed to merge the socket implementation for webkit accessibility that Georges wrote
- We agreed that the accessible notification api that Lukáš suggested is fine
- We finished the GtkAccessibleText interface and moved our internal implementations over to that interface
- We discussed the possibility of an a11y backend based on AccessKit
Input
- Carlos reviewed the merge request for passing unhandled events back to the system (on macOS)
- We looked over the remnants of X11-style timestamps in our input apis and decided to provide alternatives taking an event
Wayland
- Carlos started to turn the private gtk-shell protocol into separate protocols
Thanks to the GNOME Foundation for supporting this event.
February 02, 2024

Python is an interpreted language, so Python code is just text
files with the .py extension. For simple scripts it's really easy to
keep track of your files, but when you start to use dependencies and
different projects with different requirements, things start to get
more complex.
PYTHONPATH
The Python interpreter uses a list of paths to try to locate Python modules; for example, this is what you get in a modern GNU/Linux distribution by default:
Python 3.11.7 (main, Dec 15 2023, 10:49:17) [GCC] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.path
['',
'/usr/lib64/python311.zip',
'/usr/lib64/python3.11',
'/usr/lib64/python3.11/lib-dynload',
'/usr/lib64/python3.11/site-packages',
'/usr/lib64/python3.11/_import_failed',
'/usr/lib/python3.11/site-packages']
These are the default paths where Python modules are installed. If
you install any Python module using your Linux packaging tool, the
Python code will be placed inside the site-packages folder.
So system-installed Python modules can be located in:
- /usr/lib/python3.11/site-packages for modules that are architecture independent (pure Python, all .py files)
- /usr/lib64/python3.11/site-packages for modules that depend on the architecture, that is, anything using low-level libraries that needs to be built, so there are some .so files
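You can also prepend your own directories to this list with the PYTHONPATH environment variable; a quick check (the path here is just an example):
$ PYTHONPATH=/tmp/mymodules python3 -c "import sys; print('/tmp/mymodules' in sys.path)"
True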
pip
When you need a new Python dependency you can try to install it from your
GNU/Linux distribution using the default package manager, like
zypper, dnf or apt, and those Python files will be placed in the
system paths that you can see above.
But distributions don't package all Python modules, and even if they do, you may require a specific version that's different from the one packaged in your favourite distribution, so in Python it's common to install dependencies from the Python Package Index (PyPI).
Python has a tool, pip, to install and manage Python packages, which looks for the desired modules on PyPI.
You can install new dependencies with pip just like:
$ pip install django
And that command looks for the django module on PyPI, then
downloads and installs it: in your user
$HOME/.local/lib/python3.11/site-packages folder if you
use --user, or in a global system path like /usr/local/lib or
/usr/lib if you run pip as root.
But using pip to install directly into the system is not
recommended today, and it's even disabled in some
distributions, like openSUSE Tumbleweed.
[danigm@localhost ~] $ pip install django
error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try
zypper install python311-xyz, where xyz is the package
you are trying to install.
If you wish to install a non-rpm packaged Python package,
create a virtual environment using python3.11 -m venv path/to/venv.
Then use path/to/venv/bin/python and path/to/venv/bin/pip.
If you wish to install a non-rpm packaged Python application,
it may be easiest to use `pipx install xyz`, which will manage a
virtual environment for you. Install pipx via `zypper install python311-pipx` .
note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.
virtualenvs
Following the current recommendation, the correct way of installing
third-party Python modules is to use virtualenvs.
Virtualenvs are just specific folders where you install your
Python modules, plus some scripts that make it easy to use them in
combination with your system libraries, so you don't need to modify
PYTHONPATH manually.
So if you've a custom project and want to install python modules you can create your own virtualenv and use pip to install dependencies there:
[danigm@localhost tmp] $ python3 -m venv myenv
[danigm@localhost tmp] $ . ./myenv/bin/activate
(myenv) [danigm@localhost tmp] $ pip install django
Collecting django
...
Successfully installed asgiref-3.7.2 django-5.0.1 sqlparse-0.4.4
So all dependencies are installed in my new virtualenv folder, and if I use the Python from the virtualenv it uses those paths, so all the modules installed there are usable inside that virtualenv:
(myenv) [danigm@localhost tmp] $ ls myenv/lib/python3.11/site-packages/django/
apps contrib db forms __init__.py middleware shortcuts.py templatetags urls views
conf core dispatch http __main__.py __pycache__ template test utils
(myenv) [danigm@localhost tmp] $ python3 -c "import django; print(django.__version__)"
5.0.1
(myenv) [danigm@localhost tmp] $ deactivate
With virtualenvs you can have multiple Python projects, with different dependencies, isolated, so you use different dependencies when you activate your desired virtualenv:
- activate: $ . ./myenv/bin/activate
- deactivate: $ deactivate
High level tools to handle virtualenvs
The venv module is a default Python module and, as you can see
above, it's really simple to use, but there are some tools that
provide tooling around it to make things easier for you, so usually
you don't need to use venv directly.
pipx
For standalone Python tools that you are not going to use as dependencies in your own code, the recommended tool is pipx.

pipx creates virtualenvs automatically and links the binaries, so
you don't need to worry about anything: just use it to install
third-party Python applications and to update/uninstall them. pipx
won't mess with your system libraries, and each installation uses
a separate virtualenv, so even tools with incompatible dependencies
will work nicely together on the same system.
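As a sketch of the workflow (cowsay is just an arbitrary application from PyPI):
$ pipx install cowsay
$ pipx list
$ pipx upgrade cowsay
$ pipx uninstall cowsay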
Libraries, for Python developers
In the case of Python developers, when you need to manage dependencies for your project, there are a lot of nice high-level tools to choose from.
These tools provide different ways of managing dependencies, but all
of them rely on the use of venv, creating the virtualenv in
different locations and providing tools to enable/disable and manage
dependencies inside those virtualenvs.
For example, poetry creates virtualenvs by default inside the
.cache folder; in my case I can find all poetry-created virtualenvs
in:
/home/danigm/.cache/pypoetry/virtualenvs/
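A minimal sketch of that workflow (the project and package names are just examples):
$ poetry new myproject
$ cd myproject
$ poetry add django
$ poetry env info --path
The last command prints the location of the virtualenv that poetry created for the project.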
Most of these tools add other utilities on top of the dependency
management. Just for installing Python modules easily you can always
use the default venv and pip modules, but for more complex projects
it's worth investigating higher-level tools, because they'll make it
easier to manage your project dependencies and virtualenvs.
Conclusion
There is a lot of Python code inside any modern Linux distribution, and if you're a Python developer you'll likely accumulate a lot more. Make sure you know the source of your modules and do not mix different environments, to avoid future headaches.
As a final trick, if you don't know where's the actual code of some python module in your running python script, you can always ask:
>>> import django
>>> django.__file__
'/tmp/myenv/lib64/python3.11/site-packages/django/__init__.py'
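Similarly, sys.prefix tells you which environment the interpreter itself is running from, which is handy to confirm that a virtualenv is actually active:
>>> import sys
>>> sys.prefix
'/tmp/myenv'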
This can get even more complicated if you start to use containers and different Python versions, so keep your dependencies clean and up to date, and make sure that you know where your Python code is.
I have spent the past few weeks working on a small personal project, because I do not know when to stop. I think Mingle is mostly feature complete at this point. It lets you play with Google's Emoji Kitchen, which is available on Android's GBoard, on GNOME. It provides a convenient way to find the perfect expression and paste it into any conversation online. It is a GTK4/Libadwaita application written in Vala/Blueprint.
I have such a weird fascination with Vala; I have a Java/C# background, so the syntax is quite familiar. Blueprint is the future and way more readable than XML. The application is still using a placeholder icon because I am not an artist and my own endeavors into Inkscape have not been too successful. Life has kinda been kicking my butt recently, so working on this app has been a small reprieve from how uncaring and mean existence can be.

Personal Favorite
Lazy Loading
I started this project under the assumption I would get it running under GNOME Shell Mobile one day. So, I had to implement lazy loading, something I had never done before, if I ever wanted to see this run on a mobile device. Until somewhat recently, when you selected an emoji it would just grab every relevant combination. This was not smart or efficient. Mingle would very quickly use up more memory than my web browser. Google's artists made a lot of emoji art, and loading that many combined emojis while populating a flow box asynchronously at the same time tanked performance. This made my XPS 13, with only 8 gigabytes of memory, scream in agony.
Currently, Mingle loads combined emojis in batches and fetches more on an edge-overshot signal, so it loads more as you scroll. This works both with a scroll wheel and touch input (thanks, XPS). I am not sure if this is the best approach, but it prevents the app from being a stuttering mess. When you select a left and a right emoji, we prepend that combination to our center flow box.
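Mingle itself is written in Vala, but the shape of the pattern looks roughly like this in PyGObject (a sketch; load_next_batch is a hypothetical stand-in, not Mingle's actual code):
import gi
gi.require_version("Gtk", "4.0")
from gi.repository import Gtk

def load_next_batch():
    # Hypothetical: fetch the next batch of combined emojis and
    # append them to the flow box.
    pass

def on_edge_overshot(scrolled, pos):
    # Emitted when scrolling (wheel or touch) runs past the edge of the content.
    if pos == Gtk.PositionType.BOTTOM:
        load_next_batch()

scrolled = Gtk.ScrolledWindow()
scrolled.connect("edge-overshot", on_edge_overshot)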
Style Switcher

A fairly common pattern I have seen in both GNOME and Elementary apps is these slick color/style selectors. I wanted to implement this pattern, because why not? It looks good and I get to learn how things work. These custom widgets are really just radio buttons that are heavily stylized with CSS. We then just have to add it as a custom child to our primary menu. Here is a great example using JavaScript and Blueprint. Blackbox, my terminal of choice, also has a selector written in Vala and XML. These two projects and the beauty of open source software allowed me to create a solution I am happy with. If anybody is reading this and is interested, you can check out the repo. I intend to polish things a bit more and then do a first release on Flathub.
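For the curious, the bones of that radio-buttons-plus-CSS pattern look something like this in PyGObject (a sketch; the "style-selector" CSS class and "switcher" child ID are made up for illustration):
import gi
gi.require_version("Gtk", "4.0")
from gi.repository import Gtk

# Two mutually exclusive "radio" buttons, restyled via CSS.
light = Gtk.CheckButton()
dark = Gtk.CheckButton()
dark.set_group(light)  # same group means mutually exclusive, like radio buttons

box = Gtk.Box(spacing=6)
for button in (light, dark):
    button.add_css_class("style-selector")  # the swatch look is defined in CSS
    box.append(button)

# Attach as a custom child of the primary menu's popover; "switcher" must match
# a menu item that carries the "custom" attribute with the same value.
# popover_menu.add_child(box, "switcher")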
Hello everyone!
I am here to terrorize your calendar by dropping the dates for two back to back hackfests we are organizing in the beautiful city of Thessaloniki, Greece (who doesn’t like coming to Greece on work time, right?).
May 27-29th we will be hosting the annual GStreamer Spring Hackfest. If multimedia is your thing, you know the drill. Newcomers are also welcome ofc!
May 31st-June 5th we will be hosting another edition of the GNOME Rust Hackfest, the first in-person Rust hackfest since the pandemic started. From what I heard, half of Berlin will be coming for this one, so we might change its scope to an all-around GNOME one, but we will see. You are all welcome!
See the pages of each hackfest for more details.
We are in the final steps of booking the venue but it will most likely be in the city center and it should be safe to book accommodation and traveling tickets.
Additionally the venue we are looking at can accommodate around 40 people, so please please add yourself to the organizing pad of each hackfest you are interested in, in addition to any dietary restrictions you might have.
See you all IRL!
I’ve authored an article recently for Fedora Magazine on Performance Profiling in Fedora.
It covers both the basics on how to get started as well as the nitty-gritty details of how profilers work. I’d love for others to be more informed on that so I’m not the only person maintaining Sysprof.
Hopefully I was able to distill the information down a bit better than my typical blog posts. If you felt like those were maybe too difficult to follow, give this one a read.
Update on what happened across the GNOME project in the week from January 26 to February 02.
Events
Cassidy James Blaede reports
Going to FOSDEM this weekend? Meet up with GNOME and Flathub folks!
If you’re unable to make the Flathub BoF but still want to chat, you can catch the team around the GNOME and KDE stands.
See you there!
alatiera reports
We are organizing a GStreamer hackfest in Thessaloniki, Greece on May 27th-29th. In addition, the #9 installment of the GNOME ♥️ Rust Hackfest series will take place in the days after, from the 31st of May to the 5th of June. For more details see the blogpost and the handbook hackfests page.
GNOME Core Apps and Libraries
GTK
Cross-platform widget toolkit for creating graphical user interfaces.
Emmanuele Bassi says
The GTK developers are having a hackfest in Brussels right before the FOSDEM weekend. Lots of work on rendering, media, input, and accessibility. Look forward to a full report on the GTK development blog.
GNOME Websites
Allan Day announces
The GNOME Project Handbook was officially announced! This new resource provides all the information people need to participate in the GNOME project, including how the project works, how to get accounts and use project infrastructure, the release cycle, issue management, and much more. Everyone is invited to help keep the handbook up to date and accurate. The handbook is part of an ongoing effort to retire the GNOME wiki. More announcements about this will be coming soon.
GNOME Circle Apps and Libraries
Paper Clip
Edit PDF document metadata.
Diego Iván M.E says
This week, Paper Clip v5.0 was released! This release brings a couple of quality-of-life improvements, enhancements and a shiny new feature:
- Paper Clip now supports editing encrypted documents, thanks to a brand new dialog that allows users to open files protected by a password.
- The DublinCore XMP metadata fields are now synchronized with their PDF equivalents.
- Updated translations, including French, Basque, Russian, Italian, Occitan and Spanish. Thanks to rene-coty, Sergio Varela, Ser82-png, albanobattistella and Mejans for their contributions!
- Appdata improvements by Sabri Ünal
Get the latest release from Flathub!
Third Party Projects
SHuRiKeN says
Eeman, an app made using GTK 4 and Libadwaita, is now available on Flathub. It lets you track and get notified of your Salah (prayer) timings, and lets you read the beautiful Quran.
ghani1990 says
A new version of Dosage, an app to keep track of your treatments, was released this week.
This version (1.5.1) has many improvements, the most important of which are:
- It’s now possible to edit history entries
- New preference to auto-clear history
- New row style for select frequency
- New row style for select date
- New badge style for history confirmed item
Denaro
Manage your personal finances.
Nick announces
Denaro V2024.2.0 was released this week. This release contains some bug fixes to make your experience more stable!
Here’s the changelog:
- Improved importing of QIF files
- Fixed a bug where the app would crash when filtering transactions for certain dates (mainly leap years like this year 😅)
- Updated and added translations (Thanks to everyone on Weblate)!
Miscellaneous
Sophie (she/her) says
The feature freeze for GNOME 46 is closing in. Starting Feb 10 no changes to UI, features, and APIs are allowed without approval from the release team.
GNOME Foundation
Rosanna reports
This week has been very busy for the Foundation staff. I started the beginning of the week meeting with our bookkeepers and discussing our books as well as our budget. Midweek was spent travelling to Brussels, where our Executive Director Holly and I spent yesterday meeting with the GNOME Board of Directors and today meeting with our Advisory Board. This weekend is FOSDEM, where I am looking forward to seeing folks stopping by our booth as well as at GNOME Beers Saturday night. I’ve already had a lot of very productive conversations here in Brussels and am sure there will be many more to come.
That’s all for this week!
See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
January 31, 2024
It's only a couple of days left until that weekend at the turn of the month between January and February, and that means it's time again for FOSDEM, gathering FOSS enthusiasts in Brussels, Belgium. And I will be there as well!
There have been some nice improvements in Maps since the end-of-year post before New Year's as well.
James Westman has continued improving vector rendering support in libshumate and also implemented the ability to click on symbols, so you can finally click on labels and icons on the map instead of using the „What's here?” context menu item to do a reverse geocoding. That always felt a bit like a „work around”. This seems like one of those cases of „a video says more than a thousand images”.
This is available when enabling the experimental vector map layer.
Another thing I have been working on is improving the experience of the favorites menu. Instead of showing an insensitive, greyed-out button in the headerbar when there are no places marked as favorites, the menu is now always accessible, and instead shows an „empty state” when there are no favorites.
When there are places marked as favorites, there is now a close button next to each item allowing you to remove it from the favorites (rather than having to select the place, animate there, and unmark the star icon in the popover shown).
When removing a favorite from the menu, a „toast” is displayed offering to undo this action (similar to e.g. when deleting files in Files).
Jakub Steiner has redesigned the map pin icon.
The old Tango-style icon, while being a nice icon that has served us well, looked a bit out of place in relation to the new UI style with a more 3D look.
Another new feature, thought up when looking for a cafe in a shopping mall in Riga during GUADEC last year, is showing information about the floor location of places. There are two established tags in OSM for this: level, which represents the number of floors relative to the ground floor (or the lowest ground floor for buildings built in a souterrain fashion). In this case we now show this information in a spelled-out form for ground level or above, or below ground level (with provision for using localized plural forms).
The other tag is level:ref, referring to the literal floor designation as „written in the elevator“. This could be fully spelled-out named floors, or numbers with suffixes, and so on. When this is available we'll prefer that one, as it directly corresponds to the actual writing on-site.
Lastly, James has added support for showing descriptions in the info bubble when clicking on points in a GeoJSON shape file, when present. It also now shows the name of the shape file in the info bubble (this shape file was from an old GUADEC in Strasbourg).
Maybe I forgot something, but I think those were the highlights of new stuff so far in 2024.
Maybe see you in Brussels in a couple of days!
A few months have passed since New Responsibilities was posted, so I thought I would provide an update.
Projects Maintenance
Of all the freedesktop projects I created and maintained, only one doesn't have a new maintainer, low-memory-monitor.
This daemon is what the GMemoryMonitor GLib API is based on, so it can't be replaced trivially. Efforts seem to be under way to replace it with systemd APIs.
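For context, consuming that API from an app looks roughly like this in Python (a minimal sketch, not tied to any particular project):
import gi
from gi.repository import Gio, GLib

def on_low_memory(monitor, level):
    # level is a Gio.MemoryMonitorWarningLevel; a well-behaved app would
    # drop caches or otherwise reduce its footprint here.
    print("low memory warning:", level)

monitor = Gio.MemoryMonitor.dup_default()
monitor.connect("low-memory-warning", on_low_memory)
GLib.MainLoop().run()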
As for the other daemons:
- switcheroo-control got picked up by Jonas Ådahl, one of the mutter maintainers. I'm looking forward to seeing this merge request fixed so we can have better menu items on dual-GPU systems
- iio-sensor-proxy added Dylan Van Assche to its maintenance team, assisting Guido Günther.
- power-profiles-daemon is now maintained by Marco Trevisan. It recently got support for separate system and CPU power profiles, and display power saving features are in the works.
(As an aside, there's posturing towards replacing power-profiles-daemon with tuned in Fedora. I would advise stakeholders to figure out whether having a large Python script in the boot hot path is a good idea, taking a look at bootcharts, and then thinking about whether hardware manufacturers would be able to help with supporting a tool with so many moving parts. Useful for tinkering, not for shipping in a product)
Updated responsibilities
Since mid-August, I've joined the Platform Enablement Team. Right now, I'm helping out with maintenance of the Bluetooth kernel stack in RHEL (and thus CentOS).
The goal is to eventually pivot to hardware enablement, which is likely to involve backporting and testing, more so than upstream enablement. This is currently dependent on attending some formal kernel development (and debugging) training sessions which should make it easier to see where my hodge-podge kernel knowledge stands.
Blog backlog
Before being moved to a different project, and apart from the usual and very time-consuming bug triage, user support and project maintenance, I also worked on a few new features. I have a few posts planned that will lay that out.
FOSDEM 2024—a free event for software developers to meet, share ideas, and collaborate—is this coming weekend in Brussels, Belgium and we'll be there! Learn where you can find us or even sit down and discuss Flathub itself, metadata and build validation, upcoming changes, and anything else on your mind.
BoF: Saturday 16:00 CET
The FOSDEM website describes BoFs pretty well:
BOF stands for Birds Of a Feather who, as the saying goes, flock together. FOSDEM has three meeting rooms that may be booked in 30 or 60 minute blocks for discussions. All the meetings are public so anyone who is interested can attend if there is enough space.
We've reserved a BoF room for Saturday, February 3 at 16:00 local time (CET, UTC+1); seating is on a first-come, first-served basis, so arrive promptly! We'll be meeting to discuss recent developments around metadata and build validation, other upcoming changes, and anything else on the minds of the attendees. Check the event for the exact room details (and any scheduling changes).
We hope to see you there!
Stands
Various Flathub and Flatpak folks can also be found around associated stands:
- Kodi: building H, level 1, stand 4
- GNOME: building H, level 1, stand 5
- KDE: building H, level 1, stand 6
- Fedora Project: building AW, level 1, booth 4
- Linux on Mobile: building AW, level 1, booth 7
Find us to chat about Flathub, app submission and maintenance, Flatpak, and the Linux desktop app ecosystem.
January 30, 2024
There are a lot of tangible benefits in using toolbox containers for development to the point that I don’t want to use anything else anymore. Even with a bunch of tricks at our disposal, there are still downsides. The containers are not complete sessions but rather try to integrate with the host session. If you’re working on something that is part of a session it might be possible to run a test suite and even more elaborate setups but running it fully integrated often becomes a problem.
If the host is a traditional mutable system it’s possible to just not use toolbox. If you’re on an immutable system they often offer some way to make it mutable temporarily using some kind of overlay at which point they behave mostly like the traditional mutable systems. The unfortunate side effect is that you don’t get the benefits of toolbox anymore.
It’s also possible to develop in the toolbox and on the host system at the same time, depending on what specifically you’re working on right now to get the benefits of both systems. The drawback is that the toolbox container and the host are different systems. You’re setting up, compiling, etc. everything twice and run your project in different environments. Also not ideal.
We can do better. Toolbox can, in theory, use arbitrary OCI images. In practice there are assumptions from toolbox on how an image looks and behaves. Fedora Silverblue, or rather rpm-ostree, can also, in theory, boot arbitrary OCI images but also comes with its assumptions.
It turns out that in practice the unofficial OCI image variant of Fedora Silverblue can be used as a toolbox image and the images of such containers can be booted into with rpm-ostree.
$ toolbox create -i quay.io/fedora-ostree-desktops/silverblue:39 my-silverblue-toolbox
$ toolbox enter my-silverblue-toolbox
⬢ # install dnf to make it behave like a usual toolbox container
⬢ sudo rpm-ostree install -y dnf
⬢ sudo dnf update -y
⬢ # Let's install strace and gdb. Do whatever you want with your container!
⬢ sudo dnf install -y strace gdb
⬢ exit
$ # some magic to convert the running container into something rpm-ostree understands
$ # there are probably ways to do this with less copying (tell me if you know)
$ podman commit my-silverblue-toolbox my-silverblue-toolbox-image
$ sudo mkdir -p /var/lib/my-silverblue-toolbox-image
$ podman save --format=oci-archive "my-silverblue-toolbox-image" | sudo tar -x -C "/var/lib/my-silverblue-toolbox-image"
$ sudo rpm-ostree rebase "ostree-unverified-image:oci:/var/lib/my-silverblue-toolbox-image"
$ # boot into our toolbox
$ sudo systemctl reboot -i
One toolbox container to develop in and one reboot to test the changes in a full session on real hardware. This is all unsupported and might break in interesting ways but it shows the power of OCI based operating systems and toolbox.
I’m a firm believer in the importance of documentation for open source projects, particularly when it comes to onboarding new contributors. To attract and retain contributors, you need good docs.
Those docs aren’t just important for practical information on how to contribute (though that is important). They’re also important when it comes to understanding the more general aspects of a project, like how decisions are made, who the stakeholders are, what the history is, and so on. For new contributors, understanding these aspects of a project is essential to being able to participate. (They are also the aspects of a project that established contributors often take for granted.)
Of course, ensuring that you always have up to date project documentation isn’t easy. This is particularly true for large, long-running projects, where the tendency is for large amounts of documentation to get written and then eventually left to rot. As redundant and inaccurate docs accumulate, they increasingly misdirect and impede contributors.
This characterization has unfortunately been true for GNOME for some time. For many years, the main source of project documentation has been the wiki and, for a long time, the vast majority of that wiki content has been either inaccurate or redundant. We can only assume that, with so much out of date information floating around, countless hours have been lost, with existing contributors struggling to find the information they need, and new potential contributors being put off before they have even gotten started.
Enough with the preamble
The poor state of GNOME’s documentation has been something that I’ve wanted to tackle for some time, so it’s with great excitement that I’m happy to announce a new, completely rewritten documentation resource for the project: the GNOME Project Handbook (otherwise known as handbook.gnome.org).
The handbook is a new website whose goal is to provide accessible, well-maintained documentation about how to get stuff done within GNOME. It has a specific scope: it does not provide technical documentation for those using GNOME technologies, nor does it contain user documentation, nor does it attempt to provide public-facing home pages for apps and libraries. What it does contain is the information required to operate as a GNOME contributor.
The fact that handbook.gnome.org is able to have this relatively tight focus is thanks to a collection of other GNOME sites, each of which replaces a role previously played by the wiki. This includes apps.gnome.org, developer.gnome.org, and welcome.gnome.org. Thank you to the creators of those resources!
The handbook site itself is managed like any other GNOME project. There’s a repository that generates the site, issues can be reported, and changes can be proposed through merge requests. The hope is that this will avoid many of the maintenance issues that we previously had with the wiki.
Notable content
The handbook is composed of pages from the wiki, which have largely been rewritten, plus a decent amount of original content. There are some sections which I’m particularly excited about, and want to highlight.
Issue tracker guidelines
GNOME has had issue reporting and triage guidelines for almost two decades. However, it has been many years since they were actively maintained, and they were never updated when GNOME migrated from Bugzilla to GitLab. I think that a lot of contributors have forgotten that they even exist.
The handbook includes a fresh set of issue tracking guidelines, which have been updated for the modern era. They’re based on the old ones from many years ago, but have been substantially revised and expanded. The new guidelines cover how to report an issue, how to review issues for quality and relevance, and policies and best practices for maintainers. One exciting new aspect is guidelines for those who want to get started with issue review as a new contributor.
I’m hopeful that having clear processes and guidelines around issue tracking will have an enabling effect for contributors and maintainers, so they can be more forthright when it comes to issue management, and in so doing get our issue trackers into a better state.
Governance
The handbook has a page on governance! It describes how decisions are made in GNOME, the various roles in the project, who has authority, and how the project works. Us old hands tend to assume this stuff, but for new contributors it’s essential information, and we never documented it before.
How to submit a code change
Amazingly — incredibly! — until this day, GNOME has not documented how to submit a code change to the project. We just left people to figure it out by themselves. This is something that the handbook covers. If you’ve ever wanted to submit a change to GNOME and haven’t known how, give it a read over.
Infrastructure
The infrastructure pages aren’t new, but they were previously causing some confusion and so have been substantially rewritten. The new pages aim to make it really clear which services are available, how developer permissions are managed, and how to get access when you need it.
What next?
It’s still early days for the handbook. Most of the core content is in, but there will be issues and missing pieces. If you spot any problems, there’s an issue tracker. You can also submit merge requests or make suggestions (the project README has more information on this).
The plan is to retire the wiki. An exact timeline for this has yet to be set (there will be an announcement when that happens). However, you are encouraged to consult the handbook rather than the wiki from this point forward and, if you’re continuing to use the wiki for anything, to move that content elsewhere. There’s a migration guide with details about how to do this.
Many thanks to those who have helped with this project, including but not limited to: Jakub Steiner, Andrea Veri, Emmanuele Bassi, Michael Catanzaro, Brage Fuglseth, Florian Muellner, Alexandre Franke, Sebastian Wick, and Kolja Lampe.
January 29, 2024
This year’s GUADEC is going to be in the USA, making it difficult to attend both for visa/border control reasons and because it’s not easy to get to from Europe without flying. Many of us want to avoid the massive emissions in particular (around 3 tons of CO2, which is half the yearly per-capita emissions in most European countries). If that’s you, too, you’re in luck because we’re doing yet another edition of the fabulous Berlin Mini GUADEC!
Berlin has one of the largest GNOME local groups in the world, and is relatively easy to get to from most of Europe by train (should be even easier now thanks to new night train options). At our last Mini GUADEC in 2022 we had people join from all over Europe, including Italy, Belgium, Czech Republic, and the UK.
The local Berlin community has grown steadily over the past few years, with regular events and local hackfests such as the Mobile and Local-first hackfests last year, including collaborations with other communities (such as postmarketOS and p2panda). We hope to make this year’s Mini GUADEC another opportunity for friends from outside the project to join and work on cool stuff with us.
We’re still in the process of figuring out the venue, but we can already announce that the 2024 Mini GUADEC will cover both the conference and BoF days (July 19-24), so you can already mark it on your calendars and start booking trains :)
If you already know you’re going to join, feel free to sign up by adding your name to this Hedgedoc, and join the Matrix room. If you don’t have a GNOME account, please email berlinminiguadec@mailup.net to let us know you’re coming.
See you in Berlin!
This is a follow-up from our Spam-label approach, but this time with MOAR EMOJIS because that's what the world is turning into.
Since March 2023 projects could apply the "Spam" label on any new issue and have a magic bot come in and purge the user account plus all issues they've filed, see the earlier post for details. This works quite well and gives every project member the ability to quickly purge spam. Alas, pesky spammers are using other approaches to trick google into indexing their pork [1] (because at this point I think all this crap is just SEO spam anyway). Such as commenting on issues and merge requests. We can't apply labels to comments, so we found a way to work around that: emojis!
In GitLab you can add "reactions" to issue/merge request/snippet comments, and in recent GitLab versions you can register for a webhook to be notified when that happens. So what we've added to the gitlab.freedesktop.org instance is support for the :do_not_litter: (🚯) emoji [2] - if you set that on a comment, the author of said comment will be blocked and the comment content will be removed. After some safety checks, of course, so you can't just go around blocking everyone by shotgunning emojis into gitlab. Unlike the "Spam" label this does not currently work recursively, so it's best to report the user so admins can purge them properly - ideally before setting the emoji, so the abuse report contains the actual spam comment instead of the redacted one. Also note that there is a 30 second grace period to quickly undo the emoji if you happen to set it accidentally.
Note that for purging issues, the "Spam" label is still required, the emojis only work for comments.
Happy cleanup!
[1] or pork-ish
[2] Benjamin wanted to use :poop: but there's a chance that may get used for expressing disagreement with the comment in question
January 28, 2024
Recently, GTK gained not one, but two new renderers: one for GL and one for Vulkan.
Since naming is hard, we reused existing names and called them “ngl” and “vulkan”. They are built from the same sources, therefore we also call them “unified” renderers.
But what is exciting about them?
A single source
As mentioned already, the two renderers are built from the same source. It is modeled to follow Vulkan apis, with some abstractions to cover the differences between Vulkan and GL (more specifically, GL 3.3+ and GLES 3.0+). This lets us share much of the infrastructure for walking the scene graph, maintaining transforms and other state, caching textures and glyphs, and will make it easier to keep both renderers up-to-date and on-par.
Could this unified approach be extended further, to cover a Metal-based renderer on macOS or a DirectX-based one on Windows? Possibly. The advantage of the Vulkan/GL combination is that they share basically the same shader language (GLSL, with some variations). That isn’t the case for Metal or DirectX. For those platforms, we either need to duplicate the shaders or use a translation tool like SPIRV-Cross.
If that is the kind of thing that excites you, help is welcome.
Implementation details
The old GL renderer uses simple shaders for each rendernode type and frequently resorts to offscreen rendering for more complex content. The unified renderers have (more capable) per-node shaders too, but instead of relying on offscreens, they will also use a complex shader that interprets data from a buffer. In game programming, this approach is known as a ubershader.
The unified renderer implementation is less optimized than the old GL renderer, and has been written with a focus on correctness and maintainability. As a consequence, it can handle much more varied rendernode trees correctly.
Here is a harmless-looking example:
repeat {
bounds: 0 0 50 50;
child: border {
outline: 0 0 4.3 4.3;
widths: 1.3;
}
}


New capabilities
We wouldn’t have done all this work if there wasn’t some tangible benefit. Of course, there are new features and capabilities. Let’s look at some:
Antialiasing. A big problem with the old GL renderer is that it will just lose fine details. If something is small enough to fall between the boundaries of a single line of pixels, it will simply disappear. In particular this can affect underlines, such as mnemonics. The unified renderers handle such cases better, by doing antialiasing. This helps not just for preserving fine detail, but also prevents jagged outlines of primitives.

Fractional scaling. Antialiasing is also the basis that lets us handle fractional scales properly. If your 1200 × 800 window is set to be scaled to 125 %, with the unified renderers, we will use a framebuffer of size 1500 × 1000 for it, instead of letting the compositor downscale a 2400 × 1600 image. Far fewer pixels, and a sharper image.
Arbitrary gradients. The old GL renderer handles linear, radial and conic gradients with up to 6 color stops. The unified renderers allow an unlimited number of color stops. The new renderers also apply antialiasing to gradients, so sharp edges will have smooth lines.

Dmabufs. As a brief detour from the new renderers, we worked on dmabuf support and graphics offloading last fall. The new renderers support this and extend it to create dmabufs when asked to produce a texture via the render_texture api (currently, just the Vulkan renderer).
Any sharp edges?
As is often the case, with new capabilities comes the potential for new gotchas. Here are some things to be aware of, as an app developer:
No more glshader nodes. Yes, they made for some fancy demos for 4.0, but they are very much tied to the old GL renderer, since they make assumptions about the GLSL api exposed by that renderer. Therefore, the new renderers don’t support them.
You have been warned in the docs:
If there is a problem, this function returns FALSE and reports an error. You should use this function before relying on the shader for rendering and use a fallback with a simpler shader or without shaders if it fails.
Thankfully, many uses of the glshader node are no longer necessary, since GTK has gained new features since 4.0, such as mask nodes and support for straight-alpha textures.
Fractional positions. The old GL renderer rounds things, so you could get away with handing it fractional positions. The new renderers will place things where you tell them. This can sometimes have unintended consequences, so you should be on the lookout and make sure that your positions are where they should be.
In particular, look out for cairo-style drawing where you place lines at half-pixel positions so they fill out one row of pixels precisely.
Driver problems. The new renderers are using graphics drivers in new and different ways, so there is potential for triggering problems on that side.
Please file problems you see against GTK even if they look like driver issues, since it is useful for us to get an overview how well (or badly) the new code works with the variety of drivers and hardware out there.
But is it faster?
No, the new renderers are not faster (yet).
The old GL renderer is heavily optimized for speed. It also uses much simpler shaders, and does not do the math that is needed for features such as antialiasing. We want to make the new renderers faster eventually, but the new features and correctness make them very exciting, even before we reach that goal. All of the GPU-based renderers are more than fast enough to render today’s GTK apps at 60 or 144 fps.
That being said, the Vulkan renderer comes close to matching and surpassing the old GL renderer in some unscientific benchmarks. The new GL renderer is slower for some reason that we have not tracked down yet.
New defaults
In the just-released 4.13.6 snapshot, we have made the ngl renderer the new default. This is a trial balloon — the renderers need wider testing with different apps to verify that they are ready for production. If significant problems appear, we can revert back to the gl renderer for 4.14.
We decided not to make the Vulkan renderer the default yet, since it is behind the GL renderers in a few application integration aspects: the webkit GTK4 port works with GL, not with Vulkan, and GtkGLArea and GtkMediaStream currently both produce GL textures that the Vulkan renderer can’t directly import. All of these issues will hopefully be addressed in the not-too-distant future, and then we will revisit the default renderer decision.
If you are using GTK on very old hardware, you may be better off with the old GL renderer, since it makes fewer demands on the GPU. You can override the renderer selection using the GSK_RENDERER environment variable:
GSK_RENDERER=gl
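For example, to try a specific renderer for a single run without changing anything globally (gtk4-demo is just a convenient test application):
GSK_RENDERER=ngl gtk4-demo
GSK_RENDERER=vulkan gtk4-demo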
Future plans and possibilities
The new renderers are a good foundation to implement things that we’ve wanted to have for a long time, such as
- Proper color handling (including HDR)
- Path rendering on the GPU
- Possibly including glyph rendering
- Off-the-main-thread rendering
- Performance (on old and less powerful devices)
Some of these will be a focus of our work in the near and medium-term future.
Summary
The new renderers have some exciting features, with more to come.
Please try them out, and let us know what works and what doesn’t work for you.
January 26, 2024
Both the hardware interface of the ISP part of the IPU6 and the image processing algorithms used are considered a trade secret, and so far the only Linux support for the IPU6 relies on an out-of-tree kernel driver with a proprietary userspace stack on top, which is currently available in rpmfusion.
Both Linaro and Red Hat have identified the missing ISP support for various ARM and X86 chips as a problem. Linaro has started a project to add a SoftwareISP component to libcamera to allow these cameras to work without needing proprietary software and Red Hat has joined Linaro in working on this.
FOSDEM talk
Bryan O'Donoghue (Linaro) and I are giving a talk about this at FOSDEM.
Fedora COPR repository
This work is at a point now where it is ready for wider testing. A Fedora COPR repository with a patched kernel and libcamera is now available for users to test, see the COPR page for install and test instructions.
This has been tested on the following devices:
- Lenovo ThinkPad X1 yoga gen 8 (should work on any ThinkPad with ov2740 sensor)
- Dell Latitude 9420 (ov01a1s sensor)
- HP Spectre x360 13.5 (2023 model, hi556 sensor)
Description of the stack
- Kernel driver for the camera sensor. For the ov2740 used on current Lenovo designs (excluding MTL), I have landed all the necessary kernel changes upstream.
- Kernel support for the CSI receiver part of the IPU6. Intel is working on upstreaming this and has recently posted v3 of their patch series, which is under active review.
- A FOSS Software ISP stack inside libcamera to replace the missing IPU6 ISP (processing-system/psys) support. Work on this is under way; I've recently sent out v2 of the patch series for this.
- Firefox pipewire camera support and support for the camera portal to get permission to access the camera. My colleague Jan Grulich has been working on this, see Jan's blogpost. Jan's work has landed in the just released Firefox 122.
GNOME is participating in the December 2023 – February 2024 round of Outreachy. As part of this project, our interns Dorothy Kabarozi and Tanju Achaleke have extended our end-to-end tests to cover some of GNOME’s accessibility features.
End-to-end testing, also known as UI testing, involves simulating user interactions with GNOME’s UI. In this case we’re using a virtual machine which runs GNOME OS, so the tests run on the latest, in-development version of GNOME built from the gnome-build-meta integration repo. The tests send keyboard & mouse events to trigger events in the VM, and use fuzzy screenshot comparisons to assert correct behavior. We use a tool called openQA to develop and run the tests.
Some features are easier to test than others. So far we’ve added tests for the following accessibility features:
- High contrast theme
- Large text theme
- Always-visible scrollbars
- Audio over-amplification (boost volume above 100%)
- Visual alerts (flash screen when the error ‘bell’ sound plays)
- Text-to-speech using Speech Dispatcher
- Magnifier (zoom)
- On-screen keyboard
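Several of these features are backed by GSettings keys, which also makes them convenient to toggle from a script when setting up a test. A rough sketch (the schemas and key names here are from memory and may differ between GNOME versions, so treat them as illustrative):
gsettings set org.gnome.desktop.a11y.interface high-contrast true
gsettings set org.gnome.desktop.interface text-scaling-factor 1.25
gsettings set org.gnome.desktop.interface overlay-scrolling false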
In this screenshot you can see some of the tests:

Here’s a link to the actual test run from the screenshot: https://openqa.gnome.org/tests/3058
These tests run every time the gnome-build-meta integration repo is updated, so we can very quickly detect if a code change in the ‘main’ branch of any GNOME module has unintentionally caused a regression in some accessibility feature.
GNOME’s accessibility features are seeing some design and implementation improvements at the moment, thanks to several volunteer contributors, investments from the Sovereign Tech Fund and Igalia, and more. As improvements land, the tests will need updating too. Screenshots can be updated using openQA’s web UI, available at https://openqa.gnome.org; there are instructions available. The tests themselves live in openqa-tests.git and are simple Perl programs using openQA’s testapi, as sketched below. Of course, merge requests to extend and improve the tests are very welcome.
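To give a flavor of what such a test module looks like, here is a minimal sketch (the base class and needle tags are illustrative, not copied from openqa-tests.git):

use base 'basetest';
use strict;
use testapi;

sub run {
    # Toggle an accessibility feature, then compare the screen against
    # a reference screenshot ("needle") using fuzzy matching.
    assert_and_click 'a11y-settings-panel';       # hypothetical needle tag
    assert_and_click 'a11y-high-contrast-toggle'; # hypothetical needle tag
    assert_screen 'desktop-high-contrast';        # passes when the needle matches
}

1;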
One important omission from the testsuite today is Orca, the GNOME screen reader. Tanju spent a long time trying to get this to work, and we do have a test that verifies text-to-speech using Speech Dispatcher. Orca itself is more complicated and we’ll need to spend more time to figure out how best to set up end-to-end tests for screen reading.
If you have feedback on the tests, we’d love to hear from you over on the GNOME Discourse forum.
And if you did not know already, that Core app is Evince, now renamed to Papers and submitted to Incubation this week. But if you’re still interested after the spoiler, let’s start from the beginning.
From some hacky patches to 80+ Merge Requests
How it all started
So for anybody who does not know me, I have been until now foremost a postmarketOS maintainer. GNOME is my platform of choice, and we often encounter bugs downstream. However, postmarketOS’s policy is upstream first, so I ended up doing a lot of things in GNOME. Most of those were small patches for various projects, and many of the issues we had with sizing and phone usage have been polished over the years with help from people all across the stack. A big chunk of those was fixed through GTK4 ports and implementing new libadwaita patterns and widgets. However, Evince seemed to have been stuck on porting to GTK4, and we were still carrying some ugly patches. So I tried to do my best, and more than a year and a half ago pushed my first MR. What followed was a roller-coaster of emotions.
How it all continued
Sending a first patch to a project is something I have learned to really enjoy. It has that mix of excitement for the unknown, looking forward to that “this is great, thanks”, and the knowledge that I will certainly learn something in the process. But since one patch would not get a massive GTK4 MR solved, I did not stop there, and continued pushing MRs. After some time, I learned some things:
- Germán was the only active maintainer. So sometimes things would move fast and he would merge a bunch of my MRs all at once, and sometimes they would sit there without a comment for weeks. Time to answer questions and get me on-boarded into decades-old Evince code was certainly sparse. The few times he was able to, I learned a bunch, but it was still too little for the time I had available.
- Evince code is old, for the good and for the bad. For the good, it concentrates the wisdom of years of development creating a document viewer. The authors had taken and tested many design decisions, and overall created a very solid application. For the bad, there was code that probably already existed in pre-GLib times, and many things that are now straightforward to do had been re-implemented in a custom way. I am still very proud of having removed some non-functional code that, were it alive, would have the right to vote in most countries.
- Evince is not just an application, but also two libraries, libdocument and libview, which must remain stable. So many of the cleanups I would have wished to do would have had to wait.
All these things created a dynamic where: I would have lots of time and motivation; hack in Evince to do some cleanups and forward-port stuff from the GTK4 MR; feel super happy about it and send some MRs; wait for some period that felt long to get any feedback; get totally demotivated about it; sometimes wait some more weeks until getting feedback, context-switching to something else; then get some feedback, but feel bad about it; finally get some motivation back, address the feedback, and back to the beginning. Of course, this wasn’t sustainable.
Getting help and reaching for help
In the middle of this process, FOSDEM 2023 happened. And it was a wonderful experience. I got to meet all these postmarketOS people I had been working with for more than a year, as well as many GNOME people whom I knew, but who were certainly less aware of me. Despite that, everybody was super welcoming and encouraging. Especially Tobias pushed me a bit and got me to apply for a grant (which I did not get) to improve Evince. In that process, we tried to approach Germán to talk and see how I could help with Evince from a more “maintainership” role. Unfortunately, that did not work out. But that did not solve the adaptability problems for Evince on mobile, so I had to continue trying.
Throughout this process, Qiu Wenbo, the author of Evince’s GTK4 port, had continued to tirelessly rebase the GTK4 port onto whatever changes we made. And we started to interact, as I moved from back-porting things to reviewing their MR more in-depth. But we still could not get where we wanted to be at full speed with Germán’s limited time availability. So I reached out to the Release Team at GUADEC, and then again some months later, and managed to schedule a very nice call with Germán. We talked about Evince and how I could help, and he created the evince-next branch. There it was (and still is) allowed to break the libraries’ API, remove deprecated functions and be less conservative. Of course, Qiu Wenbo and I sent a lot of MRs, and I tried to get myself entitled to merge stuff, but I was certainly not the main maintainer, and most things still needed Germán’s review. At some point, things to back-port ran out. Any work I would like to do would require some hard and careful rebase of the GTK4 branch, and Qiu Wenbo had already context-switched to improvements after the port. It was clear, unfortunately, that the amount of human-power we were able to provide was well above Germán’s availability, and therefore the changes we were dreaming of would likely not fit within Evince.
From a casual chat to Incubation in less than two months
Forking was an option that had been suggested to me since FOSDEM, and reiterated multiple times later. At first, I did not feel ready to take over the code-base. And when I did, I was terrified to do so alone. So mid-December, I asked Qiu Wenbo if he would be willing to fork and maintain the fork with me, with the goal of porting to GTK4, implementing new mockups, and modernizing the application. He was extremely enthusiastic about the idea, so we started to work on it, notifying both the Release Team and Germán. Our goal was getting the GTK4 MR in by the (Gregorian) New Year, to then take care of the breakage later. And I have to admit, the results of the fork are already surpassing all my expectations. In less than 2 months, we’ve managed to merge close to 40 MRs, and already got some external contributors (Chris and FineFindus, yay!), with more people having shown interest. We also started the rebranding (still missing a slick icon!), and this week, applied with Papers for Incubation.
So the overall outcome, nowadays, is net positive, although it will all depend a bit on the Incubation phase. At the same time, a fork is most of the time the result of some sort of failure. Probably not just of us individual contributors, but of the community as a whole. It is also a big shame not to have Germán next to us in this process. He holds extremely valuable knowledge about PDFs and documents in general, and certainly kept me from messing up more than once. At the same time, life is life, and we’re all volunteers doing our best. I can say I am sincerely thankful for everything he has taught me, and wish him success in his endeavors. As for us, we hope to see Papers flourish, and encourage others to jump on the train to make a modern, adaptive, touch- and multi-device-friendly document viewer for GNOME.
Happy Hacking!
Update on what happened across the GNOME project in the week from January 19 to January 26.
Sovereign Tech Fund
Sonny says
As part of the GNOME STF (Sovereign Tech Fund) project, a number of community members are working on infrastructure related projects.
Today we celebrate Sophie joining the team to work on Glycin:
- Improved sandboxing for image loaders
- GObject Introspection support to broaden interoperability with the GNOME platform
Accessibility
- Joanie added a system information presenter in Orca
- Joanie finished code clean-up and removal of pyatspi dependency for hypertext and hyperlink interfaces
- Joanie started code clean-up and the creation of AT-SPI2 utilities for Orca’s accessible-text related functionality (issue)
- Joanie made a proposal to facilitate text selection via ATK/AT-SPI2 across multiple objects at once (similar to what IAccessible2 created)
- Joanie made a proposal to have an attributes-changed signal for object attributes
- Joanie began converting Orca’s WebKitGtk support over to the generic web support currently shared by Chromium and Gecko.
- Matt pushed a partial Wayland protocol extension for accessibility consumers (screen readers and the like)
- Matt started implementing the accessibility extension as a proof of concept in Mutter
- Tobias investigated where we still use TreeViews and started an initiative to port to more accessible widgets (e.g. ListView, ColumnView)
- Evan landed “gtk: Add AccessibleList to enable relations in bindings”
- enables languages like GJS and Python to pass lists of Gtk widgets to accessibility relations like LABELLED_BY in GTK4
- started a GJS MR for it to apply a convenience override to automatically wrap JS arrays in Gtk.AccessibleList in the relevant APIs
- Georges is working on WebKitGTK accessibility
- experimented with a potential new GTK4 API to be consumed by WebKitGTK. The experiment was a success and it correctly bridged the web page’s DOM a11y tree with the rest of the program, which allows screen readers and other assistive technologies to read it. I’m currently cleaning up the code and discussing the approach with GTK developers.
- published and improved Aleveny, a tool to inspect the accessible object tree of apps
- Sonny helped with coordination efforts to land High Contrast hint on settings portal
- Hub will work on GNOME backend implementation
Platform
- Hub fixed a bug in flatpak-builder rename-appdata-file
- Julian landed using libadwaita Avatar in gnome-initial-setup
- Julian is working on extending the XDG portal notification API with sounds and images (portal issue)
- Sonny started a proof of concept for a GTK linter
- Stef joined the team and started working on “GFileMonitor does not work with document portal”
- Evan made progress on async/sync annotations support in introspection; there are currently 4 MRs for it
- Evan is investigating the amount of work needed to make WASM support in GJS production ready - currently evaluating the mergeability of multi-threaded promises and import maps
- Philip fixed a number of issues in GLib
- Philip released GLib 2.78.4 and 2.79.1
- Alice finished and landed adaptive dialogs (see her individual update below)
Hardware support
- Dor continued iterating on VRR configuration UX in Settings
- Dor investigated and fixed a number of issues related to VRR
- Alice landed bottom sheets in libadwaita https://gitlab.gnome.org/GNOME/libadwaita/-/merge_requests/1018
- Documentation https://gnome.pages.gitlab.gnome.org/libadwaita/doc/main/class.Dialog.html
- There’s also a migration guide for porting apps to using it https://gnome.pages.gitlab.gnome.org/libadwaita/doc/main/migrating-to-adaptive-dialogs.html
- Jonas (Dreßler) is investigating remaining issues in Jonas (Ådahl)’s fractional scaling branch
Security
- Dhanuka continued his work on implementing secret server/backend in oo7 https://github.com/bilelmoussaoui/oo7/pull/56
- implemented CreateCollection and SearchItems on the org.freedesktop.Secret.Service interface
- implemented Delete on the org.freedesktop.Secret.Item interface
- updated CreateItem on org.freedesktop.Secret.Collection to use oo7::dbus::api::properties::Properties
- We are investigating and coordinating usage of systemd per-user encrypted credentials
GNOME Core Apps and Libraries
Libadwaita
Building blocks for modern GNOME apps using GTK4.
Alice (she/her) says
AdwDialog has landed, along with AdwAlertDialog, AdwPreferencesDialog and AdwAboutDialog. There’s also a migration guide for all of the new widgets. The old widgets aren’t deprecated yet, but will be in GNOME 47.
GTK
Cross-platform widget toolkit for creating graphical user interfaces.
Matthias Clasen says
The GTK 4.13.6 release, out this week, changes the default renderer to the ngl renderer.
The intent of this change is to get wider testing and verify that the new renderers are production-ready. If significant problems show up, it may get reverted for the stable 4.14 release in March.
You can still override the renderer choice using the GSK_RENDERER environment variable.
Since ngl can handle fractional scaling much better than the old gl renderer, fractional scaling is now enabled by default with gl.
If you are using the old gl renderer (e.g. because your system is limited to GLES2), you can disable fractional scaling by setting the GDK_DEBUG environment variable to include the gl-no-fractional key.
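For example, combining both variables for a single run (with gtk4-demo standing in for any GTK4 app):
GSK_RENDERER=gl GDK_DEBUG=gl-no-fractional gtk4-demo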
Maps
Maps gives you quick access to maps all across the world.
mlundblad says
Maps now shows an empty state for the favorites menu, and also allows removing favorites directly from the popover (with an undo toast). James Westman has also improved GeoJSON shape layer rendering, which now shows descriptions for marked places and the layer name in the bubbles.
GNOME Circle Apps and Libraries
Fragments
Easy to use BitTorrent client.
Felix announces
Fragments now allows you to search for added torrents 🔎
Third Party Projects
Bilal Elmoussaoui announces
oo7, a Rust client library for interacting with the system keyring, received two new additions:
- A rewrite of secret-tool, a CLI application to interact with the keyring
- A rewrite of the secret portal
On top of that, Dhanuka Warusadura has been working on a server-side implementation
badcel announces
I published the repository Maus, containing an early-stage Adwaita C# app that allows configuring a Microsoft Intellimouse Pro. Feedback is welcome.
Miscellaneous
Cassidy James reports
Flathub, the app store developed by KDE, GNOME, and independent contributors, has announced over one million active users! This means when you bring your app to Flathub—either independently or as a part of GNOME Circle—you’re reaching a potential audience of over a million Linux users.
Dorothy K reports
As Outreachy interns, for the past couple of weeks Tanjuate and I have been working on implementing end-to-end testing for GNOME with openQA, and our focus in the last few weeks has been a11y tests for GNOME OS. We have written tests for accessibility features, i.e. High Contrast, Large Text, Overlay Scrollbars, Screen Reader, Zoom, Over-Amplification, Visual Alerts and On-Screen Keyboard.
Take a look at some of the tests we have added with the prefix “a11y-” here, and see this post for more context.
That’s all for this week!
See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
Earlier this month we shared new app metadata guidelines in response to the growth and maturity of Flathub. Today we're proud to share a bit more detail about that growth, including a huge milestone, how we calculate stats, and what we believe is driving that growth.
Milestones

To date, Flathub has served a total of 1.6 billion downloads of over 2,400 apps and their updates.
Since introducing verified apps last year, over 850 apps have been verified by their original authors; more than one third of all apps, with that number constantly increasing.
And finally, we're thrilled to announce that Flathub has surpassed one million active users. 🎉️
How We Measure Growth
We have a public dashboard that shares some basic statistics, powered by an open source script. We don't track individual metrics from end user machines; instead, we look at downloads of specific artifacts from the Flathub infrastructure itself. This gives us a count of installs and updates over time, downloads per country (based on the origin of requests), and some derivative data like downloads of apps in different categories.
But how can we measure how many active users we have?
Since Flathub necessarily serves downloads of Flatpak runtimes (common platforms on which apps are built), we can estimate active users fairly well. For example, when installing or updating many apps, Flatpak will automatically install the base FreeDesktop SDK runtime on which the KDE and GNOME runtimes (and most other apps) are built. We can look at the number of downloads of updates to a recent release of this runtime to estimate how many active installs are out there getting Flatpaks from Flathub.
This methodology reveals that over the past few months there have been over a million updates of each of the latest FreeDesktop SDK runtime releases, meaning we've passed the one million active user mark.
If anything, we believe this to be a conservative estimate as some users may be using apps that do not use the specific runtime measured.
If you'd like to explore more of this data yourself, there's this handy dashboard by Kristian Klausen (source), or you can always chat with us in #flathub on Matrix.
What's Driving Growth
We attribute the growth of Flathub to a few developments over the years, and especially the last several months:
- Popular app availability: Since we started Flathub, we've seen apps like Firefox, Google Chrome, Discord, VLC, Spotify, Telegram, Microsoft Edge, Steam, OBS Studio, Zoom, Thunderbird, and many more make their way into the store. These are many of the apps people want and expect, especially if they're coming to a Linux desktop for the first time. Each time another popular app hits Flathub, it makes both Flathub and Linux that much more compelling to users.
- Verified apps: We've heard that some folks have held off installing specific apps or even using Flathub altogether because they didn't want a third-party maintainer redistributing an app they rely on; verifying apps solves this by assuring users their favorite app is actually coming from its developer. With an ever-increasing number of apps choosing to get verified—including both big names and many newer indie apps—the trust and adoption of Flathub increases.
- Steam Deck: It's not just a meme; we suspect Flathub being included as the default app source for the Steam Deck's desktop mode has had a large positive effect on the usage of Flathub. Just look at some of the most popular apps: retro game emulators, game compatibility tools, gaming-oriented chat services, alternate game launchers—all the kinds of apps you'd expect Steam Deck users to want. And in fact, "game" is the single most common category of app on Flathub. It turns out selling "multiple millions" of devices has an impact on the ecosystem!
- Linux distro adoption: In addition to shipping on Steam Deck, Flathub now comes included out of the box on at least Clear Linux, Endless OS, KDE Neon, Linux Mint, Pop!_OS, and Zorin OS—and as of Fedora 38, Flathub is available in its entirety when enabling third-party software sources. Flathub is the preferred app store for Linux, and its grassroots adoption across the Linux desktop ecosystem proves that.
- New apps: Last but definitely not least, we've entered a thrilling chapter of Flathub as an ecosystem. It's not just another source to get popular, big-name apps you'd expect to be able to get anywhere else; Flathub has become the preferred app store for a growing number of indie, open source developers. Developers are increasingly submitting their own new apps and directing their users to Flathub; from all 50+ GNOME Circle apps to apps like Endless Key, Librum, Lightwave Explorer, Live Captions, and Planify, Flathub is the best way to get apps into the hands of Linux users.
And of course a large part of Flathub's continued growth has been due to the incredible work of the Flathub admins, contributors, and volunteers who have helped ensure Flathub remains the trustworthy and reliable service it has always been known to be.
Here's to What's Next!
It's easy to pat ourselves on the back for these milestones (and it's well deserved!), but equally important is to look forward. Over the coming months we're excited to continue tackling the roadmap we laid out last year, including continued work towards payments and the new organizational structure for Flathub.
We're also increasing our engagement with third-party developers and ISVs to share why they should bring their apps to the million-plus Flathub users on Linux, continuing to improve documentation, and working towards curation and improved presentation of apps. There's a lot more coming and always more to do; if you want to get involved, chat with us in #flathub on Matrix.
Thank you for reading, and happy downloading!
January 24, 2024
The Outreachy organizers have approved GNOME to participate in the current round of Outreachy!
The GNOME Foundation is interested in sponsoring 3 internship projects for the May to August cohort.
@mentors If you are interested in mentoring, please discuss project ideas in our Project Ideas repository.
- Feb. 23, 2024 is the deadline for mentors to submit new projects.
@interns Initial applications for the Outreachy May 2024 to Aug 2024 internships are due on Jan 29 at 4pm UTC: https://www.outreachy.org/apply/
The number of self-identified thoughtful-people-in-tech whose social media energy still goes primarily to Twitter doesn’t bode well for, well, anything. I’m genuinely not judgy of people whose thing is Just Being Online, but if part of the self-image you like to project is “oh, yes, I really dislike fascism and care about other people” then… yeah, I’m judging you for still being active on Twitter.
A few tips that have helped me move to the fediverse:
- If you were very dependent on your X following and don’t want to abandon that, that’s understandable. If so, follow the Xlast strategy: yes, continue to post on Twitter when that’s important for work, but do it a bit more slowly and less interactively.
- Don’t treat Fedi like late-period twitter, when you were picky about who you followed and humble about who you interacted with. Treat it like early-period twitter, when we all followed quite a few randos, and said things like “hi” and “thanks for sharing that!” It’s a small town, not the big city.
- Filter aggressively: Mastodon has great filtering capabilities, which makes following a lot of people easier. And also makes my 2020 advice on social media in an election year very relevant.
- If at all possible, find a server that’s specific to your place (like sfba.social or cosocial.ca) or your interests (like hci.social). The server functionality isn’t great (it is telling that the default UX calls it a server and not a community) but it can help you get your footing.
- Use an alternate client. Phanpy.social is genuinely excellent, with the one exception that it doesn’t do RTs. If you need RTs and can’t be bothered to copy-paste, Mona is very nice.
- I’m also on Bluesky but (perhaps because of which of my follows migrated there instead of the Fediverse) it has trended hyper-US-political, which I have not enjoyed.
If you want a better world, you can’t wait for others to build it for you. To borrow a favorite “urgent optimist” phrase, you’re working in the early days of a better nation. Or, if you prefer: you are the one you’ve been waiting for. That’s not necessarily easy, but I at least am having quite a bit of fun on the Fediverse. Come on in, the water is fine!
January 22, 2024
7 years since my last post (2017!), so many exciting things to talk about: the AV1 video codec was released, we at Two Orioles wrote an AV1 encoder (Eve) and decoder (dav1d). Covid happened. I became a US citizen. Happily married for 15 years. My two boys are now teenagers (and behaving like it). So, let’s talk about … taxes?
In 2017, Congress enacted the Tax Cuts and Jobs Act, which (very briefly) reduces business (permanent) and personal (expires after 2025) tax rates by a few percentage points, along with some “magic” to pay for it; some might remember the changes to SALT deductions. In 2023, during tax filing for 2022, more of that magic became clear: the 2017 law included 5-year-deferred changes to section 174, which covers expenses related to “Research & Experimentation” (R&E). The change requires all R&E expenses to be amortized over 5 (domestic) or 15 (foreign) years, and explicitly adds software development as a section-174 expense for tax purposes. This might sound innocent, but it hits small businesses hard.
Imagine a small software development business with $1M in revenue. It might employ 5 software developers ($150k/yr each – all at 50% effective tax rate in NY) plus legal/accounting/bookkeeping ($25k/yr) and office expenses ($25k/yr), leaving $1M - 5x $150k - $25k - $25k = $200k as business profit or taxable income. The profit flows through to the LLC/S-corp owners, who pay 50% tax over that also. Total taxes (federal + state + local) paid: 50% * (5x $150k + $200k) = $475k.
Starting in 2022, with initial mid-year amortization the business can deduct only 10% of its R&E expenses (including salaries and office costs), which means the effective taxable income flowing through to the LLC/S-corp owners is now $1M - 10% * (5x $150k + $25k) - $25k = $897.5k, taxed at 50%, resulting in total taxes increasing to 50% * (5x $150k + $897.5k) = $823.75k. Of course, this money doesn’t actually exist, so the business will go broke unless it lays off some of its employees or the owners take out loans (bonus points for today’s high interest rates). This supposedly slowly resolves itself after 5 years, as long as the business doesn’t grow (i.e. hire new employees) and salaries of existing employees don’t go up: a pretty crazy economic policy to encourage innovation. It’s worse if you hire people outside the US, because now the amortization period is 15 years.
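For anyone who wants to replay the arithmetic, here is a small Python sketch of both scenarios above (the figures are this post's illustrative numbers, and none of this is tax advice):

# Illustrative figures from the example above
revenue  = 1_000_000
salaries = 5 * 150_000   # five developers
office   = 25_000
legal    = 25_000
rate     = 0.50          # effective federal + state + local rate

# Pre-2022: all expenses are deducted in the year they are incurred
profit_old = revenue - salaries - office - legal   # $200,000
taxes_old  = rate * (salaries + profit_old)        # $475,000

# 2022 onward: R&E costs (salaries + office) amortize over 5 years,
# with only 10% deductible in the first, mid-year period
deductible = 0.10 * (salaries + office)
profit_new = revenue - deductible - legal          # $897,500
taxes_new  = rate * (salaries + profit_new)        # $823,750

print(f"before: ${taxes_old:,.0f}  after: ${taxes_new:,.0f}")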
Congress should fix this, and there has been some (late, slow) progress in this area. Unfortunately, the media hasn’t caught on. The NYT blasted the proposed legislation as a “corporate giveaway” that “would benefit profitable companies” who “already pay too little in taxes”. The WSJ called it “at best a modest boost to investment incentives”. Maybe these journalists should read Michele Hansen’s tweet, which lists individual stories from small businesses (like mine) who were hurt by this change in tax law: tax increases by 500%, effective tax rates of 500%, hiring freezes, layoffs, loans, debt – and speaking from personal experience: lots of tears.
Dear Congress: small businesses need you – please fix section 174.
January 21, 2024
Version 0.8.0 of the CapyPDF library has been released. The main new feature is support for form XObjects and printer's mark annotations.
Printer's marks are things like color bars, crop marks and registration marks (also known as "bullseye marks") that high end printers need for quality control. Here is a very simple document.
An experienced print operator would use the black lines to set up paper trimming. Traditionally these marks were drawn in the page's graphics stream. This is problematic because nowadays printers prefer to use their own custom marks instead of ones created by the document author. PDF solves this issue by moving these graphics operations to separate draw contexts (specifically "form XObjects", which are not actually forms, though they are XObjects) that can then be "glued" on top of the page. These annotations are shown in PDF viewer applications but they are not printed. I have no experience with high end RIP software, but presumably the operator can choose to either print the document's annotations or replace them with their custom QA marks.
As usual, to find out what features CapyPDF has and how to use them, look up either the public C header or the Python code used in unit tests.
January 20, 2024
Following my announcement a few days ago, in what has probably been record time, the Ko-Fi goal for the Elgato Stream Deck+ has been reached!
I’ve acquired the device and it’s already on its way, expected to reach me by the end of January ~ early February due to being an international purchase.
Thanks everyone for your overwhelming support! This is really exciting.
One of the more recent design trends in GNOME has been the use of sidebars. It looks great, it’s functional, and it gives separation of content from hierarchy.
Builder, on the other hand, has been stuck a bit closer to the old-hat design of IDEs where the hierarchy falls strictly from the headerbar. This is simply because libpanel was designed before that design trend. Some attempt was made in Builder to make it look somewhat sidebar’ish, but that was the extent of it given available time.
Last week I had a moment of inspiration on a novel way we could solve it without uprooting the applications which use libpanel. You can now insert edge widgets in PanelDockChild which are always visible even when the child is not. Combining that with being able to place a headerbar inside a PanelDockChild along with your PanelFrames means you can get something that looks more familiar in modern GNOME.
If you’d like to improve things further, you know where to find the code.
… and why did it take so long for that to happen?
The year 2023 was a busy one in Toolbx land. We had two big releases, and one of the important things we did was to start offering built-in support for Arch Linux and Ubuntu.
What does that mean?
It means that if you have the latest Toolbx release, then a simple toolbox create on Arch Linux and Ubuntu hosts will create matching Toolbx containers, instead of a fallback Fedora container. If you are using some other host operating system, then you can get the same with:
$ toolbox create --distro arch
$ toolbox create --distro ubuntu --release 22.04
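Once created, you enter a container with the matching flags, since toolbox enter accepts the same --distro and --release options:
$ toolbox enter --distro arch
$ toolbox enter --distro ubuntu --release 22.04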
It also means that we will do our best to treat Arch Linux and Ubuntu on equal terms with Fedora and avoid regressions.
Now, that last sentence has a lot more to it than may seem at first. So, I am going to try to unwrap it to show why I think this is one of the important things we did in 2023. You may find it interesting if you want the set of supported Linux distributions to expand further. You can also skip the details and go straight to the end to get the gist of it.

Why go beyond Fedora?
Even though Toolbx was created for the immediate needs of Fedora Silverblue, it has long since exceeded the bounds of Fedora and OSTree based operating systems. At the same time, Toolbx only had built-in support for Fedora and Red Hat Enterprise Linux. This disconnect led to a lot of people eagerly asking for better support for Linux distributions beyond the Fedora family.
From the outside, it looks like just a matter of putting together a Containerfile that layers on some extra content on top of a base image, and then publishing it on some OCI registry somewhere. However, it’s not that simple.
Why did it take so long?
Toolbx is a glue. Given an existing host operating system and a desired command line environment, it will create and set up an OCI container with the desired CLI environment that has seamless access to the user’s home directory, the Wayland and X11 sockets, networking (including Avahi), removable devices (like USB sticks), systemd journal, SSH agent, D-Bus, ulimits, /dev and the udev database, etc..
The closest analogy I can think of is libosinfo generating scripts to perform automated installations of various operating systems as virtual machines. Except, Toolbx containers aren’t expected to be as cleanly or formally separated from the host as VMs are, and the world of OCI containers is designed for deploying server-side applications and services, not for the persistent interactive use that Toolbx offers. This means that Toolbx has to carefully make each host OS version match the container, and make the container fit for interactive use. For example, libosinfo doesn’t have to worry about making the VM’s command line shell prompt and /etc/resolv.conf work with that of the host’s, nor does it have to convert the cloud image of an OS into a variant suited for a desktop or laptop.
This means that Toolbx has to very carefully set up the container to work with the particular host operating system version in use. For example, there can be subtle differences between running a Fedora 39 container on a Fedora 38 host and vice versa. If there are three different Fedora versions that we care about (such as Fedoras 38, 39 and 40), then that’s nine different combinations of container and host OSes to support. Sometimes, the number of relevant versions goes from three to four, and the number of combinations jumps to sixteen. Now, add Red Hat Enterprise Linux to the mix. Assuming that we only care about the latest point-releases (such as RHELs 8.9 and 9.3), we are now looking at twenty-five to thirty-six combinations. In reality, it’s a lot more, because we care about more than just the latest RHEL point-releases.
You can see where this is going.
With this addition of Arch Linux and Ubuntu, we have added at least seven new versions that we care about. That’s a total of approximately one hundred and fifty combinations!
I can assure you that this isn’t a theoretical concern. Here’s a bug about domain name resolution being completely broken in Toolbx containers for Fedoras 39 and 40 on Red Hat Enterprise Linux 9 hosts.
Tests
So, to promise with a straight face that we will do our best to treat Arch Linux and Ubuntu on equal terms with Fedora and avoid regressions, we need to automate this. There’s no way a test matrix of this size can be tackled manually. Hence, tests.
However, it’s easier said than done, because it’s not just about having the tests. We also need to run them on as many different host operating system versions as possible. Unfortunately, most continuous integration systems out there either only offer containers, which are useless because Toolbx needs to be tested on the host OS, or they offer Ubuntu hosts. One exception is Software Factory, which runs an instance of Zuul CI, and offers Fedora and CentOS Stream hosts.
Currently, we have a test suite with more than three hundred tests that covers a good chunk of all supported OCI containers and images. All these tests are run separately on hosts representing all active Fedora versions, with subsets being run on CentOS Stream 9 and Ubuntu 22.04 hosts. We are working on ensuring that the entire test suite gets run on CentOS Stream 9 and Ubuntu 22.04, and are hoping to introduce CentOS Stream 8 and Ubuntu 24.04 hosts to the mix.
Plus, these aren’t just simple smoke tests. They don’t just create and start the containers, and check for a successful exit code. They comprehensively poke at various attributes of the containers’ runtime environment and error conditions to ensure that things really do work as advertised.
I think this is a pretty impressive set-up. It took time to build it, and it’s still not done, but I think it was worth it.
We have also been busy in Fedora, pushing the quality of the Toolbx stack up a notch, but that’s a topic for another post.
So, if you want to see built-in support for your favourite Linux distribution, then please help us by providing some test runners. Right now we could really use one for Arch Linux to maintain our existing support for it, and one for Debian because we want to include it in the near future.
Maintainers
Finally, since Toolbx is a glue, sometimes we need to drive changes into the Linux distributions that we claim to support. For example, changing the sysctl(8) configuration to make ping(8) work, fixes to the start-up scripts for Bash and Z shell, etc.. This means that we need maintainers who will own the work that’s specific to a particular Linux distribution. As a Fedora contributor, I can take care of Fedora, but I cannot sign up to take care of every single distribution that’s out there.
In that sense, I am delighted that we have a bunch of dedicated folks taking care of the Arch Linux and Ubuntu support. Namely, Morten Linderud for Arch Linux, and Andrej Shadura and Ievgen Popovych for Ubuntu.
Conclusion
Two things need to happen for us to add built-in support for a new Linux distribution.
First, we need test runners that let us run our upstream test suite on the host operating system, and not inside a container. Right now we could really use one for Arch Linux to maintain our existing support for it, and one for Debian because we want to include it in the near future.
Second, we need someone to step up to drive changes into the Linux distribution in question, and own the work that’s specific to it.
I am also going to add this to the Toolbx documentation, so that it’s easy to find in future.
January 18, 2024
Today, a middle-aged note: when you are young, unless you’ve been failed by The System, you enjoy a radiant confidence: everything you say burns with rightness and righteousness, that the world Actually Is This Way, You See, and if you think about it, it Actually Should Be This Other Specific Way. This is how you get the fervent young communists and Scala enthusiasts and ecologists and Ayn Randians. The ideas are so right that you become an evangelist, a prophet, a truth-speaker; a youtuber, perhaps.
Then, with luck, you meet the world: you build, you organize, you invest, you double down. And in that doubling, the ideas waver, tremble, resonate, imperceptibly at first, reinforced in some ways, impeded in others. The world works in specific ways, too, and you don’t really know them in the beginning: not in the bones, anyway. The unknowns become known, enumerate themselves, dragons everywhere; and in the end, what can you say about them? Do you stand in a spot that can see anything at all? Report, observe, yes; analyze, maybe, eventually; prophesize, never. Not any more.
And then, years later, you are still here. The things you see, the things you know, other people don’t: they can’t. They weren’t here. They aren’t here. They hear (and retell) stories, back-stories, back-back-stories, a whole cinematic universe of narrative, and you know that it’s powerful and generative and yet unhinged, essentially unmoored and distinct from reality, right and even righteous in some ways, but wrong in others. This happens in all domains: macroeconomics, programming languages, landscape design, whatever. But you see. You see through stories, their construction and relation to the past, on a meta level, in a way that was not apparent when you were young.
I tell this story (everything is story) as an inexorable progression, a Hegelian triptych of thesis-antithesis-synthesis; a conceit. But there are structures that can get you to synthesis more efficiently. PhD programs try: they break you down to allow you to build. They do it too quickly, perhaps; you probably have to do it again in your next phase, academia or industry, though I imagine it’s easier the second time around. Some corporate hierarchies also manage to do this, in which, when you become Staff Engineer, you become the prophet.
Of course, synthesis is not inexorable; you can stop turning the crank anywhere. Perhaps you never move from ideal to real. Perhaps, unmoored, you drift, painter rippling the waters. But what do you do when the crank comes around? Where to next?
Anyway, all this is to say that I have lately been backing away from bashfulness in a professional context: there are some perspectives that I see that can’t be seen or expressed by others. It feels very strange to write it, but I am even trying to avoid self-deprecation and hedging; true, I might not possess the authoritative truth on, I don’t know, WebAssembly, or Scheme language development, but nobody else does either, and I might as well just say what I think as if it’s true.
* * *
Getting old is not so bad. You say very cheesy things, you feel cheesy, but it is a kind of new youth too, reclaiming a birthday-right of being earnest. I am doubling down on Dad energy. (Yes, there is a similar kind of known-tenuous confidence necessary to raise kids. I probably would have been forced into this position earlier if I had had kids younger. But, I don’t mean to take the metaphor fa(r)ther; responsible community care for the young is by far not the sole province of the family man.)
So, for the near future, I embrace the cheese. And then, where to? I suspect excessive smarm. But if I manage to succeed in avoiding that, I look forward to writing about ignorance in another 5 years. Until then, happy hacking to all, and thank you for your forbearance!
Dear friends, comrades, partners in crime and moderate profit! I am pleased to announce the immediate availability of Chafa 1.14.0. There are release notes, but who's got time for that? All the fun stuff is in this post. You should be reading it instead.
Pixel perfection
Images can now be padded instead of stretched to fit their cell extent exactly –

– so I'll no longer have to explain why our 50kLOC monstrosity couldn't do pixel-perfect output, while sixcat (300LOC) did so effortlessly. Naturally, I had to make this as hard as possible for myself by splicing in padding before and after each row as they're processed for channel reordering and such, but on the plus side, this maximizes cache friendliness and parallelism behind a nice and homogeneous internal API.
You can get the old behavior back with --exact-size off. It defaults to auto, which will do the right thing (i.e. pad if image fits in viewport without scaling and scaling wasn't explicitly requested).
Another improvement in this vein is that sRGB is properly linearized in scaling operations now. This is pipelined along with everything else, and should be suitably fast.
Multiplexer passthrough
Previously, it was impossible to do better than character art inside multiplexers like tmux and GNU Screen. This is no longer the case; kitty has a new trick that allows for persistent raster image placement in these, which we implement. The above mentioned multiplexers have slight differences in their passthrough protocols; we support both with the new --passthrough argument.
We also support sixel passthrough. Sixels will be wrecked by multiplexer updates, so this is off by default. You can enable it with e.g. -f sixel --passthrough screen. tmux tends to dispose of the image immediately after we exit, so it may be a good idea to use --passthrough tmux -d 5 there so you get a chance to look at it first.
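Putting it together, a full invocation inside tmux might look like this (with hello.png standing in for any image):
$ chafa -f sixel --passthrough tmux -d 5 hello.png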
To my knowledge, Chafa is the only terminal graphics toolkit to offer passthrough for all four combinations of sixel/kitty and tmux/screen. I think the iTerm2 protocol would be doable too, if only.
MS Windows compatibility
It runs pretty well on Windows. You can use it in PowerShell. @oshaboy added support for ConHost, so it can technically be used on very old Windows versions – although this hasn't gotten much testing yet.
Python bindings

Erica Ferrua Edwardsdóttir's amazing Python bindings have been around for a while now, yet nary a peep from me on my blag. This shameful deficit stands in contrast to the stunning professionalism of her work. You need to do this:
pip install chafa.py
Do it now. Everything's well structured, the documentation is entertaining and well written, and there's even a tutorial. It's rare to see a project that simultaneously delivers and channels the spirit of F/OSS this well. You'll also find it on GitHub.
JavaScript enablement
Héctor Molinero Fernández started doing WebAssembly builds, which means you can now use the Chafa API from JavaScript. Like so:
npm install chafa-wasm
He also made a cool web app to show off all the bells and whistles!
Cheerleading aside, I've had no hand in either of these projects. The glory belongs to their respective maintainers, along with all of the praise, stars, code submissions and issue reports, heh heh.
Art!
@clort81 sent me this picture:

What's special about it? Well, it's character art!

You have to zoom in a bit before it becomes obvious; there're only two colors per character cell.
The glyphs are something else, though. You see, @clort81 wasn't happy with the limited offering of block drawing symbols in Unicode, and set out to create Blapinus, a 6125-glyph (!) hand-pixeled 6×12 PCF font with all the shapes you could ever want.

Plug that into your terminal (and Chafa), et voilà:
sudo cp blapinus.pcf.gz /usr/share/fonts/misc/
xterm -font -blap-*
Note that your fonts may live somewhere else, so relocate accordingly. Then inside XTerm, run:
chafa -f symbols --glyph-file blapinus.pcf.gz --symbols imported hello.png
If you see strange symbols in your output, you can try excluding some wide and ambiguous code points, e.g. --symbols imported-wide-ued00..uffff.
You can probably set this up in other terminals and display servers too, but know that it could be a long and winding path you're going down. Traveler beware. Or push the pedal to the metal and write a trip report. I'd love to read it.
If your terminal renders the font correctly, this can even have somewhat practical qualities:
First, it integrates perfectly with tmux and GNU Screen. Redraw, scrollback and editing just works, no passthrough tricks required.
Second, albeit lossy, the compression ratio is surprisingly good. Assuming four bytes per pixel, a 6×12 cell is 288 bytes uncompressed. We turn this into 39 bytes at most (a maximum of 36 bytes for the direct color sequence plus 3 bytes for the UTF-8 character), or an 86% reduction. Not bad, considering the compression dictionary is a 100kB font file.
Deflate will further compress this kind of data by 3/4, so if you're running this in a compressed ssh session, you can expect the total gain to be about 95%.
Prior art!
I've mentioned Mo Zhou's work before, but I'd be remiss not to bring it up here; they focused their considerable ML skills on a generator that takes a lot of the pain out of the font creation process. Just point it at an image collection, and by the magic of k-means clustering and a minimal increase in your carbon footprint, out pops a new font brimming with delectable puzzle pieces. You get scalable TTF, with SVG as an intermediate format, which is more agreeable with modern rendering stacks. Here it is in VTE:

This ML-generated font makes for a more organic look. There are unfortunately still some artifacts caused – presumably – by VTE's cell padding. The generator has offset hacks to work around it, but it's hard to make custom connective glyphs look perfect in every terminal.
You can read more about it in one of our longest-running GitHub issues. We're taking it all the way.
Cool applications
Chafa's found its way into the nooks and crannies of many a sweet application by now. I'd especially like to mention these three here:

ANSIWAVE BBS, the brainchild of Zach Oakes, is written in Nim and contains an embedded build of Chafa for character graphics generation. As a one-time (and sometimes) BBS sysop and denizen, this hits me right in the feels.
Felix is a nice file browser by Kyohei Uto written in Rust. It uses Chafa as an external image previewer.
kew, a terminal music player by the mysterious @ravachol, is written in C and uses the native Chafa API to generate cover previews. Development on this has been moving very quickly.
A laid-back place to chat about all this stuff
Issue trackers are formal and supposedly high-SNR. If you'd like a more relaxed place to chat about Chafa, your own programs, terminal graphics (modern or ancient), graphics programming in general or artistic expression related to any of these, drop by our secret business Matrix channel, #chafa:matrix.org. We'll be waiting.

I'm also enjoying Mastodon these days. Occasional announcements and amusements go there. It's good.
Thanks
Last, but not least: a big thank you to everyone who wrote code, reported bugs, filed suggestions and helped test this time around. And an extra big thanks to all the packagers and distributors who make the F/OSS world turn. You're the best.
January 16, 2024
Hello there! Welcome back!
As I reach the midpoint of my internship, I’m thrilled to share my journey into the intricate world of end-to-end testing for GNOME OS using the powerful tool openQA. From initial bafflement to mastering complex tests, it’s been a roller coaster of learning and discovery. Here are a few things I have learned so far on this journey as an Outreachy intern!
Key Takeaways
- End-to-End Testing Mastery: I’ve gained an in-depth understanding of end-to-end testing, a critical component in ensuring a smooth user experience in software development.
- Problem-Solving Skills: Tackling complex technical issues head-on has honed my problem-solving skills, a vital asset in any tech career.
- Continuous Learning: The tech field is ever-evolving, and this experience has instilled in me the importance of continual learning and adaptation.
- Learning to collaborate and communicate effectively: I cannot stress this enough! Do not be shy about this, as silence eats you up, or even worse leaves you with the uncomfortable feeling of being stuck for a long time without finishing the task.
- Learning Git: There are some Git commands that I had seen but not used, and while working on this project I had to master them. I still can’t believe I can git rebase faster and, excitingly, without breaking things! Don’t forget I can git rebase -i HEAD~3 and squash multiple commits into one, and even when the nano editor played me a bit, I did crack it in the end. Lastly, I learned how to cherry-pick commits and put them on another branch, and I am still learning more. My mentors shared some resources that helped me along the way; thanks to Sonny Piers and Sam Thursfield, who were very quick to help. Check out one of the resources I looked at here: Handbook. Some others are PDFs I cannot share here.
As I continue this exciting journey, I’m reminded that mastering technology is as much about understanding its complexities as it is about embracing the learning process. To those embarking on a similar path, remember: every challenge is an opportunity to grow. Dive in, stay curious, and enjoy the ride in the world of tech testing!
Happy testing, and keep exploring! 

Happy new year everyone! For better or worse we’re now well into the 2020s.
Here are some things I’ve been doing recently.
A post on the GNOME openQA tests: Looking back to 2023 and forwards to 2024. I won’t repeat the post here, go read it and see what I hope to see in 2024 with regards to QA testing.
I’m still mentoring Dorothy and Tanjuate to work on the tests, and we’re about at the halfway point of the Outreachy internship. Our initial goal was to test GNOME’s accessibility features and this work is nearly done; hopefully we’ll be showing it off this week. Here’s a sneak preview of the new gnome_accessibility testsuite…

These features are sometimes forgotten (at least by me) and I hope this testsuite can serve as a good demo of what accessibility features GNOME has, in addition to guarding against regressions.
It’s hard work to remotely mentor 2 interns on top of 35hrs/wk customer work and various internal work duties, so it’s great that Codethink are covering my time as a mentor, and it also helps that it’s winter and the outside world is hidden behind clouds most of the time. If only it was a face-to-face internship and I could hang out in Uganda or Cameroon for a while. Tanju and Dorothy are great to work with, if you’re looking to hire QA engineers then I can give a good reference for both.
I expect my open source side project for 2024 will still be the GNOME openQA tests, it ties in with a lot of activity around GNOME OS and it feels like we are pushing through useful workflow improvements for all of GNOME. Not that I’ve lost interest in other important stuff like desktop search, music recommendations and playing OpenTTD, but there is only so much you can do with a couple of hours a week.
What else? As usual I’ve been reviewing various speedups and SPARQL improvements from Carlos Garnacho in the Tracker search engine. I’m making plans to go to FOSDEM, so hopefully will see you there (especially if you want to talk about openQA). And I listed out my top 5 albums of 2023 which you should go listen to before doing anything else.
January 15, 2024
Following up on my previous blog post, due to popular demand, I started a small fundraiser campaign to acquire an Elgato Stream Deck Plus.
The goal is US$ 500, which should cover the costs of acquiring the device, and should pay for a few hours of Boatswain development. Naturally, I’ll also document the USB format that these devices use, so that other developers out there can implement support in their own apps.
This particular model will be more complicated than other Stream Deck devices because it features a touch screen, and 4 activatable knobs. It will require a rather substantial rework of Boatswain so that it can support separate button grids, different button types, finger detection, and more.

I did reach out to Elgato, but they don’t seem interested in giving away one of their devices, or even a devboard, for me to hack on.
Whether or not this goal is reached, I’d like to thank all of the people that supported me on Ko-Fi and GitHub so far. Your support is truly humbling and it allowed me to write Boatswain in the first place, and much more!
Edit: since writing this article, I was made aware that there is one other project that has already reverse engineered parts of this device. I’ve adjusted the description to mention “Boatswain development” instead of “reverse engineering”.
As of version 39 of Fedora Silverblue, all the basic code is merged to support a composefs-based root filesystem.
To try it, do:
- Update to the latest version (I tested 39.20240115.0)
- Configure ostree to create and use composefs images:
  $ sudo ostree config set ex-integrity.composefs yes
- Trigger a manual (re)deploy of the current version:
  $ sudo ostree admin deploy fedora/39/x86_64/silverblue
- Reboot into the new deploy
- If using the ext4 filesystem for the rootfs (not needed for btrfs), enable the “verity” feature on it:
  $ sudo tune2fs -O verity /dev/vda3  # Change to the right root disk
- Enable fs-verity on all pre-existing ostree repo files:
  $ sudo ostree admin post-copy
At this point, the rootfs should be a composefs mount. You can verify it by looking at the mount, which should look like this:
$ findmnt /
TARGET SOURCE  FSTYPE  OPTIONS
/      overlay overlay ro,relatime,seclabel,lowerdir=/run/ostree/.private/cfsroot-lower::/sysroot/ostree/repo/objects,redirect_dir=on,metacopy=on
So, what does this mean?
First of all, it means the rootfs is truly read-only:
# touch /usr/new_file
touch: cannot touch '/usr/new_file': Read-only file system
The above error message also happens with regular ostree, but in that case it is only a read-only mount flag, and a root user can re-mount it read-write to modify it (or modify the backing directories in /ostree). However, when using composefs, the root filesystem is a combination of an erofs mount (from /ostree/deploy/fedora/deploy/*/.ostree.cfs) and an overlayfs with no writable directories, and neither of these has any ability to write to disk.
In addition, the system is set up to validate all file accesses, as the composefs image has recorded the expected fs-verity checksums for all files and overlayfs can validate them on use.
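If you want to poke at this yourself, the fsverity tool from fsverity-utils prints the digest that the kernel enforces for a backing file; the object path below is illustrative, pick any regular file from the repo:

$ sudo fsverity measure /sysroot/ostree/repo/objects/aa/bbcc...file  # prints the file’s fs-verity sha256 digest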
To fully complete the validation, Silverblue will just need a few additions (which I hope will be done soon):
- Each build should generate a one-use signature keypair
- The ostree commit should be signed with the private key
- Add public key as /etc/ostree/initramfs-root-binding.key
- Add /usr/lib/ostree/prepare-root.conf with this content:
[composefs]
enabled=yes
signed=yes
These files will be copied into the initrd, and during boot the public key will be used to validate the composefs image, which in turn guarantees that all file accesses return the correct, unchanged data.
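As a rough sketch of what the keypair step could look like, based on ostree’s existing ed25519 signing support (ostree expects the raw 32-byte keys, base64-encoded; file names are illustrative):

$ openssl genpkey -algorithm ed25519 -outform PEM -out ed25519.pem
# Extract the raw 32-byte public key, base64-encoded, destined for /etc/ostree/initramfs-root-binding.key
$ openssl pkey -in ed25519.pem -pubout -outform DER | tail -c 32 | base64 > initramfs-root-binding.key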
To further improve security, the initramfs and the kernel can be combined into a Unified Kernel Image and signed. Then SecureBoot can guarantee that your system will not boot any other initramfs, and thus no other userspace.
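As an illustration, recent systemd ships ukify, which can assemble and sign such an image in one go; the paths and key names here are placeholders:

$ ukify build \
    --linux=/boot/vmlinuz \
    --initrd=/boot/initramfs.img \
    --secureboot-private-key=db.key \
    --secureboot-certificate=db.crt \
    --output=linux.efi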
January 12, 2024
For the past few years I have been managing GNOME’s participation in the Google Summer of Code and Outreachy internship programs. As an alumnus of these programs myself more than a decade ago, I believe they are a fundamental tool for onboarding new contributors and giving them the opportunity to learn and join a thriving open source community. While I enjoy part of this management role, I am still a developer, and some of the internship activities are really energy- and time-consuming. So I have been looking for ways to improve that.
During my term as a Board member, the Board established the concept of committees, to extend the Board’s responsibilities and to solidify the Board’s position of governance and oversight. See https://wiki.gnome.org/Foundation/Committees
Before my Board term ended, I proposed the creation of yet another committee: the Internship Committee. My goal was to increase the visibility of our internship efforts within the Board so that committee members have the resources and support they need to coordinate the programs. See https://gitlab.gnome.org/Teams/Board/-/issues/239
Now the Board has voted to approve the creation of the committee! This means that the Board will always have a liaison member dedicated to facilitating communication between the Board and the internship administrators. It also means that the Internship Committee now has more formal responsibilities, such as the ones defined in the committee charter. The committee already has multiple community members and is working towards improving our processes.
Another step I wanted to take was to produce documentation for the internship administration processes, so that we eliminate the bus factor and also have an easy time onboarding new admins.
I just pushed the initial version of the Internship Admin guide, which also contains my personal collection of templates for communication with interns, mentors, program organizations, etc. This allows community members to improve the processes themselves, all in one place. A lot of the templates I wrote need updating and rewording (contributions are welcome).
And, while we are at it, don’t forget that we are gathering ideas for GSoC and Outreachy internships for 2024. Visit https://gitlab.gnome.org/Teams/Engagement/internship-project-ideas to learn more.
January 11, 2024
This post is in part a response to an aspect of Nate’s post “Does Wayland really break everything?“, but also my reflection on discussing Wayland protocol additions, a unique pleasure that I have been involved with for the past months1.
Some facts
Before I start I want to make a few things clear: The Linux desktop will be moving to Wayland2 – this is a fact at this point (and has been for a while), sticking to X11 makes no sense for future projects. From reading Wayland protocols and working with it at a much lower level than I ever wanted to, it is also very clear to me that Wayland is an exceptionally well-designed core protocol, and so are the additional extension protocols (xdg-shell & Co.). The modularity of Wayland is great, it gives it incredible flexibility and will for sure turn out to be good for the long-term viability of this project (and also provides a path to correct protocol issues in future, if one is found). In other words: Wayland is an amazing foundation to build on, and a lot of its design decisions make a lot of sense!
The shift towards people seeing “Linux” more as an application developer platform, and taking PipeWire and XDG Portals into account when designing for Wayland is also an amazing development and I love to see this – this holistic approach is something I always wanted!
Furthermore, I think Wayland removes a lot of functionality that shouldn’t exist in a modern compositor – and that’s a good thing too! Some of X11’s features and design decisions had clear drawbacks that we shouldn’t replicate. I highly recommend reading Nate’s blog post, it’s very good and goes into more detail. And due to all of this, I firmly believe that any advancement in the Wayland space must come from within the project.
But!
Of course there was a “but” coming – I think while developing Wayland-as-an-ecosystem we are now entrenched in narrow concepts of how a desktop should work. While discussing Wayland protocol additions, a lot of concepts clash, and people from different desktops with different design philosophies debate the merits of those over and over again, never reaching any conclusion (just as you will never get an answer out of humans on whether sushi or pizza is the clearly superior food, or whether CSD or SSD is better). Some people want to use Wayland as a vehicle to force applications to submit to their desktop’s design philosophies, others prefer the smallest and leanest protocol possible, and other developers want the most elegant behavior possible. To be clear, I think those are all very valid approaches.
But this also creates problems: By switching to Wayland compositors, we are already forcing a lot of porting work onto toolkit developers and application developers. This is annoying, but it is just work that has to be done. It becomes frustrating, though, if Wayland provides toolkits with absolutely no way to reach their goal in any reasonable way. For Nate’s Photoshop analogy: Of course Linux does not break Photoshop, it is Adobe’s responsibility to port it. But what if Linux were missing a crucial syscall that Photoshop needed for proper functionality, and Adobe couldn’t port it without that? In that case it becomes much less clear who is to blame for Photoshop not being available.
A lot of Wayland protocol work is focused on the environment and design, while applications and the work required to port them often get less consideration. I think this happens because the overlap between application developers and developers of the desktop environments is not necessarily large, and the overlap with people willing to engage with Wayland upstream is even smaller. The combination of Windows developers porting apps to Linux and having involvement with toolkits or Wayland is pretty much nonexistent. So they have less of a voice.
A quick detour through the neuroscience research lab
I have been involved with Freedesktop, GNOME and KDE for an incredibly long time now (more than a decade), but my actual job (besides consulting for Purism) is that of a PhD candidate in a neuroscience research lab (working on the morphology of biological neurons and its relation to behavior). I am mostly involved with three research groups in our institute, which is about 35 people. Most of us do all our data analysis on powerful servers which we connect to using RDP (with KDE Plasma as desktop). Since I joined, I have been pushing the envelope a bit to extend Linux usage to data acquisition and regular clients, and to have our data acquisition hardware interface well with it. Linux brings some unique advantages for use in research, besides the obvious one of having every step of your data management platform introspectable with no black boxes left, a goal I value very highly in research (but this would be its own blogpost).
In terms of operating system usage though, most systems are still Windows-based. Windows is what companies develop for, and what people use by default and are familiar with. The choice of operating system is very strongly driven by application availability, and WSL being really good makes this somewhat worse, as it removes the need for people to switch to a real Linux system entirely if there is the occasional software requiring it. Yet, we have a lot more Linux users than before, and use it in many places where it makes sense. I also developed novel data acquisition software that runs only on Linux and uses the abilities of the platform to its fullest extent. All of this resulted in me asking existing software and hardware vendors for Linux support a lot more often. The vendor-customer relationship in science is usually pretty good, and vendors do usually want to help out. Same for open source projects, especially if you offer to do Linux porting work for them… But overall, ease of use and the availability and usability of the required applications rule supreme. Most people are not technically knowledgeable and just want to get their research done in the best way possible, getting the best results with the least amount of friction.
(Image: Adlershof Technology Park)
Back to the point
The point of that story is this: GNOME, KDE, RHEL, Debian or Ubuntu – none of them matter if the necessary applications are not available for them. And as soon as they are, the easiest-to-use solution wins. There are many facets of “easiest”: In many cases this is RHEL, due to Red Hat support contracts being available; in many other cases it is Ubuntu, due to its mindshare and ease of use. KDE Plasma is also frequently seen, as it is perceived as a bit easier to onboard Windows users with (among other benefits). Ultimately, though, it comes down to applications and 3rd-party support.
Here’s a dirty secret: In many cases, porting an application to Linux is not that difficult. The thing that companies (and FLOSS projects too!) struggle with, and will carefully weigh in advance, is whether it is worth the support cost as well as continuous QA/testing. Their staff will have to do all of that work, and they could spend that time on other tasks after all.
So if they learn that “porting to Linux” not only means added testing and support, but also means choosing between the legacy X11 display server that allows for 1:1 porting from Windows or the “new” Wayland compositors that do not support the same features they need, they will quickly consider it not worth the effort at all. I have seen this happen.
Of course many apps use a cross-platform toolkit like Qt, which greatly simplifies porting. But this just moves the issue one layer down, as now the toolkit needs to abstract Windows, macOS and Wayland. And Wayland does not contain features to do certain things or does them very differently from e.g. Windows, so toolkits have no way to actually implement the existing functionality in a way that works on all platforms. So in Qt’s documentation you will often find texts like “works everywhere except for on Wayland compositors or mobile”4.
Many missing bits or altered behavior are just papercuts, but those add up. And if users will have a worse experience, this will translate to more support work, or people not wanting to use the software on the respective platform.
What’s missing?
Window positioning
SDI applications with multiple windows are very popular in the scientific world. For data acquisition (for example with microscopes) we often have one monitor with control elements and one larger one with the recorded image. There are also other configurations where multiple signal modalities are acquired, and the experimenter aligns windows exactly the way they want and expects the layout to be stored and loaded upon reopening the application. Even in the image from Adlershof Technology Park above you can see this style of UI design, at mega-scale. Being able to pop out elements as windows from a single-window application to move them around freely is another frequently used paradigm, and immensely useful with these complex apps.
It is important to note that this is not a legacy design, but in many cases an intentional choice – these kinds of apps work incredibly well on larger screens or many screens and are very flexible (you can have any window configuration you want, and switch between them using the (usually) great window management abilities of your desktop).
Of course, these apps will work terribly on tablets and small form factors, but that is not the purpose they were designed for and nobody would use them that way.
I assumed for sure these features would be implemented at some point, but when it became clear that that would not happen, I created the ext-placement protocol, which had some good discussion but was ultimately rejected from the xdg namespace. I then tried another solution based on feedback, which turned out not to work for most apps, and have now proposed xdg-placement (v2) in an attempt to maybe still get some protocol done that we can agree on, exploring more options before pushing the existing protocol for inclusion into the ext Wayland protocol namespace. Meanwhile, though, we cannot port any application that needs this feature, while at the same time we are switching desktops and distributions to Wayland by default.
Window position restoration
Similarly, a protocol to save & restore window positions was already proposed in 2018, 6 years ago now, but it has still not been agreed upon, and may not even help multiwindow apps in its current form. The absence of this protocol means that applications cannot restore their former window positions, and the user has to move them to their previous place again and again.
Meanwhile, toolkits cannot adopt these protocols, so applications cannot use them and cannot be ported to Wayland without introducing papercuts.
Window icons
Similarly, individual windows cannot set their own icons, and not-installed applications cannot have an icon at all, because there is no desktop-entry file to load the icon from and no icon in the theme for them. You would think this is a niche issue, but for applications that create many windows, providing icons for them so the user can find them is fairly important. Of course it’s not the end of the world if every window has the same icon, but it’s one of those papercuts that make the software slightly less user-friendly. Even applications with fewer windows, like LibrePCB, are affected – so much so that they’d rather run their app through Xwayland for now.
I decided to address this after I was working on data analysis of image data in a Python virtualenv, where my code and the Python libraries used created lots of windows, all with the default yellow “W” icon, making it impossible to distinguish them at a glance. This is xdg-toplevel-icon now, but of course it is an uphill battle where the very premise of needing this is questioned. So applications cannot use it yet.
Limited window abilities requiring specialized protocols
Firefox has a picture-in-picture feature, allowing it to pop out media from a media player as a separate floating window, so the user can watch the media while doing other things. On X11 this is easily realized, but on Wayland the restrictions posed on windows necessitate a different solution. The xdg-pip protocol was proposed for this specialized use case, but it is not merged yet either. So this feature does not work as well on Wayland.
Automated GUI testing / accessibility / automation
Automation of GUI tasks is a powerful feature, as is the ability to auto-test GUIs. This is being worked on, with libei and wlheadless-run (and stuff like ydotool exists too), but we’re not fully there yet.
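To give a flavor of the current state: ydotool injects input at the kernel uinput level, so it works regardless of compositor, but it needs its daemon running with sufficient privileges. A minimal sketch:

$ sudo ydotoold &                              # the daemon that owns the virtual input device
$ ydotool type 'text typed by an automated UI test'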
Wayland is frustrating for (some) application authors
As you can see, there are valid applications and valid use cases that cannot yet be ported to Wayland with the same feature range they enjoyed on X11, Windows or macOS. So, from an application author’s perspective, Wayland does break things quite significantly, because things that worked before can no longer work, and Wayland (the whole stack) does not provide any avenue to achieve the same result.
Wayland does “break” screen sharing, global hotkeys, gaming latency (via “no tearing”) etc.; however, for all of these there are solutions available that application authors can port to. And most developers will gladly do that work, especially since the newer APIs are usually a lot better and more robust. But if you give application authors no path forward except “use Xwayland and be on emulation as a second-class citizen forever”, it just results in very frustrated application developers.
For some application developers, switching to a Wayland compositor is like buying a canvas from the Linux shop that forces your brush to only draw triangles. But maybe for your avant-garde art, you need to draw a circle. You can approximate one with triangles, but it will never be as good as the artwork of your friends who got their canvases from the Windows or macOS art supply shop and have more freedom to create their art.
Triangles are proven to be the best shape! If you are drawing circles you are creating bad art!
Wayland, via its protocol limitations, forces a certain way to build application UX – often for the better, but also sometimes to the detriment of users and applications. The protocols are often fairly opinionated, a result of the lessons learned from X11. In any case though, it is the odd one out – Windows and macOS do not pose the same limitations (for better or worse!), and the effort to port to Wayland is orders of magnitude bigger, or sometimes, as in the case of the multiwindow UI paradigm, impossible to achieve with the same level of polish. Desktop environments of course have a design philosophy that they want to push, and want applications to integrate as much as possible (same as macOS and Windows!). However, there are many applications out there, and pushing a design via protocol limitations will likely just result in fewer apps.
The porting dilemma
I spent probably way too much time looking into how to get applications cross-platform and running on Linux, often talking to vendors (FLOSS and proprietary) as well. Wayland limitations aren’t the biggest issue by far, but they do start to come up now, especially in the scientific space with Ubuntu having switched to Wayland by default. For application authors there is often no way to address these issues. Many scientists do not even understand why their Python script that creates some GUIs suddenly behaves weirdly because Qt is now using the Wayland backend on Ubuntu instead of X11. They do not know the difference and also do not want to deal with these details – even though they may be programmers as well, the real goal is not to fiddle with the display server, but to get to a scientific result somehow.
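The stopgap I usually end up suggesting to colleagues is pinning Qt’s backend explicitly via an environment variable (the script name here is made up):

$ QT_QPA_PLATFORM=xcb python3 acquisition_gui.py      # force X11/XWayland behavior
$ QT_QPA_PLATFORM=wayland python3 acquisition_gui.py  # explicitly test the Wayland backend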
Another issue is portability layers like Wine which need to run Windows applications as-is on Wayland. Apparently Wine’s Wayland driver has some heuristics to make window positioning work (and I am amazed by the work done on this!), but that can only go so far.
A way out?
So, how would we actually solve this? Fundamentally, this excessively long blog post boils down to just one essential question:
Do we want to force applications to submit to a UX paradigm unconditionally, potentially losing out on application ports or keeping apps on X11 eternally, or do we want to throw them some rope to get as many applications as possible ported over to Wayland, even though we might sacrifice some protocol purity?
I think we really have to answer that to make the discussions on wayland-protocols a lot less grueling. This question can be answered at the wayland-protocols level, but even more so it must be answered by the individual desktops and compositors.
If the answer for your environment turns out to be “Yes, we want the Wayland protocol to be more opinionated and will not make any compromises for application portability”, then your desktop/compositor should just immediately NACK protocols that add something like this and you simply shouldn’t engage in the discussion, as you reject the very premise of the new protocol: That it has any merit to exist and is needed in the first place. In this case contributors to Wayland and application authors also know where you stand, and a lot of debate is skipped. Of course, if application authors want to support your environment, you are basically asking them now to rewrite their UI, which they may or may not do. But at least they know what to expect and how to target your environment.
If the answer turns out to be “We do want some portability”, the next question obviously becomes where the line should be drawn and which changes are acceptable and which aren’t. We can’t blindly copy all X11 behavior; some porting work to Wayland is simply inevitable. Some written rules for that might be nice, but probably more importantly, if you agree fundamentally that there is an issue to be fixed, please engage in the discussions for the respective MRs! We for sure do not want to repeat X11’s mistakes, and I am certain that we can implement protocols which provide the required functionality as a nice compromise: allowing applications a path forward into the Wayland future, while also being as good as possible and improving upon X11. For example, the toplevel-icon proposal is already a lot better than anything X11 ever had. Relaxing the ACK requirements for the ext namespace is also a good proposed administrative change, as it would allow some compositors to add features they want to support to the shared repository more easily, while not mandating them for others. In my opinion, it would allow for a lot less friction between the two different ideas of how Wayland protocol development should work. Some compositors could move forward and support more protocol extensions, while more restrictive compositors could support fewer. Applications can detect supported protocols at launch and change their behavior accordingly (ideally even abstracted by toolkits).
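As a concrete sketch of that detection step: the globals a compositor advertises can be listed with wayland-info from wayland-utils, and toolkits effectively do this query at startup (the grep pattern is just an example):

$ wayland-info | grep -E 'xdg_wm_base|ext_'  # which shell and extension globals are present?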
You may now say that a lot of apps are ported, so surely this issue cannot be that bad. And yes, what Wayland provides today may be enough for 80-90% of all apps. But what I hope the detour into the research lab has done is convince you that this smaller percentage of apps matters. A lot. And that it may be worthwhile to support them.
To end on a positive note: When it came to porting concrete apps over to Wayland, the only real showstoppers so far5 were the missing window-positioning and window-position-restore features. I encountered them when porting my own software, and I got the issue as feedback from colleagues and fellow engineers. In second place was UI testing and automation support, the window-icon issue was mentioned twice, but being a cosmetic issue it likely simply hurts people less and they can ignore it easier.
What this means is that the majority of apps are already fine, and many others are very, very close! A Wayland future for everyone is within our grasp! 
I will also bring my two protocol MRs to their conclusion for sure, because as application developers we need clarity on what the platform (either all desktops or even just a few) supports and will or will not support in future. And the only way to get something good done is by contribution and friendly discussion.
Footnotes
- Apologies for the clickbait-y title – it comes with the subject.
- When I talk about “Wayland” I mean the combined set of display server protocols and accepted protocol extensions, unless otherwise clarified.
- I would have picked a picture from our lab, but that would have needed permission first.
- Qt has awesome “platform issues” pages, like for macOS and Linux/X11, which help with porting efforts, but Qt doesn’t even list Linux/Wayland as a supported platform. There is some information though, like window geometry peculiarities, which isn’t particularly helpful when porting (but still essential to know).
- Besides issues with Nvidia hardware – CUDA for simulations and machine learning is pretty much everywhere, so Nvidia cards are common, which still causes trouble on Wayland. It is improving though.