Jiri Eischmann

@jeischma

Linux Desktop Migration Tool 1.5

After almost a year I made another release of the Linux Desktop Migration Tool. In this release I focused on the network settings migration, specifically NetworkManager because it’s what virtually all desktop distributions use.

The result isn’t a lot of added code, but it certainly took some time to experiment with how NetworkManager behaves. It doesn’t officially support network settings migration, but it’s possible with small limitations. I’ve tested it with all kinds of network connections (wired, Wi-Fi, VPNs…) and it worked very well for me, but I’m pretty sure there are scenarios that may not work with the way I implemented the migration, and I’m interested in learning about them. What is currently not fully handled are connections that require a certificate: either the certificate is located in ~/.pki, and thus already handled by the migration tool, or you have to migrate it manually.
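
For readers curious what a manual migration of NetworkManager profiles can look like, here is a rough sketch of one possible approach: copying the keyfile profiles NetworkManager stores on disk and asking it to reload them. This is only an illustration, run as root and with a hypothetical mount point for the old installation; it is not necessarily how the tool itself does it.

import shutil
import subprocess
from pathlib import Path

# Hypothetical mount point of the old installation's root filesystem.
OLD_ROOT = Path("/mnt/old-root")
SRC = OLD_ROOT / "etc/NetworkManager/system-connections"
DST = Path("/etc/NetworkManager/system-connections")

# NetworkManager stores connection profiles as keyfiles; newer versions use the
# .nmconnection extension, older profiles may have no extension at all.
for profile in SRC.iterdir():
    if not profile.is_file():
        continue
    target = DST / profile.name
    shutil.copy2(profile, target)
    # NetworkManager ignores keyfiles that anyone other than root can read.
    target.chmod(0o600)

# Ask NetworkManager to re-read its connection files without a restart.
subprocess.run(["nmcli", "connection", "reload"], check=True)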

The Linux Desktop Migration Tool now covers everything I originally planned to cover, and the number of choices has grown quite a lot. So I’ll focus on dialogs and UX in general instead of adding new features. I’ll also look at optimizations: for example, migrating files using rsync takes a lot of time if you have a lot of small files in your home directory. It can certainly be sped up.

Hans de Goede

@hansdg

Is Copilot useful for kernel patch review?

Patch review is an important and useful part of the kernel development process, but it is also a time-consuming part. To see if I could save some human reviewer time I've been pushing kernel patch-series to a branch on github, creating a pull-request for the branch and then assigning it to Copilot for review. The idea being that I would fix any issues Copilot catches before posting the series upstream, saving a human reviewer from having to catch the issues.

I've done this for 5 patch-series: one, two, three, four, five, totalling 53 patches. Click the numbers to see the pull-requests and Copilot's reviews.

Unfortunately the results are not great. On 53 patches Copilot had 4 low-confidence comments, which were not useful, and 3 normal comments. Two of the normal comments were on the power-supply fwnode series: one was about spelling degrees Celcius as degrees Celsius instead, which is the single valid remark. The other remark was about re-assigning a variable without freeing it first, but Copilot missed that the re-assignment was to another variable, since this happened in a different scope. The third normal comment (here) was about as useless as they can come.

To be fair, these were all patch-series written by me and already self-reviewed and deemed ready for upstream posting before I asked Copilot to review them.

As another experiment I did one final pull-request with a couple of WIP patches to add USBIO support from Intel. Copilot generated 3 normal comments here, all 3 of which are valid, and one of them catches a real bug. Still, given the WIP state of this case and the fact that my own review has found a whole lot more than just this, including the need for a bunch of refactoring, the results of this Copilot review are also disappointing IMHO.

Copilot also automatically generates summaries of the changes in the pull-requests. At a first look these look useful for e.g. a cover-letter for a patch-set, but they are often full of half-truths, so at a minimum they need some very careful editing / correcting before they can be used.

My personal conclusion is that running patch-sets through Copilot before posting them on the list is not worth the effort.


Jordan Petridis

@alatiera

On X11 and the Fascist Maggots

Damn Jordan, what a sad title. I know dear reader, I know…

2 weeks ago I published a blogpost about the upcoming plans of GNOME 49 and the eventual removal of the X11 session. Since then, instead of looking at feedback, bugs and issues related to the topic, we all collectively had to deal with the following, and I am not exaggerating one bit:

  • Fascists and Nazis
  • Wild Conspiracy Theories that make Qanon jealous
  • “Concerned” Trolling about the Accessibility of the Wayland session
  • A culture war where Wayland is Gay, and X11 is the glorious past they stole from you

In my wildest dreams I could have never made this shit up. You all need mandatory supervised access to the Internet from now on.

What happened:

Against the backdrop of an ongoing and live-streamed genocide in Palestine (and apparently WW3 now as well), an apartheid, ethnosupremacist, babykiller-apologist clown with a youtube channel (I know that does not narrow it down a lot) decided to take some time off and go back to “linux” reporting. There are plenty more charismatic fascists on youtube that talk about politics in general, so I assume it’s hard to make a living just from that, and the talentless hack needed to pivot back to our corner to keep paying rent.

This is nothing new, it has been going on for years and nobody pays any attention to the shunned outcast. These kinds of people are not even worth our spit. However, people that should know better were very happy to indulge and uncritically “report” on the latest made-up story that lionizes an actual Nazi and WW2 revisionist, transphobe, “911 truther” nut job, with main character syndrome, whose main claim to fame is being yelled at by Linus for spouting anti-vaxxer rhetoric on the Linux Kernel Mailing List. They were eager to legitimize conspiracy theories, spread further misinformation and amplify harassment campaigns spearheaded by fascist idiots.

Interlude

Initially I was going to talk about the absurdity of this situation and the pathetic existence of people that make their entire personality about the Init System and Display Server their computer is using. But it’s kinda funny, and better for everyone, that they ended up in their own little isolated sandbox playing with their new toys. I am sorry life doesn’t have any more meaning for them than this; even they deserve better.

Thus I will leave you only with the original conclusion of the post instead. About all the Fascist Slimes that take advantage and radicalize people solely for clicks.

Conclusion

To all you wannabe Influencers, Content-Creators, Youtubers, Bloggers and Freeloading Leeches: if you are harboring fascist maggots for clicks, views and your personal gain, you are worse than them in my eyes. You have no excuse, there is no bothsidesing fascism, you can’t say you didn’t know, we’ve all been screaming about it. You deserve each other and to spend your life surrounded by the filth you promote and legitimize. I hope you all will take a look in the mirror one day and try to atone for your sins and the damage you cause to society.

If nothing else, do this for selfish reasons, as we’d all rather keep writing software that you can “talk” about, rather than having to deal with the fascist trash you cultivate and send our way.

On behalf of all the desktop developers I have to state the following:

There is no place for Fascists within the Open Source and Free Software communities or society at large. You will never fester your poisonous roots here. Go back to the cave you crawled out from, where no sunlight can reach.

Crush Fascism, Free Palestine ✊

Jordan Petridis

@alatiera

X11 Session Removal FAQ

Here is a quick series of frequently asked questions about the X11 session kissing us goodbye. Shoutout to Nate, from whom I copied the format of this post.

Is Xorg unmaintained and abandoned?

No, the Xorg Server is still very much maintained; however, its development is halted. It still receives occasional bugfixes, and there are timely security releases when needed.

The common sentiment, shared among Xorg, Graphics, Kernel, Platform and Application developers is that any future development is a dead-end and shortcomings can’t be addressed without breaking X11. That’s why the majority of Xorg developers moved on to make a new, separate, thing: Wayland.

In doing so, Xorg’s main focus became being as reliable as possible and fixing security issues as they come up.

It’s the same people that still maintain Xorg. Thanklessly.

If you are interested in Xorg’s history I can’t recommend enough this timeless talk by Daniel.

What will happen to Xorg?

The Xorg server is still there and will continue to be maintained. Of course, with GNOME and KDE transitioning away from it, it will be receiving even less attention, but none of this has a direct impact on your other favorite X11-only desktops or means they will magically stop working overnight.

Your favorite distribution most likely will still keep shipping Xorg packages for a couple more years, if not decades. What’s going away is the GNOME on Xorg session, not the Xorg server itself.

Why did GNOME move to Wayland now?

Early during the GNOME 46 development cycle I created the gnome-session Merge Requests in an attempt to gather feedback from people and identify leftover issues.

48.rc addressed the last big remaining a11y issues, and Orca 48 is so much nicer, in large part thanks to funding from the STF and Igalia donating a bunch of work on top of that to get things over the line. With the functionality of the Wayland session now on par with (if not straight up better than) Xorg, we all collectively decided that it was time to move on with the removal of the Xorg session.

However, 48.rc was also too late to plan and proceed with the removal of the session. In hindsight this was a good thing, because we found a couple of very obscure bugs last month and we’d have had to rush and crunch to fix them otherwise.

On May 6th, we held a meeting among the GNOME Release Team, where we discussed the X11 session, among other things. There was one known issue with color calibration, but a fix was planned. We discussed timelines and possible scenarios for the removal, and pointed out that it would be a great opportunity to go ahead with it for 49, which aligns with the 25.10 release, rather than postponing to GNOME 50 and the upcoming 26.04 LTS. We set the topic aside afterwards, as we were waiting for feedback from the Ubuntu team, which had a planning meeting scheduled a week or so later.

On May 19th we (the Release Team) held another meeting, picking up the X11 topic again. While we didn’t have a concrete decision from the Ubuntu side on what they planned to do, there also weren’t any new/unexpected issues or use cases from their side, so overall good news. Thus Adrian and I continued with the preparations for disabling the X11 sessions for 49.

On May 20th FESCO approved the proposal to remove the GNOME on Xorg session for Fedora 43.

On June 1st I started working on an earlier-than-usual 49.alpha release, and 3 days later I got private confirmation that Ubuntu would indeed follow along with completely disabling the Xorg session for 49, matching the upstream defaults.

Late night June 7th, more like the morning of the 8th, and after dealing with a couple of infrastructure issues, I finished all the preparations, tagged 49.alpha.0 for GDM, gnome-shell, mutter and gnome-session, and published the announcement blogpost. Two days later Ubuntu followed suit with the public announcement from their side.

Will my applications stop working?

Most application toolkits have Wayland backends these days; for those that do not, we have XWayland. This lets X11-native applications keep running on Wayland as if they were using an X11 session. It happens transparently, and XWayland will be around with us for decades. You don’t have to worry about losing your applications.

Is everything working for real?

GNOME on Wayland is as functional as the Xorg session, and in plenty of cases a lot more capable and efficient. There are some niche workflows that are only possible on X11, but there isn’t any functionality regression.

What’s the state of accessibility?

There has sadly been a lot of concern trolling and misinformation specifically around this topic, from people that don’t care about it and have been abusing the discourse as a straw man argument, drowning out all the people that rely on it and need to be heard. Thankfully Aaron of fireborn fame recently wrote a blogpost talking about all this in detail and clearing up misconceptions.

GNOME itself is already there when it comes to accessibility, but the next task will be rebuilding the third-party tooling (or integrating it directly when possible). We now have a foundation that allows us to provide better accessibility support and options to people, with designed solutions rather than piles of hacks held together by duct tape on top of a protocol from the 80s.

Is Wayland Gay?

Yes and Xorg is Trans.

Happy Pride month and Free Palestine ✊

Casilda 0.9.0 Development Release!

Native rendering Release!

I am pleased to announce a new development release of Casilda, a simple Wayland compositor widget for Gtk 4 which can be used to embed other processes windows in your Gtk 4 application.

The main feature of this release is dmabuf support, which allows clients to use hardware-accelerated libraries for their rendering, brought to you by Val Packett!

You can see all her cool work here.

This allowed me to stop relying on the wlroots scene compositor and render client windows directly in the widget snapshot method, which is not only faster but also integrates better with Gtk, since the background is no longer handled by wlroots and can be set with CSS like for any other widget. This is why I decided to deprecate the bg-color property.

Other improvements include transient window support and better initial window placement.

Release Notes

    • Fix rendering glitch on resize
    • Do not use wlr scene layout
    • Render windows and popups directly in snapshot()
    • Position windows on center of widget
    • Position transient windows on center of parent
    • Fix unmaximize
    • Add dmabuf support (Val Packett)
    • Added vapi generation (PaladinDev)
    • Add library soname (Benson Muite)

Fixed Issues

    • “Resource leak causing crash with dmabuf”
    • “Unmaximize not working properly”
    • “Add dmabuff support” (Val Packett)
    • “Bad performance”
    • “Add a soname to shared library” (Benson Muite)

Where to get it?

Source code lives on GNOME gitlab here

git clone https://gitlab.gnome.org/jpu/casilda.git

Matrix channel

Have any questions? Come chat with us at #cambalache:gnome.org

Mastodon

Follow me on Mastodon @xjuan to get news related to Casilda and Cambalache development.

Happy coding!

Steven Deobald

@steven

2025-06-20 Foundation Report

Welcome to the mid-June Foundation Report! I’m in an airport! My back hurts! This one might be short! haha

 

## AWS OSS

Before the UN Open Source Week, Andrea Veri and I had a chance to meet Mila Zhou, Tom (Spot) Callaway, and Hannah Aubry from AWS OSS. We thanked them for their huge contribution to GNOME’s infrastructure but, more importantly, discussed other ways we can partner with them to make GNOME more sustainable and secure.

I’ll be perfectly honest: I didn’t know what to expect from a meeting with AWS. And, as it turns out, it was such a lovely conversation that we chatted nonstop for nearly 5 hours and then continued the conversation over supper. At a… vegan Chinese food place, of all things? (Very considerate of them to find some vegetarian food for me!) Lovely folks and I can’t wait for our next conversation.

 

## United Nations Open Source Week

The big news for me this week is that I attended the United Nations Open Source Week in Manhattan. The Foundation isn’t in a great financial position, so I crashed with friends-of-friends (now also friends!) on an air mattress in Queens. Free (as in ginger beer) is a very reasonable price but my spine will also appreciate sleeping in my own bed tonight. 😉

I met too many people to mention, but I was pleasantly surprised by the variety of organizations and different folks in attendance. Indie hackers, humanitarian workers, education specialists, Digital Public Infrastructure Aficionados, policy wonks, OSPO leaders, and a bit of Big Tech. I came to New York to beg for money (and I did do a bit of that) but it was the conversations about the f/oss community that I really enjoyed.

We did do a “five Executive Directors” photo, because 4 previous GNOME Foundation EDs happened to be there. One of them was Richard! I got to hang out with him in person and he gave me a hug. So did Karen. It was nice. The history matters (recent history and ancient history) … and GNOME has a lot of history.

Special shout-out to Sumana Harihareswara (it’s hard for me to spell that without an “sh”) who organized an extremely cool, low-key gathering in an outdoor public space near the UN. She couldn’t make the conf herself but she managed to create the best hallway track I attended. (By dragging a very heavy bag of snacks and drinks all the way from Queens.) More of that, please. The unconf part, not the dragging snacks across the city part.

All in all, a really exciting and exhausting week.

 

## Donation Page

As I mentioned above, the GNOME Foundation’s financial situation could use help. We’ll be starting a donation drive soon to encourage GNOME users to donate, using the new donation page:

https://donate.gnome.org

This blog post is as good a time as any to say this isn’t just a cash grab. The flip side of finding money for the Foundation is finding ways to grow the project with it. I’m of the opinion that this needs to include more than running infrastructure and conferences. Those things are extremely important — nothing in recent memory has reminded me of the value of in-person interactions like meeting a bunch of new friends here in New York — but the real key to the GNOME project is the project itself. And the core of the project is development.

As usual: No Promises. But if you want to hear a version of what I was saying all week, you can bug Adrian Vovk for his opinion about my opinions. 😉

The donation page would not have been possible without the help of Bart Piotrowski, Sam Hewitt, Jakub Steiner, Shivam Singhal, and Yogiraj Hendre. Thanks everyone for putting in the hard work to get this over the line, to test it with your own credit cards, and to fix bugs as they cropped up.

We will keep iterating on this as we learn more about what corporate sponsors want in exchange for their sponsorship and as we figure out how best to support Causes (campaigns), such as development.

 

## Elections

Voting has closed! Thank you to all the candidates who ran this year. I know that running for election on the Board is intimidating but I’m glad folks overcame that fear and made the effort to run campaigns. It was very important to have you all in the race and I look forward to working with my new bosses once they take their seats. That’s when you get to learn about governance and demonstrate that you’re willing to put in the work. You might be my bosses… but I’m going to push you. 😉

Until next week!

Michael Meeks

@michael

2025-06-20 Friday

  • Up early; sync with Dave & Asja. Admin, caught part of an interesting WASM Tea Time Training.
  • Sync with Thorsten, merger status round.
  • Late lunch; customer/partner call, code-read an issue with Pranam: finally some code!
  • Out for a walk with J. in the evening, relaxed on the heath and watched the sky together, pub in Moulton, home, bed.

Jussi Pakkanen

@jpakkane

Book creation using almost entirely open source tools

Some years ago I wrote a book. Now I have written a second one, but because no publisher wanted to publish it, I chose to self-publish a small print run and hand it out to friends (and whoever actually wants one) as a very-much-not-for-profit art project.

This meant that I had to create every PDF used in the printing myself. I received the shipment from the printing house yesterday. The end result turned out very nice indeed.

The red parts in this dust jacket are not ink but instead are done with foil stamping (it has a metallic reflective surface). This was done with Scribus. The cover image was painted by hand using watercolor paints. The illustrator used a proprietary image processing app to create the final TIFF version used here. She has since told me that she wants to eventually shift to an open source app due to ongoing actions of the company making the proprietary app.

The cover itself is cloth with a debossed emblem. The figure was drawn in Inkscape and then copypasted to Scribus.

Every fantasy book needs to have a map. This one has two, and they are printed in the end papers. The original picture was drawn with a nib pen and black ink and processed with Gimp. The printed version is brownish to give it that "old timey" look. Despite its apparent simplicity, this PDF was the most problematic. The image itself is monochrome and printed with a Pantone spot ink. Trying to print this with CMYK inks would just not have worked. Because the PDF drawing model for spot inks in images behaves, let's say, in an unexpected way, I had to write a custom script to create the PDF with CapyPDF. As far as I know no other open source tool can do this correctly, not even Scribus. The relevant bug can be found here. It was somewhat nerve-wracking to send this out to the print shop with zero practical experience and a theoretical basis of "according to my interpretation of the PDF spec, this should be correct". As this is the first ever commercial print job using CapyPDF, it's quite fortunate that it succeeded pretty much perfectly.

The inner pages were created with the same Chapterizer tool as the previous book. It uses Pango and Cairo to generate PDFs. Illustrations in the text were drawn with Krita. As Cairo only produces RGB PDFs, as the last step it had to be converted to grayscale using Ghostscript.

My a11y journey

23 years ago I was in a bad place. I'd quit my first attempt at a PhD for various reasons that were, with hindsight, bad, and I was suddenly entirely aimless. I lucked into picking up a sysadmin role back at TCM where I'd spent a summer a year before, but that's not really what I wanted in my life. And then Hanna mentioned that her PhD supervisor was looking for someone familiar with Linux to work on making Dasher, one of the group's research projects, more usable on Linux. I jumped.

The timing was fortuitous. Sun were pumping money and developer effort into accessibility support, and the Inference Group had just received a grant from the Gatsby Foundation that involved working with the ACE Centre to provide additional accessibility support. And I was suddenly hacking on code that was largely ignored by most developers, supporting use cases that were irrelevant to most developers. Being in a relatively green field space sounds refreshing, until you realise that you're catering to actual humans who are potentially going to rely on your software to be able to communicate. That's somewhat focusing.

This was, uh, something of an on-the-job learning experience. I had to catch up with a lot of new technologies very quickly, but that wasn't the hard bit - what was difficult was realising I had to cater to people who were dealing with use cases that I had no experience of whatsoever. Dasher was extended to allow text entry into applications without needing to cut and paste. We added support for introspection of the current application's UI so menus could be exposed via the Dasher interface, allowing people to fly through menu hierarchies and pop open file dialogs. Text-to-speech was incorporated so people could rapidly enter sentences and have them spoken out loud.

But what sticks with me isn't the tech, or even the opportunities it gave me to meet other people working on the Linux desktop and forge friendships that still exist. It was the cases where I had the opportunity to work with people who could use Dasher as a tool to increase their ability to communicate with the outside world, whose lives were transformed for the better because of what we'd produced. Watching someone use your code and realising that you could write a three line patch that had a significant impact on the speed they could talk to other people is an incomparable experience. It's been decades and in many ways that was the most impact I've ever had as a developer.

I left after a year to work on fruitflies and get my PhD, and my career since then hasn't involved a lot of accessibility work. But it's stuck with me - every improvement in that space is something that has a direct impact on the quality of life of more people than you expect, but is also something that goes almost unrecognised. The people working on accessibility are heroes. They're making all the technology everyone else produces available to people who would otherwise be blocked from it. They deserve recognition, and they deserve a lot more support than they have.

But when we deal with technology, we deal with transitions. A lot of the Linux accessibility support depended on X11 behaviour that is now widely regarded as a set of misfeatures. It's not actually good to be able to inject arbitrary input into an arbitrary window, and it's not good to be able to arbitrarily scrape out its contents. X11 never had a model to permit this for accessibility tooling while blocking it for other code. Wayland does, but suffers from the surrounding infrastructure not being well developed yet. We're seeing that happen now, though - Gnome has been performing a great deal of work in this respect, and KDE is picking that up as well. There isn't a full correspondence between X11-based Linux accessibility support and Wayland, but for many users the Wayland accessibility infrastructure is already better than with X11.

That's going to continue improving, and it'll improve faster with broader support. We've somehow ended up with the bizarre politicisation of Wayland as being some sort of woke thing while X11 represents the Roman Empire or some such bullshit, but the reality is that there is no story for improving accessibility support under X11 and sticking to X11 is going to end up reducing the accessibility of a platform.

When you read anything about Linux accessibility, ask yourself whether you're reading something written by either a user of the accessibility features, or a developer of them. If they're neither, ask yourself why they actually care and what they're doing to make the future better.


This Week in GNOME

@thisweek

#205 Loading Films

Update on what happened across the GNOME project in the week from June 13 to June 20.

GNOME Core Apps and Libraries

Maps

Maps gives you quick access to maps all across the world.

mlundblad announces

Maps now shows localized metro/railway station icons in some locations

Settings

Configure various aspects of your GNOME desktop.

Matthijs Velsink announces

We ported the GNOME Settings app to Blueprint! UI definition files are much easier to read and write in Blueprint compared to the standard XML syntax that GTK uses. Hopefully this makes UI contributions more approachable to newcomers. In any case, reviewing UI changes has gotten quite enjoyable already! Settings is one of the first large core apps to make the switch (together with Calendar), and Blueprint is still considered experimental, but the experience has been great so far. Small missing features in Blueprint have not been dealbreakers.

Many thanks to Jamie Gravendeel who did most of the work and together with Hari Rana motivated us to consider the port in the first place! We’d like to thank James Westman as well for creating Blueprint and making the whole porting process so straightforward.

Calendar

A simple calendar application.

Hari Rana | TheEvilSkeleton (any/all) 🇮🇳 🏳️‍⚧️ announces

GNOME Calendar received a nice visual overhaul, thanks to the code contributed by Markus Göllnitz, with the design led by Philipp Sauberz and Jeff Fortin. You can find the really long discussion on GitLab. This should hopefully make Calendar work better on smaller monitors, thanks to the collapsible sidebar.

Afterwards, Jamie Gravendeel ported the entirety of GNOME Calendar to Blueprint. This should hopefully make it easier for everyone to contribute to Calendar’s UI.

GLib

The low-level core library that forms the basis for projects such as GTK and GNOME.

Ignacy Kuchciński (ignapk) says

There was recently an interesting improvement in GLib, that makes sure your Trash is really empty, by fixing a bug resulting in leftover files in ~/.local/share/Trash/expunged/. For more information, check out https://ignapk.blogspot.com/2025/06/taking-out-trash-or-just-sweeping-it.html

Third Party Projects

bjawebos reports

In my spare time I like to take photographs. I use different cameras with different characteristics and therefore different purposes. Most of these cameras use film as the image carrier medium. It happened a few weeks ago that I wanted to use a camera and wondered whether it had film in it or not. I was of the opinion that there was no film inserted and I opened the back of the camera. What can I say, of course there was film inside. It wasn’t much damage, I lost about 3-5 pictures. Nevertheless, I had to find a solution and since I wanted to learn more about GTK4/libadwaita and Rust anyway, I combined these two topics.

So here is the application for photographers who no longer know whether a film is inserted. The application is called Filmbook and is divided into 4 sections. The first tab “Current” shows a list of cameras with inserted films. The “History” tab shows which cameras were loaded with which films. In addition, the camera-film pairs can be marked as developed. The third and fourth tabs show the cameras and films.

The application is currently in a sufficiently stable state and I would like to test it extensively on my Pinephone Pro under Phosh to explore the weaknesses of the current design. In addition, my goal is to get in touch with other photographers to gather their ideas and needs.

So, if you feel addressed, get in touch with me. Here are a few important links:

  • Flathub: https://flathub.org/apps/page.codeberg.bjawebos.Filmbook
  • Issues: https://codeberg.org/bjawebos/filmbook/issues
  • Fediverse:

johannes_b reports

This week I released a new version of BMI Calculator. It now includes German, Italian and Dutch translations. The app remembers the last entries and you can choose the color scheme. You can install the app from Flathub: https://flathub.org/apps/io.github.johannesboehler2.BmiCalculator

Pipeline

Follow your favorite video creators.

schmiddi says

Version 2.5.0 of Pipeline has now been released. Pipeline now displays a random splash text when reloading the feed. These tell users random facts about Pipeline, showcase some features, and also advertise some other great alternative YouTube clients. Examples include:

  • Did you know? The first commit of Pipeline was 1566 days ago.
  • Feature Spotlight: Seeing something you don’t like? You can hide videos from your feed based on the title and uploader of the video.
  • Also try: NewPipe.

A useless feature? Pretty much. But I enjoyed coding it and maybe some people will enjoy reading the splash texts I came up with.

This release also adds debug information to the About window, which will possibly help me debug issues by knowing your versions of dependencies and the most important settings. It also fixes minor bugs, like some buttons being hidden in a narrow layout on the video page, the description of YouTube videos containing escaped characters, and a video not being added to the watched list if Pipeline is closed while it is still displayed.

Fractal

Matrix messaging app for GNOME written in Rust.

Kévin Commaille reports

We released Fractal 11.2 which updates the matrix-sdk-crypto dependency to include a fix for a high severity security issue. It is available right now on Flathub.

GNOME Foundation

steven says

A week late to TWIG, but almost on time for the blog, it’s this week’s Foundation Report: Elections, GUADEC, ops, infra, fundraising, some fun meetings, and the ED gets another feedback session.

https://blogs.gnome.org/steven/2025/06/14/2025-06-14-foundation-report/

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

Michael Meeks

@michael

2025-06-19 Thursday

  • Up early, tech planning call in the morning, mail catch-up, admin and TORF pieces.
  • Really excited to see the team get the first COOL 25.04 release shipped, coming to a browser near you:
    COOL 25.04 released!

    Seems our videos are getting more polished over time too which is good.
  • Mail, admin, compiled some code too; bit of patch review here & there.

libinput and tablet tool eraser buttons

This is, to some degree, a followup to this 2014 post. The TLDR of that is that, many a moon ago, the corporate overlords at Microsoft that decide all PC hardware behaviour decreed that the best way to handle an eraser emulation on a stylus is by having a button that is hardcoded in the firmware to, upon press, send a proximity out event for the pen followed by a proximity in event for the eraser tool. Upon release, they dogma'd, said eraser button shall virtually move the eraser out of proximity followed by the pen coming back into proximity. Or, in other words, the pen simulates being inverted to use the eraser, at the push of a button. Truly the future, back in the happy times of the mid 20-teens.

In a world where you don't want to update your software for a new hardware feature, this of course makes perfect sense. In a world where you write software to handle such hardware features, significantly less so.

Anyway, it is now 11 years later, the happy 2010s are over, and Benjamin and I have fixed this very issue in a few udev-hid-bpf programs, but I wanted something that's a) more generic and b) configurable by the user[1]. Somehow I am still convinced that disabling the eraser button at the udev-hid-bpf level will make users that use said button angry and, dear $deity, we can't have angry users, can we? So many angry people out there anyway, let's not add to that.

To get there, libinput's guts had to be changed. Previously libinput would read the kernel events, update the tablet state struct and then generate events based on various state changes. This of course works great when you e.g. get a button toggle; it doesn't work quite as great when your state change was one or two event frames ago (because the prox-out of one tool and the prox-in of another tool are at least 2 events apart). Extracting that older state change was like swapping the type of meatballs in an ikea meal after it's been served - doable in theory, but very messy.

Long story short, libinput now has an internal plugin system that can modify the evdev event stream as it comes in. It works like a pipeline: the events are passed from the kernel to the first plugin, modified, passed to the next plugin, etc. Eventually the last plugin is our actual tablet backend, which will update tablet state, generate libinput events, and generally be grateful about having fewer quirks to worry about. With this architecture we can hold back the proximity events and filter them (if the eraser comes into proximity) or replay them (if the eraser does not come into proximity). The tablet backend is none the wiser; it either sees proximity events when those are valid, or it sees a button event (depending on configuration).
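
To illustrate the idea, here is a conceptual sketch in Python (libinput itself is C, and none of these names are real libinput types): each plugin sees the incoming event stream, may swallow events, hold them back, or inject new ones, and whatever comes out of the last plugin is what the tablet backend sees.

from dataclasses import dataclass

@dataclass
class Event:
    type: str        # "proximity-in", "proximity-out", "button", ...
    tool: str = ""   # "pen" or "eraser"
    code: str = ""
    state: str = ""

class EraserButtonPlugin:
    """Turn the firmware's fake tool swap (pen prox-out -> eraser prox-in)
    into a plain button event, and hide the reverse swap on release."""

    def __init__(self):
        self.held = []        # pen prox-out we are sitting on
        self.pressed = False  # are we currently faking a button hold?

    def process(self, event, emit):
        if event.type == "proximity-out" and event.tool == "pen" and not self.pressed:
            self.held = [event]   # hold back, wait and see what follows
            return
        if self.held and event.type == "proximity-in" and event.tool == "eraser":
            self.held = []        # swallow both prox events...
            self.pressed = True
            emit(Event("button", code="eraser-button", state="pressed"))  # ...emit a button instead
            return
        if self.pressed and event.type == "proximity-out" and event.tool == "eraser":
            return                # hide the eraser "leaving" again
        if self.pressed and event.type == "proximity-in" and event.tool == "pen":
            self.pressed = False  # pen "returns": that's the button release
            emit(Event("button", code="eraser-button", state="released"))
            return
        for held in self.held:    # not the eraser pattern after all:
            emit(held)            # replay whatever we held back
        self.held = []
        emit(event)

def run_pipeline(plugins, backend, kernel_events):
    """Events flow kernel -> plugin 1 -> ... -> plugin N -> tablet backend
    (backend is any callable that consumes the final event stream)."""
    def emitter(rest):
        if not rest:
            return backend
        return lambda ev: rest[0].process(ev, emitter(rest[1:]))
    dispatch = emitter(plugins)
    for ev in kernel_events:
        dispatch(ev)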

This architecture approach has been so successful that I have now switched a bunch of other internal features over to use that internal infrastructure (proximity timers, button debouncing, etc.). And of course it laid the groundwork for the (presumably highly) anticipated Lua plugin support. Either way, happy times. For a bit. Because for those not needing the eraser feature, we've just increased your available tool button count by 100%[2] - now there's a headline for tech journalists that just blindly copy claims from blog posts.

[1] Since this is a bit wordy, the libinput API call is just libinput_tablet_tool_config_eraser_button_set_button()
[2] A very small number of styli have two buttons and an eraser button so those only get what, 50% increase? Anyway, that would make for a less clickbaity headline so let's handwave those away.

Marcus Lundblad

@mlundblad

Midsommer Maps

As tradition has it, it's about time for the (Northern Hemisphere) summer update on the happenings around Maps!

About dialog for GNOME Maps 49.alpha development

Bug Fixes

Since the GNOME 48 release in March, there have been some bug fixes, such as correctly handling daylight savings time in public transit itineraries retrieved from Transitous. Also, James Westman fixed a regression where the search result popover wasn't showing on small screen devices (phones) because of sizing issues.

 

More Clickable Stuff

More symbols can now be directly selected in the map view by clicking/tapping on their symbols, like roads and house numbers (and then, like any other POI, they can be marked as favorites).
 
Showing place information for the AVUS motorway in Berlin

And related to traffic and driving, exit numbers are now shown for highway junctions (exits) when available.

Showing information for a highway exit in a driving-on-the-right locality

Showing information for a highway exit in a driving-on-the-left locality

Note how the direction the arrow is pointing depends on the side of the road vehicle traffic drives on in the country/territory of the place…
Also, the icon for the “Directions” button now shows a mirrored “turn off left” icon for places in drives-on-the-left countries, as an additional attention to detail.
 

Furigana Names in Japanese

For some time now (around when we re-designed the place information “bubbles”) we have shown the native name of a place under the name translated into the user's locale (when they differ).
There is an established OpenStreetMap tag for phonetic names in Japanese (using Hiragana), name:ja-Hira, akin to Furigana (https://en.wikipedia.org/wiki/Furigana), used to aid with the pronunciation of place names. I had been thinking that it might be a good idea to show this, when available, as the dimmed supplemental text in cases where the displayed name and the native name are identical, e.g. when the user's locale is Japanese and they are looking at Japanese names. For other locales, in these cases, the displayed name would typically be the Romaji name, with the full Japanese (Kanji) name displayed under it as the native name.
So, I took the opportunity to discuss this with my colleague Daniel Markstedt, who speaks fluent Japanese and has lived many years in Japan. As he liked the idea, and a demo of it, I decided to go ahead with this!
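
A conceptual sketch of that name-selection logic, in Python with made-up function and field names (Maps itself is written in GJS, so this is purely illustrative):

def supplemental_name(displayed_name, native_name, hiragana_name):
    """Pick the dimmed secondary label shown under a place name."""
    if native_name and native_name != displayed_name:
        # Names differ (e.g. Romaji display name vs. Kanji native name):
        # show the native name, as Maps already does today.
        return native_name
    if hiragana_name:
        # Displayed and native names are identical (e.g. a Japanese locale
        # looking at Japanese names): fall back to the name:ja-Hira reading.
        return hiragana_name
    return None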
 
Showing a place in Japanese with supplemental Hiragana name

 

Configurable Measurement Systems

Since like the start of time, Maps has shown distances in feet and miles when using a United States locale (or more precisely, when measurements use such a locale, LC_MEASUREMENT when speaking about environment variables), and standard metric measurements for other locales.
Despite this, we have several times received bug reports about Maps not using the correct units. The issue here is that many users tend to prefer to have their computers speaking American English.
So, I finally caved in and added an option to override the system default.
 
Hamburger menu

 
Hamburger menu showing measurement unit selection

Station Symbols

One feature I had wanted to implement since we moved to vector tiles and integrated the customized highway shields from OpenStreetMap Americana is showing localized symbols for e.g. metro stations, such as the classic “roundel” symbol used in London, and the “T” in Stockholm.

After adding the network:wikidata tag to the pre-generated vector tiles, this has been possible to implement. We chose to rely on the Wikidata tag instead of the network name/abbreviations, as this is more stable and names could risk collisions with unrelated networks having the same (short) name.
 
U-Bahn station in Hamburg

Metro stations in Copenhagen

Subway stations in Boston

S-Bahn station in Berlin  

 
This requires the stations being tagged consistently to work out. I did some mass tagging of metro stations in Stockholm, Oslo, and Copenhagen. Other than that I mainly chose places where there's at least partial coverage already.

If you'd like to contribute and update a network with the network Wikidata tag, I have prepared a few quick steps for doing such an edit with the JOSM OpenStreetMap desktop editor.

Download a set of objects to update using an Overpass query; as an example, selecting the stations of the Washington DC metro:
 
[out:xml][timeout:90][bbox:{{bbox}}];
(
  nwr["network"="Washington Metro"]["railway"="station"];
);
(._;>;);
out meta;
 

JOSM Overpass download query editor  

 Select the region to download from

Select region in JOSM

 

Select to only show the datalayer (not showing the background map) to make it easier to see the raw data.

Toggle data layers in JOSM

 Select the nodes.

Show raw datapoints in JOSM

 

Edit the field in the tag edit panel to update the value for all selected objects

Showing tags for selected objects

Note that this sample assumed the relevant station nodes were already tagged with network names (the network tag). Other queries to limit the selection might be needed.

It could also be a good idea to reach out to local OSM communities before making bulk edits like this (e.g. if there is no such tagging at all in a specific region) to make sure it would be aligned with expectations and such.

It will also potentially take a while before the change gets included in our monthly vector tile update.

When this has been done, given a suitable icon is available as e.g. public domain or Creative Commons on Wikimedia Commons, it can be bundled in data/icons/stations and a definition added to the data mapping in src/mapStyle/stations.js.

 

And More…

One feature that has long been wanted is the ability to download maps for offline usage. Lately, this is precisely what James Westman has been working on.

It's still an early draft, so we'll see when it is ready, but it already looks pretty promising.

 

Showing the new Preferences option

Preference dialog with downloads

Selecting region to download

Entering a name for a downloaded region

Dialog showing downloaded areas

And that's it for now!

Alley Chaggar

@AlleyChaggar

Demystifying The Codegen Phase Part 1

Intro

I want to start off by saying I’m really glad that my last blog was helpful to many wanting to understand Vala’s compiler. I hope this blog will be just as informative and helpful. I want to talk a little about the basics of the compiler again, but this time catering to the codegen phase: the phase that I’m actually working on, but which has the least information in the Vala Docs.

In the last blog, I briefly mentioned the directories codegen and ccode being part of the codegen phase. This blog will go more into depth about them. The codegen phase takes the AST and outputs the C code tree (ccode* objects), which is then written out as C code and compiled by GCC or another C compiler you have installed. When dealing with this phase, it’s really beneficial to know and understand at least a little bit of C.

ccode Directory

  • Many of the files in the ccode directory are derived from the class CCodeNode, valaccodenode.vala.
  • The files in this directory represent C Constructs. For example, the valaccodefunction.vala file represents a C code function. Regular C functions have function names, parameters, return types, and bodies that add logic. Essentially, what this class specifically does, is provide the building blocks for building a function in C.

    //...
    writer.write_string (return_type);
    if (is_declaration) {
        writer.write_string (" ");
    } else {
        writer.write_newline ();
    }
    writer.write_string (name);
    writer.write_string (" (");
    int param_pos_begin = (is_declaration ? return_type.char_count () + 1 : 0) + name.char_count () + 2;

    bool has_args = (CCodeModifiers.PRINTF in modifiers || CCodeModifiers.SCANF in modifiers);
    //...
    

This code snippet is part of the ccodefunction file, and what it’s doing is overriding the ‘write’ function that is originally from ccodenode. It’s actually writing out the C function.

codegen Directory

  • The files in this directory are higher-level components responsible for taking the compiler’s internal representation, such as the AST, and transforming it into the C code model (ccode objects).
  • Going back to the example of the ccodefunction, codegen will take a function node from the abstract syntax tree (AST), and will create a new ccodefunction object. It then fills this object with information like the return type, function name, parameters, and body, which are all derived from the AST. Then the CCodeFunction.write() (the code above) will generate and write out the C function.

    //...
    private void add_get_property_function (Class cl) {
    		var get_prop = new CCodeFunction ("_vala_%s_get_property".printf (get_ccode_lower_case_name (cl, null)), "void");
    		get_prop.modifiers = CCodeModifiers.STATIC;
    		get_prop.add_parameter (new CCodeParameter ("object", "GObject *"));
    		get_prop.add_parameter (new CCodeParameter ("property_id", "guint"));
    		get_prop.add_parameter (new CCodeParameter ("value", "GValue *"));
    		get_prop.add_parameter (new CCodeParameter ("pspec", "GParamSpec *"));
      
    		push_function (get_prop);
    //...
    

This code snippet is from valagobjectmodule.vala. It calls CCodeFunction (again from valaccodefunction.vala) and adds the parameters, which calls valaccodeparameter.vala. What this would output is something that looks like this in C:

    void _vala_get_property (GObject *object, guint property_id, GValue *value, GParamSpec *pspec) {
       //... 
    }

Why do all this?

Now you might ask why? Why separate codegen and ccode?

  • We split things into codegen and ccode to keep the compiler organized, readable, and maintainable. It prevents us from having to constantly write C code representations from scratch all the time.
  • It also reinforces the idea of polymorphism and the ability that objects can behave differently depending on their subclass.
  • And it lets us do hidden generation by adding new helper functions, temporary variables, or inlined optimizations after the AST and before the C code output.

Jsonmodule

I’m happy to say that I am making a lot of progress with the JSON module I mentioned in the last blog. The JSON module follows other modules in the codegen very closely, specifically the gtk module and the gobject module. It will call ccode functions to make ccode objects and create helper methods so that the user doesn’t need to manually override certain JSON methods.

Jamie Gravendeel

@monster

UI-First Search With List Models

You can find the repository with the code here.

When managing large amounts of data, manual widget creation finds its limits. Not only because managing both data and UI separately is tedious, but also because performance will be a real concern.

Luckily, there’s two solutions for this in GTK:

1. Gtk.ListView using a factory: more performant since it reuses widgets when the list gets long
2. Gtk.ListBox‘s bind_model(): less performant, but can use boxed list styling
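
For comparison, here is a minimal sketch of option 2, binding the same kind of model to a Gtk.ListBox from Python; it assumes the Pet and Species classes defined later in this post are importable:

from gi.repository import Gio, Gtk

def create_pet_row(pet):
    """Build one row widget for a Pet; the list box calls this for every item."""
    return Gtk.Label(label=pet.name, halign=Gtk.Align.START)

# The same kind of data model the ListView example below uses.
store = Gio.ListStore(item_type=Pet)
store.append(Pet(name="Herman", species=Species.CAT))
store.append(Pet(name="Saartje", species=Species.DOG))

list_box = Gtk.ListBox(css_classes=["boxed-list"])  # boxed list styling (libadwaita CSS class)
list_box.bind_model(store, create_pet_row)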

This blog post provides an example of a Gtk.ListView containing my pets, which is sorted, can be searched, and is primarily made in Blueprint.

The app starts with a plain window:

from gi.repository import Adw, Gtk


@Gtk.Template.from_resource("/app/example/Pets/window.ui")
class Window(Adw.ApplicationWindow):
    """The main window."""

    __gtype_name__ = "Window"
using Gtk 4.0;
using Adw 1;

template $Window: Adw.ApplicationWindow {
  title: _("Pets");
  default-width: 450;
  default-height: 450;

  content: Adw.ToolbarView {
    [top]
    Adw.HeaderBar {}
  }
}

Data Object

The Gtk.ListView needs a data object to work with, which in this example is a pet with a name and species.

This requires a GObject.Object called Pet with those properties, and a GObject.GEnum called Species:

from gi.repository import Adw, GObject, Gtk


class Species(GObject.GEnum):
    """The species of an animal."""

    NONE = 0
    CAT = 1
    DOG = 2

[…]

class Pet(GObject.Object):
    """Data for a pet."""

    __gtype_name__ = "Pet"

    name = GObject.Property(type=str)
    species = GObject.Property(type=Species, default=Species.NONE)

List View

Now that there’s a data object to work with, the app needs a Gtk.ListView with a factory and model.

To start with, there’s a Gtk.ListView wrapped in a Gtk.ScrolledWindow to make it scrollable, using the .navigation-sidebar style class for padding:

content: Adw.ToolbarView {
  […]

  content: ScrolledWindow {
    child: ListView {
      styles [
        "navigation-sidebar",
      ]
    };
  };
};

Factory

The factory builds a Gtk.ListItem for each object in the model, and utilizes bindings to show the data in the Gtk.ListItem:

content: ListView {
  […]

  factory: BuilderListItemFactory {
    template ListItem {
      child: Label {
        halign: start;
        label: bind template.item as <$Pet>.name;
      };
    }
  };
};

Model

Models can be modified through nesting. The data itself can be in any Gio.ListModel, in this case a Gio.ListStore works well.

The Gtk.ListView expects a Gtk.SelectionModel because that’s how it manages its selection, so the Gio.ListStore is wrapped in a Gtk.NoSelection:

using Gtk 4.0;
using Adw 1;
using Gio 2.0;

[…]

content: ListView {
  […]

  model: NoSelection {
    model: Gio.ListStore {
      item-type: typeof<$Pet>;

      $Pet {
        name: "Herman";
        species: cat;
      }

      $Pet {
        name: "Saartje";
        species: dog;
      }

      $Pet {
        name: "Sofie";
        species: dog;
      }

      $Pet {
        name: "Rex";
        species: dog;
      }

      $Pet {
        name: "Lady";
        species: dog;
      }

      $Pet {
        name: "Lieke";
        species: dog;
      }

      $Pet {
        name: "Grumpy";
        species: cat;
      }
    };
  };
};

Sorting

To easily parse the list, the pets should be sorted by both name and species.

To implement this, the Gio.ListStore has to be wrapped in a Gtk.SortListModel which has a Gtk.MultiSorter with two sorters, a Gtk.NumericSorter and a Gtk.StringSorter.

Both of these need an expression: the property that needs to be compared.

The Gtk.NumericSorter expects an integer, not a Species, so the app needs a helper method to convert it:

class Window(Adw.ApplicationWindow):
    […]

    @Gtk.Template.Callback()
    def _species_to_int(self, _obj: Any, species: Species) -> int:
        return int(species)

model: NoSelection {
  model: SortListModel {
    sorter: MultiSorter {
      NumericSorter {
        expression: expr $_species_to_int(item as <$Pet>.species) as <int>;
      }

      StringSorter {
        expression: expr item as <$Pet>.name;
      }
    };

    model: Gio.ListStore { […] };
  };
};

To learn more about closures, such as the one used in the Gtk.NumericSorter, consider reading my previous blog post.

Search

To look up pets even faster, the user should be able to search for them by both their name and species.

Filtering

First, the Gtk.ListView‘s model needs the logic to filter the list by name or species.

This can be done with a Gtk.FilterListModel which has a Gtk.AnyFilter with two Gtk.StringFilters.

One of the Gtk.StringFilters expects a string, not a Species, so the app needs another helper method to convert it:

class Window(Adw.ApplicationWindow):
    […]

    @Gtk.Template.Callback()
    def _species_to_string(self, _obj: Any, species: Species) -> str:
        return species.value_nick

model: NoSelection {
  model: FilterListModel {
    filter: AnyFilter {
      StringFilter {
        expression: expr item as <$Pet>.name;
      }

      StringFilter {
        expression: expr $_species_to_string(item as <$Pet>.species) as <string>;
      }
    };

    model: SortListModel { […] };
  };
};

Entry

To actually search with the filters, the app needs a Gtk.SearchBar with a Gtk.SearchEntry.

The Gtk.SearchEntry‘s text property needs to be bound to the Gtk.StringFilters’ search properties to filter the list on demand.

To be able to start searching by typing from anywhere in the window, the Gtk.SearchEntry‘s key-capture-widget has to be set to the window, in this case the template itself:

content: Adw.ToolbarView {
  […]

  [top]
  SearchBar {
    key-capture-widget: template;

    child: SearchEntry search_entry {
      hexpand: true;
      placeholder-text: _("Search pets");
    };
  }

  content: ScrolledWindow {
    child: ListView {
      […]

      model: NoSelection {
        model: FilterListModel {
          filter: AnyFilter {
            StringFilter {
              search: bind search_entry.text;
              […]
            }

            StringFilter {
              search: bind search_entry.text;
              […]
            }
          };

          model: SortListModel { […] };
        };
      };
    };
  };
};

Toggle Button

The Gtk.SearchBar should also be toggleable with a Gtk.ToggleButton.

To do so, the Gtk.SearchEntry‘s search-mode-enabled property should be bidirectionally bound to the Gtk.ToggleButton‘s active property:

content: Adw.ToolbarView {
  [top]
  Adw.HeaderBar {
    [start]
    ToggleButton search_button {
      icon-name: "edit-find-symbolic";
      tooltip-text: _("Search");
    }
  }

  [top]
  SearchBar {
    search-mode-enabled: bind search_button.active bidirectional;
    […]
  }

  […]
};

The search_button should also be toggleable with a shortcut, which can be added with a Gtk.ShortcutController:

[start]
ToggleButton search_button {
  […]

  ShortcutController {
    scope: managed;

    Shortcut {
      trigger: "<Control>f";
      action: "activate";
    }
  }
}

Empty State

Last but not least, the view should fall back to an Adw.StatusPage if there are no search results.

This can be done with a closure for the visible-child-name property in an Adw.ViewStack or Gtk.Stack. I generally prefer an Adw.ViewStack due to its animation curve.

The closure takes the amount of items in the Gtk.NoSelection as input, and returns the correct Adw.ViewStackPage name:

class Window(Adw.ApplicationWindow):
    […]

    @Gtk.Template.Callback()
    def _get_visible_child_name(self, _obj: Any, items: int) -> str:
        return "content" if items else "empty"
content: Adw.ToolbarView {
  […]

  content: Adw.ViewStack {
    visible-child-name: bind $_get_visible_child_name(selection_model.n-items) as <string>;
    enable-transitions: true;

    Adw.ViewStackPage {
      name: "content";

      child: ScrolledWindow {
        child: ListView {
          […]

          model: NoSelection selection_model { […] };
        };
      };
    }

    Adw.ViewStackPage {
      name: "empty";

      child: Adw.StatusPage {
        icon-name: "edit-find-symbolic";
        title: _("No Results Found");
        description: _("Try a different search");
      };
    }
  };
};

End Result

from typing import Any

from gi.repository import Adw, GObject, Gtk


class Species(GObject.GEnum):
    """The species of an animal."""

    NONE = 0
    CAT = 1
    DOG = 2


@Gtk.Template.from_resource("/org/example/Pets/window.ui")
class Window(Adw.ApplicationWindow):
    """The main window."""

    __gtype_name__ = "Window"

    @Gtk.Template.Callback()
    def _get_visible_child_name(self, _obj: Any, items: int) -> str:
        return "content" if items else "empty"

    @Gtk.Template.Callback()
    def _species_to_string(self, _obj: Any, species: Species) -> str:
        return species.value_nick

    @Gtk.Template.Callback()
    def _species_to_int(self, _obj: Any, species: Species) -> int:
        return int(species)


class Pet(GObject.Object):
    """Data about a pet."""

    __gtype_name__ = "Pet"

    name = GObject.Property(type=str)
    species = GObject.Property(type=Species, default=Species.NONE)

using Gtk 4.0;
using Adw 1;
using Gio 2.0;

template $Window: Adw.ApplicationWindow {
  title: _("Pets");
  default-width: 450;
  default-height: 450;

  content: Adw.ToolbarView {
    [top]
    Adw.HeaderBar {
      [start]
      ToggleButton search_button {
        icon-name: "edit-find-symbolic";
        tooltip-text: _("Search");

        ShortcutController {
          scope: managed;

          Shortcut {
            trigger: "<Control>f";
            action: "activate";
          }
        }
      }
    }

    [top]
    SearchBar {
      key-capture-widget: template;
      search-mode-enabled: bind search_button.active bidirectional;

      child: SearchEntry search_entry {
        hexpand: true;
        placeholder-text: _("Search pets");
      };
    }

    content: Adw.ViewStack {
      visible-child-name: bind $_get_visible_child_name(selection_model.n-items) as <string>;
      enable-transitions: true;

      Adw.ViewStackPage {
        name: "content";

        child: ScrolledWindow {
          child: ListView {
            styles [
              "navigation-sidebar",
            ]

            factory: BuilderListItemFactory {
              template ListItem {
                child: Label {
                  halign: start;
                  label: bind template.item as <$Pet>.name;
                };
              }
            };

            model: NoSelection selection_model {
              model: FilterListModel {
                filter: AnyFilter {
                  StringFilter {
                    expression: expr item as <$Pet>.name;
                    search: bind search_entry.text;
                  }

                  StringFilter {
                    expression: expr $_species_to_string(item as <$Pet>.species) as <string>;
                    search: bind search_entry.text;
                  }
                };

                model: SortListModel {
                  sorter: MultiSorter {
                    NumericSorter {
                      expression: expr $_species_to_int(item as <$Pet>.species) as <int>;
                    }

                    StringSorter {
                      expression: expr item as <$Pet>.name;
                    }
                  };

                  model: Gio.ListStore {
                    item-type: typeof<$Pet>;

                    $Pet {
                      name: "Herman";
                      species: cat;
                    }

                    $Pet {
                      name: "Saartje";
                      species: dog;
                    }

                    $Pet {
                      name: "Sofie";
                      species: dog;
                    }

                    $Pet {
                      name: "Rex";
                      species: dog;
                    }

                    $Pet {
                      name: "Lady";
                      species: dog;
                    }

                    $Pet {
                      name: "Lieke";
                      species: dog;
                    }

                    $Pet {
                      name: "Grumpy";
                      species: cat;
                    }
                  };
                };
              };
            };
          };
        };
      }

      Adw.ViewStackPage {
        name: "empty";

        child: Adw.StatusPage {
          icon-name: "edit-find-symbolic";
          title: _("No Results Found");
          description: _("Try a different search");
        };
      }
    };
  };
}

List models are pretty complicated, but I hope that this example provides a good idea of what’s possible from Blueprint, and is a good stepping stone to learn more.

Thanks for reading!

PS: a shout out to Markus for guessing what I’d write about next ;)

Hari Rana

@theevilskeleton

It’s True, “We” Don’t Care About Accessibility on Linux

Introduction

What do concern trolls and privileged people without visible or invisible disabilities who share or make content about accessibility on Linux being trash without contributing anything to projects have in common? They don’t actually really care about the group they’re defending; they just exploit these victims’ unfortunate situation to fuel hate against groups and projects actually trying to make the world a better place.

I never thought I’d be this upset to a point I’d be writing an article about something this sensitive with a clickbait-y title. It’s simultaneously demotivating, unproductive, and infuriating. I’m here writing this post fully knowing that I could have been working on accessibility in GNOME, but really, I’m so tired of having my mood ruined because of privileged people spending at most 5 minutes to write erroneous posts and then pretending to be oblivious when confronted while it takes us 5 months of unpaid work to get a quarter of recognition, let alone acknowledgment, without accounting for the time “wasted” addressing these accusations. This is far from the first time, and it will certainly not be the last.

I’m Not Angry

I’m not mad. I’m absolutely furious and disappointed in the Linux Desktop community for staying quiet about any kind of celebration of progress in accessibility, while proceeding to share content and cheer for random privileged people from big-name websites or social media who have literally put a negative amount of effort into advancing accessibility on Linux. I’m explicitly stating a negative amount because they actually make it significantly more stressful for us.

None of this is fair. If you’re the kind of person who stays quiet when we celebrate huge accessibility milestones, yet shares (or even makes) content that trash talks the people directly or indirectly contributing to the fucking software you use for free, you are the reason why accessibility on Linux is shit.

No one in their right mind wants to volunteer in a toxic environment where their efforts are hardly recognized by the public and they are blamed for “not doing enough”, especially when they are expected to take in all kinds of harassment, nonconstructive criticism, and slander for a salary of 0$.

There’s only one thing I am shamefully confident about: I am not okay in the head. I shouldn’t be working on accessibility anymore. The recognition-to-smearing ratio is unbearably low and arguably unhealthy, but leaving people in unfortunate situations behind is also not in accordance with my values.

I’ve been putting so much effort, quite literally hundreds of hours, into:

  1. thinking of ways to come up with inclusive designs and experiences;
  2. imagining how I’d use something if I had a certain disability or condition;
  3. asking for advice and feedback from people with disabilities;
  4. not getting paid from any company or organization; and
  5. making sure that all the accessibility-related work is in the public, and stays in the public.

Number 5 is especially important to me. I personally go as far as to refuse to contribute to projects under a permissive license, and/or that utilize a contributor license agreement, and/or that utilize anything riskily similar to these two, because I am of the opinion that no amount of code for accessibility should either be put under a paywall or be obscured and proprietary.

Permissive licenses make it painlessly easy for abusers to fork, build an ecosystem on top of it which may include accessibility-related improvements, slap a price tag alongside it, all without publishing any of these additions/changes. Corporations have been doing that for decades, and they’ll keep doing it until there’s heavy push back. The only time I would contribute to a project under a permissive license is when the tool is the accessibility infrastructure itself. Contributor license agreements are significantly worse in that regard, so I prefer to avoid them completely.

The Truth Nobody Is Telling You

KDE hired a legally blind contractor to work on accessibility throughout the KDE ecosystem, including complying with the EU Directive to allow selling hardware with Plasma.

GNOME’s new executive director, Steven Deobald, is partially blind.

The GNOME Foundation has been investing a lot of money to improve accessibility on Linux, for example by funding Newton, a Wayland accessibility project, and AccessKit integration into GNOME technologies. Around 250,000€ (1/4) of the STF budget was spent solely on accessibility. And get this: literally everybody managing these contracts and the communication with funders is a volunteer; they’re ensuring people with disabilities earn a living, but aren’t receiving anything in return. These are the real heroes who deserve endless praise.

The Culprits

Do you want to know who we should be blaming? Profiteers who are profiting from the community’s effort while investing very little to nothing into accessibility.

This includes a significant portion of the companies sponsoring GNOME and even companies that employ developers to work on GNOME. These companies are the ones making hundreds of millions, if not billions, in net profit indirectly from GNOME (and other free and open-source projects), and investing little to nothing into them. However, the worst offenders are the companies actively using GNOME without ever donating anything to fund the projects.

Some companies actually do put in an effort, like Red Hat and Igalia. Red Hat employs people with disabilities to work on accessibility in GNOME, one of whom I actually rely on when making accessibility-related contributions in GNOME. Igalia funds Orca, the screen reader that is part of GNOME, which is something the Linux community should be thankful for. However, companies have historically invested what’s necessary to comply with governments’ accessibility requirements, and then never invested in it again.

The privileged people who keep sharing and making content around accessibility on Linux being bad without contributing anything to it are, in my opinion, significantly worse than the companies profiting off of GNOME. Companies at least stay quiet, but these privileged people add an additional burden to contributors by either trash talking or sharing trash talkers. Once again, no volunteer deserves to be in the position of being shamed and ridiculed for “not doing enough”, since no one is entitled to their free time but themselves.

My Work Is Free but the Worth Is Not

Earlier in this article, I mentioned, and I quote: “I’ve been putting so much effort, quite literally hundreds of hours […]”. Let’s put an emphasis on “hundreds”. Here’s a list of most accessibility-related merge requests that have been incorporated into GNOME:

GNOME Calendar’s !559 addresses an issue where event widgets could not be focused and activated with the keyboard. That issue had been present since the very beginning of GNOME Calendar’s existence, which is to say for more than a decade. This alone was a two-week effort. Despite it being less than 100 lines of code, nobody truly knew what to do to have the widgets working properly before. This was followed up by !576, which made the event buttons usable in the month view with a keyboard, and then !587, which properly conveys the states of the widgets. Both combined are another two-week effort.

Then, at the time of writing this article, !564 adds 640 lines of code, which is something I’ve been volunteering on for more than a month, excluding the time before I opened the merge request.

Let’s do a little bit of math together with ‘only’ !559, !576, and !587. Just as a reminder: these three merge requests are a four-week effort in total, which I volunteered full-time—8 hours a day, or 160 hours a month. I compiled a small table that illustrates its worth:

Country | Average Wage for Professionals Working on Digital Accessibility (WebAIM) | Total in Local Currency (160 hours) | Exchange Rate | Total (CAD)
Canada | 58.71$ CAD/hour | 9,393.60$ CAD | N/A | 9,393.60$
United Kingdom | 48.20£ GBP/hour | 7,712£ GBP | 1.8502 | 14,268.74$
United States of America | 73.08$ USD/hour | 11,692.80$ USD | 1.3603 | 15,905.72$

To summarize the table: those three merge requests that I worked on for free were worth 9,393.60$ CAD (6,921.36$ USD) in total at a minimum.

Just a reminder:

  • these merge requests exclude the time spent to review the submitted code;
  • these merge requests exclude the time I spent testing the code;
  • these merge requests exclude the time we spent coordinating these milestones;
  • these calculations exclude the 30+ merge requests submitted to GNOME; and
  • these calculations exclude the merge requests I submitted to third-party GNOME-adjacent apps.

Now just imagine how I feel when I’m told I’m “not doing enough”, either directly or indirectly, by privileged people who don’t rely on any of these accessibility features. Whenever anybody says we’re “not doing enough”, I feel very much included, and I will absolutely take it personally.

It All Trickles Down to “GNOME Bad”

I fully expect everything I say in this article to be dismissed or be taken out of context on the basis of ad hominem, simply by the mere fact I’m a GNOME Foundation member / regular GNOME contributor. Either that, or be subject to whataboutism because another GNOME contributor made a comment that had nothing to do with mine but ‘is somewhat related to this topic and therefore should be pointed out just because it was maybe-probably-possibly-perhaps ableist’. I can’t speak for other regular contributors, but I presume that they don’t feel comfortable talking about this because they dared be a GNOME contributor. At least, that’s how I felt for the longest time.

Any content related to accessibility that doesn’t dunk on GNOME doesn’t see as much engagement, activity, and reaction as content that actively attacks GNOME, regardless of whether the criticism is fair. Many of these people don’t even use these accessibility features; they’re just looking for every opportunity to say “GNOME bad” and will 🪄 magically 🪄 start caring about accessibility.

Regular GNOME contributors like myself don’t always feel comfortable defending ourselves because dismissing GNOME developers just for being GNOME developers is apparently a trend…

Final Word

Dear people with disabilities,

I won’t insist that we’re either your allies or your enemies—I have no right to claim that whatsoever.

I wasn’t looking for recognition. I wasn’t looking for acknowledgment since the very beginning either. I thought I would be perfectly capable of quietly improving accessibility in GNOME, but because of the overall community’s persistence in smearing developers’ efforts without actually tackling the underlying issues within the stack, I think I’m justified in at least demanding acknowledgment from the wider community.

I highly doubt it will happen anyway, because the Linux community feeds off of drama and trash talking instead of being productive, without realizing that it demotivates active contributors while pushing away potential ones. And worst of all: people with disabilities are the ones affected the most, because they are misled into thinking that we don’t care.

It’s so unfair and infuriating that all the work I do and share online gains very little attention compared to random posts and articles from privileged people without disabilities that rant about the Linux desktop’s accessibility being trash. It doesn’t help that I become severely anxious when sharing accessibility-related work, trying to avoid any sign of virtue signalling. The last thing I want is to (unintentionally) give the impression of pretending to care about accessibility.

I beg you, please keep writing banger posts like fireborn’s I Want to Love Linux. It Doesn’t Love Me Back series and their interlude post. We need more people with disabilities to keep reminding developers that you exist and that your conditions and disabilities are a spectrum, not absolute.

We simultaneously need more interest from people with disabilities to contribute to free and open-source software, and the wider community to be significantly more intolerant of bullies who profit from smearing and demotivating people who are actively trying.

We should take inspiration from “Accessibility on Linux sucks, but GNOME and KDE are making progress” by OSNews. They acknowledge that accessibility on Linux is suboptimal while recognizing the efforts of GNOME and KDE. As a community, we should promote progress more often.

Locally hosting an internet-connected server

I'm lucky enough to have a weird niche ISP available to me, so I'm paying $35 a month for around 600MBit symmetric data. Unfortunately they don't offer static IP addresses to residential customers, nor do they allow multiple IP addresses per connection, and I'm the sort of person who'd like to run a bunch of stuff myself, so I've been looking for ways to manage this.

What I've ended up doing is renting a cheap VPS from a vendor that lets me add multiple IP addresses for minimal extra cost. The precise nature of the VPS isn't relevant - you just want a machine (it doesn't need much CPU, RAM, or storage) that has multiple world-routable IPv4 addresses associated with it and has no port blocks on incoming traffic. Ideally it's geographically local and peers with your ISP in order to reduce additional latency, but that's a nice-to-have rather than a requirement.

By setting that up you now have multiple real-world IP addresses that people can get to. How do we get them to the machine in your house you want to be accessible? First we need a connection between that machine and your VPS, and the easiest approach here is Wireguard. We only need a point-to-point link, nothing routable, and none of the IP addresses involved need to have anything to do with any of the rest of your network. So, on your local machine you want something like:

[Interface]
PrivateKey = privkeyhere
ListenPort = 51820
Address = localaddr/32

[Peer]
Endpoint = VPS:51820
PublicKey = pubkeyhere
AllowedIPs = vpswgaddr/32


And on your VPS, something like:

[Interface]
Address = vpswgaddr/32
SaveConfig = true
ListenPort = 51820
PrivateKey = privkeyhere

[Peer]
PublicKey = pubkeyhere
AllowedIPs = localaddr/32


The addresses here are (other than the VPS address) arbitrary - but they do need to be consistent, otherwise Wireguard is going to be unhappy and your packets will not have a fun time. Bring that interface up with wg-quick and make sure the devices can ping each other. Hurrah! That's the easy bit.
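
Assuming the configs above are saved as /etc/wireguard/wg0.conf on each machine (the interface name is arbitrary), that amounts to something like:

wg-quick up wg0    # run on both the local machine and the VPS
ping vpswgaddr     # from the local machine: can you reach the VPS's tunnel address?
ping localaddr     # from the VPS: and the other way around?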

Now you want packets from the outside world to get to your internal machine. Let's say the external IP address you're going to use for that machine is 321.985.520.309 and the wireguard address of your local system is 867.420.696.005. On the VPS, you're going to want to do:

iptables -t nat -A PREROUTING -p tcp -d 321.985.520.309 -j DNAT --to-destination 867.420.696.005

Now, all incoming packets for 321.985.520.309 will be rewritten to head towards 867.420.696.005 instead (make sure you've set net.ipv4.ip_forward to 1 via sysctl!). Victory! Or is it? Well, no.

What we're doing here is rewriting the destination address of the packets so instead of heading to an address associated with the VPS, they're now going to head to your internal system over the Wireguard link. Which is then going to ignore them, because the AllowedIPs statement in the config only allows packets coming from your VPS, and these packets still have their original source IP. We could rewrite the source IP to match the VPS IP, but then you'd have no idea where any of these packets were coming from, and that sucks. Let's do something better. On the local machine, in the peer stanza, let's update AllowedIPs to 0.0.0.0/0 to permit packets from any source to appear over our Wireguard link. But if we bring the interface up now, it'll try to route all traffic over the Wireguard link, which isn't what we want. So we'll add table = off to the interface stanza of the config to disable that, and now we can bring the interface up without breaking everything but still allowing packets to reach us. However, we do still need to tell the kernel how to reach the remote VPN endpoint, which we can do with ip route add vpswgaddr dev wg0. Add this to the interface stanza as:

PostUp = ip route add vpswgaddr dev wg0
PreDown = ip route del vpswgaddr dev wg0


That's half the battle. The problem is that they're going to show up there with the source address still set to the original source IP, and your internal system is (because Linux) going to notice it has the ability to just send replies to the outside world via your ISP rather than via Wireguard and nothing is going to work. Thanks, Linux. Thinux.

But there's a way to solve this - policy routing. Linux allows you to have multiple separate routing tables, and define policy that controls which routing table will be used for a given packet. First, let's define a new table reference. On the local machine, edit /etc/iproute2/rt_tables and add a new entry that's something like:

1 wireguard


where "1" is just a standin for a number not otherwise used there. Now edit your wireguard config and replace table=off with table=wireguard - Wireguard will now update the wireguard routing table rather than the global one. Now all we need to do is to tell the kernel to push packets into the appropriate routing table - we can do that with ip rule add from localaddr lookup wireguard, which tells the kernel to take any packet coming from our Wireguard address and push it via the Wireguard routing table. Add that to your Wireguard interface config as:

PostUp = ip rule add from localaddr lookup wireguard
PreDown = ip rule del from localaddr lookup wireguard

and now your local system is effectively on the internet.
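
For reference, here's a sketch of roughly what the local machine's config ends up looking like once all of the snippets above are folded together (same placeholder names as before; trim it to your needs):

[Interface]
PrivateKey = privkeyhere
ListenPort = 51820
Address = localaddr/32
# push routes for AllowedIPs into the dedicated "wireguard" table instead of the main one
Table = wireguard
# keep a route to the VPS's tunnel address in the main table
PostUp = ip route add vpswgaddr dev wg0
PreDown = ip route del vpswgaddr dev wg0
# policy routing: anything sourced from localaddr goes back out over the tunnel
PostUp = ip rule add from localaddr lookup wireguard
PreDown = ip rule del from localaddr lookup wireguard

[Peer]
Endpoint = VPS:51820
PublicKey = pubkeyhere
# accept forwarded packets regardless of their original source address
AllowedIPs = 0.0.0.0/0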

You can do this for multiple systems - just configure additional Wireguard interfaces on the VPS and make sure they're all listening on different ports. If your local IP changes then your local machines will end up reconnecting to the VPS, but to the outside world their accessible IP address will remain the same. It's like having a real IP without the pain of convincing your ISP to give it to you.


Jamie Gravendeel

@monster

Data Driven UI With Closures

It’s highly recommended to read my previous blog post first to understand some of the topics discussed here.

UI can be hard to keep track of when changed imperatively; preferably, it just follows the code’s state. Closures provide an intuitive way to do so by having data as input, and the desired value as output. They couple data with UI, but decouple the specific piece of UI that’s changed, making closures very modular. The example in this post uses Python and Blueprint.

Technicalities

First, it’s good to be familiar with the technical details behind closures. To quote from Blueprint’s documentation:

Expressions are only reevaluated when their inputs change. Because Blueprint doesn’t manage a closure’s application code, it can’t tell what changes might affect the result. Therefore, closures must be pure, or deterministic. They may only calculate the result based on their immediate inputs, not properties of their inputs or outside variables.

To elaborate, expressions know when their inputs have changed due to the inputs being GObject properties, which emit the “notify” signal when modified.
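
As a minimal illustration of that mechanism (the Demo class and its loading property are made up for this example), setting a GObject property emits “notify”, which is what tells the expression to reevaluate:

from gi.repository import GObject


class Demo(GObject.Object):
    """A throwaway object with a single property."""

    loading = GObject.Property(type=bool, default=True)


demo = Demo()
# Called whenever the "loading" property changes
demo.connect("notify::loading", lambda obj, _pspec: print("loading is now", obj.loading))
demo.loading = False  # triggers the handler above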

Another thing to note is where casting is necessary. To again quote Blueprint’s documentation:

Blueprint doesn’t know the closure’s return type, so closure expressions must be cast to the correct return type using a cast expression.

Just like Blueprint doesn’t know about the return type, it also doesn’t know the type of ambiguous properties. To provide an example:

Button simple_button {
  label: _("Click");
}

Button complex_button {
  child: Adw.ButtonContent {
    label: _("Click");
  };
}

Getting the label of simple_button in a lookup does not require a cast, since label is a known property of Gtk.Button with a known type:

simple_button.label

While getting the label of complex_button does require a cast, since child is of type Gtk.Widget, which does not have the label property:

complex_button.child as <Adw.ButtonContent>.label

Example

To set the stage, there’s a window with a Gtk.Stack which has two Gtk.StackPages, one for the content and one for the loading view:

from gi.repository import Adw, Gtk


@Gtk.Template.from_resource("/org/example/App/window.ui")
class Window(Adw.ApplicationWindow):
    """The main window."""

    __gtype_name__ = "Window"

using Gtk 4.0;
using Adw 1;

template $Window: Adw.ApplicationWindow {
  title: _("Demo");

  content: Adw.ToolbarView {
    [top]
    Adw.HeaderBar {}

    content: Stack {
      StackPage {
        name: "content";

        child: Label {
          label: _("Meow World!");
        };
      }

      StackPage {
        name: "loading";

        child: Adw.Spinner {};
      }
    };
  };
}

Switching Views Conventionally

One way to manage the views would be to rely on signals to communicate when another view should be shown:

from typing import Any

from gi.repository import Adw, GObject, Gtk


@Gtk.Template.from_resource("/org/example/App/window.ui")
class Window(Adw.ApplicationWindow):
    """The main window."""

    __gtype_name__ = "Window"

    stack: Gtk.Stack = Gtk.Template.Child()

    loading_finished = GObject.Signal()

    @Gtk.Template.Callback()
    def _show_content(self, *_args: Any) -> None:
        self.stack.set_visible_child_name("content")

A reference to the stack has been added, as well as a signal to communicate when loading has finished, and a callback to run when that signal is emitted.

using Gtk 4.0;
using Adw 1;

template $Window: Adw.ApplicationWindow {
  title: _("Demo");
  loading-finished => $_show_content();

  content: Adw.ToolbarView {
    [top]
    Adw.HeaderBar {}

    content: Stack stack {
      StackPage {
        name: "content";

        child: Label {
          label: _("Meow World!");
        };
      }

      StackPage {
        name: "loading";

        child: Adw.Spinner {};
      }
    };
  };
}

A signal handler has been added, as well as a name for the Gtk.Stack.

Only a couple of changes had to be made to switch the view when loading has finished, but all of them are sub-optimal:

  1. A reference in the code to the stack would be nice to avoid
  2. Imperatively changing the view makes following state harder
  3. This approach doesn’t scale well when the data can be reloaded; it would require another signal to be added

Switching Views With a Closure

To use a closure, the class needs data as input and a method to return the desired value:

from typing import Any

from gi.repository import Adw, GObject, Gtk


@Gtk.Template.from_resource("/org/example/App/window.ui")
class Window(Adw.ApplicationWindow):
    """The main window."""

    __gtype_name__ = "Window"

    loading = GObject.Property(type=bool, default=True)

    @Gtk.Template.Callback()
    def _get_visible_child_name(self, _obj: Any, loading: bool) -> str:
        return "loading" if loading else "content"

The signal has been replaced with the loading property, and the template callback has been replaced by a method that returns a view name depending on the value of that property. _obj here is the template class, which is unused.

using Gtk 4.0;
using Adw 1;

template $Window: Adw.ApplicationWindow {
  title: _("Demo");

  content: Adw.ToolbarView {
    [top]
    Adw.HeaderBar {}

    content: Stack {
      visible-child-name: bind $_get_visible_child_name(template.loading) as <string>;

      StackPage {
        name: "content";

        child: Label {
          label: _("Meow World!");
        };
      }

      StackPage {
        name: "loading";

        child: Adw.Spinner {};
      }
    };
  };
}

In Blueprint, the signal handler has been removed, as well as the unnecessary name for the Gtk.Stack. The visible-child-name property is now bound to a closure, which takes in the loading property referenced with template.loading.

This fixed the issues mentioned before:

  1. No reference in code is required
  2. State is bound to a single property
  3. If the data reloads, the view will also adapt
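
To round the example off, here’s a minimal sketch of how application code might drive that property (the two-second fake delay and the helper names are made up; only the assignments to self.loading matter):

from gi.repository import GLib  # in addition to the imports above


class Window(Adw.ApplicationWindow):
    […]

    def _start_loading(self) -> None:
        """Hypothetical trigger: pretend the data takes two seconds to arrive."""
        self.loading = True  # the bound closure switches the stack to "loading"
        GLib.timeout_add_seconds(2, self._finish_loading)

    def _finish_loading(self) -> bool:
        self.loading = False  # ...and back to "content" once the data is there
        return GLib.SOURCE_REMOVE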

Closing Thoughts

Views are just one UI element that can be managed with closures, but there are plenty of other elements that should adapt to data: think of icons, tooltips, visibility, etc. Whenever you’re writing a widget with moving parts and data, think about how the two can be linked; your future self will thank you!

Victor Ma

@victorma

A strange bug

In the last two weeks, I’ve been trying to fix a strange bug that causes the word suggestions list to have the wrong order sometimes.

For example, suppose you have an empty 3x3 grid. Now suppose that you move your cursor to each of the cells of the 1-Across slot (labelled α, β, and γ).

+---+---+---+
| α | β | γ |
+---+---+---+
|   |   |   |
+---+---+---+
|   |   |   |
+---+---+---+

You should expect the word suggestions list for 1-Across to stay the same, regardless of which cell your cursor is on. After all, all three cells have the same information: that the 1-Across slot is empty, and the intersecting vertical slot of whatever cell we’re on (1-Down, 2-Down, or 3-Down) is also empty.

There are no restrictions whatsoever, so all three cells should show the same word suggestion list: one that includes every three-letter word.

But that’s not what actually happens. In reality, the word suggestions list changes quite dramatically. The order of the list definitely changes. And it looks like there may even be words in one list that don’t appear in another. What’s going on here?

Understanding the code

My first step was to understand how the code for the word suggestions list works. I took notes along the way, in order to solidify my understanding. I especially found it useful to create diagrams for the word list resource (a pre-compiled resource that the code uses):

Word list resource diagram

By the end of the first week, I had a good idea of how the word-suggestions-list code works. The next step was to figure out the cause of the bug and how to fix it.

Investigating the bug

After doing some testing, I realized that the seemingly random orderings of the lists are not so random after all! The lists are actually all in alphabetical order—but based on the letter that corresponds to the cell, not necessarily the first letter.

What I mean is this:

  • The word suggestions list for cell α is sorted alphabetically by the first letter of the words. (This is normal alphabetical order.) For example:
    ALE, AXE, BAY, BOA, CAB
    
  • The word suggestions list for cell β is sorted alphabetically by the second letter of the words. For example:
    CAB, BAY, ALE, BOA, AXE
    
  • The word suggestions list for cell γ is sorted alphabetically by the third letter of the words. For example:
    BOA, CAB, ALE, AXE, BAY
    

Fixing the bug

The cause of the bug is quite simple: The function that generates the word suggestions list does not sort the list before it returns it. So the order of the list is whatever order the function added the words in. And because of how our implementation works, that order happens to be alphabetical, based on the letter that corresponds to the cell.

The fix for the bug is also quite simple—at least theoretically. All we need to do is sort the list before we return it. But in reality, this fix runs into some other problems that need to be addressed. Those problems are what I’m going to work on this week.

Jussi Pakkanen

@jpakkane

A custom C++ standard library part 4: using it for real

Writing your own standard library is all fun and games until someone (which is to say yourself) asks the important question: could this be actually used for real? Theories and opinions can be thrown about the issue pretty much forever, but the only way to actually know for sure is to do it.

Thus I converted CapyPDF, which is a fairly compact 15k LoC codebase, from the C++ standard library to Pystd, which is about 4k lines. All functionality is still the same, which is to say that the test suite passes; there are most likely new bugs that the tests do not catch. For those wanting to replicate the results themselves, clone the CapyPDF repo, switch to the pystdport branch and start building. Meson will automatically download and set up Pystd as a subproject. The code is fairly bleeding edge and only works on Linux with GCC 15.1.

Build times

One of the original reasons for starting Pystd was being annoyed at STL compile times. Let's see if we succeeded in improving on them. Build times when using only one core in debug look like this.

When optimizations are enabled the results look like this:

In both cases the Pystd version compiles in about a quarter of the time.

Binary size

C++ gets a lot of valid criticism for creating bloated code. How much of that is due to the language as opposed to the library?

That's quite unexpected. The debug info for STL types seems to take an extra 20 megabytes. But how about the executable code itself?

STL is still 200 kB bigger. Based on observations most of this seems to come from libstdc++'s implementation of variant. Note that if you try this yourself the Pystd version is probably 100 kB bigger, because by default the build setup links against libsupc++, which adds 100+ kB to binary sizes, whereas linking against the main C++ runtime library does not.

Performance

Ok, fine, so we can implement basic code to build faster and take less space. Fine. But what about performance? That is the main thing that matters after all, right? CapyPDF ships with a simple benchmark program. Let's look at its memory usage first.

Apologies for the Y-axis not starting at zero. I tried my best to make it happen, but LibreOffice Calc said no. In any case the outcome itself is expected. Pystd has not seen any performance optimization work, so its requiring 10% more memory is tolerable. But what about the actual runtime itself?

This is unexpected to say the least. A reasonable result would have been to be only 2x slower than the standard library, but the code ended up being almost 25% faster. This is even stranger considering that Pystd's containers do bounds checks on all accesses, the UTF-8 parsing code sometimes validates its input twice, the hashing algorithm is a simple multiply-and-xor and so on. Pystd should be slower, and yet, in this case at least, it is not.

I have no explanation for this. It is expected that Pystd will start performing (much) worse as the data set size grows but that has not been tested.

Status update, 15/06/2025

This month I created a personal data map where I tried to list all my important digital identities.

(It’s actually now a spreadsheet, which I’ll show you later. I didn’t want to start the blog post with something as dry as a screenshot of a spreadsheet.)

Anyway, I made my personal data map for several reasons.

The first reason was to stay safe from cybercrime. In a world of increasing global unfairness and inequality, of course crime and scams are increasing too. Schools don’t teach how digital tech actually works, so it’s a great time to be a cyber criminal. Imagine being a house burglar in a town where nobody knows how doors work.

Lucky for me, I’m a professional door guy. So I don’t worry too much beyond having a really really good email password (it has numbers and letters). But it’s useful to double-check whether I have my credit card details on a site where the password is still “sam2003”.

The second reason is to help me migrate to services based in Europe. Democracy over here is what it is, there are good days and bad days, but unlike the USA we have at least more options than a repressive death cult and a fundraising business. (Shout to @angusm@mastodon.social for that one). You can’t completely own your digital identity and your data, but you can at least try to keep it close to home.

The third reason was to see who has the power to influence my online behaviour.

This was an insight from reading the book Technofeudalism. I’ve always been uneasy about websites tracking everything I do. Most of us are, to the point that we have made myths like “your phone microphone is always listening so Instagram can target adverts”. (As McSweeney’s Internet Tendency confirms, it’s not! It’s just tracking everything you type, every app you use, every website you visit, and everywhere you go in the physical world).

I used to struggle to explain why all that tracking feels bad. Technofeudalism frames a concept of cloud capital, saying this is now more powerful than other kinds of capital because cloud capitalists can do something Henry Ford, Walt Disney and The Monopoly Guy can only dream of: mine their data stockpile to produce precisely targeted recommendations, search bubbles and adverts which can influence your behaviour before you’ve even noticed.

This might sound paranoid when you first hear it, but consider how social media platforms reward you for expressing anger and outrage. Remember the first time you saw a post on Twitter from a stranger that you disagreed with? And your witty takedown attracted likes and praise? This stuff can be habit-forming.

In the 20th century, ad agencies changed people’s buying patterns and political views using billboards, TV channels and newspapers. But all that is like a primitive blunderbuss compared to recommendation algorithms, feedback loops and targeted ads on social media and video apps.

I lived through the days when a web search for “Who won the last election” would just return you 10 pages that included the word “election”. (If you’re nostalgic for those days… you’ll be happy to know that GNOME’s desktop search engine still works like that today! :-)) I can spot when apps try to ‘nudge’ me with dark patterns. But kids aren’t born with that skill, and they aren’t necessarily going to understand the nature of Tech Billionaire power unless we help them to see it. We need a framework to think critically and discuss the power that Meta, Amazon and Microsoft have over everyone’s lives. Schools don’t teach how digital tech actually works, but maybe a “personal data map” can be a useful teaching tool?

By the way, here’s what my cobbled-together “Personal data map” looks like, taking into account security, what data is stored and who controls it. (With some fake data… I don’t want this blog post to be a “How to steal my identity” guide.)

Name | Risks | Sensitivity rating | Ethical rating | Location | Controller | First factor | Second factor | Credentials cached? | Data stored
Bank account | Financial loss | 10 | 2 | Europe | Bank | Fingerprint | None | On phone | Money, transactions
Instagram | Identity theft | 5 | -10 | USA | Meta | Password | Email | On phone | Posts, likes, replies, friends, views, time spent, locations, searches.
Google Mail (sam@gmail.com) | Reset passwords | 9 | -5 | USA | Google | Password | None | Yes – cookies | Conversations, secrets
Github | Impersonation | 3 | 3 | USA | Microsoft | Password | OTP | Yes – cookies | Credit card, projects, searches.

How is it going migrating off USA based cloud services?

“The internet was always a project of US power”, says Paris Marx in a keynote at the PublicSpaces conference, which I had never heard of before.

Closing my Amazon account took an unnecessary amount of steps, and it was sad to say goodbye to the list of 12 different addresses I called home at various times since 2006, but I don’t miss it; I’ve been avoiding Amazon for years anyway. When I need English-language books, I get them from an Irish online bookstore named Kenny’s. (Ireland, cleverly, did not leave the EU, so they can still ship books to Spain without incurring import taxes).

Dropbox took a while because I had years of important stuff in there. I actually don’t think they’re too bad of a company, and it was certainly quick to delete my account. (And my data… right? You guys did delete all my data?).

I was using Dropbox to sync notes with the Joplin notes app, and switched to the paid Joplin Cloud option, which seems a nice way to support a useful open source project.

I still needed a way to store sensitive data, and realized I have access to Protondrive. I can’t recommend that as a service because the parent company Proton AG don’t seem so serious about Linux support, but I got it to work thanks to some heroes who added a protondrive backend to rclone.

Instead of using Google cloud services to share photos, and to avoid anything so primitive as an actual cable, I learned that KDE Connect can transfer files from my Android phone to my laptop really neatly. KDE Connect is really good. On the desktop I use GSConnect, which integrates with GNOME Shell really well. I think I’ve not been so impressed by a volunteer-driven open source project in years. Thanks to everyone who worked on these great apps!

I also migrated my VPS from a US-based host, Tornado VPS, to one in Europe. Tornado VPS (formerly prgmr.com) are a great company, but storing data in the USA doesn’t seem like the way forwards.

That’s about it so far. Feels a bit better.

What’s next?

I’m not sure what’s next!

I can’t leave Github and Gitlab.com, but my days of “Write some interesting new code and push it straight to Github” are long gone. I didn’t sign up to train somebody else’s LLM for free, and neither should you. (I’m still interested in sharing interesting code with nice people, of course, but let’s not make it so easy for Corporate America to take our stuff without credit or compensation. Bring back the “sneakernet“!)

Leaving Meta platforms and dropping YouTube doesn’t feel directly useful. It’s like individually renouncing debit cards, or air travel: a lot of inconvenience for you, but the business owners don’t even notice. The important thing is to use the alternatives more. Hence why I still write a blog in 2025 and mostly read RSS feeds and the Fediverse. Gigs where I live are mostly only promoted on Instagram, but I’m sure that’s temporary.

In the first quarter of 2025, rich people put more money into AI startups than everything else put together (see: Pivot to AI). Investors love a good bubble, but there’s also an element of power here.

If programmers only know how to write code using Copilot, then whoever controls Microsoft has the power to decide what code we can and can’t write. (Currently this seems limited to not using the word ‘gender’, but I can imagine a future where it catches you reverse-engineering proprietary software, jailbreaking locked-down devices, or trying to write a new BitTorrent client).

If everyone gets their facts from ChatGPT, then whoever controls OpenAI has the power to tweak everyone’s facts, an ability that is currently limited only to presidents of major world superpowers. If we let ourselves avoid critical thinking and rely on ChatGPT to generate answers to hard questions instead, which teachers say is very much exactly what’s happening in schools now… then what?

Toluwaleke Ogundipe

@toluwalekeog

Hello GNOME and GSoC!

I am delighted to announce that I am contributing to GNOME Crosswords as part of the Google Summer of Code 2025 program. My project primarily aims to add printing support to Crosswords, with some additional stretch goals. I am being mentored by Jonathan Blandford, Federico Mena Quintero, and Tanmay Patil.

The Days Ahead

During my internship, I will be refactoring the puzzle rendering code to support existing and printable use cases, adding clues to rendered puzzles, and integrating a print dialog into the game and editor with crossword-specific options. Additionally, I should implement an ipuz2pdf utility to render puzzles in the IPUZ format to PDF documents.

Beyond the internship, I am glad to be a member of the GNOME community and look forward to so much more. In the coming weeks, I will be sharing updates about my GSoC project and other contributions to GNOME. If you are interested in my journey with GNOME and/or how I got into GSoC, I implore you to watch out for a much longer post coming soon.

Appreciation

Many thanks to Hans Petter Jansson, Federico Mena Quintero and Jonathan Blandford, who have all played major roles in my journey with GNOME and GSoC. 🙏❤

Taking out the trash, or just sweeping it under the rug? A story of leftovers after removing files

There are many things that we take for granted in this world, and one of them is undoubtedly the ability to clean up your files - imagine a world where you couldn't just throw away all those disk-space-hungry things that you no longer find useful. Though that might sound impossible, it turns out some people have encountered a particularly interesting bug that resulted in the Trash being silently swept under the rug instead of emptied in Nautilus. Since I was blessed to run into that issue myself, I decided to fix it and shed some light on the fun.

Trash after emptying in Nautilus, are the files really gone?


It all started with a 2009 Ubuntu Launchpad ticket, reported against Nautilus. The user found 70 GB worth of files in the ~/.local/share/Trash/expunged directory using a disk analyzer, even though they had emptied the Trash with the graphical interface. They did realize the offending files belonged to another user; however, they couldn't reproduce it easily at first. After all, when you try to move to trash a file or a directory not belonging to you, you would usually be correctly informed that you don't have the necessary permissions, and perhaps even be offered the option to permanently delete it instead. So what was so special about this case?

First let's get a better view of when we can and when we can't permanently delete files, something that is done at the end of a successful trash emptying operation. We'll focus only on the owners of relevant files, since other factors, such as file read/write/execute permissions, can be adjusted freely by their owners, and that's what trash implementations will do for you. Here are cases where you CAN delete files:

- when a file is in a directory owned by you, you can always delete it
- when a directory is in a directory owned by you and it's owned by you, you can obviously delete it
- when a directory is in a directory owned by you but you don't own it, and it's empty, you can surprisingly delete it as well

So to summarize, no matter who the owner of the file or directory is, if it's in a directory owned by you, you can get rid of it. There is one exception to this - the directory must be empty, otherwise you will be able to remove neither it nor the files it contains. Which takes us to an analogous list of cases where you CANNOT delete files:

- when a directory is in a directory owned by you but you don't own it, and it's not empty, you can't delete it.
- when a file is in a directory NOT owned by you, you can't delete it
- when a directory is in a directory NOT owned by you, you can't delete it either

In contrast with removing files in a directory you own, when you are not the owner of the parent directory, you cannot delete any of the child files and directories, without exception. This is actually the reason for the one case where you can't remove something from a directory you own - to remove a non-empty directory, you first need to recursively delete all of the files and directories it contains, and you can't do that if the directory is not owned by you.

Now let's look inside the trash can, or rather at how it functions - the reason for separating the permanent deletion and trashing operations is obvious: users are expected to change their mind and be able to get their files back on a whim, so there's a need for a middle step. That's where the Trash specification comes in, providing a common way in which all "Trash can" implementations should store, list, and restore trashed files, even across different filesystems - the Nautilus Trash feature is one of the possible implementations. Trashing actually works by moving files to the $XDG_DATA_HOME/Trash/files directory and setting up some metadata to track their original location, so they can be restored if needed. Only when the user empties the trash are they actually deleted. Since it's all about moving files, specifically outside their previous parent directory (i.e. to Trash), let's look at cases where you CAN move files:

- when a file is in a directory owned by you, you can move it
- when a directory is in a directory owned by you and you own it, you can obviously move it

We can see that the only exception when moving files in a directory you own is when the directory you're moving doesn't belong to you, in which case you will be correctly informed that you don't have permissions. In the remaining cases, users are able to move files and therefore trash them. Now what about the cases where you CANNOT move files?

- when a directory is in a directory owned by you but you don't own it, you can't move it
- when a file is in a directory NOT owned by you, you can't move it either
- when a directory is in a directory NOT owned by you, you still can't move it

In those cases Nautilus will either not expose the ability to trash files, or will tell the user about the error, and the system is working well - even if moving them was possible, permanently deleting files in a directory not owned by you is not supported anyway.

So, where's the catch? What are we missing? We've got two different operations that can succeed or fail under different circumstances: moving (trashing) and deleting. We need to find a situation where moving a file is possible but deleting it is not, and such an overlap exists, by chaining the following two rules:

- when a directory A is in a directory owned by you and it's owned by you, you can obviously move it
- when a directory B is in a directory A owned by you but you don't own it, and it's not empty, you can't delete it.

So a simple way to reproduce was found, precisely:

mkdir -p test/root                # a directory you own (test) containing a subdirectory (root)
touch test/root/file              # the subdirectory is not empty
sudo chown root:root test/root    # hand the subdirectory over to another user

Afterwards, trashing and emptying in Nautilus or with the gio trash command will result in the files not being deleted and being left in ~/.local/share/Trash/expunged, which is used by gvfsd-trash as an intermediary during the emptying operation. The situations where that can happen are very rare, but they do exist - personally I encountered this when manually cleaning container files created by podman in ~/.local/share/containers, which arguably I shouldn't be doing in the first place, and should rather leave up to podman itself. Nevertheless, it's still possible from the user's perspective, and should be handled and prevented correctly. That's exactly what was done: a ticket was submitted and moved to the appropriate place, which turned out to be GLib itself, and I submitted an MR that was merged - now both Nautilus and gio trash will recursively check for this case and prevent you from doing it. You can expect it in the next GLib release, 2.85.1.
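
If you're curious whether your own system has any such leftovers, something along these lines will tell you (adjust the path if your XDG data directory is non-default):

du -sh ~/.local/share/Trash/expunged 2>/dev/null                      # total size of anything left behind
find ~/.local/share/Trash/expunged -mindepth 1 2>/dev/null | head     # list a few leftovers, if any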

On an ending note I want to thank the GLib maintainer Philip Withnall, who walked me through the required changes and reviewed them, and ask you one thing: is your ~/.local/share/Trash/expunged really empty? :)

Steven Deobald

@steven

2025-06-14 Foundation Report

These weeks are going by fast and I’m still releasing these reports after the TWIG goes out. Weaker humans than I might be tempted to automate — but don’t worry! These will always be artisanal, hand-crafted, single-origin, uncut, and whole bean. Felix encouraged me to add these to the following week’s TWIG, at least, so I’ll start doing that.

 

## Opaque Stuff

  • a few policy decisions are in-flight with the Board — productive conversations happening on all fronts, and it feels really good to see them moving forward

 

## Elections

Voting closes in 5 days (June 19th). If you haven’t voted yet, get your votes in!

 

## GUADEC

Planning for GUADEC is chugging along. Sponsored visas, flights, and hotels are getting sorted out.

If you have a BoF or workshop proposal, get it in before tomorrow!

 

## Operations

Our yearly CPA review is finalized. Tax filings and 990 prep are in flight.

 

## Infrastructure

You may have seen our infrastructure announcement on social media earlier this week. This closes a long chapter of transitioning to AWS for GNOME’s essential services. A number of people have asked me if our setup is now highly AWS-specific. It isn’t. The vast majority of GNOME’s infrastructure runs on vanilla Linux and OpenShift. AWS helps our infrastructure engineers scale our services. They’re also generously donating the cloud infrastructure to the Foundation to support the GNOME project.

 

## Fundraising

Over the weekend, I booted up a couple of volunteer developers to help with a sneaky little project we kicked off last week. As Julian, Pablo, Adrian, and Tobias have told me: No Promises… so I’m not making any. You’ll see it when you see it. 🙂 Hopefully in a few days. This has been the biggest focus of the Foundation over the past week-and-a-half.

Many thanks to the other folks who’ve been helping with this little initiative. The Foundation could really use some financial help soon, and this project will be the base we build everything on top of.

 

## Meeting People

Speaking of fundraising, I met Loren Crary of the Python Foundation! She is extremely cool and we found out that we both somehow descended on the term “gentle nerds”, each thinking we coined it ourselves. I first used this term in my 2015 Rootconf keynote. She’s been using it for ages, too. But I didn’t originally ask for her help with terminology. I went to her to sanity-check my approach to fundraising and — hooray! — she tells me I’m not crazy. Semi-related: she asked me if there are many books on GNOME and I had to admit I’ve never read one myself. A quick search shows me Mastering GNOME: A Beginner’s Guide and The Linux GNOME Desktop For Dummies. Have you ever read a book on GNOME? Or written one?

I met Jorge Castro (of CNCF and Bazzite fame), a friend of Matt Hartley. We talked October GNOME, Wayland, dconf, KDE, Kubernetes, Fedora, and the fact that the Linux desktop is the true UI to cloud-native …everything. He also wants to be co-conspirators and I’m all about it. It had never really occurred to me that the ubiquity of dconf means GNOME is actually highly configurable, since I tend to eat the default GNOME experience (mostly), but it’s a good point. I told him a little story that the first Linux desktop experience that outstripped both Windows and MacOS for me was on a company-built RHEL machine back in 2010. Linux has been better than commercial operating systems for 15 years and the gap keeps widening. The Year of The Linux Desktop was a decade ago… just take the W.

I had a long chat with Tobias and, among other things, we discussed the possibility of internal conversation spaces for Foundation Members and the possibility of a project General Assembly. Both nice ideas.

I met Alejandro and Ousama from Slimbook. It was really cool to hear what their approach to the market is, how they ensure Linux and GNOME run perfectly on their hardware, and where their devices go. (They sell to NASA!) We talked about improving upstream communications and ways for the Foundation to facilitate that. We’re both hoping to get more Slimbooks in the hands of more developers.

We had our normal Board meeting. Karen gave me some sage advice on fundraising campaigns and grants programs.

 

## One-Month Feedback Session

I had my one-month feedback session with Rob and Allan, who are President and Vice-President at the moment, respectively. (And thus, my bosses.)

Some key take-aways are that they’d like me to increase my focus on the finances and try to make my community outreach a little more sustainable by being less verbose. Probably two sides of the same coin, there. 🙂 I’ve already shifted my focus toward finances as of two weeks ago… which may mean you’ve seen less of me in Matrix and other community spaces. I’m still around! I just have my nose in a spreadsheet or something.

They said some nice stuff, too, but nobody gets better by focusing on the stuff they’re already doing right.

 

TIL that htop can display more useful metrics

A program on my Raspberry Pi was reading data on disk, performing operations, and writing the result on disk. It did so at an unusually slow speed. The problem could either be that the CPU was too underpowered to perform the operations it needed or the disk was too slow during read, write, or both.

I asked colleagues for opinions, and one of them mentioned that htop could orient me in the right direction. The time a CPU spends waiting for an I/O device such as a disk is known as the I/O wait. If that wait time is higher than 10%, then the CPU spends a lot of time waiting for data from the I/O device, so the disk is likely the bottleneck. If the wait time remains low, then the CPU is likely the bottleneck.

By default htop doesn't show the wait time. By pressing F2 I can access htop's configuration. There I can use the right arrow to move to the Display options, select Detailed CPU time (System/IO-Wait/Hard-IRQ/Soft-IRQ/Steal/Guest), and press Space to enable it.

I can then press the left arrow to get back to the options menu, and move to Meters. Using the right arrow I can go to the rightmost column, select CPUs (1/1): all CPUs by pressing Enter, move it to one of the two columns, and press Enter when I'm done. With it still selected, I can press Enter to alternate through the different visualisations. The most useful to me is the [Text] one.

I can do the same with Disk IO to track the global read / write speed, and Blank to make the whole set-up more readable.

With htop configured like this, I can trigger my slow program again and see that the CPU is not waiting for the disk: all CPUs have a wa of 0%.
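
If you don't have htop at hand, the same signal can be read straight from /proc/stat. The snippet below is not from the original post, just a minimal sketch: it samples the aggregate cpu line twice and prints the share of time spent in iowait between the two samples, which roughly corresponds to the wa value htop displays.

#!/bin/bash
# Minimal sketch: system-wide iowait percentage over a 5 second window.
# /proc/stat fields: cpu user nice system idle iowait irq softirq steal guest guest_nice

sample() {
    read -r _ user nice system idle iowait irq softirq steal _ < /proc/stat
    echo "$iowait $((user + nice + system + idle + iowait + irq + softirq + steal))"
}

read -r io1 total1 < <(sample)
sleep 5
read -r io2 total2 < <(sample)

echo "iowait: $(( 100 * (io2 - io1) / (total2 - total1) ))%"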

If you know more useful tools I should know about when chasing bottlenecks, or if you think I got something wrong, please email me at thib@ergaster.org!

This Week in GNOME

@thisweek

#204 Sending Packets

Update on what happened across the GNOME project in the week from June 06 to June 13.

GNOME Releases

Adrian Vovk announces

The GNOME Release team is pleased to announce that we have decided to move forward with the removal of GNOME’s X11 session. To that end, we have disabled the X11 session by default at compile time, and have released an early GNOME 49.alpha.0 to get this change into distributions like Fedora Rawhide. The feedback we hear back will inform our next steps. Please check out Jordan’s blog post for more details.

GNOME Core Apps and Libraries

Adrian Vovk reports

Core components of the GNOME desktop, like GDM and gnome-session, are actively undergoing modernizations that will increase GNOME’s dependency on systemd. To ensure that our downstreams are aware of this change and have time to prepare, the GNOME release team has written a blog post explaining what is changing, why, and how to adapt. Please see Adrian’s blog for details.

Glycin

Sandboxed and extendable image loading and editing.

Sophie 🏳️‍🌈 🏳️‍⚧️ (she/her) says

Glycin, GNOME’s new image loading library that is already used by our Image Viewer (Loupe), can now also power the legacy image-loading library GdkPixbuf. This will significantly improve the safety of image handling and provide more features in the future. The article Making GNOME’s GdkPixbuf Image Loading Safer contains more details.

Third Party Projects

nozwock announces

Packet has received several updates since the last time. Recent improvements include:

  • Desktop notifications for incoming transfers
  • The ability to run in the background and auto-start at login
  • Nautilus integration with a “Send with Packet” context menu option

As always, you can get the latest version from Flathub!

justinrdonnelly reports

Hot on the heels of the debut release of Bouncer, I’ve released a new version. Critically, this version includes a fix for non-English language users where Bouncer wouldn’t start. And if your non-English language happens to be Dutch, you get an extra bonus because it now includes Dutch translations thanks to Vistaus! Bouncer is available on Flathub!

Alexander Vanhee reports

Gradia has received a major facelift this week, both in terms of features and design:

  • A new background image mode has been added, offering six presets to choose from, or you can bring your own image!
  • A new solid colour background mode is now available, most notably including a fully transparent option. This allows you to ignore the background feature entirely and use Gradia purely for annotations.
  • Introduced an auto-increasing number stamp tool, useful for creating quick guides around an image.
  • The app now also finally persists the selected annotation tool and its options across sessions.

You can grab the app on Flathub.

Semen Fomchenkov says

Hello everyone! This week, at ALT Gnome and the ALT Linux Team, we’re happy to announce that Tuner is now available on Flathub!

This process took us longer than expected, as the Flathub team had concerns about the minimal functionality of the base Tuner app. As a result, the Flathub build of Tuner also includes the TunerTweaks module, which provides basic GNOME customization features across different distributions.

New Features in Development

We are actively working on expanding the functionality of plugins and adapting Tuner to various environments. Here are some of the features we are currently finalizing or developing and plan to include in future releases:

  • The ability to manage installed plugins directly from within Tuner, such as hiding unused ones without uninstalling them, and viewing information about plugin authors.
  • Improved API for modules to simplify the creation of basic modules and allow for more extensible functionality (already used in the Flathub build and in the TunerTweaks module).
  • Support for complex page structures, enabling more advanced modules with custom menus and submenus in the interface (thanks to the GNOME Builder team for the inspiration).

All current changes are available on the project page in ALT Linux Space

Documentation and Community

We recently launched a dedicated Matrix room for Tuner, which you can join here: Tuner Matrix Room

Once we complete major API changes in Tuner, we plan to update the module development documentation and present it as a community-driven Wiki project. We’ll be sure to notify you once it’s ready!

Pipeline

Follow your favorite video creators.

schmiddi says

Pipeline version 2.4.0 was released, making it easier to curate your video feed. Adding filters to remove videos from your feed was simplified: videos now have a context menu for filtering out similar videos. Based on the uploader and title of the video, you will be prompted for which part of the title you want to filter on. You can now also hide videos from your feed which you already watched. Your video history is of course stored locally, and you can turn off keeping the history if you want.

Shell Extensions

Just Perfection says

We’ve updated the EGO review guidelines for clipboard access. If your extension uses the clipboard, you need to update the metadata description and follow the new guidelines.

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

Lennart Poettering

@mezcalero

ASG! 2025 CfP Closes Tomorrow!

The All Systems Go! 2025 Call for Participation Closes Tomorrow!

The Call for Participation (CFP) for All Systems Go! 2025 will close tomorrow, on 13th of June! We’d like to invite you to submit your proposals for consideration to the CFP submission site quickly!

Andy Wingo

@wingo

whippet in guile hacklog: evacuation

Good evening, hackfolk. A quick note this evening to record a waypoint in my efforts to improve Guile’s memory manager.

So, I got Guile running on top of the Whippet API. This API can be implemented by a number of concrete garbage collector implementations. The implementation backed by the Boehm collector is fine, as expected. The implementation that uses the bump-pointer-allocation-into-holes strategy is less good. The minor reason is heap sizing heuristics; I still get it wrong about when to grow the heap and when not to do so. But the major reason is that non-moving Immix collectors appear to have pathological fragmentation characteristics.

Fragmentation, for our purposes, is memory under the control of the GC which was free after the previous collection, but which the current cycle failed to use for allocation. I have the feeling that for the non-moving Immix-family collector implementations, fragmentation is much higher than for size-segregated freelist-based mark-sweep collectors. For an allocation of, say, 1024 bytes, the collector might have to scan over many smaller holes until it finds a hole that is big enough. This wastes free memory. Fragmented memory is not gone—it is still available for allocation!—but it won’t be allocatable until after the current cycle when we visit all holes again. In Immix, fragmentation wastes allocatable memory during a cycle, hastening collection and causing more frequent whole-heap traversals.

The value proposition of Immix is that if there is too much fragmentation, you can just go into evacuating mode, and probably improve things. I still buy it. However I don’t think that non-moving Immix is a winner. I still need to do more science to know for sure. I need to fix Guile to support the stack-conservative, heap-precise version of the Immix-family collector which will allow for evacuation.

So that’s where I’m at: a load of gnarly Guile refactors to allow for precise tracing of the heap. I probably have another couple weeks left until I can run some tests. Fingers crossed; we’ll see!

Alireza Shabani

@Revisto

Why GNOME’s Translation Platform Is Called “Damned Lies”

Damned Lies is the name of GNOME’s web application for managing localization (l10n) across its projects. But why is it named like this?

Damned Lies about GNOME

Screenshot of Gnome Damned Lies from Google search with the title: Damned Lies about GNOME

On the About page of GNOME’s localization site, the only explanation given for the name Damned Lies is a link to a Wikipedia article called “Lies, damned lies, and statistics”.

“Damned Lies” comes from the saying “Lies, damned lies, and statistics”, a 19th-century phrase used to describe the persuasive power of statistics to bolster weak arguments, as described on Wikipedia. One of its earliest known uses appeared in an 1891 letter to the National Observer, which categorised lies into three types:

“Sir, —It has been wittily remarked that there are three kinds of falsehood: the first is a ‘fib,’ the second is a downright lie, and the third and most aggravated is statistics. It is on statistics and on the absence of statistics that the advocate of national pensions relies …”

To find out more, I asked in GNOME’s i18n Matrix room, and Alexandre Franke helped a lot. He said:

Stats are indeed lies, in many ways.
Like if GNOME 48 gets 100% translated in your language on Damned Lies, it doesn’t mean the version of GNOME 48 you have installed on your system is 100% translated, because the former is a real time stat for the branch and the latter is a snapshot (tarball) at a specific time.
So 48.1 gets released while the translation is at 99%, and then the translators complete the work, but you won’t get the missing translations until 48.2 gets released.
Works the other way around: the translation is at 100% at the time of the release, but then there’s a freeze exception and the stats go 99% while the released version is at 100%.
Or you are looking at an old version of GNOME for which there won’t be any new release, which wasn’t fully translated by the time of the latest release, but then a translator decided that they wanted to see 100% because the incomplete translation was not looking as nice as they’d like, and you end up with Damned Lies telling you that version of GNOME was fully translated when it never was and never will be.
All that to say that translators need to learn to work smart, at the right time, on the right modules, and not focus on the stats.

So there you have it: Damned Lies is a name that reminds us that numbers and statistics can be misleading, even on GNOME’s l10n web application.

Varun R Mallya

@varunrmallya

The Design of Sysprof-eBPF

Sysprof

This is a tool that is used to profile applications on Linux. It tracks function calls and other events in the system to provide a detailed view of what is happening in the system. It is a powerful tool that can help developers optimize their applications and understand performance issues. Visit Sysprof for more information.

sysprof-ebpf

This is a project I am working on as part of GSoC 2025 mentored by Christian Hergert. The goal is to create a new backend for Sysprof that uses eBPF to collect profiling data. This will mostly serve as groundwork for the coming eBPF capabilities that will be added to Sysprof. This will hopefully also serve as the design documentation for anyone reading the code for Sysprof-eBPF in the future.

Testing

If you want to test out the current state of the code, you can do so by following these steps:

  1. Clone the repo and fetch my branch.
  2. Run the following script in the root of the project:
    #!/bin/bash
    set -euo pipefail
    GREEN="\033[0;32m"
    BLUE="\033[0;34m"
    RESET="\033[0m"
    
    prefix() {
        local tag="$1"
        while IFS= read -r line; do
            printf "%b[%s]%b %s\n" "$BLUE" "$tag" "$RESET" "$line"
        done
    }
    
    trap 'sudo pkill -f sysprofd; sudo pkill -f sysprof; exit 0' SIGINT SIGTERM
    
    meson setup build --reconfigure || true
    ninja -C build || exit 1
    sudo ninja -C build install || exit 1
    sudo systemctl restart polkit || exit 1
    
    # Run sysprofd and sysprof as root
    echo -e "${GREEN}Launching sysprofd and sysprof in parallel as root...${RESET}"
    
    sudo stdbuf -oL ./build/src/sysprofd/sysprofd 2>&1 | prefix "sysprofd" &
    sudo stdbuf -oL sysprof 2>&1 | prefix "sysprof" &
    
    wait
    

Capabilities of Sysprof-eBPF

sysprof-ebpf will be a subprocess created by sysprofd when the user selects the eBPF backend in the UI. I will be adding an options menu to the UI for choosing which tracers to activate once I am done with the initial implementation. You can find my current dirty code here. As of writing this blog, this MR has the following capabilities:

  • A tiny toggle on the UI: turns the eBPF backend on and off. This simple toggle starts or stops the sysprof-ebpf subprocess.
  • Full eBPF compilation pipeline: This is the core of the sysprof-ebpf project. It compiles eBPF programs from C code to BPF bytecode, loads them into the kernel, and attaches them to the appropriate tracepoints. Loading and attaching are done using the libbpf library, which provides a high-level API for working with eBPF programs, while the C-to-bytecode compilation happens at build time, which means the user does not need to have a compiler to run the eBPF backend (see the build-pipeline sketch after this list). This will soon be made modular so that more eBPF programs can be added in the future.


  • cpu-stats tracer: Tracks CPU usage of the full system by reading the exit state of a struct after the kernel function that services /proc/stat requests has executed. I am working on ways to trigger this deterministically using bpf-timers instead of relying on when /proc/stat happens to be read. In the current state this just prints the info to the console, but I will soon be adding the ability to store it directly in the syscap file.
  • sysprofd: My little program can now talk to sysprofd and get the file descriptor to write the data to. I also accept an event-fd in this program that allows the UI to stop this subprocess from running. I currently face a limitation here: there is no option to choose which tracers to activate. I am working on getting tracer selection working by adding an options field to SysprofProxiedInstrument.
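
For context, a typical libbpf build-time pipeline looks roughly like the sketch below. The file names are hypothetical and this is only an illustration of the general approach, not necessarily the exact commands the sysprof build runs:

# Hypothetical file names; illustrative only.
# 1. Compile the eBPF C program to BPF bytecode (done when sysprof is built,
#    so users never need a compiler at runtime).
clang -g -O2 -target bpf -c cpu-stats.bpf.c -o cpu-stats.bpf.o

# 2. Generate a C skeleton header embedding the bytecode; at runtime the loader
#    only has to call the generated open/load/attach helpers via libbpf.
bpftool gen skeleton cpu-stats.bpf.o > cpu-stats.skel.h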

Follow up stuff

  • Adding a way to write to the syscap file: This will include adding a way to write the data collected by the tracers to the syscap file. I have already figured out how to do it, but it’ll require a bit of refactoring which I will be doing soon.
  • Adding more tracers: I will be adding more tracers to the sysprof-ebpf project. This will include tracers for memory usage, disk usage, and network usage. I will also be adding support for custom eBPF programs that can be written by the user if possible.
  • Adding UI: This will include adding options to choose which tracers to activate, and displaying the data collected by the tracers in a more readable format.

Structure of sysprof-ebpf

I planned on making this a single-threaded process initially, but it dawned on me that not all ring-buffers will update at the same time and this would certainly block IO during polling, so I figured I’ll just put each tracer in its own DexFuture to do this capture in an async way. This has not been implemented as of writing this blog, though.


The eBPF programs will follow this block diagram in general. I haven’t made the config hashmap part of this yet, and I think I’ll only add it if it’s required in the future. None of the currently planned features require this config map, but it will certainly be useful if I need to make the program cross-platform or cross-kernel. This will be one of the last things I implement in the project.

Conclusion

I hope to make this a valuable addition to Sysprof. I will be writing more blogs as I make progress on the project. If you have any questions or suggestions, feel free to reach out to me on GitLab or Twitter. Also, I’d absolutely LOVE suggestions on how to improve the design of this project. I am still learning and I am open to any suggestions that can make this project better.

Adrian Vovk

@adrianvovk

Introducing stronger dependencies on systemd

Doesn’t GNOME already depend on systemd?

Kinda… GNOME doesn’t have a formal and well defined policy in place about systemd. The rule of thumb is that GNOME doesn’t strictly depend on systemd for critical desktop functionality, but individual features may break without it.

GNOME does strongly depend on logind, systemd’s session and seat management service. GNOME first introduced support for logind in 2011, then in 2015 ConsoleKit support was removed and logind became a requirement. However, logind can exist in isolation from systemd: the modern elogind service does just that, and even back in 2015 there were alternatives available. Some distributors chose to patch ConsoleKit support back into GNOME. This way, GNOME can run in environments without systemd, including the BSDs.

While GNOME can run with other init systems, most upstream GNOME developers are not testing GNOME in these situations. Our automated testing infrastructure (i.e. GNOME OS) doesn’t test any non-systemd codepaths. And many modules that have non-systemd codepaths do so with the expectation that someone else will maintain them and fix them when they break.

What’s changing?

GNOME is about to gain a few strong dependencies on systemd, and this will make running GNOME harder in environments that don’t have systemd available.

Let’s start with the easier of the changes. GDM is gaining a dependency on systemd’s userdb infrastructure. GNOME and systemd do not support running more than one graphical session under the same user account, but GDM supports multi-seat configurations and Remote Login with RDP. This means that GDM may try to display multiple login screens at once, and thus multiple graphical sessions at once. At the moment, GDM relies on legacy behaviors and straight-up hacks to get this working, but this solution is incompatible with the modern dbus-broker and so we’re looking to clean this up. To that end, GDM now leverages systemd-userdb to dynamically allocate user accounts, and then runs each login screen as a unique user.

In the future, we plan to further depend on userdb by dropping the AccountsService daemon, which was designed to be a stop-gap measure for the lack of a rich user database. 15 years later, this “temporary” solution is still in use. Now that systemd’s userdb enables rich user records, we can start work on replacing AccountsService.

Next, the bigger change. Since GNOME 3.34, gnome-session uses the systemd user instance to start and manage the various GNOME session services. When systemd is unavailable, gnome-session falls back to a builtin service manager. This builtin service manager uses .desktop files to start up the various GNOME session services, and then monitors them for failure. This code was initially implemented for GNOME 2.24, and is starting to show its age. It has received very minimal attention in the 17 years since it was first written. Really, there’s no reason to keep maintaining a bespoke and somewhat primitive service manager when we have systemd at our disposal. The only reason this code hasn’t completely bit rotted is the fact that GDM’s aforementioned hacks break systemd and so we rely on the builtin service manager to launch the login screen.

Well, that has now changed. The hacks in GDM are gone, and the login screen’s session is managed by systemd. This means that the builtin service manager will now be completely unused and untested. Moreover: we’d like to implement a session save/restore feature, but the builtin service manager interferes with that. For this reason, the code is being removed.

So what should distros without systemd do?

First, consider using GNOME with systemd. You’d be running in a configuration supported, endorsed, and understood by upstream. Failing that, though, you’ll need to implement replacements for more systemd components, similarly to what you have done with elogind and eudev.

To help you out, I’ve put a temporary alternate code path into GDM that makes it possible to run GDM without an implementation of userdb. When compiled against elogind, instead of trying to allocate dynamic users, GDM will look up and use the gdm-greeter user for the first login screen it spawns, gdm-greeter-2 for the second, and gdm-greeter-N for the Nth. GDM will have similar behavior with the gnome-initial-setup[-N] users. You can statically allocate as many of these users as necessary, and GDM will work with them for now. It’s quite likely that this will be necessary for GNOME 49.
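
As an illustration (not an official recipe), statically allocating a couple of extra greeter users could look roughly like the sketch below; adjust the home directory, shell, and groups to match however your distribution already sets up the primary gdm-greeter user:

# Hypothetical sketch: pre-create the extra users GDM will look for when
# built against elogind. Tune the options to your existing gdm setup.
for i in 2 3; do
    useradd --system --no-create-home \
            --home-dir /var/lib/gdm \
            --shell /usr/sbin/nologin \
            "gdm-greeter-$i"
done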

Next: you’ll need to deal with the removal of gnome-session’s builtin service manager. If you don’t have a service manager running in the user session, you’ll need to get one. Just like system services, GNOME session services now install systemd unit files, and you’ll have to replace these unit files with your own service manager’s definitions. Then you’ll need to replace the “session leader” process: this is the main gnome-session binary that’s launched by GDM to kick off session startup. The upstream session leader just talks to systemd over D-Bus to upload its environment variables and then start a unit, so you’ll need to replace that with something that communicates with your service manager instead. Finally, you’ll probably need to replace “gnome-session-ctl”, which is a tiny helper binary that’s used to coordinate between the session leader, the main D-Bus service, and systemd. It is also quite likely that this will be needed for GNOME 49.
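
As a rough sketch of what that replacement has to do, here are the two systemd-facing steps of the upstream session leader expressed as shell commands. The upstream code does this over D-Bus, and the target name below is an assumption for illustration only; a non-systemd distribution would perform the equivalent operations against its own service manager.

# Illustrative only: push the session's environment into the systemd user
# instance, then ask it to start the session target (unit name assumed).
systemctl --user import-environment DISPLAY WAYLAND_DISPLAY XDG_SESSION_TYPE
systemctl --user start --wait gnome-session@gnome.target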

Finally: You should implement the necessary infrastructure for the userdb Varlink API to function. Once AccountsService is dropped and GNOME starts to depend more on userdb, the alternate code path will be removed from GDM. This will happen in some future GNOME release (50 or later). By then, you’ll need at the very least:

  • An implementation of systemd-userdbd’s io.systemd.Multiplexer
  • If you have NSS, a bridge that exposes NSS-defined users through the userdb API.
  • A bridge that exposes userdb-defined users through your libc’s native user lookup APIs (such as getpwent).
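
As a rough illustration of what that multiplexer has to answer, this is the kind of Varlink call systemd-based components make against it (the user name here is just an example):

# Query a single user record through the userdb multiplexer socket.
varlinkctl call /run/systemd/userdb/io.systemd.Multiplexer \
    io.systemd.UserDatabase.GetUserRecord \
    '{"userName": "gdm-greeter", "service": "io.systemd.Multiplexer"}'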

Apologies for the short timeline, but this blog post could only be published after I knew how exactly I’m splitting up gnome-session into separate launcher and main D-Bus service processes. Keep in mind that GNOME 48 will continue to receive security and bug fixes until GNOME 50 is released. Thus, if you cannot address these changes in time, you have the option of holding back the GNOME version. If you can’t do that, you might be able to get GNOME 49 running with gnome-session 48, though this is a configuration that won’t be tested or supported upstream so your mileage will vary (much like running GNOME on other init systems). Still, patching that scenario to work may buy you more time to upgrade to gnome-session 49.

And that should be all for now!

GNOME Foundation News

@foundationblog

GNOME Has a New Infrastructure Partner: Welcome AWS!

This post was contributed by Andrea Veri from the GNOME Foundation.

GNOME has historically hosted its infrastructure on premises. That changed with an AWS Open Source Credits program sponsorship which has allowed our team of two SREs to migrate the majority of the workloads to the cloud and turn the existing OpenShift environment into a fully scalable and fault tolerant one thanks to the infrastructure provided by AWS. By moving to the cloud, we have dramatically reduced the maintenance burden, achieved lower latency for our users and contributors and increased security through better access controls.

Our original infrastructure did not account for the exponential growth that GNOME has seen in its contributors and userbase over the past 4-5 years thanks to the introduction of GNOME Circle. GNOME Circle is composed of applications that are not part of core GNOME but are meant to extend the ecosystem without being bound to the stricter core policies and release schedules. Contributions to these projects also make contributors eligible for GNOME Foundation membership and potentially allow them to receive direct commit access to GitLab if their contributions are consistent over a long period of time, as that builds trust with the community. GNOME recently migrated to GitLab, away from cgit and Bugzilla.

In this post, we’d like to share some of the improvements we’ve made as a result of our migration to the cloud.

A history of network and storage challenges

In 2020, we documented our main architectural challenges:

  1. Our infrastructure was built on OpenShift in a hyperconverged setup, using OpenShift Data Foundations (ODF), running Ceph and Rook behind the scenes. Our control plane and workloads were also running on top of the same nodes.
  2. Because GNOME historically did not have an L3 network and generally had no plans to upgrade the underlying network equipment and/or invest time in refactoring it, we would have to run our gateway using a plain Linux VM with all the associated consequences.
  3. We also wanted to make use of an external Ceph cluster with slower storage, but this was not supported in ODF and required extra glue to make it work.
  4. No changes were planned on the networking equipment side to make links redundant. That meant a code upgrade on switches would have required full service downtime.
  5. We had to work with Dell support for every broken hardware component, which added further toil.
  6. With the GNOME user and contributor base always increasing, we never really had a good way to scale our compute resources due to budget constraints.

Cloud migration improvements

In 2024, during a hardware refresh cycle, we started evaluating the idea of migrating to the public cloud. We have been participating in the AWS Open Source Credits program for many years and received sponsorship for a set of Amazon Simple Storage Service (S3) buckets that we use widely across GNOME services. Based on our previous experience with the program and the people running it, we decided to request sponsorship from AWS for the entire infrastructure, which was kindly accepted.

I believe it’s crucial to understand how AWS resolved the architectural challenges we had as a small SRE team (just two engineers!). Most importantly, the move dramatically reduced the maintenance toil we had:

  1. Using AWS’s provided software-defined networking services, we no longer have to rely on an external team to apply changes to the underlying networking layout. This also gave us a way to use a redundant gateway and NAT without having to expose worker nodes to the internet.
  2. We now use AWS Elastic Load Balancing (ELB) instances (classic load balancers are the only type supported by OpenShift for now) as a traffic ingress for our OpenShift cluster. This reduces latency as we now operate within the same VPC instead of relying on an external load balancing provider. This also comes with the ability to have access to the security group APIs which we can use to dynamically add IP addresses. This is critical when we have individuals or organizations abusing specific GNOME services with thousands of queries per minute.
  3. We also use Amazon Elastic Block Store (EBS) and Amazon Elastic File System (EFS) via the OpenShift CSI driver. This allows us to avoid having to manage a Ceph cluster, which is a major win in terms of maintenance and operability.
  4. With AWS Graviton instances, we now have access to ARM64 machines, which we heavily leverage as they’re generally cheaper than their Intel counterparts.
  5. Given how extensively we use Amazon S3 across the infrastructure, we were able to reduce latency and costs due to the use of internal VPC S3 endpoints.
  6. We took advantage of AWS Identity and Access Management (IAM) to provide granular access to AWS services, giving us the possibility to allow individual contributors to manage a limited set of resources without requiring higher privileges.
  7. We now have complete hardware management abstraction, which is vital for a team of only two engineers who are trying to avoid any additional maintenance burden.

Thank you, AWS!

I’d like to thank AWS for their sponsorship and the massive opportunity they are giving to the GNOME Infrastructure to provide resilient, stable and highly available workloads to GNOME’s users and contributors across the globe.

Log Detective: Google Summer of Code 2025

I'm glad to say that I'll participate again in GSoC, as a mentor. This year we will try to improve the RPM packaging workflow using AI, as part of the openSUSE project.

So this summer I'll be mentoring an intern who will research how to integrate Log Detective with openSUSE tooling, improving the packager workflow for maintaining RPM packages.

Log Detective

Log Detective is an initiative created by the Fedora project, with the goal of

"Train an AI model to understand RPM build logs and explain the failure in simple words, with recommendations how to fix it. You won't need to open the logs at all."

As a project promoted by Fedora, it's highly integrated with the build tools around that distribution and RPM packages. But RPM packages are used in a lot of different distributions, so this "expert" LLM will be helpful for everyone doing RPM, and everyone doing RPM should contribute to it.

This is open source, so if, at openSUSE, we want to have something similar to improve the OBS, we don't need to reimplement it; we can collaborate. And that's the idea of this GSoC project.

We want to use Log Detective, but also contribute failures from openSUSE to improve the training and the AI. This should benefit openSUSE, but it will also benefit Fedora and all other RPM-based distributions.

The intern

The selected intern is Aazam Thakur. He studies at the University of Mumbai, India. He has experience with SUSE, having worked on RPM packaging for SLES 15.6 during a previous summer mentorship at the OpenMainFrame Project.

I'm sure that he will be able to achieve great things during these three months. The project looks very promising, and it's one of the areas where AI and LLMs will shine: digging into logs is always difficult, and if we train an LLM with a lot of data it can be really useful for categorizing failures and giving a short description of what's happening.

Tanmay Patil

@txnmxy

Acrostic Generator for GNOME Crossword Editor

The experimental Acrostic Generator has finally landed inside the Crossword editor and is currently tagged as BETA.
I’d classify this as one of the trickiest and most interesting projects I’ve worked on.
Here’s what an acrostic puzzle loaded in the Crossword editor looks like:

In my previous blog post (published about a year ago), I explained one part of the generator. Since then, there have been many improvements.
I won’t go into detail about what an acrostic puzzle is, as I’ve covered that in multiple previous posts already.
If you’re unfamiliar, please check out my earlier post for a brief idea.

Coming to the Acrostic Generator, I’ll begin with an illustration of the input and the corresponding output it generates. After that, I’ll walk through the implementation and the challenges I faced.

Let’s take the quote: “CATS ALWAYS TAKE NAPS” whose author is a “CAT”.

Here’s what the Acrostic Generator essentially does

It generates answers like “CATSPAW”, “ALASKAN” and “TYES” which, as you can probably guess from the color coding, are made up of letters from the original quote.

Core Components

Before explaining how the Acrostic generator works, I want to briefly explain some of the key components involved.
1. Word list
The word list is an important part of Crosswords. It provides APIs to efficiently search for words. Refer to the documentation to understand how it works.
2. IpuzCharset
The performance of the Acrostic Generator heavily depends on IpuzCharset, which is essentially a HashMap that stores characters and their frequencies.
We perform numerous ipuz_charset_add_text and ipuz_charset_remove_text operations on the QUOTE charset. I'd especially like to highlight ipuz_charset_remove_text, which used to be computationally very slow. Last year, charset was rewritten in Rust by Federico. Compared to the earlier implementation in C using a GTree, the Rust version turned out to be quite a bit faster.
Here’s Federico’s blog post on rustifying libipuz’s charset.

Why is ipuz_charset_remove_text latency so important? Let's consider the following example:

QUOTE: "CARNEGIE VISITED PRINCETON AND TOLD WILSON WHAT HIS YOUNG MEN NEEDED WAS NOT A LAW SCHOOL BUT A LAKE TO ROW ON IN ADDITION TO BEING A SPORT THAT BUILT CHARACTER AND WOULD LET THE UNDERGRADUATES RELAX ROWING WOULD KEEP THEM FROM PLAYING FOOTBALL A ROUGHNECK SPORT CARNEGIE DETESTED"
SOURCE: "DAVID HALBERSTAM THE AMATEURS"

In this case, the maximum number of ipuz_charset_remove_text operations required in the worst case would be:

73205424239083486088110552395002236620343529838736721637033364389888000000

…which is a lot.

Terminology

I’d also like you to take note of a few things.
1. Answers and Clues refer to the same thing: the solutions generated by the Acrostic Generator. I’ll be using the terms interchangeably throughout.
2. We’ve set two constants in the engine: MIN_WORD_SIZE = 3 and MAX_WORD_SIZE = 20. These make sure the answers are not too short or too long and help stop the engine from running indefinitely.
3. Leading characters are the characters of the source; each one is the first letter of the corresponding answer.

Setting up things

Before running the engine, we need to set up some data structures to store the results.

typedef struct {
  /* Represents an answer */
  gunichar leading_char;
  const gchar *letters;
  guint word_length;

  /* Searching for the answer */
  gchar *filter;
  WordList *word_list;
  GArray *rand_offset;
} ClueEntry;

We use a ClueEntry structure to store the answer for each clue. It holds the leading character (from the source), the letters of the answer, the word length, and some additional word list information.
Oh wait, why do we need the word length since we are already storing letters of the answer?
Let’s backtrack. Initially, I wrote the following brute-force recursive algorithm:

void
acrostic_generator_helper (AcrosticGenerator *self,
                           gchar              nth_source_char)
{
  // Iterate from min_word_size to max_word_size for every answer
  for (word_length = min_word_size; word_length <= max_word_size; word_length++)
    {
      // Get the list of words starting with `nth_source_char`
      // and with length equal to word_length
      word_list = get_word_list (starting_letter = nth_source_char, word_length);

      // Iterate through the word list
      for (guint i = 0; i < word_list_get_n_items (word_list); i++)
        {
          word = word_list[i];

          // Check if the word can be taken out of the quote charset
          if (ipuz_charset_remove_text (quote_charset, word))
            {
              // If so, move on to the next source char
              acrostic_generator_helper (self, nth_source_char + 1);
            }
        }
    }
}

The problem with this approach is that it is too slow. We were iterating from MIN_WORD_SIZE to MAX_WORD_SIZE and trying to find a solution for every possible size. Yes, this would work and eventually we’d find a solution, but it would take a lot of time. Also, many of the answers for the initial source characters would end up having a length equal to MIN_WORD_SIZE.
To quantify this, compared to the latest approach (which I’ll discuss shortly), we would be performing roughly 20 times the current number (7.3 × 10⁷³) of ipuz_charset_remove_text operations.

To fix this, we added randomness by calculating and assigning random lengths to clue answers before running the engine.
To generate these random lengths, we break a number equal to the length of the quote string into n parts (where n is the number of source characters), each part having a random value.

static gboolean
generate_random_lengths (GArray *clues,
                         guint   number,
                         guint   min_word_size,
                         guint   max_word_size)
{
  if ((clues->len * max_word_size) < number)
    return FALSE;

  guint sum = 0;

  for (guint i = 0; i < clues->len; i++)
    {
      ClueEntry *clue_entry;
      guint len;
      guint max_len = MAX (min_word_size,
                           MIN (max_word_size, number - sum));

      len = rand () % (max_len - min_word_size + 1) + min_word_size;
      sum += len;

      clue_entry = &(g_array_index (clues, ClueEntry, i));
      clue_entry->word_length = len;
    }

  return sum == number;
}

I have been continuously researching ways to generate random lengths that help the generator find answers as quickly as possible.
What I concluded is that the Acrostic Generator performs best when the word lengths follow a right-skewed distribution.

static void
fill_clue_entries (GArray           *clues,
                   ClueScore        *candidates,
                   WordListResource *resource)
{
  for (guint i = 0; i < clues->len; i++)
    {
      ClueEntry *clue_entry;

      clue_entry = &(g_array_index (clues, ClueEntry, i));

      // Generate a filter to get words whose starting letter is the nth char of the source string
      // For e.g. char = D, answer_len = 5
      // filter = "D????"
      clue_entry->filter = generate_individual_filter (clue_entry->leading_char,
                                                       clue_entry->word_length);

      // Load all words with starting letter equal to the nth char of the source string
      clue_entry->word_list = word_list_new ();
      word_list_set_resource (clue_entry->word_list, resource);
      word_list_set_filter (clue_entry->word_list, clue_entry->filter, WORD_LIST_MATCH);

      candidates[i].index = i;
      candidates[i].score = clue_entry->word_length;

      // Randomise the word list, which is sorted by default
      clue_entry->rand_offset = generate_random_lookup (word_list_get_n_items (clue_entry->word_list));
    }
}

Now that we have random lengths, we fill up the ClueEntry data structure.
Here, we generate individual filters for each clue, which are used to set the filter on each word list. For example, the filters for the example illustrated above are C??????, A??????, and T???.
We also maintain a separate word list for each clue entry. Note that we do not store the huge word list individually for every clue. Instead, each word list object refers to the same memory-mapped word list resource.
Additionally, each clue entry contains a random offsets array, which stores a randomized order of indices. We use this to traverse the filtered word list in a random order. This randomness helps fix the problem where many answers for the initial source characters would otherwise end up with length equal to MIN_WORD_SIZE.
The advantage of pre-calculating all of this before running the engine is that the main engine loop only performs the heavy operations: ipuz_charset_remove_text and ipuz_charset_add_text.

static gboolean
acrostic_generator_helper (AcrosticGenerator  *self,
                           GArray             *clues,
                           guint               index,
                           IpuzCharsetBuilder *remaining_letters,
                           ClueScore          *candidates)
{
  ClueEntry *clue_entry;

  if (index == clues->len)
    return TRUE;

  clue_entry = &(g_array_index (clues, ClueEntry, candidates[index].index));

  for (guint i = 0; i < word_list_get_n_items (clue_entry->word_list); i++)
    {
      const gchar *word;

      g_atomic_int_inc (self->count);

      // Traverse the word list in the pre-computed random order
      word = word_list_get_word (clue_entry->word_list,
                                 g_array_index (clue_entry->rand_offset, gushort, i));

      clue_entry->letters = word;

      if (ipuz_charset_builder_remove_text (remaining_letters, word + 1))
        {
          if (!add_or_skip_word (self, word) &&
              acrostic_generator_helper (self, clues, index + 1, remaining_letters, candidates))
            return TRUE;

          clean_up_word (self, word);
          ipuz_charset_builder_add_text (remaining_letters, word + 1);
          clue_entry->letters = NULL;
        }
    }

  clue_entry->letters = NULL;

  return FALSE;
}

The approach is quite simple. As you can see in the code above, we perform ipuz_charset_remove_text many times, so it was crucial to make the ipuz_charset_remove_text operation efficient.
When all the characters in the charset have been used/removed and the index becomes equal to the number of clues, it means we have found a solution. At this point, we return, store the answers in an array, and continue our search for new answers until we receive a stop signal.
We also maintain a skip list that is updated whenever we find a clue answer and is cleaned up during backtracking. This makes sure there are no duplicate answers in the answers list.

Performance Improvements

I compared the performance of the acrostic generator using the current Rust charset implementation against the previous C GTree implementation. I have used the following quote and source strings with the same RNG seed for both implementations:

QUOTE: "To be yourself in a world that is constantly trying to make you something else is the greatest accomplishment."
SOURCE: "TBYIWTCTMYSEGA"
Results:
+----------------+-------------------+
| Implementation | Time taken (secs) |
+----------------+-------------------+
| C GTree        |             74.39 |
| Rust HashMap   |             17.85 |
+----------------+-------------------+

The Rust HashMap implementation is nearly 4 times faster than the original C GTree version for the same random seed and traversal order.

I have also been testing the generator to find small performance improvements. Here are some of them:

  1. When searching for answers, filling in clues with longer word lengths first helps find solutions faster.
  2. We switched to using nohash_hasher for the hashmap because we are essentially storing {char: frequency} pairs. Trace reports showed that significant time and resources were spent computing hashes using Rust’s default SipHash implementation, which was unnecessary. MR
  3. Inside ipuz_charset_remove_text, instead of cloning the original data, we use a rollback mechanism that tracks all modifications and rolls back in case of failure. MR

I also remember running the generator on some quote and source input back in the early days. It ran continuously for four hours and still couldn’t find a single solution. We even overflowed the gint counter which tracks the number of words tried. Now, the same generator can return 10 solutions in under 10 seconds. We’ve come a long way! 😀

Crossword Editor

Now that I’ve covered the engine, I’ll talk about the UI part.
We started off by sketching potential designs on paper. @jrb came up with a good design and we decided to move forward with it, making a few tweaks to it.

First, we needed to display a list of the generated answers.

For this, I implemented my own list model where each item stores a string for the answer and a boolean indicating whether the user wants to apply that answer.
To allow the user to run and stop the generator and then apply answers, we reused the compact version of the original autofill component used in normal crosswords. The answer list gets updated whenever the slider is moved.

We have tried to reuse as much code as possible for acrostics, keeping most of the code common between acrostics and normal crosswords.
Here’s a quick demo of the acrostic editor in action:

We also maintain a cute little histogram on the right side of the bottom panel to summarize clue lengths.

You can also try out the Acrostic Generator using our CLI app, which I originally wrote to quickly test the engine. To use the binary, you’ll need to build Crosswords Editor locally. Example usage:

$ ./_build/src/acrostic-generator -q "For most of history, Anonymous was a woman. I would venture to guess that Anon, who wrote so many poems without signing them, was often a woman. And it is for this reason that I would implore women to write all the more" -s "Virginia wolf"
Starting acrostic generator. Press Ctrl+C to cancel.
[ VASOTOMY ] [ IMFROMMISSOURI ] [ ROMANIANMONETARYUNIT ] [ GREATFEATSOFSTRENGTH ] [ ITHOUGHTWEHADADEAL ] [ NEWSSHOW ] [ INSTITUTION ] [ AWAYWITHWORDS ] [ WOOLSORTERSPNEUMONIA ] [ ONEWOMANSHOWS ] [ LOWMANONTHETOTEMPOLE ] [ FLOWOUT ]
[ VALOROUSNESS ] [ IMMUNOSUPPRESSOR ] [ RIGHTEOUSINDIGNATION ] [ GATEWAYTOTHEWEST ] [ IWANTYOUTOWANTME ] [ NEWTONSLAWOFMOTION ] [ IMTOOOLDFORTHISSHIT ] [ ANYONEWHOHADAHEART ] [ WOWMOMENT ] [ OMERS ] [ LAWUNTOHIMSELF ] [ FORMATWAR ]

Plans for the future

To begin with, we’d really like to improve the overall design of the Acrostic Editor and make it more user friendly. Let us know if you have any design ideas, we’d love to hear your suggestions!
I’ve also been thinking about different algorithms for generating answers in the Acrostic Generator. One idea is to use a divide-and-conquer approach, where we recursively split the quote until we find a set of sub-quotes that satisfy all constraints of answers.

To conclude, here’s an acrostic for you all to solve, created using the Acrostic Editor! You can load the file in Crosswords and start playing.

Thanks for reading!

Luis Villa

@luis

book reports, mid-2025

Some brief notes on books, at the start of a summer that hopefully will allow for more reading.

Monk and Robot (Becky Chambers); Mossa and Pleiti (Malka Older)

Summer reading rec, and ask for more recs: “cozy sci-fi” is now a thing and I love it. Characters going through life, drinking hot beverages, trying to be comfortable despite (waves hands) everything. Mostly coincidentally, doing all those things in post-dystopian far-away planets (one fictional, one Jupiter).

Novellas, perfect for summer reads. Find a sunny nook (or better yet, a rainy summer day nook) and enjoy. (New Mossa and Pleiti comes out Tuesday, yay!)

Buzz Aldrin, in the Apollo 11 capsule, with a bright window visible and many dials and switches behind him. He is wearing white clothing with NASA patches, but not a full space suit, and is focused on whatever is in front of him, out of frame.
A complex socio-technical system, bounding boldly, perhaps foolishly, into the future. (Original via NASA)

Underground Empire (Henry Farrell and Abraham Newman)

This book is about things I know a fair bit about, like international trade sanctions, money transfers, and technology (particularly the intersection of spying and data pipes). So in some sense I learned very little.

But the book efficiently crystallizes all that knowledge into a very dense, smart, important observation: that some aspects of American so-called “soft” (i.e., non-military) power are increasingly very “hard”. To paraphrase, the book’s core claim is that the US has, since 2001, amassed what amounts to several, fragmentary “Departments of Economic War”. These mechanisms use control over financial and IP transfers to allow whoever is in power in DC to fight whoever it wants. This is primarily China, Russia, and Iran, but also to some extent entities as big as the EU and as small as individual cargo ship captains.

The results are many. Among other things, the authors conclude that because this change is not widely-noticed, it is undertheorized, and so many of the players lack the intellectual toolkit to reason about it. Relatedly, they argue that the entire international system is currently more fragile and unstable than it has been in a long time exactly because of this dynamic: the US’s long-standing military power is now matched by globe-spanning economic control that previous US governments have mostly lacked, which in turn is causing the EU and China to try to build their own countervailing mechanisms. But everyone involved is feeling their way through it—which can easily lead to spirals. (Threaded throughout the book, but only rarely explicitly discussed, is the role of democracy in all of this—suffice to say that as told here, it is rarely a constraining factor.)

Tech as we normally think of it is not a big player here, but nevertheless plays several illustrative parts. Microsoft’s historical turn from government fighter to Ukraine supporter, Meta’s failed cryptocurrency, and various wiretapping comes up for discussion—but mostly in contexts that are very reactive to, or provocative irritants to, the 800lb gorillas of IRL governments.

Unusually for my past book reports on governance and power, where I’ve been known to stretch almost anything into an allegory for open, I’m not sure that this has many parallels. Rather, the relevance to open is that these are a series of fights that open may increasingly be drawn into—and/or destabilize. Ultimately, one way of thinking about this modern form of power dynamics is that it is a governmental search for “chokepoints” that can be used to force others to bend the knee, and a corresponding distaste for sources of independent power that have no obvious chokepoints. That’s a legitimately complicated problem—the authors have some interesting discussion with Vitalik Buterin about it—and open, like everyone else, is going to have to adapt.

Dying Every Day: Seneca at the Court of Nero (James Romm)

Good news: this book documents that being a thoughtful person, seeking good in the world, in the time of a mad king, is not a new problem.

Bad news: this book mostly documents that the ancients didn’t have better answers to this problem than we moderns do.

The Challenger Launch Decision (Diane Vaughan)

The research and history in this book are amazing, but the terminology does not quite capture what it is trying to share out as learnings. (It’s also very dry.)

The key takeaway: good people, doing hard work, in systems that slowly learn to handle variation, can be completely unprepared for—and incapable of handling—things outside the scope of that variation.

It’s definitely the best book about the political analysis of the New York Times in the age of the modern GOP. Also probably good for a lot of technical organizations handling the radical-but-seemingly-small changes detailed in Underground Empire.

Spacesuit: Fashioning Apollo (Nicholas De Monchaux)

A book about how interfaces between humans and technology are hard. (I mean clothes, but also everything else.) Delightful and wide-ranging; you maybe won’t learn any deep lessons here, but it’d be a great way to force undergrads to grapple with Hard Human Problems That Engineers Thought Would Be Simple.

Crosswords 0.3.15: Planet Crosswords

It’s summer, which means it’s time for GSoC/Outreachy. This is the third year the Crosswords team is participating, and it has been fantastic. We had a noticeably large number of really strong candidates who showed up and wrote high-quality submissions — significantly more than previous years. There were more candidates than we could handle, and it was a shame to have to turn some down.

In the end, Tanmay, Federico, and I got together and decided to stretch ourselves and accept three interns for the summer: Nancy, Toluwaleke, and Victor. They will be working on word lists, printing, and overlays respectively, and I’m so thrilled to have them helping out.

A result of this is that there will be a larger number of Crossword posts on planet.gnome.org this summer. I hope everyone is okay with that, and encourages them so they stay involved with GNOME and Free Software.

Release

This last release was mostly a bugfix release. The intern candidates outdid themselves this year by fixing a large number of bugs — so many that I’m releasing this to get them to users. Some highlights:

  • Mahmoud added an open dialog to the game and got auto-download of puzzles working. He also created an Arabic .ipuz file to test with, which revealed quite a few rendering bugs.
Arabic Crossword
  • Toluwaleke refined the selection code. This was accidentally marked as a newcomer issue, and was absolutely not supposed to be. Nevertheless, he nailed it and has left selection in a much healthier state.
    • [ It’s worth highlighting that the initial MR for this issue is a masterclass in contributions, and one of the best MRs I’ve ever received. If you’re a potential GSoC intern, you could learn a lot from reading it. ]
  • Victor fixed divided cells and a number of small behavior bugs. He also did methodical research into other crossword editors.
Divided Cells
  • Patel and Soham contributed visual improvements for barred and acrostic puzzles

In addition, GSoC alum Tanmay has kept plugging away at his Acrostic editor. It’s gotten a lot more sophisticated, and for the first time we’re including it in the stable build (albeit as a Beta). This version can be used to create a simple acrostic puzzle. I’ll let Tanmay post about it in the coming days.

Coordinates

Specs are hard, especially for file formats. We made an unfortunate discovery about the ipuz spec this cycle. The spec uses a coordinate system to refer to cells in a puzzle — but does not define what the coordinate system means. It provides an example with the upper left corner being (0,0) and that’s intuitively a normal addressing system. However, they refer to (ROW1, COL1) in the spec, and there are a few examples in the spec that start the upper left at (1, 1).

When we ran across this issue while writing libipuz we tried a few puzzles in puzzazz (the original implementation) to confirm that (0,0) was the intended origin coordinate. However, we have run across some implementations and puzzles in the wild starting at (1,1). This is going to be pretty painful to untangle, as the two interpretations are largely incompatible. We have a plan to detect the coordinate system being used, but it’ll be a rough heuristic at best until the spec gets clarified and revamped.

By the Numbers

With this release, I took a step back and took stock of my little project. The recent releases have seemed pretty substantial, and it’s worth doing a little introspection. As of this release, we’ve reached:

  • 85KLOC total. 60KLOC in the app and 25KLOC in the library
  • 27K words of design docs (development guide)
  • 126 distinct test cases
  • 33 different contributors. I’m now at 82% of the commits and dropping
  • 6 translations (and hopefully many more some day)
  • Over 100 unencumbered puzzles in the base puzzle sets. This number needs to grow.

All in all, not too shabby, and not so little anymore.

A Final Request

Crosswords has an official flatpak, an unofficial snap, and Fedora and Arch packages. People have built it on Macs, and there’s even an APK that exists. However, there’s still no Debian package. That distro is not my world: I’m hoping someone out there will be inspired to package this project for us.

Transparency report for May 2025

Transparency report for July 2024 to May 2025 – GNOME Code of Conduct Committee

GNOME’s Code of Conduct is our community’s shared standard of behavior for participants in GNOME. This is the Code of Conduct Committee’s periodic summary report of its activities from July 2024 to May 2025.

The current members of the CoC Committee are:

  • Anisa Kuci
  • Carlos Garnacho
  • Christopher Davis
  • Federico Mena Quintero
  • Michael Downey
  • Rosanna Yuen

All the members of the CoC Committee have completed Code of Conduct Incident Response training provided by Otter Tech, and are professionally trained to handle incident reports in GNOME community events.

The committee has an email address that can be used to send reports: conduct@gnome.org as well as a website for report submission: https://conduct.gnome.org/

Reports

Since July 2024, the committee has received reports on a total of 19 possible incidents. Of these, 9 incidents were determined to be actionable by the committee, and were further resolved during the reporting period.

  • Report about an individual in a GNOME Matrix room acting rudely toward others. A Committee representative discussed the issue with the reported individual and adjusted room permissions.
  • Report about an individual acting in a hostile manner toward a new GNOME contributor in a community channel. A Committee representative contacted the reported person to provide a warning and to suggest methods of friendlier engagement.
  • Report about a discussion on a community channel that had turned heated. After going through the referenced conversation, the Committee noted that all participants were using non-friendly language and that the turning point in the conversation was a disagreement over a moderator’s action. The committee contacted the moderator and reminded them to use kinder words in the future.
  • Report related to technical topics out of the scope of the CoC committee. The issue was forwarded to the Board of Directors.
  • Report about members’ replies in community channels; after reviewing the conversation the CoC committee decided that it was not actionable. The conversation did not violate the Code of Conduct.
  • Report about inappropriate and insulting comments made by a member in social moments during an offline event. The CoC Committee sent a warning to the reported person.
  • Report against two members making comments the reporter considered disrespectful in a community channel. After reading through the conversation, the Committee did not see any violations to the CoC. No actions were taken.
  • Report on someone using abrasive and aggressive language in a community channel. After reading the conversation, the Committee agreed with this assessment. As this person had previously been found to have violated the CoC, the Committee banned them from the channel for one month.
  • Report about ableist language in a GitLab merge request. The reported person was given a warning not to use such language.
  • Report against GNOME in general without any specifics. A request for more information was sent, and after receiving no reply for a number of months, the issue was closed with no action.
  • Report against the moderating team’s efforts to keep discussions within the Code of Conduct. No action was taken.
  • Report about a contributor being aggressive, on multiple occasions, toward the reporter who works with them. The CoC committee talked to both the reporter and the reported person, and also to other people working with them, in order to resolve the disagreements. It emerged that the reporter also had some patterns in their behavior that made collaborating with them difficult. The conclusion was that all parties acknowledged their behavior and agreed to work on improving it and being more collaborative.
  • Report about a disagreement with a maintainer’s decision. The report was non-actionable.
  • Report about a contributor who set up harassment campaigns against Foundation and non-Foundation members. This person has been suspended indefinitely from participation in GNOME.
  • Report about a moderator being hostile in community channels; this was not the first report we received about this member, so they were banned from the channel.
  • Report about a blog syndicated on planet.gnome.org. The committee evaluated the blog in question and found that it did not contravene the CoC, so no action was taken.
  • Five reports, unrelated to each other, with technical support requests. These were marked as not actionable.
  • Report with a general comment about GNOME, marked as not actionable.
  • A question about where to report security issues; informed the reporter about security@gnome.org.

Changes to the CoC Committee procedures

The Foundation’s Executive Director commissioned an external review of the CoC Committee’s procedures in October of 2024. After discussion with the Foundation Board of Directors, we have made the following changes to the committee procedures:

  • Establish a “chain of command” for requesting tasks to be performed by sysadmins after an incident report.
  • Clarify the procedures for notifying affected people and teams or committees after a report.
  • Clarify the way notifications are made about a report’s consequences, and update the Committee’s communications infrastructure in general.
  • Specify how to handle reports related to Foundation staff or contractors.

The history of changes can be seen in this merge request to the repository for the Code of Conduct.

CoC Committee blog

We have a new blog at https://conduct.gnome.org/blog/, where you can read this transparency report. In the future, we hope to post materials about dealing with interpersonal conflict, non-violent communication, and other ideas to help the GNOME community.

Meetings of the CoC committee

The CoC committee has two meetings each month for general updates, and weekly ad-hoc meetings when they receive reports. There are also in-person meetings during GNOME events.

Ways to contact the CoC committee

  • https://conduct.gnome.org – contains the GNOME Code of Conduct and a reporting form.
  • conduct@gnome.org – incident reports, questions, etc.

Alley Chaggar

@AlleyChaggar

Compiler Knowledge

Intro

I apologize that I’m a little late updating my blog, but over the past two weeks, I’ve been diving into Vala’s compiler and exploring how JSON (de)serialization could be integrated. My mentor, Lorenz, and I agreed that focusing on JSON is a good beginning.

Understanding the Vala Compiler

Learning the steps it takes to go from Vala code to C code is absolutely fascinating.

Vala’s Compiler 101

  • The first step in the compiler is lexical analysis. This is handled by valascanner.vala, where your Vala code gets tokenized: broken up into chunks called tokens that are easier for the compiler to understand.
    switch (begin[0]) {
    	case 'f':
    		if (matches (begin, "for")) return TokenType.FOR;
    		break;
    	case 'g':
    		if (matches (begin, "get")) return TokenType.GET;
    		break;

The code above is a snippet of Vala’s scanner. It’s responsible for recognizing specific keywords like ‘for’ and ‘get’ and returning the appropriate token type.

  • Next is syntax analysis and the creation of the abstract syntax tree (AST). In Vala, it’s managed by valaparser.vala, which checks whether your code structure is correct, for example whether that pesky ‘}’ is missing.

    inline bool expect (TokenType type) throws ParseError {
    	if (accept (type)) {
    		return true;
    	}
      
    	switch (type) {
    	case TokenType.CLOSE_BRACE:
    		safe_prev ();
    		report_parse_error (new ParseError.SYNTAX ("following block delimiter %s missing", type.to_string ()));
    		return true;
    	case TokenType.CLOSE_BRACKET:
    	case TokenType.CLOSE_PARENS:
    	case TokenType.SEMICOLON:
    		safe_prev ();
    		report_parse_error (new ParseError.SYNTAX ("following expression/statement delimiter %s missing", type.to_string ()));
    		return true;
    	default:
    		throw new ParseError.SYNTAX ("expected %s", type.to_string ());
    	}
    }
    

    This is a snippet of Vala’s parser. It tries to accept a specific token type, like that ‘}’ again. If the token is there, it continues parsing; if not, it reports or throws a syntax error.

  • Then comes semantic analysis, the “meat and logic,” as I like to call it. This happens in valasemanticanalyzer.vala, where the compiler checks if things make sense. Do the types match? Are you using the correct number of parameters?

    public bool is_in_constructor () {
    	unowned Symbol? sym = current_symbol;
    	while (sym != null) {
    		if (sym is Constructor) {
    			return true;
    		}
    		sym = sym.parent_symbol;
    	}
    	return false;
    }
    

    This code is a snippet of Vala’s semantic analyzer, which helps the compiler determine whether the current code is inside a constructor. Starting from the current symbol, which represents where the compiler is in the code, it moves up through the parent symbols. If it finds a symbol that is a constructor, it returns true; if it reaches a null parent without finding one, it returns false.

  • After that, the flow analysis phase, located in valaflowanalyzer.vala, analyzes the execution order of the code. It figures out how control flows through the program, which is useful for things like variable initialization and unreachable code.

    public override void visit_lambda_expression (LambdaExpression le) {
    	var old_current_block = current_block;
    	var old_unreachable_reported = unreachable_reported;
    	var old_jump_stack = jump_stack;
    	mark_unreachable ();
    	jump_stack = new ArrayList<JumpTarget> ();
      
    	le.accept_children (this);
      
    	current_block = old_current_block;
    	unreachable_reported = old_unreachable_reported;
    	jump_stack = old_jump_stack;
    }
    

    This snippet of Vala’s flow analyzer ensures that control flow, such as unreachable code or jump statements, is properly analyzed within a lambda expression.

  • After all that, we want to convert the Vala code into C code, using a variety of Vala files in the ccode and codegen directories.

All of these phases are built on valacodevisitor.vala, which is basically the mother of all these classes: it provides the visit_* methods that allow each phase in the compiler to walk the source code tree.
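
To make the visitor idea a bit more concrete, here is a tiny, hypothetical visitor written against libvala. It is not part of the compiler; it’s just a sketch of how a class deriving from Vala.CodeVisitor overrides a visit_* method to look at nodes while walking the tree (the libvala package name you compile against depends on your Vala version).

    // Hypothetical sketch: a visitor that reports every class it walks past.
    // Compile against libvala, e.g. valac --pkg libvala-0.56 class-lister.vala
    public class ClassLister : Vala.CodeVisitor {
        public override void visit_class (Vala.Class cl) {
            // Called for each class node handed to this visitor.
            print ("found class: %s\n", cl.get_full_name ());
            // Keep walking into the class so nested declarations are visited too.
            cl.accept_children (this);
        }
    }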

I know this brief overview isn’t everything there is to understand about the compiler, but it’s a start. Also, let’s take a moment to appreciate everyone who has contributed to Vala’s compiler design; it’s truly an art 🎨

The Coding Period Begins!!!

Now that GSoC’s official coding period is here, I’m continuing my research on how to implement JSON support.

Right now, I’m still learning the codegen phase, AKA the phase that converts Vala into C. I’m exploring json-glib and starting to work on a valajsonmodule.vala in the codegen. A rough idea of the kind of code such a module deals with is sketched below.
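
To give a feel for the target, here is a small hand-written Vala sketch that uses json-glib directly. This isn’t code generated by the module and isn’t the project’s final design; it’s just an illustration of the kind of (de)serialization boilerplate such a module could eventually generate for you, with a made-up Person class.

    // Illustrative only: manual JSON (de)serialization with json-glib in Vala.
    // Build roughly with: valac --pkg json-glib-1.0 person.vala
    public class Person : Object {
        public string name { get; set; }
        public int age { get; set; }
    }

    void main () {
        var alice = new Person () { name = "Alice", age = 30 };

        // Serialize the object's readable GObject properties to a JSON string.
        size_t length;
        string json = Json.gobject_to_data (alice, out length);
        print ("%s\n", json);

        // Deserialize the string back into a new Person instance.
        try {
            var copy = (Person) Json.gobject_from_data (typeof (Person), json, -1);
            print ("%s is %d\n", copy.name, copy.age);
        } catch (Error e) {
            warning ("failed to parse: %s", e.message);
        }
    }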

Another thing I want to work on is the Vala docs. The docs aren’t bad, but I’ve realized the information is pretty limited the deeper you get into the compiler.

I’m excited that this is starting to slowly make sense, little by little.

Using Portals with unsandboxed apps

Nowadays XDG Desktop Portal plays an important part in the interaction between apps and the system, providing much-needed security and unifying the experience, regardless of the desktop environment or toolkit you're using. While one could say it was created for sandboxed Flatpak apps, portals can bring major advantages to unsandboxed, host apps as well:

- Writing universal code: you don't need to care about writing desktop-specific code, as different desktops and toolkits will provide their own implementations

- Respecting the privacy of the user: portals use a permission system, where permissions can be granted, revoked and controlled by the user. While host apps could bypass them, the user can still be presented with dialogs asking for permission to perform certain actions or obtain information.

Okay, so they seem like a good idea after all. Now, how do we use them?

More often than not, you don't actually have to call the D-Bus API manually: for many of the portals, toolkits and desktops will interact with them on your behalf, exposing easy-to-use high-level APIs. For example, if you're developing an app using GTK4 on GNOME and want to inhibit suspend or logout, you would call gtk_application_inhibit, which will actually prefer using the Inhibit portal over directly talking to gnome-session-manager. There are also convenience libraries to help you, available for different programming languages.
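
As a concrete illustration of that GTK4 case, here is a minimal Vala sketch (the application ID, window and reason string are made up for the example). The app only calls inhibit (); GTK decides whether to go through the Inhibit portal or talk to gnome-session-manager directly.

    // Minimal sketch: inhibiting logout/suspend from a GTK4 app in Vala.
    // Hypothetical app ID and reason; build roughly with: valac --pkg gtk4 inhibit-demo.vala
    int main (string[] args) {
        var app = new Gtk.Application ("org.example.InhibitDemo", GLib.ApplicationFlags.DEFAULT_FLAGS);

        app.activate.connect (() => {
            var window = new Gtk.ApplicationWindow (app);
            window.present ();

            // GTK prefers the Inhibit portal when available and falls back
            // to the session manager otherwise.
            uint cookie = app.inhibit (window,
                                       Gtk.ApplicationInhibitFlags.LOGOUT |
                                       Gtk.ApplicationInhibitFlags.SUSPEND,
                                       "Long-running task in progress");

            // When the long-running work is done: app.uninhibit (cookie);
        });

        return app.run (args);
    }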

That sounds easy, is that all? Unfortunately, there are some caveats.

The fact that we can safely say Flatpaks are first-class citizens when interacting with portals, compared to host apps, is a good thing: they offer many benefits, and we should embrace them. However, in the real world there are many instances of apps installed without a sandbox, and the transition will take time, so in the meantime we need to make sure they play correctly with portals as well.

One such instance is getting information about the app. In Flatpak land, it's obtained from a special .flatpak-info file located in the sandbox. For host apps though, xdg-desktop-portal tries to parse the app ID from the systemd unit name, only accepting the "app-" prefixed format specified in the XDG standardization for applications. This works for some applications, but unfortunately not all, at least at this time. One such example is D-Bus activated apps, which are started with a "dbus-" prefixed systemd unit name, or apps started from the terminal, which have different prefixes still. In all those cases, the app ID exposed to the portal is empty.

One major problem when xdg-desktop-portal doesn't have access to the app ID is undoubtedly the failure to inhibit logout/suspend when using the Inhibit portal. Applications on GNOME using GTK4 will call gtk_application_inhibit, which in turn calls the xdg-desktop-portal-gtk Inhibit portal implementation, which finally talks to the gnome-session-manager D-Bus API. However, it requires the app ID to function correctly, and will not inhibit the session without it. The situation should get better in the next release of gnome-session, but it could still cause problems for the user, who won't know the name of the application that is preventing logout/suspend.

Moreover, while not as critical, other portals also rely on that information in some way. The Account portal, used for obtaining information about the user, will mention the app's display name when asking for confirmation; otherwise it will call it the "requesting app", which the user may not recognize and is more likely to cancel. The Location portal will do the same, and the Background portal won't allow autostart if it's requested.

GNOME Shell logout dialog when Nautilus is copying files, inhibiting indirectly via portal 


How can we make sure our host apps play well with portals?

Fortunately, there are many ways to make sure your host app interacts correctly with portals. First and foremost, you should always try to follow the XDG cgroup pathname standardization for applications. Most desktop environments already follow the standard, and if they don't, you should definitely report it as a bug. There are some exceptions, however: D-Bus activated apps are started by the D-Bus message bus implementations on behalf of desktops, and currently they don't put the app in the correct systemd unit. There is an effort to fix that on the dbus-broker side, but these things take time, and there is also the case of apps started from the terminal, which have different unit names altogether.

When for some reason your app was launched in a way that doesn't follow the standard, you can use the special interface for registering with XDG Desktop Portal, the host app Registry, which overrides the automatic detection. It should be considered a temporary solution, as it is expected to be eventually deprecated (with the details of the replacement specified in the documentation); nevertheless, it lets us fix the problem at present. Some toolkits, like GTK, will register the application for you during the GtkApplication startup call.

There is one caveat, though: the registration needs to be the first call to the portal, otherwise it will not override the automatic detection. This means that when relying on GTK to handle the registration, you need to make sure you don't interact with the portal before the GtkApplication startup chain-up call. So no more gtk_init in main.c (which on Wayland uses the Settings portal when opening the display); all such code needs to be moved to just after the application startup chain-up, as sketched below. If for some reason you really cannot do that, you'll have to call the D-Bus method yourself, before any other portal interaction is made.
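
Here is a rough Vala sketch of that ordering, under the assumption of a GTK4 application (the class and app ID are made up). The idea is simply that anything which might touch a portal lives after the startup chain-up, so that GTK's registration of the host app happens first, as described above.

    // Sketch: keep portal-touching work out of main () and after the startup chain-up.
    public class MyApp : Gtk.Application {
        public MyApp () {
            Object (application_id: "org.example.App", flags: GLib.ApplicationFlags.DEFAULT_FLAGS);
        }

        public override void startup () {
            base.startup ();  // as described above, GTK registers the host app here

            // Anything that may talk to a portal (settings lookups, display-related
            // setup, ...) belongs after this chain-up, not in main ().
        }

        public override void activate () {
            new Gtk.ApplicationWindow (this).present ();
        }
    }

    int main (string[] args) {
        // Avoid calling gtk_init () or other portal-touching code here.
        return new MyApp ().run (args);
    }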

The end is never the end...

If you made it this far, congratulations, and thanks for going down this rabbit hole with me. If it's still not enough, you can check out the ticket I reported and worked on in Nautilus, which gives even more context on how we ended up here. Hope you learned something that will make your app better :)

Victor Ma

@victorma

Coding begins!

Today marks the end of the community bonding period, and the start of the coding period, of GSoC.

In the last two weeks, I’ve been looking into other crossword editors that are on the market, in order to see what features they have that we should implement. I compiled everything I saw into a findings document.

Once that was done, I went through the document and distilled it down into a final list. I also added other feature ideas that I already had in mind.

Eventually, through a discussion with my mentor, we decided that I should start by tackling a bug that I found. This will help me get more familiar with the fill algorithm code, and it will inform my decisions going forward, in terms of what features I should work on.

Tobias Bernard

@tbernard

Summer of GNOME OS

So far, GNOME OS has mostly been used for testing in virtual machines, but what if you could just use it as your primary OS on real hardware?

Turns out you can!

While it’s still early days and it’s not recommended for non-technical audiences, GNOME OS is now ready for developers and early adopters who know how to deal with occasional bugs (and importantly, file those bugs when they occur).

The Challenge

To get GNOME OS to the next stage we need a lot more hardware testing. This is why this summer (June, July, and August) we’re launching a GNOME OS daily-driving challenge. This is how it works:

  • 10 points for daily driving GNOME OS on your primary computer for at least 4 weeks
  • 1 point for every (valid, non-duplicate) issue created
  • 3 points for every (merged) merge request
  • 5 points for fixing an open issue

You can sign up for the challenge and claim points by adding yourself to the list of participants on the Hedgedoc. As the challenge progresses, add any issues and MRs you opened to the list.

The person with the most points on September 1 will receive a OnePlus 6 (running postmarketOS, unless someone gets GNOME OS to work on it by then). The three people with the most points on September 1 (noon UTC) will receive a limited-edition shirt (stay tuned for designs!).

Important links:

FAQ

Why GNOME OS?

Using GNOME OS Nightly means you’re running the latest main for all of our projects. This means you get all the dope new features as they land, months before they hit Fedora Rawhide et al.

For GNOME contributors that’s especially valuable because it allows for easy testing of things that are annoying/impossible to try in a VM or nested session (e.g. notifications or touch input). For feature branches there’s also the possibility to install a sysext of a development branch for system components, making it easy to try things out before they’ve even landed.

More people daily driving Nightly has huge benefits for the ecosystem, because it allows for catching issues early in the cycle, while they’re still easy to fix.

Is my device supported?

Most laptops from the past 5 years are probably fine, especially Thinkpads. The most important spec you need is UEFI, and if you want to test the TPM security features you also need a semi-recent TPM (any Windows 11 laptop should have one). If you’re not sure, ask in the GNOME OS channel.

Does $APP work on GNOME OS?

Anything available as a Flatpak works fine. For other things, you’ll have to build a sysext.

Generally we’re interested in collecting use cases that Flatpak doesn’t cover currently. One of the goals for this initiative is finding both short-term workarounds and long-term solutions for those cases.

Please add such use cases to the relevant section in the Hedgedoc.

Any other known limitations?

GNOME OS uses systemd-sysupdate for updating the system, which doesn’t yet support delta updates. This means you have to download a new 2GB image from scratch for every update, which might be an issue if you don’t have regular access to a fast internet connection.

The current installer is temporary, so it’s missing many features we’ll have in the real installer, and the UI isn’t very polished.

Anything else I should know before trying to install GNOME OS?

Update the device’s firmware, including the TPM’s firmware, before nuking the Windows install the computer came with (I’m speaking from experience)!

I tried it, but I’m having problems :(

Ask in the GNOME OS Matrix channel!

Michael Hill

@mdhill

Publishing a book from the GNOME desktop

My first two books were written online using Pressbooks in a browser. A change in the company’s pricing model prompted me to migrate another edition of the second book to LaTeX. Many enjoyable hours were spent searching online for how to implement everything from the basics to special effects. After a year and a half a nearly finished book suddenly congealed.

Here’s what I’m using: Fedora’s TeX Live stack, Emacs (with AUCTeX and the memoir class), Evince, and the Citations flatpak, all on a GNOME desktop. The cover of the first book was done professionally by a friend. For the second book (first and second editions) I’ve used the GNU Image Manipulation Program.

For print on demand, Lulu.com. The company was founded by Bob Young, who (among other achievements) rejuvenated a local football team, coincidentally my dad’s (for nearly 80 years and counting). Lulu was one of the options recommended by Adam Hyde at the end of the Mallard book sprint hosted by Google. Our book didn’t get printed in time to take home, so I uploaded it to Lulu and ordered a few copies with great results. My second book is also on Amazon’s KDP under another ISBN; I’m debating whether to do that again.

Does this all need to be done from GNOME? For me, yes. The short answer came from Richard Schwarting on the occasion of our Boston Summit road trip: “GNOME makes me happy.”

The long answer…
In my career working as a CAD designer in engineering, I’ve used various products by Autodesk (among others). I lived through the AutoCAD-MicroStation war of the 1990s on the side of MicroStation (using AutoCAD when necessary). MicroStation brought elegance to the battle, basing their PC and UNIX ports on their revolutionary new Mac interface. They produced a student version for Linux. After Windows 95 the war was over and mediocrity won.

Our first home computer was an SGI Indy, purchased right in the middle of that CAD war. Having experienced MicroStation on IRIX I can say it’s like running GNOME on a PC: elegant if not exquisite compared to the alternative.

For ten years I was the IT guy at a small engineering company. While carrying out my insidious plan of installing Linux servers and routers, I was able to indulge certain pastimes, building and testing XEmacs (formerly Lucid Emacs) and fledgling GNOME on Debian unstable/experimental. Through the SGI Linux effort I got to meet online acquaintances from Sweden, Mexico, and Germany in person at Ottawa Linux Symposium and Debconf.

At the peak of my IT endeavours, I was reading email in Evolution from OpenXchange Server on SuSE Enterprise Server while serving a Windows workstation network with Samba. When we were acquired by a much larger company, my Linux servers met with an expedient demise as we were absorbed into their global Windows Server network. The IT department was regionalized and I was promoted back into the engineering side of things. It was after that that I encountered the docs team.

These days I’m compelled to keep Windows in a Box on my GNOME desktop in order to run Autodesk software. It’s not unusual for me to grind my teeth while I’m working. A month ago a surprise hiatus in my day job was announced, giving me time to enjoy GNOME, finish the book, and write a blog post.

So yes, it has to be GNOME.

In 2004 I used LaTeX in XEmacs to write a magazine article that was ultimately published in the UK. This week, for old times’ sake, I installed XEmacs (no longer packaged for Fedora) on my desktop. This requires an EPEL 8 package on CentOS 9 in Boxes. It can be seen in the screenshot. The syntax highlighting is real but LaTeX-mode isn’t quite operational yet.

Nancy Nyambura

@nwnyambura

Outreachy Internship: My First Two Weeks with GNOME


Diving into Word Scoring for Crosswords

In my first two weeks as an Outreachy intern with GNOME, I’ve been getting familiar with the project I’ll be contributing to and settling into a rhythm with my mentor, Jonathan Blandford. We’ve agreed to meet every Monday to review the past week and plan goals for the next — something I’ve already found incredibly grounding and helpful.

What I’m Working On: The Word Score Project

My project revolves around improving how GNOME’s crossword tools (like GNOME Crosswords) assess and rank words. This is part of a larger effort to support puzzle constructors by helping them pick better words for their grids — ones that are fun, fresh, and fair.

But what makes a “good” crossword word?

This is what the Word Score project aims to answer. It proposes a scoring system that assigns numerical values to words based on multiple measurable traits, such as:

  • Lexical interest (e.g. does it contain unusual bigrams/trigrams like “KN” or “OXC”?),
  • Frequency in natural language (based on datasets like Google Ngrams),
  • Familiarity to solvers (which may differ from frequency),
  • Definition count (some words like SET or RUN are goldmines for cryptic clues),
  • Sentiment and appropriateness (nobody wants a vulgar word in a breakfast puzzle).

The goal is to build a system that supports both the autofill functionality and the word list interface in GNOME Crosswords, giving human setters better tools while respecting editorial judgment. In other words, this project isn’t about replacing setters — it’s about enhancing their toolkit.

You can read more about the project’s goals and philosophy in our draft document: Thoughts on Scoring Words (final link coming soon).

Week 1: Building and Breaking Puzzles

During my first week, I spent time getting familiar with the project environment and experimenting with crossword puzzle generation. I created test puzzles to better understand how word placement, scoring, and validation work under the hood.

This hands-on experimentation helped me form a clearer mental model of how GNOME Crosswords structures and fills puzzles — and why scoring matters. The way words interact in a grid can make some fills elegant and others feel forced or unplayable.

Week 2: Wrestling with libipuz and Introspection

In the second week, my focus shifted to working on libipuz, a C library that parses and exports puzzles using the IPUZ format, but getting libipuz working with GNOME’s introspection system proved more challenging than expected.

Initially, I tried to use it inside the crosswords container, but it wasn’t cooperating. After some digging (and rebuilding), we decided to create a separate container specifically for libipuz to enable introspection and allow scripting in languages like Python and JavaScript to interact with it.

This also gave me a deeper understanding of how GNOME handles language bindings via GObject Introspection — something I hadn’t worked with before, but I’m quickly getting the hang of.

Bonus: Scrabble-Inspired Scoring Script

As a side exploration, I also wrote a quick Python script that calculates Scrabble-style scores for words. While Scrabble scoring isn’t the same as what we want in crosswords (it values rare letters like Z and Q), it gave me a fun way to experiment with scoring mechanics and visualize how simple rules change the ranking of word lists. This mini-project helped me warm up to the idea of building more complex scoring systems later on.
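
Her script is written in Python; the following Vala sketch only illustrates the same idea, assigning each letter its standard Scrabble value and summing them, so you can see how even this trivial rule reorders a word list.

    // Illustrative Scrabble-style word scoring (standard letter values).
    int letter_value (unichar c) {
        string letter = c.toupper ().to_string ();
        string[] groups = { "AEIOULNSTR", "DG", "BCMP", "FHVWY", "K", "JX", "QZ" };
        int[] values = { 1, 2, 3, 4, 5, 8, 10 };
        for (int i = 0; i < groups.length; i++) {
            if (groups[i].contains (letter)) {
                return values[i];
            }
        }
        return 0;  // ignore anything that is not a letter
    }

    int word_score (string word) {
        int score = 0;
        int index = 0;
        unichar c;
        while (word.get_next_char (ref index, out c)) {
            score += letter_value (c);
        }
        return score;
    }

    void main () {
        string[] words = { "stone", "queen", "jazz" };
        foreach (var w in words) {
            print ("%s scores %d\n", w, word_score (w));
        }
    }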


What’s Next?

In the coming weeks, I’ll continue refining the scoring dimensions, writing more scripts to calculate traits (especially frequency and lexical interest), and exploring how this scoring system can be surfaced in GNOME Crosswords. I’m excited to see how this evolves — and even more excited to share updates as I go.

Thanks for reading!


Ahmed Fatthi

@ausername1040

GSoC 2025: First Two Weeks Progress Report

The first two weeks of my Google Summer of Code (GSoC) journey with GNOME Papers have been both exciting and productive. I had the opportunity to meet my mentors, discuss the project goals, and dive into my first major task: improving the way document mutex locks are handled in the codebase.


🤝 Mentor Meeting & Planning

We kicked off with a meeting to get to know each other and to discuss the open Merge Request 499. The MR focuses on moving document mutex locks from the libview/shell layer down to the individual backends (DjVu, PDF, TIFF, Comics). We also outlined the remaining work and clarified how to approach it for the best results.

TIL that Signal Stories are Fun

When Signal introduced Stories, I didn't understand why. To me, Signal is all about giving as little information to as few people as possible but still being able to have a social life.

I didn't use any app that had stories. Only a few friends published Instagram stories, and many more followed public stories. I thought of stories as "broadcast content to as many people as possible," which is the opposite of what Signal is about for me.

It turns out I was wrong. Signal lets you curate who can see your stories. By default, all your contacts can see your stories, but you can also create smaller circles of people who will see them, or you can create stories from existing Signal groups.

Since I've realized that social media like Mastodon affect me more (negatively) than I thought, I've significantly reduced what I read and publish there. But I still want to share happy moments with friends. So, I gave Signal stories a go, and it has been more fun and useful than I thought.

When I publish a story on Signal, I know who will read it. It's not for the public, but it's for friends. I can publish more personal things, and people reply more genuinely. Friends ask where I am or how I'm doing at the moment. We listen to each other. And, to my great satisfaction, a few friends have started publishing stories since I started!

I also publish different things on Signal stories than on Mastodon. On Mastodon, I shared thoughts or, let's be honest, hot takes. On Signal, I share moments. I share what I do and experience, not necessarily what I think.

The UX is still a bit clunky, stories feel poorly integrated into Signal, and I don't understand why Signal broadcasts stories to your whole address book by default. But I enjoy having a place where I can share privately and spontaneously what I'm doing with a short list of people I trust and care about.

Signal is good tech, help them

If you've never tried Signal stories, I strongly encourage you to do so. If you use Signal and can afford it, consider supporting them financially; they deserve it.

Keep up the good work, Signal. You're an excellent app and a great nonprofit, and I wish more organizations took inspiration from you.

Alireza Shabani

@Revisto

We Started a Podcast for This Week in GNOME (in Farsi)

Hi, we’ve started a new project: a Farsi-language podcast version of This Week in GNOME.

Each week, we read and summarise the latest TWIG post in Farsi, covering updates from GNOME Core, GNOME Circle apps, and other community-related news. Our goal is to help Persian-speaking users and contributors stay connected with the GNOME ecosystem.

The podcast is hosted by me (Revisto), along with Mirsobhan and Hadi. We release one short episode per week.

Since I also make music, I created a short theme for the podcast to give it more identity and consistency across episodes. It’s simple, but it adds a nice touch of production value that we hope makes the podcast feel more polished.

We’re also keeping a GitHub repository to which I’m uploading each episode’s script (in Farsi) in Markdown, along with the audio files. The logo and banner assets have been uploaded as SVG as well for transparency.

Partial screenshot of 201st script of TWIG podcast in Obsidian in Farsi, written in markdown.

You can listen to the podcast on:

Let us know what you think, and feel free to share it with Farsi-speaking friends or communities interested in GNOME.

Ahmed Fatthi

@ausername1040

About This Blog & My GSoC Journey

Learn more about this blog, my GSoC 2025 project with GNOME, and my background in open source development.

Christian Hergert

@hergertme

Sysprof in your Mesa

Thanks to the work of Christian Gmeiner, support for annotating time regions using Sysprof marks has landed in Mesa.

That means you’ll be able to open captures with Sysprof and see the data alongside other useful information, including callgraphs and flamegraphs.

I do think there is a lot more we can do around better visualizations in Sysprof. If that is something you’re interested in working on please stop by #gnome-hackers on Libera.chat or drop me an email and I can find things for you to work on.

See the merge request here.

Hans de Goede

@hansdg

IPU6 cameras with ov02c10 / ov02e10 now supported in Fedora

I'm happy to share that 3 major IPU6 camera-related kernel changes from linux-next have been backported to Fedora and have been available for about a week now in the Fedora kernel-6.14.6-300.fc42 (or later) package:

  1. Support for the OV02C10 camera sensor, this should e.g. enable the camera to work out of the box on all Dell XPS 9x40 models.
  2. Support for the OV02E10 camera sensor, this should e.g. enable the camera to work out of the box on Dell Precision 5690 laptops. When combined with item 3. below and the USBIO drivers from rpmfusion, this should also enable the camera on other laptop models, e.g. the Dell Latitude 7450.
  3. Support for the special handshake GPIO used to turn on the sensor and allow sensor i2c-access on various new laptop models using the Lattice MIPI aggregator FPGA / USBIO chip.

If you want to give this a test using the libcamera-softwareISP FOSS stack, run the following commands:

sudo rm -f /etc/modprobe.d/ipu6-driver-select.conf
sudo dnf update 'kernel*'
sudo dnf install libcamera-qcam
reboot
qcam

Note that washed-out colors and/or a somewhat over- or under-exposed image are expected behavior at the moment; this is due to the software ISP needing more work to improve the image quality. If your camera still does not work after these changes and you've not filed a bug for this camera already, please file a bug following these instructions.

See my previous blogpost on how to also test Intel's proprietary stack from rpmfusion if you also have that installed.


Status update, 22/05/2025

Hello. It is May, my favourite month. I’m in Manchester, mainly as I’m moving projects at work, and it’s useful to do that face-to-face.

For the last 2 and a half years, my job has mostly involved a huge, old application inside a big company, which I can’t tell you anything about. I learned a lot about how to tackle really, really big software problems where nobody can tell you how the system works and nobody can clearly describe the problem they want you to solve. It was the first time in a long time that I worked on production infrastructure, in the sense that we could have caused major outages if we rolled out bad changes. Our team didn’t cause any major outages in all that time. I will take that as a sign of success. (There’s still plenty of legacy application to decommission, but it’s no longer my problem).

A green tiled outside wall with graffiti

During that project I tried to make time to work on end-to-end testing of GNOME using openQA as well… with some success, in the sense that GNOME OS still has working openQA tests, but I didn’t do very well at making improvements, and I still don’t know if or when I’ll ever have time to look further at end-to-end testing for graphical desktops. At least we ran a great Outreachy internship, with Tanju and Dorothy adding quite a few new tests.

Several distros test GNOME downstream, but we still don’t have much of a story of how they could collaborate upstream. We do still have the monthly Linux QA call so we have a space to coordinate work in that area… but we need people who can do the work.

My job now, for the moment, involves a Linux-based operating system that is intended to be used in safety-critical contexts. I know a bit about operating systems and not much about functional safety. I have seen enough to know there is nothing magic about a “safety certificate” — it represents some thinking about risks and how to detect and mitigate them. I know Codethink is doing some original thinking in this area. It’s interesting to join in and learn about what we did so far and where it’s all going.

Giving credit to people

The new GNOME website, which I really like, describes the project as “An independent computing platform for everyone”.

There is something political about that statement: it’s implying that we should work towards equal access to computer technology. Something which is not currently very equal. Writing software isn’t going to solve that on its own, but it feels like a necessary part of the puzzle.

If I was writing a more literal tagline for the GNOME project, I might write: “A largely anarchic group maintaining complex software used by millions of people, often for little or no money.” I suppose that describes many open source projects.

Something that always bugs me is how a lot of this work is invisible. That’s a problem everywhere: from big companies and governments, down to families and local community groups, there’s usually somebody who does more work than they get credit for.

But we can work to give credit where credit is due. And recently several people have done that!

Outgoing ED Richard Littauer in “So Long and Thanks For All the Fish” shouted out a load of people who work hard in the GNOME Foundation to make stuff work.

Then incoming GNOME ED, Steven Deobald wrote a very detailed “2025-05-09 Foundation Report” (well done for using the correct date format, as well), giving you some idea about how much time it takes to onboard a new director, and how many people are involved.

And then Georges wrote about some people working hard on accessibility in “In celebration of accessibility”.

Giving credit is important and helpful. In fact, that’s just given me an idea, but explaining that will have to wait til next month.

canal in manchester