Jakub Steiner

@jimmac

12 months instead of 12 minutes

Hey Kids! Other than raving about GNOME.org being a static HTML site, there’s one more aspect I’d like to get back to in this writing exercise called a blog post.

Share card gets updated every release too

I’ve recently come across an appalling genAI website for a project I hold dearly, so I thought I’d give a glimpse of how we used to do things in the olden days. It is probably not going to be done this way anymore in the enshittified timeline we ended up in. The two options available these days seem to be a quickly generated slop website or no website at all, because privately owned social media is where it’s at.

The wanna-be-catchy title of this post comes from the fact that the website underwent numerous iterations (iteration being the core principle of good design) spanning over a year before we introduced the redesign.

So how did we end up with a 3D model of a laptop for the hero image on the GNOME website, rather than something generated in a couple of seconds (and a small town’s worth of drinking water), or a simple SVG illustration?

The hero image is static now, but in the early days it was a scroll-based animation. It could have become a simple vector-style illustration, but I really enjoy the light interaction between the screen and the laptop, especially between the light and dark variants. Toggling dark mode has been my favorite fidget spinner.

Creating light/dark variants is a bit tedious to do manually every release, but automating it is still a bit too hard to pull off (the part where screenshots of a nightly OS need to be taken). There’s also the fun of picking a theme for the screenshot rather than doing the same thing over and over. Doing the screenshotting manually meant automating the rest, as a 6 month cycle is enough time to forget how things are done. The process is held together with duct tape, I mean a Python script, that renders the website image assets from the few screenshots captured using GNOME OS running inside Boxes. Two great invisible things made by amazing individuals that could go away in an instant, and that thought gives me a dose of anxiety.

This does take a minute to render on a laptop (CPU-only Cycles), but it is a matter of a single invocation and a git commit. So far it has survived a couple of Blender releases, so fingers crossed for the future.

Sophie has recently been looking into translations, so we might reconsider that 3D approach if translated screenshots become viable (and have them contained in an SVG, similar to how os.gnome.org is done). So far the 3D hero has always been in sync with the release, unlike in our WordPress days. Fingers crossed.

Philip Withnall

@pwithnall

Parental controls screen time limits backend

Ignacy blogged recently about all the parts of the user interface for screen time limits in parental controls in GNOME. He’s been doing great work pulling that all together, while I have been working on the backend side of things. We’re aiming for this screen time limits feature to appear in GNOME 50.

High level design

There’s a design document which is the canonical reference for the design of the backend, but to summarise it at a high level: there’s a stateless daemon, malcontent-timerd, which receives logs of the child user’s time usage of the computer from gnome-shell in the child’s session. For example, when the child stops using the computer, gnome-shell will send the start and end times of the most recent period of usage. The daemon deduplicates/merges and stores them. The parent has set a screen time policy for the child, which says how much time they’re allowed on the computer per day (for example, 4h at most; or only allowed to use the computer between 15:00 and 17:00). The policy is stored against the child user in accounts-service.

malcontent-timerd applies this policy to the child’s usage information to calculate an ‘estimated end time’ for the child’s current session, assuming that they continue to use the computer without taking a break. If they stop or take a break, their usage – and hence the estimated end time – is updated.
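
To make that concrete, here is a minimal C sketch of the two calculations described above: merging a newly reported usage period into the store, and estimating when the session must end if usage continues uninterrupted. This is purely illustrative, not the malcontent-timerd implementation, and all names are made up.

/* Illustrative sketch only; not the malcontent-timerd code.
 * Times are in seconds; all names here are hypothetical. */
#include <stdint.h>
#include <stddef.h>

typedef struct { int64_t start; int64_t end; } UsagePeriod;

/* Merge a newly reported period into a store of sorted, non-overlapping
 * periods (simplified: only coalesces with the most recently stored one). */
static size_t
merge_period (UsagePeriod *store, size_t n, UsagePeriod incoming)
{
  if (n > 0 && incoming.start <= store[n - 1].end)
    {
      if (incoming.end > store[n - 1].end)
        store[n - 1].end = incoming.end;   /* extend the last period */
      return n;                            /* merged, nothing new stored */
    }

  store[n] = incoming;                     /* genuinely new period */
  return n + 1;
}

/* Given today's accumulated usage and the daily allowance from the policy
 * (for example 4 hours), estimate when the current session must end if the
 * child keeps using the computer without a break. */
static int64_t
estimated_end_time (int64_t used_today, int64_t daily_limit, int64_t now)
{
  int64_t remaining = daily_limit - used_today;

  return (remaining > 0) ? now + remaining : now;   /* 'now' = time is up */
}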

The child’s gnome-shell is notified of changes to the estimated end time and, once it’s reached, locks the child’s session (with appropriate advance warning).

Meanwhile, the parent can query the child’s computer usage via a separate API to malcontent-timerd. This returns the child’s total screen time usage per day, which allows the usage chart to be shown to the parent in the parental controls user interface (malcontent-control). The daemon imposes access controls on which users can query for usage information. Because the daemon can be accessed by the child and by the parent, and needs to be write-only for the child and read-only for the parent, it has to be a system daemon.

There’s a third API flow which allows the child to request an extension to their screen time for the day, but that’s perhaps a topic for a separate post.

IPC diagram of screen time limits support in malcontent. Screen time limit extensions are shown in dashed arrows.

So, at its core, malcontent-timerd is a time range store with some policy and a couple of D-Bus interfaces built on top.

Per-app time limits

Currently malcontent-timerd only supports time limits for login sessions, but it is built in such a way that support for time limits on specific apps would be straightforward to add in future. The main work required for that would be in gnome-shell — recording usage on a per-app basis (for apps which have limits applied), and enforcing those limits by freezing or blocking access to apps once the time runs out. There are some interesting user experience questions to think about there before anyone can implement it — how do you prevent a user from continuing to use an app without risking data loss (for example, by killing it)? How do you unambiguously remind the user they’re running out of time for a specific app? Can we reliably find all the windows associated with a certain app? Can we reliably instruct apps to save their state when they run out of time, to reduce the risk of data loss? There are a number of bits of architecture we’d need to get in place before per-app limits could happen.

Wrapping up

As it stands though, the grant funding for parental controls is coming to an end. Ignacy will be continuing to work on the UI for some more weeks, but my time on it is basically up. With the funding, we’ve managed to implement digital wellbeing (screen time limits and break reminders for adults) including a whole UI for it in gnome-control-center and a fairly complex state machine for tracking your usage in gnome-shell; a refreshed UI for parental controls; parental controls screen time limits as described above; the backend for web filtering (but more on that in a future post); and everything is structured so that the extra features we want in future should bolt on nicely.

While the features may be simple to describe, the implementation spans four projects and two buses, and contains three new system daemons, two new system data stores, and three fairly unique new widgets. It’s tackled all sorts of interesting user design questions (and continues to do so). It’s fully documented, has some unit tests (but not as many as I’d like), and can be integration tested using sysexts. The new widgets are localisable, accessible, and work in dark and light mode. There are even man pages. I’m quite pleased with how it’s all come together.

It’s been a team effort from a lot of people! Code, design, input and review (in no particular order): Ignacy, Allan, Sam, Florian, Sebastian, Matthijs, Felipe, Rob. Thank you Endless for the grant and the original work on parental controls. Administratively, thank you to everyone at the GNOME Foundation for handling the grant and paperwork; and thank you to the freedesktop.org admins for providing project hosting for malcontent!

Lennart Poettering

@mezcalero

Mastodon Stories for systemd v258

Back on September 17 we released systemd v258 into the wild.

In the weeks leading up to that release I posted a series of posts to Mastodon about key new features in this release, under the #systemd258 hashtag. It was my intention to post a link list here on this blog right after completing that series, but I simply forgot! Hence, in case you aren't using Mastodon but would like to read up, here's a list of all 37 posts:

I intend to do a similar series of posts for the next systemd release (v259), hence if you haven't left tech Twitter for Mastodon yet, now is a good opportunity.

We intend to shorten the release cycle a bit going forward, and in fact managed to tag v259-rc1 already yesterday, just 2 months after v258. Hence, my series for v259 will begin soon, under the #systemd259 hashtag.

In case you are interested, here is the corresponding blog story for systemd v257, and here for v256.

Christian Hergert

@hergertme

Status Week 46

Ptyxis

  • More back and forth with people who have issues with diacritics and IBus interaction. It’s extremely frustrating at times because the two places where stuff like this gets reported first are the text editor and the terminal, even though those two applications rarely have anything to do with the issue.

    In this case we might have a workaround by being a bit more explicit about focus grabs.

  • Merge support for changing profile on existing tab.

VTE

  • Back-and-forth on updating MR for PTY errno, merge to master, vte-0-82, possibly backport further for RHEL/CentOS.

  • Research some US dead key issues with diacritics and see if we can find where in VTE a problem could be. Text Editor apparently doesn’t replicate the same issue, so it’s possible we should fix something in VTE directly or in the GTK IM abstractions. As mentioned in the Ptyxis section, we can probably work around this for now.

Foundry

  • Now that foundry has support for API keys we need to have a mechanism to rotate those keys (and query for expiration). FoundryKeyRotator provides this abstraction to FoundrySecretService.

    This comes with foundry secret rotate HOST SERVICE which makes it easy to keep things up-to-date. It would be nice to do this automatically at some point, though it’s rather annoying because you’ll get an email about it, at least from GitLab.

    To check the expiration, foundry secret check-expires-at is provided, which again takes a HOST SERVICE pair.

    Defaults to what the server wants for minimum key lifetime, or you can provide --expire-at=YYYY-MM-DD to specify expiration.

  • Implement symbol APIs for FoundryTextDocument which will power things like the symbol tree, what symbol is underneath my cursor, symbol path bars, and the like. Also added some command line tools for this so that it is easy to test the infrastructure when issues are inevitably filed.

    foundry symbol-tree FILE and foundry find-symbol-at FILE LN COL will quickly test the machinery for filing bug reports.

  • Updated the CTags parser (which, for simplicity, is usually our “first implementation” symbol provider). Allow it to generate data to GBytes instead of files on disk for usage on modified buffers. Also allow it to load an index from memory without touching disk for the other side of this index structure.

  • GCC changed the SARIF environment variable to EXPERIMENTAL_SARIF_SOCKET, so track that for the GCC 16 release.

  • Handy foundry_read_all_bytes(int fd) to exhaust a FD into a GBytes using the libdex AIO subsystem (io_uring, etc).

  • Prototype a tree-sitter based plugin for symbol providers so we can have some faster extractors at least for some very common languages.

  • Add FoundrySymbolLocator for locating symbols by a number of strategies that can be different based on the provider. Combine this with a new FoundrySymbolIntent to integrate into the intent system.

  • Implement FoundryPathNavigator for use in future pathbar work. Add subclasses for FoundrySymbolNavigator and FoundryDocumentationNavigator to make pathbars of those common hierarchies super easy from applications. A FoundryFileNavigator to be able to navigate the file-system. This ties in well with the intent system so activating path elements routes through intents like open-file intent, symbol intent, etc.

  • Implement FoundryPathBar and associated infrastructure for it to be abstracted in libfoundry-adw.

  • Implement LSP progress operations ($/progress and creation operations) so we can bridge them to FoundryOperation. Had to implement some missing bits of FoundryJsonrpcDriver while at it.

  • Improve support for LSP symbol APIs, particularly around support for hierarchical symbol trees. Newer revisions allow for symbols to contain other symbols rather than trying to recreate it from the containerName.

  • Discovered that there is an upper limit to the number of GWeakRef that can be created, and that number is surprisingly low. Sure, there is extra overhead with weak refs, but having limits so low is surely a mistake.

  • Lots of work on the LSP implementation to bridge things like diagnostics and symbols. It is amazing how much easier it is to do this time around now that I have fibers instead of callback noodles.

  • We have a much saner way of implementing buffer tracking this time around (especially after pushing commit notify into GTK) so the LSP integration around buffer synchronization has a cleaner implementation now.

  • Add tooltips support to the diagnostics gutter renderer.

  • A new “retained” listmodel which allows you to hold/release items in a list model and they’ll stay pinned until the hold count reaches zero. This is helpful for situations where you don’t want to affect an item while there is a mouse over something, or a popover is shown, that sort of deal. I didn’t opt for the scalable RBTree implementation yet, but someone could certainly improve it to do so.

Builder

  • Work on the new auxiliary panel design which will work a bit like it does in Text Editor but allow for panel groupings.

  • Symbols panel implementation work using the new symbol providers. Implement symbol intent handling as well to tie it all together.

  • Implement pathbar integration into text editor and documentation browser using new navigator integration.

  • Diagnostics panel for monitoring diagnostics across the active page without having to resort to scanning around either the global diagnostics or within the gutter.

  • Add annotation provider so we can show diagnostics/git-blame like we do now but in a much more optimized manner. Having diagnostics inline is new though.

  • Lots of styling work for the auxiliary panel to try to make it work well in the presence of a grid of documents. A bit harder to get right than in Text Editor.

  • Ergonomics for the messages panel (clipboard support, clearing history, etc).

  • Work on operation bay for long running operations similar to what Nautilus does. This will handle things like progress from LSPs, deployment to remote devices, etc.

CentOS

  • libadwaita backports. This one is rather frustrating because I’ve been told we can’t do sassc on the build infrastructure and since 1.6.3 libadwaita no longer generates the CSS on CI to be bundled with the tarball.

    So continue with the madness of about 60 patches layered on top of 1.6.2 to keep things progressing there. One patch won’t get in because of the CSS change which is unfortunate as it is somewhat a11y related.

    At the moment the options (given the uncompromising no-sassc restriction) are to keep back-porting and not get CSS changes, to pull in newer tarballs and generate the CSS on my machine and patch that in, or to just keep doing this until we can *gestures* do something more compromising on the CentOS build infrastructure.

  • VTE backports for 0.78.6

GtkSourceView

  • Branched for 5.20 development so we can start adding new features.

  • Fix a GIR annotation on GtkSourceAnnotation that had the wrong transfer.

  • Make GtkSourceAnnotation right-justified when it fits in the available space.

  • Add some nice styling to annotations so they are a bit more pleasing to look at.

Transparency report for May 2025 to October 2025

GNOME’s Code of Conduct is our community’s shared standard of behavior for participants in GNOME. This is the Code of Conduct Committee’s periodic summary report of its activities from May 2025 to October 2025.

The current members of the CoC Committee are:

  • Anisa Kuci
  • Carlos Garnacho
  • Christopher Davis
  • Federico Mena Quintero
  • Michael Downey
  • Rosanna Yuen

All the members of the CoC Committee have completed Code of Conduct Incident Response training provided by Otter Tech, and are professionally trained to handle incident reports in GNOME community events.

The committee has an email address that can be used to send reports, conduct@gnome.org, as well as a website for report submission: https://conduct.gnome.org/

Reports

Since May 2025, the committee has received reports on a total of 25 possible incidents. Many of these were not actionable; all the incidents listed here were resolved during the reporting period.

  • Report on a conspiracy theory, closed as not actionable.
  • Report that was not actionable.
  • Report about a blog post; not a CoC violation and not actionable.
  • Report about interactions in GitLab; not a CoC violation and not actionable.
  • Report about a blog post; not a CoC violation and not actionable.
  • Question about an Export Control Classification Number (ECCN) for GDM; redirected to discourse.gnome.org.
  • Report about a reply in GitLab; not a CoC violation; pointed out resources about unpaid/volunteer work in open source.
  • Report about a reply in GitLab; not a CoC violation but using language against the community guidelines; sent a reminder to the reported person to use non-violent communication.
  • Two reports about a GNOME Shell extension; recommended actions to take to the extension reviewers.
  • Report about another GNOME Shell extension; recommended actions to take to the extension reviewers.
  • Multiple reports about a post on planet.gnome.org; removed the post from the feed and its site.
  • Report with a fake attribution; closed as not actionable.
  • Report with threats; closed as not actionable.
  • Report with a fake attribution; closed as not actionable.
  • Report that was not actionable.
  • Support request; advised reporter to direct their question to the infrastructure team.
  • Report closed due to not being actionable; gave the reporter advice on how to deal with their issue.
  • Report about a reply in GitLab; reminded both the reporter and reported person how to communicate appropriately.
  • Report during GUADEC about an incident during the conference; in-person reminder to the reported individual to mind their behavior.
  • Report about a long-standing GitLab interaction; sent a request for a behavior change to the reported person.
  • Report on a conspiracy theory, closed as not actionable.
  • Report about a Mastodon post, closed as it is not a CoC violation.
  • Report closed due to not being actionable, and not a CoC violation.
  • Report closed due to not being actionable, and not a CoC violation.
  • Report closed due to not being actionable, and not a CoC violation.

Meetings of the CoC committee

The CoC committee has two meetings each month for general updates, and weekly ad-hoc meetings when they receive reports. There are also in-person meetings during GNOME events.

Ways to contact the CoC committee

  • https://conduct.gnome.org – contains the GNOME Code of Conduct and a reporting form.
  • conduct@gnome.org – incident reports, questions, etc.

Allan Day

@aday

GNOME Foundation Update, 2025-11-14

This post is another in my series of GNOME Foundation updates, each of which provides an insight into what’s happened at the GNOME Foundation over the past week. If you are new to these posts I would encourage you to look over some of the previous entries – there’s a fair amount going on at the Foundation right now, and my previous posts provide some useful background.

Old business

It has been another busy week at the GNOME Foundation. Here’s a quick summary:

  • We had a regular Board meeting (as in, the meeting was part of our regular schedule), where we discussed details about the annual report, some financial policy questions, and partnerships.
  • There was another planning meeting for the Digital Wellbeing program, which is close to wrapping up. If you haven’t seen it already, Ignacy gave a great overview of the work that’s been done on this!
  • There have been more meetings with Dawn Matlak, our new finance advisor and systems guru. We are now at the stage where our new finance system is being set up, which is exciting! The plan is to consolidate our payments processing on this new platform, which will reduce operational complexity. Invoice processing in future will also be highly automated, and we are going to get additional capabilities around virtual credit cards, which we already have plans for.
  • Preparations continued for GNOME.Asia 2025, which is happening in Tokyo next month. Assisting attendees with visas and travel has been a particular focus.

Most of these items are a continuation of activities that I’ve described in more detail in previous posts, and I’m a bit light on new news this week, but I think that’s to be expected sometimes!

Post

This is the tenth in my series of GNOME Foundation updates, and this seems like a good point to reflect on how they are going. The weekly posting cadence made sense in the beginning, and wrapping up the week on a Friday afternoon is quite enjoyable, but I am unsure whether a weekly post is too much reading for some.

So, I’d love to hear feedback: do you like the weekly updates, or do you find it hard to keep up? Would you prefer a higher-level monthly update? Do you like hearing about background operational details, or are you more interested in programs, events and announcements? Answers to these questions would be extremely welcome! Please let me know what you think, either in the comments or by reaching out on Matrix.

That’s it from me for now. Thanks for reading, and have a great day.

Mid-November News

Misc news about the gedit text editor, mid-November edition!

Website: new design

Probably the highlight this month is the new design of the gedit website.

If it looks familiar to some of you, that's normal: it's an adaptation of the previous GtkSourceView website that was developed in the old gnomeweb-wml repository. gnomeweb-wml (projects.gnome.org) is what predates all the wiki pages for Apps and Projects. The wiki has been retired, so another solution had to be found.

As for the timeline, projects.gnome.org was available until 2013/2014, when all the content was migrated to the wiki. The wiki was then retired in 2024.

Note that there are still rough edges on the gedit website, and more importantly, some effort is still needed to bring the old CSS stylesheet forward to the new(-ish) responsive web design world.

For the most nostalgic of you:

And for the least nostalgic of you:

  • gedit website from last month (October 2025)

    Screenshot of gedit website in October 2025

What we can say is that the gedit project has stood the test of time!

Enter TeX: improved search and replace

Some context: I would like some day to unify the search and replace feature between Enter TeX and gedit. It needs to retain the best of each.

In Enter TeX it's a combined horizontal bar, something that I would like in gedit too to replace the dialog window that occludes part of the text.

In gedit, the strengths include search-as-you-type and a history of past searches. Both are missing in Enter TeX. (These are not the only things that need to be retained; the same workflows, keyboard shortcuts, etc. are also an integral part of the functionality.)

So to work towards that goal, I started with Enter TeX. I have already merged around 50 commits in the git repository for this change, rewriting some parts in C (from Vala) and improving the UI along the way. The code needs to be in C because it'll be moved to libgedit-tepl so that it can be consumed by gedit easily.

Here is how it looks:

Screenshot of the search and replace in Enter TeX

Internal refactoring for GeditWindow and its statusbar

GeditWindow is what we can call a god class. It is too big, both in the number of lines and the number of instance variables.

So this month I've continued to refactor it, to extract a GeditWindowStatus class. There was already a GeditStatusbar class, but its features have now been moved to libgedit-tepl as TeplStatusbar.

GeditWindowStatus takes on the responsibility of creating the TeplStatusbar, filling it with the indicators and other buttons, and making the connection between GeditWindow and the current tab/document.

So as a result, GeditWindow is a little less omniscient ;-)

As a conclusion

gedit does not materialize out of empty space; it takes time to develop and maintain. To demonstrate your appreciation of this piece of software and help its future development, remember that you can fund the project. Your support is critical and much appreciated.

This Week in GNOME

@thisweek

#225 Volume Levels

Update on what happened across the GNOME project in the week from November 07 to November 14.

GNOME Core Apps and Libraries

Settings

Configure various aspects of your GNOME desktop.

Zoey Ahmed 🏳️‍⚧️ 💙💜🩷 reports

The GNOME Settings Volume Levels page received a change to fix application inputs and outputs being hard to distinguish. This change separates applications with output streams and input streams into separate lists, and adds a microphone icon to the inputs list.

Thank you to Hari Rana and Matthijs Velsink for helping me with my first MR, and Jeff Fortin for nudging me to pursue this change!

volume_levels.png

Files

Providing a simple and integrated way of managing your files and browsing your file system.

Tomasz Hołubowicz says

Nautilus now supports Ctrl+Insert and Shift+Insert for copying and pasting files, matching the behavior of other GTK applications, browsers, and file managers like Dolphin and Thunar. These CUA keybindings were previously only functional in Nautilus’s location bar, creating an inconsistency. The addition also benefits users with keyboards that have dedicated copy/paste keys, which typically emit these key combinations. These shortcuts are particularly useful for left-handed users and also allow the same bindings to work across applications, file managers, and terminal emulators, where Ctrl+Shift+C/V are typically required. The Ctrl+V paste shortcut is now also visible in the context menu.

GLib

The low-level core library that forms the basis for projects such as GTK and GNOME.

Philip Withnall announces

In https://gitlab.gnome.org/GNOME/glib/-/merge_requests/4900, Philip Chimento has added a G_GNUC_FLAG_ENUM macro to GLib, which can be used in an enum definition to tell the compiler it’s for a flag type (i.e. enum values which can be bitwise combined). This allows for better error reporting, particularly when building with -Wswitch (which everyone should be using!).

So now we can have enums which look like this, for example:

typedef enum {
  G_CONVERTER_NO_FLAGS     = 0,         /*< nick=none >*/
  G_CONVERTER_INPUT_AT_END = (1 << 0),  /*< nick=input-at-end >*/
  G_CONVERTER_FLUSH        = (1 << 1)   /*< nick=flush >*/
} G_GNUC_FLAG_ENUM GConverterFlags;
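
For context, a hedged illustration (not part of the merge request itself): a flags value holds a bitwise combination of enumerators rather than exactly one of them, which is what the annotation communicates to the compiler so its diagnostics can account for combined values. GConverterFlags and its values are real GIO API; the helper function is made up.

/* Minimal usage sketch: flag values are combined and tested bitwise. */
#include <gio/gio.h>

static gboolean
needs_flush (GConverterFlags flags)
{
  return (flags & G_CONVERTER_FLUSH) != 0;
}

/* needs_flush (G_CONVERTER_INPUT_AT_END | G_CONVERTER_FLUSH) => TRUE */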

GNOME Circle Apps and Libraries

Gaphor

A simple UML and SysML modeling tool.

Dan Yeaw announces

Gaphor, the simple modeling tool, version 3.2.0 is now out! Some highlights include:

  • Troubleshooting info can now be found in the About dialog
  • Introduction of CSS classes: .item for all items you put on the diagram
  • Improved updates in Model Browser for attribute/parameter types
  • macOS: native window decorations and window menu

Grab the new version on Flathub.

Third Party Projects

Haydn reports

Typesetter, a minimalist desktop application for creating beautiful documents with Typst, is now available on Flathub.

Features include:

  • Adaptive, user-friendly interface: Focus on writing. Great for papers, reports, slides, books, and any structured writing.
  • Powered by Typst: A modern markup-based typesetting language, combining the simplicity of Markdown with the power of LaTeX.
  • Local-first: Your files stay on your machine. No cloud lock-in.
  • Package support: Works offline, but can fetch and update packages online when needed.
  • Automatic preview: See your rendered document update as you write.
  • Click-to-jump: Click on a part of the preview to jump to the corresponding position in the source file.
  • Centered scrolling: Keeps your writing visually anchored as you type.
  • Syntax highlighting: Makes your documents easier to read and edit.
  • Fast and native: Built in Rust and GTK following the GNOME human interface guidelines.

Get Typesetter on Flathub

typesetter-dark-preview.png

typesetter-light-editor.png

Vladimir Kosolapov announces

Lenspect 1.0.2 has just been released on Flathub

This version features some quality-of-life improvements:

  • Improved drag-and-drop design
  • Increased file size limit to 650MB
  • Added more result items from VirusTotal
  • Added notifications for background scans
  • Added file opener integration
  • Added key storage using secrets provider

Check out the project on GitHub

lenspect.png

GNOME Websites

Sophie (she/her) reports

The API to access information about GNOME projects has moved from apps.gnome.org to static.gnome.org/catalog. Everything based on the old API links has to move to the new links. The format of the API also slightly changed.

Pages like apps.gnome.org, welcome.gnome.org, developer.gnome.org/components/, and others are based on the API data. The separation will help with maintainability of the code.

More information can be found in the catalog’s git repository.

Shell Extensions

Dudu Maroja reports

The 2 Wallpapers GNOME extension is a neat tool that changes your wallpaper whenever you open a window. You can choose to set a darker, blurry, desaturated, or completely different image, whatever suits your preference. This extension was designed to help you focus on your active windows while letting your desktop shine when you want it.

The main idea behind this extension is to allow the use of transparent windows without relying on heavy processing or on-the-fly effects like blur, which can consume too much battery or GPU resources.

Grab it here: 2 Wallpapers Extension

dagimg-dot says

I have been working on Veil, a modern successor to the Hide items extension, which lets you hide all or chosen items on the GNOME panel, with an auto-hide feature and smooth animations. You can check out the demo on GNOME’s subreddit: https://www.reddit.com/r/gnome/comments/1orr1co/veil_a_cleaner_quieter_gnome_panel_hide_items/

Dmy3k announces

Adaptive Brightness Extension

This week the extension received a big update to preferences UI.

Interactive Brightness Configuration

  • You can now customize how your screen brightness responds to different lighting conditions using an easy-to-use graphical interface
  • Configure brightness levels for 5 different light ranges (from night to bright outdoor)
  • See a visual graph showing your brightness curve

Improved Settings Layout

  • Settings are now organized into 3 clear tabs: Calibration, Preview, and Keyboard
  • Each lighting condition can be expanded to adjust its range and brightness level
  • Live preview shows you exactly how brightness will respond to ambient light

Better Keyboard Backlight Control

  • Choose specific lighting conditions where keyboard backlight turns on (instead of just on/off)

Available at extensions.gnome.org and github.

gnome_extensions_adaptive_brightness_prefs.png

Miscellaneous

GNOME OS

The GNOME operating system, development and testing platform

Ada Magicat ❤️🧡🤍🩷💜 reports

Tulip Blossom from Project Bluefin has been working on building bootc images of different Linux systems, including GNOME OS. To ensure bootc users have the best experience possible with our system, Jordan Petridis and Valentin David from the GNOME OS team are working on building an OCI image that can be used directly by bootc. It is currently a work in progress, but we expect to land it soon. This collaboration is a great opportunity to expand our community and contributor base, and to share our vision for how to build operating systems.

Note that this does not represent a change in our plans for GNOME OS itself; it will continue using the same systemd tools for deploying and updating the system.

gnomeos-bootc.png

Ada Magicat ❤️🧡🤍🩷💜 reports

In Ignacy’s update on his Digital Wellbeing work this week, you might have noticed that he shared the progress of his work as a complete system image. That image is based on GNOME OS and built on the same infrastructure as our main images.

This shows the power of GNOME OS as a development platform, especially for features that involve changes in many different parts of our stack. It also allows anyone with a machine, virtual or physical, to test these new features more easily than ever before.

We hope to further improve our tools so that they are useful to more developers and make it easier and more convenient to test changes like this.

GNOME Foundation

Allan Day says

Another weekly Foundation update is available this week, with a summary of everything that’s been happening at the GNOME Foundation. It’s been a mixed week, with a Board meeting, ongoing finance work, GNOME.Asia preparations, and digital wellbeing planning.

Digital Wellbeing Project

Ignacy Kuchciński (ignapk) announces

As part of the Digital Wellbeing project, sponsored by the GNOME Foundation, there is an initiative to redesign the Parental Controls to bring it on par with modern GNOME apps and implement new features such as Screen Time monitoring, Bedtime Schedule and Web Filtering. Recently the child account overview gained screen time usage information, the Screen Time page was added with session limits controls, the wellbeing panel in Settings was integrated with parental controls, and screen limits were introduced in the Shell. There’s more to come, see https://blogs.gnome.org/ignapk/2025/11/10/digital-wellbeing-contract-screen-time-limits/ for more information.

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

Andy Wingo

@wingo

the last couple years in v8's garbage collector

Let’s talk about memory management! Following up on my article about 5 years of developments in V8’s garbage collector, today I’d like to bring that up to date with what went down in V8’s GC over the last couple years.

methodololology

I selected all of the commits to src/heap since my previous roundup. There were 1600 of them, including reverts and relands. I read all of the commit logs, some of the changes, some of the linked bugs, and any design document I could get my hands on. From what I can tell, there have been about 4 FTE from Google over this period, and the commit rate is fairly constant. There are very occasional patches from Igalia, Cloudflare, Intel, and Red Hat, but it’s mostly a Google affair.

Then, by the very rigorous process of, um, just writing things down and thinking about it, I see three big stories for V8’s GC over this time, and I’m going to give them to you with some made-up numbers for how much of the effort was spent on them. Firstly, the effort to improve memory safety via the sandbox: this is around 20% of the time. Secondly, the Oilpan odyssey: maybe 40%. Third, preparation for multiple JavaScript and WebAssembly mutator threads: 20%. Then there are a number of lesser side quests: heuristics wrangling (10%!!!!), and a long list of miscellanea. Let’s take a deeper look at each of these in turn.

the sandbox

There was a nice blog post in June last year summarizing the sandbox effort: basically, the goal is to prevent user-controlled writes from corrupting memory outside the JavaScript heap. We start from the assumption that the user is somehow able to obtain a write-anywhere primitive, and we work to mitigate the effect of such writes. The most fundamental way is to reduce the range of addressable memory, notably by encoding pointers as 32-bit offsets and then ensuring that no host memory is within the addressable virtual memory that an attacker can write. The sandbox also uses some 40-bit offsets for references to larger objects, with similar guarantees. (Yes, a sandbox really does reserve a terabyte of virtual memory).

But there are many, many details. Access to external objects is intermediated via type-checked external pointer tables. Some objects that should never be directly referenced by user code go in a separate “trusted space”, which is outside the sandbox. Then you have read-only spaces, used to allocate data that might be shared between different isolates, you might want multiple cages, there are “shared” variants of the other spaces, for use in shared-memory multi-threading, executable code spaces with embedded object references, and so on and so on. Tweaking, elaborating, and maintaining all of these details has taken a lot of V8 GC developer time.

I think it has paid off, though, because the new development is that V8 has managed to turn on hardware memory protection for the sandbox: sandboxed code is prevented by the hardware from writing memory outside the sandbox.

Leaning into the “attacker can write anything in their address space” threat model has led to some funny patches. For example, sometimes code needs to check flags about the page that an object is on, as part of a write barrier. So some GC-managed metadata needs to be in the sandbox. However, the garbage collector itself, which is outside the sandbox, can’t trust that the metadata is valid. We end up having two copies of state in some cases: in the sandbox, for use by sandboxed code, and outside, for use by the collector.

The best and most amusing instance of this phenomenon is related to integers. Google’s style guide recommends signed integers by default, so you end up with on-heap data structures with int32_t len and such. But if an attacker overwrites a length with a negative number, there are a couple funny things that can happen. The first is a sign-extending conversion to size_t by run-time code, which can lead to sandbox escapes. The other is mistakenly concluding that an object is small, because its length is less than a limit, because it is unexpectedly negative. Good times!
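
A tiny illustrative sketch (not V8 code; the names are made up) of the two failure modes just described:

/* Illustrative sketch: two ways an attacker-controlled, unexpectedly
 * negative int32_t length can go wrong. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

void
copy_object_data (char *dst, const char *src, int32_t len)
{
  /* Sign extension: if len is -1, (size_t) len becomes SIZE_MAX on a
   * 64-bit target, and memcpy runs wildly out of bounds. */
  memcpy (dst, src, (size_t) len);
}

int
is_small_object (int32_t len)
{
  /* "Small" check: any negative length compares less than the limit,
   * so a corrupted object is wrongly classified as small. */
  return len < 64;
}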

oilpan

It took 10 years for Odysseus to get back from Troy, which is about as long as it has taken for conservative stack scanning to make it from Oilpan into V8 proper. Basically, Oilpan is garbage collection for C++ as used in Blink and Chromium. Sometimes it runs when the stack is empty; then it can be precise. But sometimes it runs when there might be references to GC-managed objects on the stack; in that case it runs conservatively.

Last time I described how V8 would like to add support for generational garbage collection to Oilpan, but that for that, you’d need a way to promote objects to the old generation that is compatible with the ambiguous references visited by conservative stack scanning. I thought V8 had a chance at success with their new mark-sweep nursery, but that seems to have turned out to be a lose relative to the copying nursery. They even tried sticky mark-bit generational collection, but it didn’t work out. Oh well; one good thing about Google is that they seem willing to try projects that have uncertain payoff, though I hope that the hackers involved came through their OKR reviews with their mental health intact.

Instead, V8 added support for pinning to the Scavenger copying nursery implementation. If a page has incoming ambiguous edges, it will be placed in a kind of quarantine area for a while. I am not sure what the difference is between a quarantined page, which logically belongs to the nursery, and a pinned page from the mark-compact old-space; they seem to require similar treatment. In any case, we seem to have settled into a design that was mostly the same as before, but in which any given page can opt out of evacuation-based collection.

What do we get out of all of this? Well, not only can we get generational collection for Oilpan, but also we unlock cheaper, less bug-prone “direct handles” in V8 itself.

The funny thing is that I don’t think any of this is shipping yet; or, if it is, it’s only in a Finch trial to a minority of users or something. I am looking forward with interest to seeing a post from upstream V8 folks; whole doctoral theses have been written on this topic, and it would be a delight to see some actual numbers.

shared-memory multi-threading

JavaScript implementations have had the luxury of single-threadedness: with just one mutator, garbage collection is a lot simpler. But this is ending. I don’t know what the state of shared-memory multi-threading is in JS, but in WebAssembly it seems to be moving apace, and Wasm uses the JS GC. Maybe I am overstating the effort here—probably it doesn’t come to 20%—but wiring this up has been a whole thing.

I will mention just one patch here that I found to be funny. So with pointer compression, an object’s fields are mostly 32-bit words, with the exception of 64-bit doubles, so we can reduce the alignment on most objects to 4 bytes. V8 has had a bug open forever about alignment of double-holding objects that it mostly ignores via unaligned loads.

Thing is, if you have an object visible to multiple threads, and that object might have a 64-bit field, then the field should be 64-bit aligned to prevent tearing during atomic access, which usually means the object should be 64-bit aligned. That is now the case for Wasm structs and arrays in the shared space.

side quests

Right, we’ve covered what to me are the main stories of V8’s GC over the past couple years. But let me mention a few funny side quests that I saw.

the heuristics two-step

This one I find to be hilariousad. Tragicomical. Anyway I am amused. So any real GC has a bunch of heuristics: when to promote an object or a page, when to kick off incremental marking, how to use background threads, when to grow the heap, how to choose whether to make a minor or major collection, when to aggressively reduce memory, how much virtual address space can you reasonably reserve, what to do on hard out-of-memory situations, how to account for off-heap mallocated memory, how to compute whether concurrent marking is going to finish in time or if you need to pause... and V8 needs to do this all in all its many configurations, with pointer compression off or on, on desktop, high-end Android, low-end Android, iOS where everything is weird, something called Starboard which is apparently part of Cobalt which is apparently a whole new platform that YouTube uses to show videos on set-top boxes, on machines with different memory models and operating systems with different interfaces, and on and on and on. Simply tuning the system appears to involve a dose of science, a dose of flailing around and trying things, and a whole cauldron of witchcraft. There appears to be one person whose full-time job it is to implement and monitor metrics on V8 memory performance and implement appropriate tweaks. Good grief!

mutex mayhem

Toon Verwaest noticed that V8 was exhibiting many more context switches on macOS than Safari, and identified V8’s use of platform mutexes as the problem. So he rewrote them to use os_unfair_lock on macOS. Then implemented adaptive locking on all platforms. Then... removed it all and switched to Abseil.

Personally, I am delighted to see this patch series, I wouldn’t have thought that there was juice to squeeze in V8’s use of locking. It gives me hope that I will find a place to do the same in one of my projects :)

ta-ta, third-party heap

It used to be that MMTk was trying to get a number of production language virtual machines to support abstract APIs so that MMTk could slot in a garbage collector implementation. Though this seems to work with OpenJDK, with V8 I think the churn rate and laser-like focus on the browser use-case make an interstitial API abstraction a lose. V8 removed it a little more than a year ago.

fin

So what’s next? I don’t know; it’s been a while since I have been to Munich to drink from the source. That said, shared-memory multithreading and wasm effect handlers will extend the memory management hacker’s full employment act indefinitely, not to mention actually landing and shipping conservative stack scanning. There is a lot to be done in non-browser V8 environments, whether in Node or on the edge, but it is admittedly harder to read the future than the past.

In any case, it was fun taking this look back, and perhaps I will have the opportunity to do this again in a few years. Until then, happy hacking!

Jiri Eischmann

@jeischma

How We Streamed OpenAlt on Vhsky.cz

The blog post was originally published on my Czech blog.

When we launched Vhsky.cz a year ago, we did it to provide an alternative to the near-monopoly of YouTube. I believe video distribution is so important today that it’s a skill we should maintain ourselves.

To be honest, it’s bothered me for the past few years that even open-source conferences simply rely on YouTube for streaming talks, without attempting to secure a more open path. We are a community of tech enthusiasts who tinker with everything and take pride in managing things ourselves, yet we just dump our videos onto YouTube, even when we have the tools to handle it internally. Meanwhile, it’s common for conferences abroad to manage this themselves. Just look at FOSDEM or Chaos Communication Congress.

This is why, from the moment Vhsky.cz launched, my ambition was to broadcast talks from OpenAlt—a conference I care about and help organize. The first small step was uploading videos from previous years. Throughout the year, we experimented with streaming from OpenAlt meetups. We found that it worked, but a single stream isn’t quite the stress test needed to prove we could handle broadcasting an entire conference.

For several years, Michal Vašíček has been in charge of recording at OpenAlt, and he has managed to create a system where he handles recording from all rooms almost single-handedly (with assistance from session chairs in each room). All credit to him, because other conferences with a similar scope of recordings have entire teams for this. However, I don’t have insight into this part of the process, so I won’t focus on it. Michal’s job was to get the streams to our server; our job was to get them to the viewers.

OpenAlt’s AV background with running streams. Author: Michal Stanke.

Stress Test

We only got to a real stress test the weekend before the conference, when Bashy prepared a setup with seven streams at 1440p resolution. This was exactly what awaited us at OpenAlt. Vhsky.cz runs on a fairly powerful server with a 32-core i9-13900 processor and 96 GB of RAM. However, it’s not entirely dedicated to PeerTube: it has to share the server with other OSCloud services (OSCloud is a community hosting platform for open source web services).

We hadn’t been limited by performance until then, but seven 1440p streams were truly at the edge of the server’s capabilities, and streams occasionally dropped. In reality, this meant 14 continuous transcoding processes, as we were streaming in both 1440p and 480p. Even if you don’t change the resolution, you still need to transcode the video to leverage useful distribution features, which I’ll cover later. The 480p resolution was intended for mobile devices and slow connections.

Remote Runner

We knew the Vhsky.cz server alone couldn’t handle it. Fortunately, PeerTube allows for the use of “remote runners”. The PeerTube instance sends video to these runners for transcoding, while the main instance focuses only on distributing tasks, storage, and video distribution to users. However, it’s not possible to do some tasks locally and offload others. If you switch transcoding to remote runners, they must handle all the transcoding. Therefore, we had to find enough performance somewhere to cover everything.

I reached out to several hosting providers known to be friendly to open-source activities. Adam Štrauch from Roští.cz replied almost immediately, saying they had a backup machine that they had filed a warranty claim for over the summer and hadn’t tested under load yet. I wrote back that if they wanted to see how it behaved under load, now was a great opportunity. And so we made a deal.

It was a truly powerful machine: a 48-core Ryzen with 1 TB of RAM. Nothing else was running on it, so we could use all its performance for video transcoding. After installing the runner on it, we passed the stress test. As it turned out, the server with the runner still had a large reserve. For a moment, I toyed with the idea of adding another resolution to transcode the videos into, but then I decided we’d better not tempt fate. The stress test showed us we could keep up with transcoding, but not how it would behave with all the viewers. The performance reserve could come in handy.

Load on the runner server during the stress test. Author: Adam Štrauch.

Smart Video Distribution

Once we solved the transcoding performance, it was time to look at how PeerTube would handle video distribution. Vhsky.cz has a bandwidth of 1 Gbps, which isn’t much for such a service. If we served everyone the 1440p stream, we could serve a maximum of 100 viewers. Fortunately, another excellent PeerTube feature helps with this: support for P2P sharing using HLS and WebRTC.

Thanks to this, every viewer (unless they are on a mobile device using mobile data) also becomes a peer and shares the stream with others. The more viewers watch the stream, the more they share the video among themselves, and the server load doesn’t grow at the same rate.

A two-year-old stress test conducted by the PeerTube developers themselves gave us some idea of what Vhsky could handle. They created a farm of 1,000 browsers, simulating 1,000 viewers watching the same stream or VOD. Even though they used a relatively low-performance server (quad-core i7-8700 CPU @ 3.20GHz, slow hard drive, 4 GB RAM, 1 Gbps connection), they managed to serve 1,000 viewers, primarily thanks to data sharing between them. For VOD, this saved up to 98% of the server’s bandwidth; for a live stream, it was 75%:

If we achieved a similar ratio, then even after subtracting 200 Mbps for overhead (running other services, receiving streams, data exchange with the runner), we could serve over 300 viewers at 1440p and multiples of that at 480p. Considering that OpenAlt had about 160 online viewers in total last year, this was a more than sufficient reserve.
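
Spelling out that estimate (using the roughly 10 Mbps per 1440p viewer implied by the 100-viewers-on-1-Gbps figure above, and the 75% live-stream saving from PeerTube's test):

1 Gbps / 100 viewers ≈ 10 Mbps per 1440p viewer
1000 Mbps - 200 Mbps overhead = 800 Mbps usable
server share per viewer at ~75% P2P offload ≈ 0.25 x 10 Mbps = 2.5 Mbps
800 Mbps / 2.5 Mbps ≈ 320 viewers at 1440p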

Live Operation

On Saturday, Michal fired up the streams and started sending video to Vhsky.cz via RTMP. And it worked. The streams ran smoothly and without stuttering. In the end, we had a maximum of tens of online viewers at any one time this year, which posed no problem from a distribution perspective.

In practice, the savings in data served by the server were significant even with just 5 peers on a single stream and resolution.

Our solution, which PeerTube allowed us to flexibly assemble from servers in different data centers, has one disadvantage: it creates some latency. In our case, however, this meant the stream on Vhsky.cz was about 5-10 seconds behind the stream on YouTube, which I don’t think is a problem. After all, we’re not broadcasting a sports event.

Diagram of the streaming solution for OpenAlt. Labels in Czech, but quite self-explanatory.

Minor Problems

We did, however, run into minor problems and gained experience that one can only get through practice. On Saturday, for example, we found that the stream would occasionally drop from 1440p to 480p, even though the throughput should have been sufficient. This was because the player felt that the delivery of stream chunks was delayed and preemptively switched to a lower resolution. Setting a larger cache increased the stream delay slightly, but it significantly reduced the switching to the lower resolution.

Subjectively, even 480p wasn’t a problem. Most of the screen was taken up by the red frame with the OpenAlt logo and the slides; the speaker was only in a small window. The reduced resolution only caused slight blurring of the text on the slides, which I wouldn’t even have noticed as a problem if I hadn’t been focusing on it. I could imagine streaming only in 480p if necessary. But it’s clear that expectations regarding resolution are different today, so we stream in 1440p when we can.

Over the whole weekend, the stream from one room dropped for about two talks. For some rooms, viewers complained that the stream was too quiet, but that was an input problem. This issue was later fixed in the recordings.

When uploading the talks as VOD (Video on Demand), we ran into the fact that PeerTube itself doesn’t support bulk uploads. However, tools exist for this, and we’d like to use them next time to make uploading faster and more convenient. Some videos also uploaded with the wrong orientation, which was likely a problem in their metadata, as PeerTube wasn’t the only player that displayed them that way. YouTube, however, managed to handle it. Re-encoding them solved the problem.

On Saturday, to save performance, we also tried transcoding the first finished talk videos on the external runner. For these, a bar was displayed with a message that the video had failed to save to external storage, even though it was clearly stored in object storage. In the end we had to re-upload them, because they were available to watch but not indexed.

A small interlude – my talk about PeerTube at this year’s OpenAlt. Streamed, of course, via PeerTube:

Thanks and Support

I think that for our very first time doing this, it turned out very well, and I’m glad we showed that the community can stream such a conference using its own resources. I would like to thank everyone who participated. From Michal, who managed to capture footage in seven lecture rooms at once, to Bashy, who helped us with the stress test, to Archos and Schmaker, who did the work on the Vhsky side, and Adam Štrauch, who lent us the machine for the external runner.

If you like what we do and appreciate that someone is making OpenAlt streams and talks available on an open platform without ads and tracking, we would be grateful if you supported us with a contribution to one of OSCloud’s accounts, under which Vhsky.cz runs. PeerTube is a great tool that allows us to operate such a service without having Google’s infrastructure, but it doesn’t run for free either.

Christian Hergert

@hergertme

Status Week 45

Ptyxis

  • Handle some incoming issue reports, which basically amounts to copying their question into Google, searching, and copying the first result back. A reminder that we really need dedicated support channels that are not the issue tracker.

    But more importantly, how you move people there is still problematic. I could of course just tell them to go over “there”, but when the questions are so simple you end up taking the gentler approach and just answering them begrudgingly rather than coming off as abrupt.

  • Issue reported about an async spin loop from GMainLoop after closing a window which has a really long paste actively feeding it. Looks like it needs to be addressed in VTE (more on that below), but I also decided to raise the priority of our fallback SIGKILL handler if SIGHUP failed.

  • Code review for incoming community feature to swap profiles on an active tab.

VTE

  • MR sent upstream to hopefully address the corner case of disposing a terminal widget while paste is ongoing.

  • Noticed that ibus-daemon is taking considerable CPU when running the ucs-decode test tool on Ptyxis/VTE. Probably something related to tracking input cursor positions w/ the text protocol or similar. Reached out to @garncho for any tricks we might be able to pull off from the VTE side of things.

Libdex

  • Looking into some strange behaviors with dex_thread_wait_for() when implementing the cross-thread semantics for libgit2 wrapping. As a result I changed where we control the life-cycle of the waited-upon future so that it is guaranteed to outlive the DexWaiter.

  • Spent some time thinking about how we might approach a more generalized wrapper for existing GIO-like async functions. I was never satisfied with DexAsyncPair, and perhaps now that we will gain a gdbus-codegen for Libdex we could gain a *.gir future-wrapping codegen too.

  • Add a new DexFutureListModel which is a GListModel which is populated by a DexFuture resolving to a GListModel. Very handy when you have a future and want a GListModel immediately.

Foundry

  • Add FoundryFileRow so we have easy mechanics for the whole typing a file/directory path and browsing to it. Re-use the existing path collapse/expand stuff in Foundry to allow for niceties like ~/Directory. Fix a bunch of obnoxiousness that was in the Builder implementation of this previously.

  • Add foundry_operation_is_cancelled() to simplify integrating with external blocking/threaded code such as libgit2. This allows breaking out of a clone operation by checking for cancellation in the progress callbacks from the server but works for any sort of blocking API that itself has callback functions in its state machine.

  • Add new CLI formatter for GFlags to use value_nick instead of the whole enumeration value in UP_CASE.

  • Add FoundryBuildPipeline:build-system with discover from the FoundryBuildAddin before full addin loading occurs. This replicates the IdeBuildSystemDiscovery from Builder without having to use a secondary addin object.

  • Add priorities to FoundryBuildAddin so that we can pre-sort by priority before build-system discovery. That fixes loading the WebKit project out of the box so that CMakeLists.txt will be matched before Makefile if you configured in tree.

  • Implement write-back for FoundrySdkManager:sdk to the active configuration after checking if the config supports the SDK.

  • Implement write-back for buildconfig files.

  • Implement mechanics for write-back PluginFlatpakConfig. Also start on implementation for all the FlatpakSerializable types. This is made much more complicated by needing to keep track of the different subtle ways lists can be deserialized as well as referenced files which can be included. It will mean write-back may have a bucket of files to update. Though right now we only will do this for a few select fields so we can probably punt a bit.

  • Meson cleanup to all our tools that test various aspects of Foundry outside the confines of an entire IDE.

  • Non-destructive editing of flatpak manifests works for the core feature-set we need. It requires storing a lot of state while parsing manifests (and recursively their includes) but it seems to work well enough.

  • Setup needs-attention tracking for panels and pages.

  • Write a new foundry.1 manpage for inclusion in distributions that really want that provided.

  • Add :author property to FoundryForgeIssue and FoundryForgeMergeRequest so we can start presenting peer information in the Builder UI. Implement this for gitlab plugin.

  • Improve handling of changes to implicit-trailing-newline when file settings get loaded/applied. That way we don’t run into weird behavior where file-change monitoring with libgit2 gets something different than we’d expect.

  • Support for keyword search when listing issues/merge-requests through the forge interfaces.

  • Add FoundryDocumentationIntent for intent to read documentation.

  • Very basic support for Justfile using just to build. Currently it is sort of a cop-out because it uses build and clean instead of trying to track down [default] and what not. Contributions welcome.

Builder

  • Iteration on the new clone dialog based on Foundry including wiring up all the PTY usage, page navigation, etc.

  • Update builder-dark GtkSourceView style to fit in better with updated Adwaita palette. Even though it’s not using Tango palette for all the colors, we still want it to fit in.

  • Iteration on forge listings

  • Wire up needs attention for panels/pages

  • Setup forge splitbutton so we can jump to gitlab (or other forge) quickly or alternatively list issues/etc in app as we gain more forge support.

  • Implement manuals panel and page on top of the Foundry service FoundryDocumentationManager.

  • Implement browser page and port over old urlbar implementation. Still slogging through all the intricacies of navigation policy which I didn’t handle so well in Builder originally.

  • Work on bringing over path bar from Manuals so we can use it more generically for things like symbol paths and what not.

  • A lot of little things here and there that just need plumbing now that we’re in a world of futures and listmodels.

  • Lots of work on the updated greeter using Foundry. It looks mostly the same, just less madness in implementation.

Flathub

  • Investigate why an older Builder release is what gets shown as published.

  • Merge PR to update Ptyxis to 49.2

  • Update Manuals to 49.1 since I missed the .0. Had a lot of libfoundry things to bring over as well. Needs an exceptions update to flathub linter.

Text Editor

  • Update builder style scheme

Libpanel

  • Update needs-attention style for the panel switcher buttons

Manuals

  • Make ctrl+k select all the existing text in the search entry as part of the focus-search action.

Digital Wellbeing Contract: Screen Time Limits

It’s been four months since my last Digital Wellbeing update. In that previous post I talked about the goals of the Digital Wellbeing project. I also described our progress improving and extending the functionality of the GNOME Parental Controls application, as well as redesigning the application to meet the current design guidelines.

Introducing Screen Time Limits

Following our work on the Parental Controls app, the next major work item was to implement screen time limits functionality, offering parents the ability to check a child’s screen time usage, set time limits, and lock the child account outside of a specified curfew. This feature actually spanned across *three* different GNOME projects:

  • Parental Controls: a Screen Time page was added to the Parental Controls app, so that parents can view the child’s screen usage and set time limits; it also includes a detailed bar chart
  • Settings: the Wellbeing panel needed to make its own time limit settings impossible to change when the child has parental controls session limits enabled, since they are not in effect in that situation. There’s now also a banner with an explanation that points to the Parental Controls app
  • Shell: the child session needed to actually lock when the limit was reached

Out of the three above, the Parental Controls and Shell changes have already been merged, while the Settings integration has been through an informal review during the bi-weekly Settings meeting and has been adjusted to the feedback, so it’s only a matter of time before it reaches the main branch as well. You can find screenshots of the added functionality below, and the reference designs can be found in the app-mockups and os-mockups tickets.

Child screen usage

When viewing a managed account, a summary of screen time is shown, with actions for changing its settings as well as actions to access additional settings for restrictions and filtering.

Child account view with added screen time overview and action for more options

The Screen Time view shows an overview of the child account’s screen time, as well as controls which mirror those of the Settings panel for controlling screen limits and downtime for the child.

Screen Time page with detailed screen time records and time limit controls

Settings integration

On the Settings side, a child account will see a banner in the Wellbeing panel that lets them know some settings cannot be changed, with a link to the Parental Controls app.

Wellbeing panel with a banner informing that limits can only be changed in Parental Controls

Screen limits in GNOME Shell

We have implemented the locking mechanism in GNOME Shell. When a Screen Time limit is reached, the session locks, so that the child can’t use the computer for the rest of the day.

Following is a screen cast of the Shell functionality:

Preventing children from unlocking has not been implemented yet. Fortunately, however, the hardest part was implementing the framework for the rest of the code, so hopefully the easier graphical change will take less time to implement and the next update will come much sooner than this one.

GNOME OS images

You don’t have to take my word for it, especially since one can notice I had to cut the recording at one point (I forgot that one can’t switch users from the lock screen :P). You can check out all of these features in the very same GNOME OS live image I used in the recording, which you can either run in GNOME Boxes or try on your hardware if you know what you’re doing 🙂

Malcontent changes

While all of these user-facing changes look cool, none of them would actually be possible without the malcontent backend, which Philip Withnall has been working on. While the daily schedule had already been implemented, the daily-limit session limit had to be added, as well as the malcontent timer daemon API for Shell to use. There have been many other improvements too: a web filtering daemon has been added, which I’ll use in the future for implementing the Web Filtering page in the Parental Controls app.

Conclusion

Our work for the GNOME Foundation is funded by Endless and Dalio Philanthropies, so kudos to them! I also want to thank Florian Müllner for his patience during the merge request review, which was very educational for me, and for answering all of my Shell development wonderings. Finally, I want to thank Matthijs Velsink and Felipe Borges for finding time to review the Settings integration.

Now that this foundation has been laid, we’ll be focusing on finishing the last remaining bits of the session limits support in Shell: tweaking the appearance of the lock screen when the limit is reached, implementing the ignore button for extending the screen limit, and adding notifications. After that comes Web Filtering support in Parental Controls. Until next update!

Luis Villa

@luis

Three LLM-assisted projects

Some notes on my first serious coding projects in something like 20 years, possibly longer. If you’re curious what these projects mean, more thoughts over on the OpenML.fyi newsletter.

TLDR

A GitHub contribution graph, showing a lot of activity in the past three weeks after virtually none the rest of the year.

News, Fixed

The “Fix The News” newsletter is a pillar of my mental health these days, bringing me news that the world is not entirely going to hell in a handbasket. And my 9yo has repeatedly noted that our family news diet is “broken” in exactly the way Fix The News is supposed to fix—hugely negative, hugely US-centric. So I asked Claude to create a “newspaper” version of FTN — a two page pdf of some highlights. It was a hit.

So I’ve now been working with Claude Code to create and gradually improve a four-days-a-week “News, Fixed” newspaper. This has been super-fun for the whole family—my wife has made various suggestions over my shoulder, my son devours it every morning, and it’s the first serious coding project I’ve tackled in ages. It is almost entirely personal (it still has hard-coded Duke Basketball schedules) but is nevertheless public and FOSS. (It is even my first usage of reuse.software—and also of SonarQube Server!)

Example newspaper here.

No matter how far removed you are from practical coding experience, I cannot recommend enough finding a simple, fun project like this that scratches a human itch in your life, and using the project to experiment with the new code tools.

Getting Things Done assistant

While working on News, Fixed a friend pointed out Steve Yegge’s “beads”, which reimagines software issue tracking as an LLM-centric activity — json-centric, tracked in git, etc. At around the same time, I was also pointed at Superpowers—essentially, canned “skills” like “teach the LLM, temporarily, how to brainstorm”. 

The two of these together in my mind screamed “do this for your overwhelmed todo list”. I’ve long practiced various bastardized versions of Getting Things Done, but one of the hangups has been that I’m inconsistent about doing the daily/weekly/nth-ly reviews that good GTD really relies on. I might skip a step, or not look through all my huge “someday-maybe” list, or… any of many reasons one can be tired and human when faced with a wall of text. Also, while there are many tools out there to do GTD, in my experience they either make some of the hardest parts (like the reviews) your problem, or they don’t quite fit with how I want to do GTD, or both. Hacking on my own prompts to manage the reviews seems to fit these needs to a T.

I currently use Amazing Marvin as my main GTD tool. It is funky and weird and I’ve stuck with it much longer than any other task tracker I’ve ever used. So what I’ve done so far:

  • wrapped the Marvin API to extract json
  • discovered the Marvin API is very flaky, so added some caching and validation (a rough sketch of this idea follows this list)
  • written a lot of prompts for the various phases/tasks in GTD. These work to varying degrees and I really want to figure out how to collaborate with others on them, because I suspect that as more tools offer LLM-ish APIs (whoa, todoist!) these prompts are where the real fun and action will be.
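
Purely to illustrate the shape of that wrapper, here is a rough, hypothetical Python sketch. The endpoint path, auth header, and cache location are my assumptions (loosely based on Amazing Marvin’s public API docs), not the project’s actual code.

# Hypothetical sketch of an API wrapper with caching and validation.
# The base URL, /todayItems path and X-API-Token header are assumptions.
import json
import pathlib
import time

import requests

API_BASE = "https://serv.amazingmarvin.com/api"  # assumed base URL
CACHE = pathlib.Path.home() / ".cache" / "gtd-marvin.json"

def fetch_today_items(token: str) -> list:
    """Return today's tasks, falling back to a local cache when the
    (flaky) API misbehaves."""
    try:
        resp = requests.get(
            f"{API_BASE}/todayItems",        # assumed endpoint
            headers={"X-API-Token": token},  # assumed auth header
            timeout=10,
        )
        resp.raise_for_status()
        items = resp.json()
        if not isinstance(items, list):      # minimal validation
            raise ValueError("unexpected payload shape")
        CACHE.parent.mkdir(parents=True, exist_ok=True)
        CACHE.write_text(json.dumps({"at": time.time(), "items": items}))
        return items
    except (requests.RequestException, ValueError):
        if not CACHE.exists():
            raise
        # Fall back to whatever we cached last time.
        return json.loads(CACHE.read_text())["items"]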

This is all read-only right now because of limitations in the Marvin API, but for various reasons I’m not yet ready to embark on building my own entire UI. So this will do for now. This code, as a result, is really only useful to me. The prompts, on the other hand…

Note that my emphasis is not on “do tasks”, it is on helping me stay on priority. Less “chief of staff”, more “executive assistant”—both incredibly valuable when done well, but different roles. This is different from some of the use examples for Yegge’s Beads, which really are around agents.

Also note: the results have been outstanding. I’m getting more easily into my doing zone, I think largely because I have less anxiety about staring at the Giant Wall of Tasks that defines the life of any high-level IC. And my projects are better organized and todos feel more accurate than they have been in a long time, possibly ever.

a note on LLMs and issue/TODO tracking

It is worth noting that while LLMs are probabilistic/lossy, so they can’t find the “perfect” next TODO to work on, that’s OK. Personal TODO and software issue tracking are inherently subjective, probabilistic activities—there is no objectively perfect “next best thing to work on”, “most important thing to work on”, etc. So the fact that an LLM is only probabilistic in identifying the next task to work on is fine—no human can do substantially better. In fact I’m pretty sure that once an issue list is past a certain point, the LLM is likely to be able to do better— if (and like many things LLM, this is a big if) you can provide it with documented standards explaining how you want to do prioritization. (Literally one of the first things I did at my first job was write standards on how to prioritize bugs—the forerunner of this doc—so I have strong opinions, and experience, here.)

Skills for license “concluded”

While at a recent Linux Foundation event, I was shocked to realize how many very smart people haven’t internalized the skills/prompts/context stuff. It’s either “you chat with it” or “you train a model”. This is not their fault; it is hard to keep up!

Of course this came up most keenly in the context of the age-old problem of “how do I tell what license an open source project is under”. In other words, what is the difference between “I have scanned this” and “I have reached the zen state of SPDX’s ‘concluded’ field”.

So … yes, I’ve started playing with scripts and prompts on this. It’s much less far along than the other two projects above, but I think it could be very fruitful if structured correctly. Some potentially big benefits above and beyond the traditional scanning and/or throw-a-lawyer-at-it approaches:

  • reporting: my very strong intuition, admittedly not yet tested, is that plain-English reports on the factors below, plus links into repos, will be much easier for lawyers to use as a starting point than the UIs of traditional license-scanner tools. And I suspect ultimately more powerful as well, since they’ll be able to draw on some of the things below.
  • context sensitivity: unlike a regexp, an LLM can likely fairly reliably understand from context some of the big failures of traditional pattern matching like “this code mentions license X but doesn’t actually include it”.
  • issue analysis and change analysis: unlike traditional approaches, LLMs can look at the change history of key files like README and LICENSE and draw useful context from them. “oh hey README mentioned a license change on Nov. 9, 2025, here’s what the change was and let’s see if there are any corresponding issues and commit logs that explain this change” is something that an LLM really can do. (Also it can do that with much more patience than any human.) 

ClearlyDefined offers test data on this, by the way — I’m really looking forward to seeing if this can be made actually reliable or not. (And then we can hook up reuse.software on the backend to actually improve the upstream metadata…)

But even then, I may not ever release this. There are a lot of real risks here and I still haven’t thought them through enough to be comfortable with them. That’s true even though I think the industry has persistently overstated its ability to reach useful conclusions about licensing, since it so persistently insists on doing licensing analysis without ever talking to maintainers.

More to come?

I’m sure there will be more of these. That said, one of the interesting temptations of this is that it is very hard to say “something is done” because it is so easy to add more. (eg, once my personal homebrew News Fixed is done… why not turn it into a webapp? once my GTD scripts are done… why not port the backend? etc. etc.) So we’ll see how that goes.

Allan Day

@aday

GNOME Foundation Update, 2025-11-07

It’s Friday, so it’s time to provide an update on what’s been happening at the GNOME Foundation over the past week. Here’s my summary of the main activities and events, covering what both Board and staff members have been up to.

GNOME.Asia

I mentioned GNOME.Asia 2025 in my last post, but I’ll mention it again since it’s only a month until the event in Tokyo, which is being co-hosted with LibreOffice Asia.

As you’d expect, there is a lot of activity happening as GNOME.Asia 2025 approaches. Kristi has been busy with a plethora of organisational tasks, including scheduling, printing, planning for the day trip, and more.

Travel has also been a focus this week. The Travel Committee has approved sponsorship for a number of attendees, and we have moved on to providing assistance to those who need documentation for visas.

Finally, registration is now open! There are two registration sites: one for in-person attendees, and one for remote attendees. If you plan on attending, please do take the time to register!

Transitions

This week was a big week for us, with the announcement of Rosanna’s departure from the organisation. Internally, transition arrangements have been in progress for a little while, with responsibilities being redistributed, accounts being handed over, and infrastructure that was physically managed by Rosanna being replaced (such as our mailing address and phone number). This work continued this week.

I’d like to thank Rosanna for her extremely helpful assistance during this transition. I’d also like to thank everyone who has pitched in this week, particularly around travel (thank you Kristi, Julian, Maria, Asmit!), as well as Cassidy and Arun for picking up tasks as they have arisen.

The Foundation is running smoothly despite our recent staffing change. Payments are being processed quickly and reliably, events and sysadmin work are happening as normal, and accounting tasks are being taken care of. I’m also confident that we’ll continue to work reliably and effectively as we move forward. There are improvements that we have planned which will help with this, such as the streamlining of our financial systems and processes.

Ongoing tasks

It has become a common refrain in my updates that there is lots going on behind the scenes that doesn’t make it into these posts. This week I thought that I’d call some of those more routine activities out, so readers can get a sense of what those background tasks are.

It turns out that there are indeed quite a lot of them, so I’ve broken them down into sections.

Finances and accounting

It’s the beginning of the month, which is when most invoices tend to get submitted to us, so this week has involved a fair amount of payments processing. We use a mix of platforms for payments, and have a shared tracker for payments tasks. At the time of writing all invoices received since the beginning of the month have been paid, except for a couple of items where we needed additional information.

As mentioned in previous posts, we are in the process of deploying a set of improvements to our banking arrangements, and this continued this week. The changes are coming in bit by bit, and there are tasks for us to do at each step. It will be a number of weeks before the changes are completed.

Dawn, who joined us last week, has been doing research as part of her work to improve our finance systems. This has involved calls with team members and stakeholders, and is nearly complete.

Meetings!

Kristi booked the room for our regular pre-FOSDEM Advisory Board meeting, and I’ve invited representatives. Thanks to everyone who has sent an RSVP so far!

Next week we have another regular Board meeting scheduled, so there has been the routine work of preparing the agenda and sending out invitations.

Sysadmin work

Bart has been busy as usual, and it’s hard to capture everything he does. Recent activity includes improvements to donate.gnome.org, improvements to Flathub build pipelines, and working through a troublesome issue with the geolocation data used by GNOME apps.

That’s it for this week! Thanks for reading, and see you next week.

This Week in GNOME

@thisweek

#224 Reduced Motion

Update on what happened across the GNOME project in the week from October 31 to November 07.

GNOME Core Apps and Libraries

GTK

Cross-platform widget toolkit for creating graphical user interfaces.

Emmanuele Bassi says

A new accessibility setting is now available for GTK applications: reduced motion. This setting can be used to provide alternative animations that do not induce discomfort or distraction, without disabling them altogether. The setting can be changed in the Settings application, and will be available across desktops through the settings portal. If you have animations defined in CSS, you can use the (prefers-reduced-motion: reduce) media query selector, available in the GTK 4.21 development cycle leading to the 4.22 stable release next year.
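
To make this concrete, here is a minimal PyGObject sketch of shipping a calmer fallback animation. It assumes GTK 4.21+ (where GTK CSS gains media query support) and assumes the query uses the same syntax as on the web; the selector and properties are purely illustrative, not from the announcement.

# Minimal sketch: CSS with a reduced-motion alternative, loaded at runtime.
# Assumes GTK 4.21+ and web-style media query syntax; the .pulse class and
# the chosen transitions are illustrative only.
import gi

gi.require_version("Gtk", "4.0")
from gi.repository import Gdk, Gtk

CSS = """
button.pulse {
  transition: transform 400ms ease-in-out;
}

@media (prefers-reduced-motion: reduce) {
  /* Swap the movement-heavy transition for a subtle opacity change. */
  button.pulse {
    transition: opacity 150ms linear;
  }
}
"""

Gtk.init()
provider = Gtk.CssProvider()
provider.load_from_string(CSS)
Gtk.StyleContext.add_provider_for_display(
    Gdk.Display.get_default(), provider,
    Gtk.STYLE_PROVIDER_PRIORITY_APPLICATION)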

Mutter

A Wayland display server and X11 window manager and compositor library.

Bilal Elmoussaoui reports

The X11 backend has been dropped from Mutter/GNOME Shell, removing approximately 27k lines of code from Mutter. The Xwayland support is still there.

Third Party Projects

Alexander Vanhee announces

Bazaar’s search got a major upgrade this week. We moved away from the sidebar to a “rich card” based system. Each app card shows its most important information, making it easier to quickly find and install the right application without the need to switch between subpages. We also made sure to keep the quick install flow for power users intact. You can still search and install without ever having to touch the mouse or tab keys: just type your query and press Enter to install the top result. I’m also happy that the app’s core navigation got a rework. This means the header bars now finally follow the page transitions and have more relevant titles.

Get Bazaar on Flathub

Alain announces

Planify 4.15.2 — Smarter Quick Add, Spell Check, and Better Backups

This update focuses on making your daily workflow smoother and more intuitive.

With improved keyboard navigation for project selection, a smarter Quick Add that keeps task attributes when adding multiple tasks, and optional spell check for titles and descriptions — Planify continues to refine how you organize your work.

Backups are now even easier to manage: Planify shows you the location of your backup files and automatically restarts after a restore. You’ll also find new, clearer selection widgets in Preferences for completion modes and reminders, plus a host of UI polish and bug fixes.

Update to Planify 4.15.2 and enjoy a faster, cleaner experience when managing your tasks.

Get it on Flathub: https://flathub.org/en/apps/io.github.alainm23.planify

Gir.Core

Gir.Core is a project which aims to provide C# bindings for different GObject based libraries.

Marcel Tiede announces

GirCore 0.7.0-preview.3 got released. This release features support for the GNOME 49 SDK including GTK 4.20 and libadwaita 1.8. Additionally there is improved support for GLib.List, new API for Cairo.ImageSurface and more.

Shell Extensions

amritashan says

Hello! I’d like to share my new extension, Automatic Theme Switcher, which was just accepted.

It was born from the need to switch themes based on actual daylight, not a fixed time. It lets users trigger their light/dark theme using real solar events like sunrise, sunset, golden hour, dawn, first light, last light or dusk.

For privacy and flexibility, users have full control:

  • Use optional, approximate IP-based location detection.
  • Enter manual coordinates (ideal for VPN users).
  • Set a fixed time for a simple, offline mode.

It also features a comfort setting to gradually dim or brighten the screen over a set period, making the theme transition very smooth.

Link on GNOME Extensions page: https://extensions.gnome.org/extension/8675/automatic-theme-switcher/

erzicky announces

I have too many favorite wallpapers and can never choose one. But at the same time I also find live slideshows distracting. I just wanted a different wallpaper from my collection every time I start my PC. Since I couldn’t find a setting or extension for this specific need, I created my own extension to do it: BootPaper, a simple extension that sets a new, random wallpaper from your local folder every time you boot up.

You can find it here: https://extensions.gnome.org/extension/8749/bootpaper/

Caue reports

Quick Lofi is a GNOME Shell extension that lets you play lofi music and other sounds, locally or online, on your desktop with just one click. It works on GNOME 46 and newer versions.

Get it from the GNOME Extensions page or see the source on GitHub. It’s simple, fast, and made to keep you focused while enjoying your favorite sounds.

Arnis (kem-a) reports

Switchcraft is a small GNOME 40+ utility that watches your desktop’s light/dark preference and runs your shell commands the moment the theme flips. That means you can finally keep GTK 4/libadwaita in sync and tell older apps, icon themes, extensions or even dotfiles to follow along.

Highlights

  • Listens to org.gnome.desktop.interface color-scheme and reacts instantly

  • Per-theme command lists (light/dark) with enable/disable

  • Reusable constants (store paths, schemas, colors once, use in many commands)

  • Import/Export functionality for backup or portability

  • Built for GNOME 40+ / GTK 4 / libadwaita

Why it’s nice for GNOME users: Many theme-switch tools only toggle GNOME or only GTK; Switchcraft is the “glue” for everything else - you can poke gsettings in extensions, refresh dock backgrounds, swap icon sets or run custom scripts so all UI elements move together. This is similar in spirit to tools like Night Theme Switcher, but with more emphasis on running your own commands.
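
For anyone curious what reacting to that key looks like outside of an extension, here is a tiny standalone Python sketch of the same idea. It is not Switchcraft’s actual code (which does much more), and the hook scripts it runs are placeholders.

# Standalone sketch: watch org.gnome.desktop.interface color-scheme and run a
# placeholder command when the preference flips. Not Switchcraft's code.
import subprocess

from gi.repository import Gio, GLib

settings = Gio.Settings.new("org.gnome.desktop.interface")

def on_color_scheme_changed(settings, key):
    scheme = settings.get_string(key)  # "default", "prefer-dark" or "prefer-light"
    hook = "dark.sh" if scheme == "prefer-dark" else "light.sh"
    # Placeholder: run whatever per-theme commands you keep in a script.
    subprocess.run(["sh", "-c", f"~/.config/theme-hooks/{hook}"])

settings.connect("changed::color-scheme", on_color_scheme_changed)
GLib.MainLoop().run()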

Get it on Github: https://github.com/kem-a/switchcraft

Dmytro announces

Auto Power Profile Extension

Brings smart, automatic power management to GNOME Shell. Now with GNOME 49 support, it switches power profiles for you based on whether you’re plugged in, your battery level, and what apps you’re running—so you don’t have to think about it.

What it does:

  • Configure your preferred power profiles for AC and battery once, then let the extension handle switching automatically
  • Performance app tracking: Optionally add performance-hungry apps (games, video editors, etc.) to a list, and the extension will boost performance when they’re running—even on battery if you want
  • Remembers when you manually change profiles and uses your choice as the new default (configurable)
  • Respects UPower’s low-battery power-saver mode

Perfect for laptop users who want better battery life without sacrificing performance when it matters.

Compatible with GNOME 45-49. Available on GNOME Extensions and GitHub.
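
As a rough illustration of the underlying mechanism only (the extension itself is GNOME Shell JavaScript, and this is not its code), a standalone script could watch UPower’s OnBattery property over D-Bus and ask power-profiles-daemon to switch. The profile choices below are placeholders; the real extension also considers battery level and running apps.

# Sketch of the idea: switch power profiles when the AC/battery state changes.
# Profile names passed to powerprofilesctl are placeholder choices.
import subprocess

from gi.repository import Gio, GLib

upower = Gio.DBusProxy.new_for_bus_sync(
    Gio.BusType.SYSTEM, Gio.DBusProxyFlags.NONE, None,
    "org.freedesktop.UPower", "/org/freedesktop/UPower",
    "org.freedesktop.UPower", None)

def apply_profile():
    on_battery = upower.get_cached_property("OnBattery")
    profile = "power-saver" if on_battery and on_battery.get_boolean() else "balanced"
    subprocess.run(["powerprofilesctl", "set", profile])

def on_properties_changed(proxy, changed, invalidated):
    if "OnBattery" in changed.unpack():
        apply_profile()

upower.connect("g-properties-changed", on_properties_changed)
apply_profile()
GLib.MainLoop().run()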

GNOME Foundation

Allan Day reports

A new GNOME Foundation update is available, covering what’s been happening over the past week. Highlights this week include GNOME.Asia preparations, internal transition arrangements, and a look behind the scenes at the more routine work that goes on each week at the Foundation.

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

Victor Ma

@victorma

Google Summer of Code final report

For Google Summer of Code 2025, I worked on GNOME Crosswords. GNOME Crosswords is a project that consists of two apps: the Crosswords game itself and the Crossword Editor.

Here are links to everything that I worked on.

Merge requests

Merge requests related to the word suggestion algorithm:

  1. Improve word suggestion algorithm
  2. Add word-list-tests-utils.c
  3. Refactor clue-matches-tests.c by using a fixture
  4. Use better test assert macros
  5. Add macro to reduce boilerplate code in clue-matches-tests.c
  6. Add a macro to simplify the test_clue_matches calls
  7. Add more tests to clue-matches-tests.c
  8. Use string parameter in macro function
  9. Add performance tests to clue-matches-tests.c
  10. Make phase 3 of word_list_find_intersection() optional
  11. Improve print functions for WordArray and WordSet

Other merge requests:

  1. Fix and refactor editor puzzle import
  2. Add MIME sniffing to downloader
  3. Add support for remaining divided cell types in svg.c
  4. Fix intersect sort
  5. Fix rebus intersection
  6. Use a single suggested words list for Editor

Design documents

Other documents

Development:

Word suggestion algorithm:

Competitive analysis:

Other:

Blog posts

  1. Introducing my GSoC 2025 project
  2. Coding begins
  3. A strange bug
  4. Bugs, bugs, and more bugs!
  5. My first design doc
  6. It’s alive!
  7. When is an optimization not optimal?
  8. This is a test post

Journal

I kept a daily journal of the things that I was working on.

Project summary

I improved GNOME Crossword Editor’s word suggestion algorithm by re-implementing it as a forward-checking algorithm. Previously, our word suggestion algorithm only considered the constraints imposed by the intersection where the cursor is. This resulted in frequent dead-end word suggestions, which led to user frustration.

To fix this problem, I re-implemented our word suggestion algorithm to consider the constraints imposed by every intersection in the current slot. This significantly reduces the number of dead-end word suggestions and leads to a better user experience.

As part of this project, I also researched the field of constraint satisfaction problems and wrote a report on how we can use the AC-3 algorithm to further improve our word suggestion algorithm in the future.

I also performed a competitive analysis of other crossword editors on the market and wrote a detailed report, to help identify missing features and guide future development.

Word suggestion algorithm improvements

The goal of any crossword editor software is to make it as easy as possible to create a good crossword puzzle. To that end, all crossword editors have a feature called a word suggestion list. This is a dynamic list of words that fit the current slot. It helps the user find words that fit the slots on their grid.

In order to generate the word suggestion list, crossword editors use a word suggestion algorithm. The simplest example of a word suggestion algorithm considers two constraints:

  • The size of the current slot.
  • The letters in the current slot.

So for example, if the current slot is C A _ S, then this basic word suggestion algorithm would return all four-letter words that start with CA and end in S—such as CATS or CABS, but not COTS.
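
A toy version of this basic filter, written in Python rather than the project’s C for brevity (and with a made-up word list), might look like this:

# Toy illustration of the basic algorithm: keep only words that match the
# current slot's length and fixed letters. Not the Crosswords code.
def basic_suggestions(pattern, word_list):
    """pattern uses '_' for empty cells, e.g. "CA_S"."""
    return [
        word for word in word_list
        if len(word) == len(pattern)
        and all(p == "_" or p == c for p, c in zip(pattern, word))
    ]

print(basic_suggestions("CA_S", ["CATS", "CABS", "COTS", "CARTS"]))
# -> ['CATS', 'CABS']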

The problem

There is a problem with this basic word suggestion algorithm, however. Consider the following grid:

+---+---+---+---+
| | | | Z |
+---+---+---+---+
| | | | E |
+---+---+---+---+
| | | | R |
+---+---+---+---+
| W | O | R | | < current slot
+---+---+---+---+

4-Down begins with ZER, so the only word it can be is ZERO. This constrains the bottom-right cell to the letter O.

4-Across starts with WOR. We know that the bottom-right cell must be O, so that means that 4-Across must be WORO. But WORO is not a word. So, 4-Down and 4-Across are both unfillable, because no letter fits in the bottom-right cell. This means that there are no valid word suggestions for either 4-Across or 4-Down.

Now, suppose that the current slot is 4-Across. The basic algorithm only considers the constraints imposed by the current slot, and so it returns all words that match the pattern W O R _—such as WORD and WORM. But none of these word suggestions actually fit in the slot—they all cause 4-Down to become some nonsensical word.

The problem is that the basic algorithm only looks at the current slot, 4-Across. It does not also look at other slots, like 4-Down. Because of that, the algorithm doesn’t realize that 4-Down causes 4-Across to be unfillable. And so, the algorithm generates incorrect word suggestions.

Our word suggestion algorithm

Our word suggestion algorithm was a bit more advanced than this basic algorithm. Our algorithm considered two constraints:

  • The constraints imposed by the current slot.
  • The constraints imposed by the intersecting slot where the cursor is.

This means that our algorithm could actually handle the problematic grid properly if the cursor is on the bottom-right cell. But not if the cursor is on any other cell of 4-Across:

Broken behaviour

Consequences

All this means that our word suggestion algorithm was prone to generating dead-end words—words that seem to fit a slot, but that actually lead to an unfillable grid.

In the problematic grid example I gave, this unfillability is immediately obvious. The user fills 4-Across with a word like WORM, and they instantly see that this turns 4-Down into ZERM, a nonsense word. That makes this grid not so bad.

The worst cases are the insidious ones, where the fact that a word suggestion leads to an unfillable grid is not obvious at first. This leads to a ton of wasted time and frustration for the user.

My solution

To fix this problem, I re-implemented our word suggestion algorithm to account for the constraints imposed by all the intersecting slots. Now, our word suggestion algorithm correctly handles the problematic grid example:

Fixed behaviour

Our new algorithm doesn’t eliminate dead-end words entirely. After all, it only checks the intersecting slots of the current slot—it does not also check the intersecting slots of the intersecting slots, etc.

However, the constraints imposed by a slot onto the current slot become weaker, the more intersections-removed it is. Consider: in order for a slot that’s two intersections away from the current slot to constrain the current slot, it must first constrain a mutual slot (a slot that intersects both of them) enough for that mutual slot to then constrain the current slot.

Compare that to a slot that is only one intersection away from the current slot. All it has to do is be constrained enough that it limits what letters the intersecting cell can be.

And so, although my changes do not eliminate dead-end words entirely, they do significantly reduce their prevalence, resulting in a much better user experience.
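
To make the idea concrete, here is a small Python sketch of forward checking over a single slot. It is a simplification of the approach (and not the actual C implementation in Crosswords): a candidate is kept only if, at every cell, the letter it places still leaves at least one word that fits the crossing slot.

# Simplified forward-checking sketch, not the Crosswords implementation.
def fits(pattern, word):
    """True if word matches the pattern, where '_' is an empty cell."""
    return len(word) == len(pattern) and all(
        p == "_" or p == c for p, c in zip(pattern, word))

def forward_checked_suggestions(slot_pattern, crossings, word_list):
    """crossings maps a cell index in the current slot to
    (crossing_slot_pattern, index_of_the_shared_cell_in_that_slot)."""
    suggestions = []
    for candidate in word_list:
        if not fits(slot_pattern, candidate):
            continue
        ok = True
        for i, (cross_pattern, j) in crossings.items():
            # Tentatively place the candidate's letter into the crossing slot...
            tentative = cross_pattern[:j] + candidate[i] + cross_pattern[j + 1:]
            # ...and require that at least one word still fits it.
            if not any(fits(tentative, w) for w in word_list):
                ok = False
                break
        if ok:
            suggestions.append(candidate)
    return suggestions

# The 4-Across example from above: its last cell crosses 4-Down ("ZER_").
words = ["WORD", "WORM", "ZERO"]
print(forward_checked_suggestions("WOR_", {3: ("ZER_", 3)}, words))
# -> []  (WORD/WORM would force 4-Down to ZERD/ZERM, which aren't words)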

The end

This concludes my Google Summer of Code 2025 project! I give my thanks to Jonathan Blandford for his invaluable mentorship and clear communication throughout the past six months. And I thank the GNOME Foundation for its participation in GSoC and commitment to open source.

Managing Diabetes in Software Freedom

[ The below is a cross-post of an article that I published on my blog at Software Freedom Conservancy. ]

Our member project representatives and others who collaborate with SFC on projects know that I've been on part-time medical leave this year. As I recently announced publicly on the Fediverse, I was diagnosed in March 2025 with early-stage Type 2 Diabetes. I had no idea that the diagnosis would become a software freedom and users' rights endeavor.

After the diagnosis, my doctor suggested immediately that I see the diabetes nurse-practitioner specialist in their practice. It took some time to get an appointment with him, so I first saw him in mid-April 2025.

I walked into the office, sat down, and within minutes the specialist asked me to “take out your phone and install the Freestyle Libre app from Abbott”. This is the first (but, will probably not be the only) time a medical practitioner asked me to install proprietary software as the first step of treatment.

The specialist told me that in his experience, even early-stage diabetics like me should use a Continuous Glucose Monitor (CGM). CGMs are an amazing and relatively recent invention that allows diabetics to sample their blood sugar level constantly. As we software developers and engineers know: great things happen when your diagnostic readout is as low latency as possible. CGMs lower the latency of readouts from 3–4 times a day to every five minutes. For example, diabetics can see what foods are most likely to cause blood sugar spikes for them personally. CGMs put patients on a path to manage this chronic condition well.

But the devices themselves, and the (default) apps that control them, are hopelessly proprietary. Fortunately, this was (obviously) not my first time explaining FOSS from first principles. So, I read through the license and terms and conditions of the ironically named “Freestyle Libre” app, and pointed out to the specialist how patient-unfriendly the terms were. For example, Abbott (the manufacturer of my CGM) reserves the right to collect your data (anonymously of course, to “improve the product”). They also require patients to agree that if they reverse engineer, modify, or otherwise do the normal things our community does with software, such actions “constitute immediate, irreparable harm to Abbott, its affiliates, and/or its licensors”. I briefly explained to the specialist that I could not possibly agree. I began, in real time (still sitting with the specialist), a search for a FOSS solution.

As I was searching, the specialist said: “Oh, I don't use any of it myself, but I think I've heard of this ‘open source’ thing — there is a program called xDrip+ that is for insulin-dependent diabetics that I've heard of and some patients report it is quite good”.

While I'm (luckily) very far from insulin-dependency, I eventually found the FOSS Android app called Juggluco (a portmanteau for “Juggle glucose”). I asked the specialist to give me the prescription and I'd try Juggluco to see if it would work.

CGMs are very small and their firmware is (by obvious necessity) quite simple. As such, their interfaces are standard. CGMs are activated with Near Field Communication (NFC) — available on even quite old Android devices. The Android device sends a simple integer identifier via NFC that activates the CGM. Once activated — and through the 15-day life of the device — the device responds via Bluetooth with the patient's current glucose reading to any device presenting that integer.

Fortunately, I quickly discovered that the FOSS community was already “on this”. The NFC activation worked just fine, even on the recently updated “Freestyle Libre 3+”. After the sixty minute calibration period, I had a continuous readout in Juggluco.

The CGM's lower-latency feedback enables diabetics to have more control over their illness management. One example among many: the patient can see (in real time) what foods most often cause blood sugar spikes for them personally. Diabetes hits everyone differently; data allows everyone to manage their own chronic condition better.

My personal story with Juggluco will continue — as I hope (although not until after FOSDEM 2026 😆) to become an upstream contributor to Juggluco. Most importantly, I hope to help the app appear in F-Droid. (I must currently side-load or use Aurora Store to make it work on LineageOS.)

Fitting with the history that many projects interacting with proprietary technology must so often live through, Juggluco has faced surreptitious removal from Google's Play Store. Abbott even accused Juggluco of using their proprietary libraries and encryption methods, but the so-called “encryption method” is literally sending a single integer as part of NFC activation.

While Abbott backed off, this is another example of why the movement of patients taking control of the technology remains essential. FOSS fits perfectly with this goal. Software freedom gives control of technology to those who actually rely on it — rather than for-profit medical equipment manufacturers.

When I returned to my specialist for a follow-up, we reviewed the data and graphs that I produced with Juggluco. I, of course, have never installed, used, or even agreed to Abbott's licenses and terms, so I have never seen what the Abbott app does. I was thus surprised when I showed my specialist Juggluco's summary graphs. He excitedly told me “this is much better reporting than the Abbott app gives you!”. We all know that sometimes proprietary software has better and more features than the FOSS equivalent, so it's a particularly great success when our community's efforts outdo a wealthy 200-billion-dollar megacorp on software features!


Please do watch SFC's site in 2026 for more posts about my ongoing work with Juggluco, and please give generously as an SFC Sustainer to help this and our other work continue in 2026!

Jordan Petridis

@alatiera

DHH and Omarchy: Midlife crisis

A couple of weeks ago, Cloudflare announced it would be sponsoring some Open Source projects. Throwing money at the pet projects of random techbros would hardly be news, but there was a certain vibe behind them and the people leading them.

In an unexpected turn of events, the millionaire receiving money from the billion-dollar company thought it would be important to devote a whole blog post to a random brokeboy from Athens who had an opinion on the Internet.

I was astonished to find the blog post. Now that I moved from normal stalkers to millionaire stalkers, is it a sign that I made it? Have I become such a menace? But more importantly: Who the hell even is this guy?

D-H-Who?

When I was painting with crayons in a deteriorating kindergarten somewhere in Greece, DHH, David Heinemeier Hansson, was busy dumping Ruby on Rails on the world and becoming a niche tech celebrity. His street cred for releasing Ruby on Rails would later be replaced by his writing on remote work, most famously “Remote: Office Not Required”, a book based on his own company, 37signals.

That cultural cachet would go out the window in 2022, when he got in hot water with his own employees after an internal review process concluded that 37signals had been less than stellar when it came to handling race and diversity. Said review process culminated in a clash: the employees were interested in further exploration of the topic, to which DHH responded “You are the person you are complaining about” (meaning: you, pointing out a problem, are the problem).

No politics at work

This incident led the two founders of 37signals to the executive decision to forbid any kind of “societal and political discussions” inside the company, which, predictably, led to a third of the company resigning in protest. This was a massive blow to 37signals. The company was famous for being extremely selective when hiring, as well as for affording employees great benefits. Suddenly having a third of the workforce resign over a disagreement with management sent a far more powerful message than anything they could have imagined.

It would become the starting point for the downward and radicalizing spiral, along with the extended and very public crashout, that DHH would be going through in the coming years.

Starting your own conference so you can never be banned from it

Subsequently, DHH was uninvited from keynoting at RailsConf, on account of everyone being grossed out by the handling of the matter and in solidarity with the community members and the employees who quit in protest.

That, in turn, would lead to the creation of the Rails Foundation and the launch of Rails World, a new conference about Rails that 100%-swear-to-god was not just about DHH having his own conference where he could keynote and never be banned.

In the following years DHH would go on to explore and express the full spectrum of “down the alt-right pipeline” opinions, like:

Omarchy

You either log off a hero, or you see yourself create another Linux distribution, and having failed the first part, DHH has been pouring his energy into creating a new project, while letting everyone know how much he prefers that to going to therapy. Thus Omarchy was born: a set of copy-pasted window manager and Vim configs turned distro. It is one of the two projects that Cloudflare will be proudly funding shortly. The only possible option for the compositor would be Hyprland, and even though it’s Wayland (bad!), it’s one of the good non-woke ones. In a similar tone, the project website would be featuring the tight integration of Omarchy with SuperGrok.

Rubygems

On a parallel track, the entire Ruby community more or less collapsed in the last two months. Long story short, one of the major Ruby Central sponsors, Sidekiq, pulled its funding after DHH was invited to speak at RailsConf 2025. Shopify, where DHH sits on the board of directors, was quick to save the day and match the lost funding. Coincidentally, an (alleged) takeover of key parts of the Ruby infrastructure was carried out by Ruby Central and placed under the control of Shopify in the following weeks.

This story is ridiculous, and the entire Ruby community is imploding following it. There’s an excellent write-up of the story so far here.

On a similar note, and at the same time, we also find DHH drooling over Off-brand Peter Thiel and calling for an Anduril takeover of the Nix community in order to purge all the wokes.

On Framework

At the same time, Framework had been promoting Omarchy on their social media accounts for a good while, and DHH in turn has been posting about how great Framework hardware is and how the Framework CEO is contributing to his Arch Linux reskin. On October 8th, Framework announced its sponsorship of the Hyprland project, following 37signals doing the same thing a couple of weeks earlier. On the same day they made another post promoting Omarchy yet again. This caused a huge backlash and an overall PR nightmare, with the apex being a forum thread with over 1700 comments so far.

The first reply in the forum post comes from Nirav, Framework’s CEO, with a very questionable choice of words:

We support open source software (and hardware), and partner with developers and maintainers across the ecosystem. We deliberately create a big tent, because we want open source software to win. We don’t partner based on individual’s or organization’s beliefs, values, or political stances outside of their alignment with us on increasing the adoption of open source software.

I definitely understand that not everyone will agree with taking a big tent approach, but we want to be transparent that bringing in and enabling every organization and community that we can across the Linux ecosystem is a deliberate choice.

Mentioning a “big tent” twice as the official policy and response to complaints about supporting Fascist and Racist shitheads is nothing short of digging a hole for yourself so deep that it reemerges on another continent.

Later on, Nirav would mention that they were finalizing sponsorship of the GNOME Foundation (12k/year) and KDE e.V. (10k/year). On the linked page you can also find a listing for Rails World (DHH’s personal conference) with a one-time payment of 24k dollars.

There has not been an update since, and at no point have they addressed their support of and collaboration with DHH. Can’t lose the cash cow and the free Twitter clout, I guess.

While I personally would like to see the donation rejected, I am not involved with the ongoing discussion on the GNOME Foundation side, nor with the Foundation itself. What I can say is that I and others from the GNOME OS team were involved in initial discussions with Framework about future collaborations and hardware support. GNOME OS, much like the GNOME Flatpak runtime, is very useful as a reference point in order to identify whether a bug, in hardware or software, is distro-specific or not.

It’s been a month since the initial debacle with Framework. Regardless of what the GNOME Foundation plans on doing, the GNOME OS team certainly does not feel comfortable with further collaboration, given how Framework has handled the situation so far. It’s sad, because the people working there understand the issue, but this does not seem to be a trait shared by the management.

A software midlife crisis

During all this, DHH decided that his attention must be devoted to getting into a mouth-off with a Greek kid who called him a Nazi. Since this is not violence (see the “Words are not violence” essay), he decided to respond in kind, by calling for violence against me (see the “Words are violence” essay).

To anyone who knows a nerd or two over the age of 35, all of the above is unsurprising. This is not some grand heel turn, or some brainwashing that DHH suffered. This is straight up a midlife crisis turned fash speedrun.

Here’s a dude who barely had any time to confront the world before falling into an infinite money glitch in the form of Ruby on Rails, Jeff Bezos throwing him crazy money, Apple bundling his software as a highlighted feature, and becoming a “new work” celebrity and Silicon Valley “guru”. Is it any surprise that such a person would later see the most minuscule kind of opposition as an all-out attack on his self-image?

DHH has never had the “best” opinions on a range of things, and they have been dutifully documented by others, but neither have many other developers who are also ignorant of topics outside of software. Being insecure about your hairline and masculine aesthetic to the point of adopting the Charles Manson haircut to cover your balding is one thing. However, it is entirely different to become a drop-shipped version of Elon, tweeting all day and stopping only to write opinion pieces that come off as proving others wrong rather than as original thoughts.

Case in point: DHH recently wrote about “men who’d prefer to feel useful over being listened to”. The piece is unironically titled “Building competency is better than therapy”. It is an insane read, and I’ll speculate that it feels as if someone whom DHH can’t outright dismiss suggested he go to therapy. It’s a very “I’ll show you off in front of my audience” kind of text.

Add to that a three year speedrun decrying the “theocracy of DEI” and the seemingly authoritarian powers of “the wokes”, all coincidentally starting after he could not get over his employees disagreeing with him on racial sensitivities.

How can someone suggest his workers read Ta-Nehisi Coates’s “Between the World and Me” and Michelle Alexander’s “The New Jim Crow” in the aftermath of George Floyd’s killing and the BLM protests, while a couple of years later writing salivating blog posts after the EDL eugenics rally in England and giving the highest possible praise to Tommy Robinson?

Can these people be redeemed?

It is certainly not going to help that niche celebrities, like DHH, still hold clout and financial power and are able to spout the worst possible takes without any backlash because of their position.

A bunch of Ruby developers recently started a petition to get DHH distanced from the community, and it didn’t go far before getting brigaded by the worst people you didn’t need to know existed. This of course was amplified to oblivion by DHH and a bunch of sycophants chasing the clout provided by being retweeted by DHH. It would shortly be followed by yet another “I’m never wrong” piece.

Is there any chance for these people, who are shielded by their well-paying jobs, their exclusively occupational media diet, and stimuli that all happen to reinforce their default world view?

I think there is hope, but it demands more voices in tech spaces speaking up about how having empathy for others, or valuing diversity, is not some grand conspiracy but rather an enrichment of our lives and spaces. This comes hand in hand with firmly shutting down concern trolling and ridiculous “extreme centrist” takes where someone is expected to find common ground with others advocating for their extermination.

One could argue that the true spirit of FLOSS, which attracted many of the current midlife-crisis developers in the first place, is about diversity and empathy for the varied circumstances and opinions that enriched our space.

Conclusion

I do not know if his heart is filled with hate or if he is incredibly lost, but it makes little difference since this is his output in the world.

David, when you read this I hope it will be a wake-up call. It’s not too late; you only need to go offline and let people help you. Stop the pathetic TemuElon speedrun and go take care of your kids. Drop the anti-woke culture wars and pick up a Ta-Nehisi Coates book again.

To everyone else: Push back against their vile and misanthropic rhetoric at every turn. Don’t let their poisonous roots fester into the ground. There is no place for their hate here. Don’t let them find comfort and spew their vomit in any public space.

Crush Fascism. Free Palestine ✊.

Flatpak Happenings

Yesterday I released Flatpak 1.17.0. It is the first version of the unstable 1.17 series and the first release in 6 months. There are a few things which didn’t make it into this release, which is why I’m planning to do another unstable release rather soon, and then a stable release later this year.

Back at LAS this year I talked about the Future of Flatpak and I started with the grim situation the project found itself in: Flatpak was stagnant, the maintainers left the project and PRs didn’t get reviewed.

Some good news: things are a bit better now. I have taken over maintenance, Alex Larsson and Owen Taylor managed to set aside enough time to make this happen, and Boudhayan Bhattacharya (bbhtt) and Adrian Vovk also got more involved. The backlog has been reduced considerably and new PRs get reviewed in a reasonable time frame.

I also listed a number of improvements that we had planned, and we made progress on most of them:

  • It is now possible to define which Flatpak apps shall be pre-installed on a system, and Flatpak will automatically install and uninstall things accordingly. Our friends at Aurora and Bluefin already use this to ship core apps from Flathub on their bootc based systems (shout-out to Jorge Castro).
  • The OCI support in Flatpak has been enhanced to support pre-installing from OCI images and remotes, which will be used in RHEL 10
  • We merged the backwards-compatible permission system. This allows apps to use new, more restrictive permissions, while not breaking compatibility when the app runs on older systems. Specifically, access to input devices such as gamepads, and access to the USB portal, can now be granted in this way. It will also help us transition to PipeWire.
  • We have up-to-date docs for libflatpak again

Besides the changes directly in Flatpak, there are a lot of other things happening around the wider ecosystem:

  • bbhtt released a new version of flatpak-builder
  • Enhanced License Compliance Tools for Flathub
  • Adrian and I have made plans for a service which allows querying running app instances (systemd-appd). This provides a new way of authenticating Flatpak instances and is a prerequisite for nested sandboxing, PipeWire support, and getting rid of the D-Bus proxy. My previous blog post went into a few more details.
  • Our friends at KDE have started looking into the XDG Intents spec, which will hopefully allow us to implement deep-linking, thumbnailing in Flatpak apps, and other interesting features
  • Adrian made progress on the session save/restore Portal
  • Some rather big refactoring work in the Portals frontend, and GDBus and libdex integration work which will reduce the complexity of asynchronous D-Bus

What I have also talked about at my LAS talk is the idea of a Flatpak-Next project. People got excited about this, but I feel like I have to make something very clear:

If we redid Flatpak now, it would not be significantly better than the current Flatpak! You could still not do nested sandboxing, you would still need a D-Bus proxy, you would still have a complex permission system, and so on.

Those problems require work outside of Flatpak, but have to integrate with Flatpak and Flatpak-Next in the future. Some of the things we will be doing include:

  • Work on the systemd-appd concept
  • Make varlink a feasible alternative to D-Bus
  • D-Bus filtering in the D-Bus daemons
  • Network sandboxing via pasta
  • PipeWire policy for sandboxes
  • New Portals

So if you’re excited about Flatpak-Next, help us to improve the Flatpak ecosystem and make Flatpak-Next more feasible!

Rosanna Yuen

@zana

Farewell to these, but not adieu…

– from Farewell to Malta
by Lord Byron

Friday was my last day at the GNOME Foundation. I was informed by the Board a couple weeks ago that my position has been eliminated due to budgetary shortfalls. Obviously, I am sad that the Board felt this decision was necessary. That being said, I wanted to write a little note to say goodbye and share some good memories.

It has been almost exactly twenty years since I started helping out at the GNOME Foundation. (My history with the GNOME Project is even older; I had code in GNOME 0.13, released in March 1998.) Our first Executive Director had just left, and my husband was Board Treasurer at the time. He inherited a large pile of paperwork and an unhappy IRS. I volunteered to help him figure out how to put the pieces together and get our paperwork in order to get the Foundation back in good standing. After several months of this, the Board offered to pay me to keep it organized.

Early on, I used to joke that my title should have been “General Dogsbody” as I often needed to help cover all the little things that needed doing. Over time, my responsibilities within the Foundation grew, but the sentiment remained. I was often responsible for making sure everything that needed doing was done, while putting in place many of the processes and procedures the Foundation uses to keep running.

People often underestimate how much hard work it is to keep an international non-profit like the GNOME Foundation going. There is a ton of minutiae to be dealt with, from ever-changing regulations and requirements to community needs. Even simple-sounding things like paying people become surprisingly hard the moment they cross borders. It requires dealing with different payment systems, bank rules, currencies, export regulations, and tax regimes. However, it is a necessary quagmire we have to navigate, as it is a crucial tool to further the Foundation’s mission.

Rosanna sitting behind a table at the GNOME booth. Many flyers on top of a blue tablecloth with the GNOME logo. To the left is a stand up banner with GNOME's mission
Working a GNOME booth

Over time, I have filled a multitude of different roles and positions (and had four different official titles doing so). I am proud of all the things I have done.

  • I have been the assistant to six different Executive Directors helping them onboard as they’ve started. I’ve been the bookkeeper, accounts receivable, and accounts payable — keeping our books in order, making sure people are paid, and tracking down funds. I’ve been Vice Treasurer helping put together our budgets, and created the financial slides for the Treasurer, Board, and AGM. I spent countless nights for almost a decade keeping our accounts updated in GnuCash. And every year for the past nineteen years I was responsible for making sure our taxes are done and 990 filed to keep our non-profit status secure.
    As someone who has always been deeply entrenched in GNOME’s finances, I have always been a responsible steward, looking for ways to spend money more prudently while enforcing budgets.
  • When the Foundation expanded after the Endless Grants, I had to help make the Foundation scale. I have done the jobs of Human Resources, Recruiter, Benefits coordinator, and managed the staff. I made sure the Board, Foundation, and staff are insured, and take their legally required training. I have also had to make sure people and contractors are paid, with all the legal formalities taken care of in all the different countries we operate in, so they only have to concern themselves with supporting GNOME’s mission.
  • I have had to be the travel coordinator buying tickets for people (and approving community travel). I have also done the jobs of Project Manager, Project Liaison to all our fiscally sponsored projects and subprojects, Shipping, and Receiving. I have been to countless conferences and tradeshows, giving talks and working booths. I have enjoyed meeting so many users and contributors at these events. I even spent many a weekend at the post-office filling out customs forms and shipping out mouse pads, mugs, and t-shirts to donors (back when we tried to do that in-house.) I tended the Foundation mailbox, logging all the checks we get from our donors and schlepping them to the bank.
  • I have served on five GNOME committees providing stability and continuity as volunteers came and went (Travel, Finance, Engagement, Executive, and Code of Conduct). I was on the team that created GNOME’s Code of Conduct, spending countless hours working with community members to help craft the final draft. I am particularly proud of this work, and I believe it has had a positive impact on our community.
  • Over the past year, I have also focused on providing what stability I could to the staff and Foundation, getting us through our second financial review, and started preparing for our first audit planned for next March.

This was all while doing my best to hold to GNOME’s principles, vision, and commitment to free software.

But it is the great people within this community that kept me loyally working with y’all year after year, and the appreciation of the amazing project y’all create that matters. I am grateful to the many community members who volunteer their time so selflessly through the years. Old-timers like Sri and Federico that have been on this journey with me since the very beginning. Other folks that I met through the years like Matthias, Christian, Meg, PTomato, and German. And Marina, who we all still miss. So many newcomers that add enthusiasm into the community like Deepesha, Michael, and Aaditya. So many Board members. There have been so many more names I could mention that I apologize if your name isn’t listed. Please know that I am grateful for what everyone has brought into the community. I have truly been blessed to know you all.

I am also grateful for the folks on staff that have made GNOME such a wonderful place to work through the years. Our former Executive Directors Stormy, Karen, Neil, Holly, and Richard, all of whom have taught me so much. Other staff members that have come and gone through the years, such as Andrea (who is still volunteering), Molly, Caroline, Emmanuele, and Melissa. And, of course, the current staff of Anisa, Bart, and Kristi, in whose hands I know the Foundation will keep thriving.

As I said, my job has always been to make sure things go as smoothly as possible. In my mind, what I do should quiet any waves so that the waves the Foundation makes go into providing the best programming we can — which is why a moment from GUADEC 2015 still pops up in my head.

Picture this: we are all in Gothenburg, Sweden, in line registering for GUADEC. We start chatting in line as it was long. I introduce myself to the person behind me and he sputters, “Oh! You’re important!” That threw me for a loop. I had never seen myself that way. My intention has always been to make things work seamlessly for our community members behind the scenes, but it was always extremely gratifying to hear from folks who have been touched by my efforts.

Dining room table covered in GNOME folders, letters, booth materials, and t-shirts, with a large suitcase in front filled with more things for the GNOME booths.
GNOME things still to be transferred to the Board. Suitcase in front is full of items for staffing a GNOME Booth.

What’s next for me? I have not had the time to figure this out yet as I have been spending my time transferring what I can to the Board. First things first; I need to figure out how to write a resumé again. I would love to continue working in the nonprofit space, and obviously have a love of free software. But I am open to exploring new ideas. If anyone has any thoughts or opportunities, I would love to hear them!

This is not adieu; my heart will always be with GNOME. I still have my seat on the Code of Conduct committee and, while I plan on taking a month or so away to figure things out, do plan on returning to do my bit in keeping GNOME a safe place.

If you’d like to drop me a line, I’d love to hear from you. Unfortunately the Board has to keep my current GNOME email address for a few months for the transfer, but I can be reached at <rosanna at gnome> for my personal mail. (Thanks, Bart!)

Best of luck to the Foundation.

Andy Wingo

@wingo

wastrel, a profligate implementation of webassembly

Hey hey hey good evening! Tonight a quick note on wastrel, a new WebAssembly implementation.

a wasm-to-native compiler that goes through c

Wastrel compiles Wasm modules to standalone binaries. It does so by emitting C and then compiling that C.

Compiling Wasm to C isn’t new: Ben Smith wrote wasm2c back in the day and these days most people in this space use Bastien Müller‘s w2c2. These are great projects!

Wastrel has two or three minor differences from these projects. Let’s lead with the most important one, despite the fact that it’s as yet vaporware: Wastrel aims to support automatic memory management via WasmGC, by embedding the Whippet garbage collection library. (For the wingolog faithful, you can think of Wastrel as a Whiffle for Wasm.) This is the whole point! But let’s come back to it.

The other differences are minor. Firstly, the CLI is more like wasmtime: instead of privileging the production of C, which you then incorporate into your project, Wastrel also compiles the C (by default), and even runs it, like wasmtime run.

Unlike wasm2c (but like w2c2), Wastrel implements WASI. Specifically, WASI 0.1, sometimes known as “WASI preview 1”. It’s nice to be able to take the wasi-sdk‘s C compiler, compile your program to a binary that uses WASI imports, and then run it directly.
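Concretely, that workflow looks something like this, assuming WASI_SDK_PATH points at an installed wasi-sdk and hello.c is any ordinary C program; the second line assumes Wastrel is invoked on a module roughly the way wasmtime run is, so check its own help for the exact spelling.

$ $WASI_SDK_PATH/bin/clang --target=wasm32-wasi hello.c -o hello.wasm
$ wastrel hello.wasm   # assumed invocation, in the spirit of `wasmtime run`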

In a past life, I once took a week-long sailing course on a 12-meter yacht. One thing that comes back to me often is the way the instructor would insist on taking in the bumpers immediately as we left port, that to sail with them was no muy marinero, not very seamanlike. Well one thing about Wastrel is that it emits nice C: nice in the sense that it avoids many useless temporaries. It does so with a lightweight effects analysis, in which as temporaries are produced, they record which bits of the world they depend on, in a coarse way: one bit for the contents of all global state (memories, tables, globals), and one bit for each local. When compiling an operation that writes to state, we flush all temporaries that read from that state (but only that state). It’s a small thing, and I am sure it has very little or zero impact after SROA turns locals into SSA values, but we are vessels of the divine, and it is important for vessels to be C worthy.

Finally, w2c2 at least is built in such a way that you can instantiate a module multiple times. Wastrel doesn’t do that: the Wasm instance is statically allocated, once. It’s a restriction, but that’s the use case I’m going for.

on performance

Oh buddy, who knows?!? What is real anyway? I would love to have proper perf tests, but in the meantime, I compiled coremark using my GCC on x86-64 (-O2, no other options), then also compiled it with the current wasi-sdk and then ran with w2c2, wastrel, and wasmtime. I am well aware of the many pitfalls of benchmarking, and so I should not say anything because it is irresponsible to make conclusions from useless microbenchmarks. However, we’re all friends here, and I am a dude with hubris who also believes blogs are better out than in, and so I will give some small indications. Please obtain your own salt.

So on coremark, Wastrel is some 2-5% slower than native, and w2c2 is some 2-5% slower than that. Wasmtime is 30-40% slower than GCC. Voilà.

My conclusion is, Wastrel provides state-of-the-art performance. Like w2c2. It’s no wonder, these are simple translators that use industrial compilers underneath. But it’s neat to see that performance is close to native.

on wasi

OK this is going to sound incredibly arrogant but here it is: writing Wastrel was easy. I have worked on Wasm for a while, and on Firefox’s baseline compiler, and Wastrel is kinda like a baseline compiler in shape: it just has to avoid emitting boneheaded code, and can leave the serious work to someone else (Ion in the case of Firefox, GCC in the case of Wastrel). I just had to use the Wasm libraries I already had and make it emit some C for each instruction. It took 2 days.

WASI, though, took two and a half weeks of agony. Three reasons: One, you can be sloppy when implementing just wasm, but when you do WASI you have to implement an ABI using sticks and glue, but you have no glue, it’s all just i32. Truly excruciating, it makes you doubt everything, and I had to refactor Wastrel to use C’s meager type system to the max. (Basically, structs-as-values to avoid type confusion, but via inline functions to avoid overhead.)

Two, WASI is not huge but not tiny either. Implementing poll_oneoff is annoying. And so on. Wastrel’s WASI implementation is thin but it’s still a couple thousand lines of code.

Three, WASI is underspecified, and in practice what is “conforming” is a function of what the Rust and C toolchains produce. I used wasi-testsuite to burn down most of the issues, but it was a slog. I neglected email and important things but now things pass so it was worth it maybe? Maybe?

on wasi’s filesystem sandboxing

WASI preview 1 has this “rights” interface that associates capabilities with file descriptors. I think it was an attempt at replacing and expanding file permissions with a capabilities-oriented security approach to sandboxing, but it was only a veneer. In practice most WASI implementations effectively implement the sandbox via a permissions layer: for example the process has capabilities to access the parents of preopened directories via .., but the WASI implementation has to actively prevent this capability from leaking to the compiled module via run-time checks.

Wastrel takes a different approach, which is to use Linux’s filesystem namespaces to build a tree in which only the exposed files are accessible. No run-time checks are necessary; the system is secure by construction. He says. It’s very hard to be categorical in this domain but a true capabilities-based approach is the only way I can have any confidence in the results, and that’s what I did.
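To give a flavor of the general technique (to be clear, this is bubblewrap, not Wastrel’s actual code), here is roughly what a filesystem-namespace sandbox looks like, assuming ./exposed is the only directory the program should see; adjust the /lib symlinks to your distro’s layout:

# only a read-only /usr and ./exposed (mounted at /data) exist in the new mount namespace;
# there is no path from /data back to the rest of the host filesystem
$ bwrap \
    --unshare-all \
    --die-with-parent \
    --ro-bind /usr /usr \
    --symlink usr/lib /lib \
    --symlink usr/lib64 /lib64 \
    --proc /proc \
    --dev /dev \
    --bind ./exposed /data \
    /usr/bin/ls /data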

The upshot is that Wastrel is only for Linux. And honestly, if you are on MacOS or Windows, what are you doing with your life? I get that it’s important to meet users where they are but it’s just gross to build on a corporate-controlled platform.

The current versions of WASI keep a vestigial capabilities-based API, but given that the goal is to compile POSIX programs, I would prefer if wasi-filesystem leaned into the approach of WASI just having access to a filesystem instead of a small set of descriptors plus scoped openat, linkat, and so on APIs. The security properties would be the same, except with fewer bug possibilities and with a more conventional interface.

on wtf

So Wastrel is Wasm to native via C, but with an as-yet-unbuilt GC aim. Why?

This is hard to explain and I am still workshopping it.

Firstly I am annoyed at the WASI working group’s focus on shared-nothing architectures as a principle of composition. Yes, it works, but garbage collection also works; we could be building different, simpler systems if we leaned in to a more capable virtual machine. Many of the problems that WASI is currently addressing are ownership-related, and would be comprehensively avoided with automatic memory management. Nobody is really pushing for GC in this space and I would like for people to be able to build out counterfactuals to the shared-nothing orthodoxy.

Secondly there are quite a number of languages that are targeting WasmGC these days, and it would be nice for them to have a good run-time outside the browser. I know that Wasmtime is working on GC, but it needs competition :)

Finally, and selfishly, I have a GC library! I would love to spend more time on it. One way that can happen is for it to prove itself useful, and maybe a Wasm implementation is a way to do that. Could Wastrel on wasm_of_ocaml output beat ocamlopt? I don’t know but it would be worth it to find out! And I would love to get Guile programs compiled to native, and perhaps with Hoot and Whippet and Wastrel that is a possibility.

Welp, there we go, blog out, dude to bed. Hack at y’all later and wonderful wasming to you all!

From VS Code to Helix

I created the website you're reading with VS Code. Behind the scenes I use Astro, a static site generator that gets out of the way while providing nice conveniences.

Using VS Code was a no-brainer: everyone in the industry seems to at least be familiar with it, every project can be opened with it, and most projects can get enhancements and syntactic helpers in a few clicks. In short: VS Code is free, easy to use, and widely adopted.

A Rustacean colleague kept singing Helix's praises. I discarded it because he's much smarter than I am, and I only ever use vim when I need to fiddle with files on a server. I like when things "Just Work" and didn't want to bother learning how to use Helix nor how to configure it.

Today it has become my daily driver. Why did I change my mind? What was preventing me from using it before? And how difficult was it to get there?

Automation is a double-edged sword

Automation and technology make work easier; this is why we produce technology in the first place. But it also means you grow more dependent on the tech you use. If the tech is produced transparently by an international team or a team you trust, it's fine. But if it's produced by a single large entity that can screw you over, it's dangerous.

VS Code might be open source, but in practice it's produced by Microsoft. Microsoft has a problematic relationship to consent and is shoving AI products down everyone's throat. I'd rather use tools that respect me and my decisions, and I'd rather not get my tools produced by already monopolistic organizations.

Microsoft is also based in the USA, and the political climate over there makes me want to depend as little as possible on American tools. I know that's a long, uphill battle, but we have to start somewhere.

I'm not advocating for a ban against American tech in general, but for more balance in our supply chain. I'm also not advocating for European tech either: I'd rather get open source tools from international teams competing in a race to the top, rather than from teams in a single jurisdiction. What is happening in the USA could happen in Europe too.

Why I feared using Helix

I've never found vim particularly pleasant to use but it's everywhere, so I figured I might just get used to it. But one of the things I never liked about vim is the number of moving pieces. By default, vim and neovim are very bare bones. They can be extended and completely modified with plugins, but I really don't like the idea of having extremely customized tools.

I'd rather have the same editor as everyone else, with a few knobs for minor preferences. I am subject to choice paralysis, so making me configure an editor before I've even started editing is the best way to tank my productivity.

When my colleague told me about Helix, two things struck me as improvements over vim.

  1. Helix's philosophy is that everything should work out of the box. There are a few configs and themes, but everything should work similarly from one Helix to another. All the language-specific logic is handled in Language Servers that implement the Language Server Protocol standard.
  2. In Helix, first you select text, and then you perform operations onto it. So you can visually tell what is going to be changed before you apply the change. It fits my mental model much better.

But there are major drawbacks to Helix too:

  1. After decades of vim, I was scared to re-learn everything. In practice this wasn't a problem at all because of the very visual way Helix works.
  2. VS Code "Just Works", and Helix sounded like more work than the few clicks from VS Code's extension store. This is true, but not as bad as I had anticipated.

After a single week of usage, Helix was already very comfortable to navigate. After a few weeks, most of the wrinkles have been ironed out and I use it as my primary editor. So how did I overcome those fears?

What Helped

Just Do It

I tried Helix. It can sound silly, but the very first step to get into Helix was not to overthink it. I just installed it on my mac with brew install helix and gave it a go. I was not too familiar with it, so I looked up the official documentation and noticed there was a tutorial.

This tutorial alone is what convinced me to try harder. It's an interactive and well-written way to learn how to move and perform basic operations in Helix. I quickly learned how to move around, select things, and surround them with braces or parentheses. I could see what I was about to do before doing it. This was an epiphany. Helix just worked the way I wanted.

Better: I could get things done faster than in VS Code after a few minutes of learning. Being a lazy person, I never bothered looking up VS Code shortcuts. Because the learning curve for Helix is slightly steeper, you have to learn those shortcuts that make moving around feel so easy.

Not only did I quickly get used to Helix key bindings: my vim muscle-memory didn't get in the way at all!

Better docs

The built-in tutorial is a very pragmatic way to get started. You get results fast, you learn hands-on, and it's not that long. But if you want to go further, you have to look for docs. Helix has official docs. They seem to be fairly complete, but they're also impenetrable as a new user. They focus on what the editor supports and not on what I will want to do with it.

After a bit of browsing online, I've stumbled upon this third-party documentation website. The domain didn't inspire me a lot of confidence, but the docs are really good. They are clearly laid out, use-case oriented, and they make the most of Astro Starlight to provide a great reading experience. The author tried to upstream these docs, but that won't happen as such; instead, it looks like they are upstreaming their content to the current website. I hope this will improve the quality of the upstream docs eventually.

After learning the basics and finding my way through the docs, it was time to ensure Helix was set up to help me where I needed it most.

Getting the most out of Markdown and Astro in Helix

In my free time, I mostly use my editor for three things:

  1. Write notes in markdown
  2. Tweak my website with Astro
  3. Edit yaml to faff around my Kubernetes cluster

Helix is a "stupid" text editor. It doesn't know much about what you're typing. But it supports Language Servers that implement the Language Server Protocol. Language Servers understand the document you're editing. They explain to Helix what you're editing, whether you're in a TypeScript function, typing a markdown link, etc. With that information, Helix and the Language Server can provide code completion hints, errors & warnings, and easier navigation in your code.

In addition to Language Servers, Helix also supports plugging in code formatters. Those are pieces of software that will read the document and ensure that it is consistently formatted: they will check that all indentation uses spaces and not tabs, that there is a consistent number of spaces when indenting, that brackets are on the same line as the function, etc. In short: they will make the code pretty.

Markdown

Markdown is not really a programming language, so it might seem surprising to configure a Language Server for it. But if you remember what we said earlier, Language Servers can provide code completion, which is useful when creating links for example. Marksman does exactly that!

Since Helix is pre-configured to use marksman for markdown files, we only need to install marksman and make sure it's in our PATH. Installing it with homebrew is enough.

$ brew install marksman

We can check that Helix is happy with it with the following command

$ hx --health markdown
Configured language servers:
  ✓ marksman: /opt/homebrew/bin/marksman
Configured debug adapter: None
Configured formatter: None
Tree-sitter parser: ✓
Highlight queries: ✓
Textobject queries: ✘
Indent queries: ✘

But Language Servers can also help Helix display errors and warnings, and "code suggestions" to help fix the issues. It means Language Servers are a perfect fit for... grammar checkers! Several grammar checkers exist. The most notable are:

  • LTEX+, the Language Server used by Language Tool. It supports several languages but is quite resource hungry.
  • Harper, a grammar checker Language Server developed by Automattic, the people behind WordPress, Tumblr, WooCommerce, Beeper and more. Harper only supports English and its variants, but they intend to support more languages in the future.

I mostly write in English and want to keep a minimalistic setup. Automattic is well funded, and I'm confident they will keep working on Harper to improve it. Since grammar checker LSPs can easily be changed, I've decided to go with Harper for now.

To install it, homebrew does the job as always:

$ brew install harper

Then I edited my ~/.config/helix/languages.toml to add Harper as a secondary Language Server in addition to marksman

[language-server.harper-ls]
command = "harper-ls"
args = ["--stdio"]


[[language]]
name = "markdown"
language-servers = ["marksman", "harper-ls"]

Finally I can add a markdown linter to ensure my markdown is formatted properly. Several options exist, and markdownlint is one of the most popular. My colleagues recommended the new kid on the block, a Blazing Fast equivalent: rumdl.

Installing rumdl was pretty simple on my mac. I only had to add the repository of the maintainer, and install rumdl from it.

$ brew tap rvben/rumdl
$ brew install rumdl

After that I added a new language-server to my ~/.config/helix/languages.toml and added it to the language servers to use for the markdown language.

[language-server.rumdl]
command = "rumdl"
args = ["server"]

[...]


[[language]]
name = "markdown"
language-servers = ["marksman", "harper-ls", "rumdl"]
soft-wrap.enable = true
text-width = 80
soft-wrap.wrap-at-text-width = true

Since my website already contained a .markdownlint.yaml I could import it to the rumdl format with

$ rumdl import .markdownlint.yaml
Converted markdownlint config from '.markdownlint.yaml' to '.rumdl.toml'
You can now use: rumdl check --config .rumdl.toml .

You might have noticed that I've added a little quality of life improvement: soft-wrap at 80 characters.

Now if you add this to your own config.toml you will notice that the text is completely left aligned. This is not a problem on small screens, but it rapidly gets annoying on wider screens.

Helix doesn't support centering the editor. There is a PR tackling the problem but it has been stale for most of the year. The maintainers are overwhelmed by the number of PRs making it their way, and it's not clear if or when this PR will be merged.

In the meantime, a workaround exists, with a few caveats. It is possible to add spaces to the left gutter (the column with the line numbers) so it pushes the content towards the center of the screen.

To figure out how many spaces are needed, you need to get your terminal width with stty

$ stty size
82 243

In my case, when in full screen, my terminal is 243 characters wide. I need to subtract the content column width from it, and divide the result by 2 to get the space needed on each side. For a 243-character-wide terminal with a text width of 80 characters:

(243 - 80) / 2 = 81

As is, I would add roughly 81 spaces to my left gutter to push the rest of the gutter and the content to the right. But the gutter itself already takes up a few characters (the line numbers and their padding), which I need to subtract from that total. In my setup that leaves me with 76 characters to add.
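If you'd rather not redo that arithmetic by hand, a quick shell sketch (assuming the same 80 character text width) prints the raw left padding before the gutter correction:

$ echo $(( ($(stty size | cut -d' ' -f2) - 80) / 2 ))
81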

I can open my ~/.config/helix/config.toml to add a new key binding that will automatically add or remove those spaces from the left gutter when needed, to shift the content towards the center.

[keys.normal.space.t]
z = ":toggle gutters.line-numbers.min-width 76 3"

Now when in normal mode, pressing Space, then t, then z will add/remove the spaces. Of course this workaround only works when the terminal runs in full screen mode.

Astro

Astro works like a charm in VS Code. The team behind it provides a Language Server and a TypeScript plugin to enable code completion and syntax highlighting.

I only had to install those globally with

$ pnpm install -g @astrojs/language-server typescript @astrojs/ts-plugin

Now we need to add a few lines to our ~/.config/helix/languages.toml to tell it how to use the language server

[language-server.astro-ls]
command = "astro-ls"
args = ["--stdio"]
config = { typescript = { tsdk = "/Users/thibaultmartin/Library/pnpm/global/5/node_modules/typescript/lib" }}

[[language]]
name = "astro"
scope = "source.astro"
injection-regex = "astro"
file-types = ["astro"]
language-servers = ["astro-ls"]

We can check that the Astro Language Server can be used by helix with

$ hx --health astro
Configured language servers:
  ✓ astro-ls: /Users/thibaultmartin/Library/pnpm/astro-ls
Configured debug adapter: None
Configured formatter: None
Tree-sitter parser: ✓
Highlight queries: ✓
Textobject queries: ✘
Indent queries: ✘

I also like to get a formatter to automatically make my code consistent and pretty for me when I save a file. One of the most popular code formatters out there is Prettier. I've decided to go with the fast and easy formatter dprint instead.

I installed it with

$ brew install dprint

Then in the projects I want to use dprint in, I do

$ dprint init

I might edit the dprint.json file to my liking. Finally, I configure Helix to use dprint globally for all Astro projects by appending a few lines in my ~/.config/helix/languages.toml.

[[language]]
name = "astro"
scope = "source.astro"
injection-regex = "astro"
file-types = ["astro"]
language-servers = ["astro-ls"]
formatter = { command = "dprint", args = ["fmt", "--stdin", "astro"]}
auto-format = true

One final check, and I can see that Helix is ready to use the formatter as well

$ hx --health astro
Configured language servers:
  ✓ astro-ls: /Users/thibaultmartin/Library/pnpm/astro-ls
Configured debug adapter: None
Configured formatter:
  ✓ /opt/homebrew/bin/dprint
Tree-sitter parser: ✓
Highlight queries: ✓
Textobject queries: ✘
Indent queries: ✘

YAML

For yaml, it's simple and straightforward: Helix is preconfigured to use yaml-language-server as soon as it's in the PATH. I just need to install it with

$ brew install yaml-language-server
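As with markdown and Astro, we can check that Helix picked it up; yaml-language-server should be listed under the configured language servers:

$ hx --health yaml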

Is it worth it?

Helix really grew on me. I find it particularly easy and fast to edit code with it. It takes a tiny bit more work to get the language support than it does in VS Code, but it's nothing insurmountable. There is a slightly steeper learning curve than for VS Code, but I consider it to be a good thing. It forced me to learn how to move around and edit efficiently, because there is no way to do it inefficiently. Helix remains intuitive once you've learned the basics.

I am a GNOME enthusiast, and I adhere to the same principles: I like when my apps work out of the box, and when I have little to do to configure them. This is a strong stance that often attracts vocal opposition. I like products that follow those principles better than those that don't.

With that said, Helix sometimes feels like it is maintained by one or two people who have a strong vision, but who struggle to onboard more maintainers. As of writing, Helix has more than 350 PRs open. Quite a few bring interesting features, but the maintainers don't have enough time to review them.

Those 350 PRs mean there is a lot of energy and goodwill around the project. People are willing to contribute. Right now, all that energy is gated, resulting in frustration both from the contributors, who feel like they're working in the void, and the maintainers, who feel like they're at the receiving end of a fire hose.

A solution to make everyone happier without sacrificing the quality of the project would be to work on a Contributor Ladder. CHAOSS' Dr Dawn Foster published a blog post about it, listing interesting resources at the end.

Jakub Steiner

@jimmac

USB MIDI Controllers on the M8

The M8 has extensive USB audio and MIDI capabilities, but it cannot act as a USB MIDI host. So you can control other devices through USB MIDI, but you cannot send to it over USB.

Control Surface & Pots for M8

Controlling the M8 from USB devices has to be done through the old TRS (A) MIDI jacks. There are two devices that can aid in that. I’ve used the RK06, which is very featureful, but comes in a very clumsy plastic case with a USB micro cable that has a splitter for the HOST part and USB power in. It also sometimes doesn’t reset properly when multiple USB devices are attached through a hub. The last bit is why I even bother with this setup.

The Dirtywave M8 has amazing support for the Novation Launchpad Pro MK3. The majority of people hook it up directly to the M8 using the TRS MIDI cables. The Launchpad lacks any sort of pots or encoders though, thus the need to fuss with USB dongles. What you need is to use the Launchpad Pro as a USB controller and shun the reliable MIDI connection. The RK06 allows combining multiple USB devices attached through an unpowered USB hub. Because I am flabbergasted by how I did things, here’s a schema that works.

Retrokits RK06 schema

If it doesn’t work, unplug the RK06 and turn LPPro off and on in the M8. I hate this setup but it is the only compact one that works (after some fiddling that you absolutely hate when doing a gig).

Launchpad Pro and Intech PO16 via USB handled by RK06

Intech Knot

The Hungarians behind the Grid USB controllers (with first-class Linux support) have a USB-to-MIDI device called Knot. It has one great feature: a switch between TRS A/B for the non-standard devices.

Clean setup with Knot&Grid

It is way less fiddly than the RK06, uses a nice aluminium housing and is sturdier. However, it doesn’t seem to work with the Launchpad Pro via USB and it seems to be completely confused by a USB hub, so it’s not useful for my use case of multiple USB controllers.

Non-compact but Reliable

Novation came out with the Launch Control XL, which sadly replaced the pots of the old one with encoders (absolute vs relative movement), but added MIDI in/out/thru, with a MIDI mixer even. That way you can avoid USB altogether and get a reliable setup with control surfaces, encoders, and sliders.

One day someone will come up with compact MIDI-capable pots to play along with the Launchpad Pro ;) This post has been brought to you by an old man who forgets things.

Colin Walters

@walters

Thoughts on agentic AI coding as of Oct 2025

Sandboxed, reviewed parallel agents make sense

For coding and software engineering, I’ve used and experimented with various frontends (FOSS and proprietary) to multiple foundation models (mostly proprietary) trying to keep up with the state of the art. I’ve come to strongly believe in a few things:

  • Agentic AI for coding needs strongly sandboxed, reproducible environments
  • It makes sense to run multiple agents at once
  • AI output definitely needs human review

Why human review is necessary

Prompt injection is a serious risk at scale

All AI is at risk of prompt injection to some degree, but it's particularly dangerous with agentic coding. All the state of the art today can do is mitigate it, at best. I don't think it's a reason to avoid AI, but it's one of the top reasons to use AI thoughtfully and carefully for products that have any level of criticality.

OpenAI’s Codex documentation has a simple and good example of this.

Disabling the tests and claiming success

Beyond that, I’ve experienced multiple times different models happily disabling the tests or adding a println!("TODO add testing here") and claim success. At least this one is easier to mitigate with a second agent doing code review before it gets to human review.

Sandboxing

The “can I do X” prompting model that various interfaces default to is seriously flawed. Anthropic has a recent blog post on Claude Code changes in this area.

My take here is that sandboxing is only part of the problem; the other part is ensuring the agent has a reproducible environment, and especially one that can be run in IaaS environments. I think devcontainers are a good fit.

I don’t agree with the statement from Anthropic’s blog

without the overhead of spinning up and managing a container.

I don’t think this is overhead for most projects because Where it feels like it has overhead, we should be working to mitigate it.

Running code as separate login users

In fact, one thing I think we should popularize more on Linux is the concept of running multiple unprivileged login users. Personally, the tasks I work on often involve building containers or launching local VMs, and isolating that works really well with a fully separate “user” identity. An experiment I did was basically useradd ai and running delegated tasks there instead. To log in I added %wheel ALL=NOPASSWD: /usr/bin/machinectl shell ai@ to /etc/sudoers.d/ai-login so that my regular human user could easily get a shell in the ai user's context.
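For the curious, a minimal sketch of that setup, assuming your human user is in the wheel group (the ai user name and the sudoers file name are just the ones from my experiment):

# create the dedicated unprivileged user
$ sudo useradd --create-home ai
# let wheel members get a shell in that user's context without a password
$ echo '%wheel ALL=NOPASSWD: /usr/bin/machinectl shell ai@' | sudo tee /etc/sudoers.d/ai-login
# then, to delegate work, jump into the ai user's session
$ sudo machinectl shell ai@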

I haven’t truly “operationalized” this one as juggling separate git repository clones was a bit painful, but I think I could automate it more. I’m interested in hearing from folks who are doing something similar.

Parallel, IaaS-ready agents…with review

I’m today often running 2-3 agents in parallel on different tasks (with different levels of success, but that’s its own story).

It makes total sense to support delegating some of these agents to work off my local system and into cloud infrastructure.

In looking around in this space, there’s quite a lot of stuff. One of them is Ona (formerly Gitpod). I gave it a quick try and I like where they’re going, but more on this below.

Github Copilot can also do something similar to this, but what I don’t like about it is that it pushes a model where all of one’s interaction is in the PR. That’s going to be seriously noisy for some repositories, and interaction with LLMs can feel too “personal” sometimes to have permanently recorded.

Credentials should be on demand and fine grained for tasks

To me a huge flaw with Ona and one shared with other things like Langchain Open-SWE is basically this:

Sorry but: no way I’m clicking OK on that button. I need a strong and clearly delineated barrier between tooling/AI agents acting “as me” and my ability to approve and push code or even do basic things like edit existing pull requests.

Github’s Copilot gets this more right because its bot runs as a distinct identity. I haven’t dug into what it’s authorized to do. I may play with it more, but I also want to use agents outside of Github and I also am not a fan of deepening dependence on a single proprietary forge either.

So I think a key thing agent frontends should help do here is in granting fine-grained ephemeral credentials for dedicated write access as an agent is working on a task. This “credential handling” should be a clearly distinct component. (This goes beyond just git forges of course but also other issue trackers or data sources that may be in context).

Conclusion

There’s so much out there on this, I can barely keep track while trying to do my real job. I’m sure I’m not alone – but I’m interested in other’s thoughts on this!

Slow Fedora VMs

Good morning!

I spent some time figuring out why my build PC was running so slowly today. Thanks to some help from my very smart colleagues I came up with this testcase in Nushell to measure CPU performance:

~: dd if=/dev/random of=./test.in bs=(1024 * 1024) count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.0111184 s, 943 MB/s
~: time bzip2 test.in
0.55user 0.00system 0:00.55elapsed 99%CPU (0avgtext+0avgdata 8044maxresident)k
112inputs+20576outputs (0major+1706minor)pagefaults 0swap

We are copying 10MB of random data into a file and compressing it with bzip2. 0.55 seconds is a pretty good time to compress 10MB of data with bzip2.

But! As soon as I ran a virtual machine, this same test started to take 4 or 5 seconds, both on the host and in the virtual machine.

There is already a new Fedora kernel available, and with that version (6.17.4-200.fc42.x86_64) I don't see any problems. I guess it was some issue affecting AMD Ryzen virtualization that has already been fixed.

Have a fun day!

edit: The problem came back with the new kernel as well. I guess this is not going to be a fun day.

Cassidy James Blaede

@cassidyjames

I’ve Joined ROOST

A couple of months ago I shared that I was looking for what was next for me, and I’m thrilled to report that I’ve found it: I’m joining ROOST as OSS Community Manager!

Baby chick being held in a hand

What is ROOST?

I’ll let our website do most of the talking, but I can add some context based on my conversations with the rest of the incredible ROOSTers over the past few weeks. In a nutshell, ROOST is a relatively new nonprofit focused on building, distributing, and maintaining the open source building blocks for online trust and safety. It was founded by tech industry veterans who saw the need for truly open source tools in the space, and were sick of rebuilding the exact same internal tools across multiple orgs and teams.

The way I like to frame it is how you wouldn’t roll your own encryption; why would you roll your own trust and safety tooling? It turns out that currently every platform, service, and community has to reinvent all of the hard work to ensure people are safe and harmful content doesn’t spread. ROOST is teaming up with industry partners to release existing trust and safety tooling as open source and to build the missing pieces together, in the open. The result is that teams will no longer have to start from scratch and take on all of the effort (and risk!) of rolling their own trust and safety tools; instead, they can reach for the open source projects from ROOST to integrate into their own products and systems. And we know open source is the right approach for critical tooling: the tools themselves must be transparent and auditable, while organizations can customize and even help improve this suite of online safety tools to benefit everyone.

This Platformer article does a great job of digging into more detail; give it a read. :) Oh, and why the baby chick? ROOST has a habit of naming things after birds—and I’m a baby ROOSTer. :D

What is trust and safety?

I’ve used the term “trust and safety” a ton in this post; I’m no expert (I’m rapidly learning!), but think about protecting users from harm including unwanted sexual content, misinformation, violent/extremist content, etc. It’s a field that’s much larger in scope and scale than most people probably realize, and is becoming ever more important as it becomes easier to generate massive amounts of text and graphic content using LLMs and related generative “AI” technologies. Add in that those generative technologies are largely trained using opaque data sources that can themselves include harmful content, and you can imagine how we’re at a flash point for trust and safety; robust open online safety tools like those that ROOST is helping to build and maintain are needed more than ever.

What I’ll be doing

My role is officially “OSS Community Manager,” but “community manager” can mean ten different things to ten different people (which is why people in the role often don’t survive long at a company…). At ROOST, I feel like the team knows exactly what they need me to do—and importantly, they have a nice onramp and initial roadmap for me to take on! My work will mostly focus on building and supporting an active and sustainable contributor community around our tools, as well as helping improve the discourse and understanding of open source in the trust and safety world. It’s an exciting challenge to take on with an amazing team from ROOST as well as partner organizations.

My work with GNOME

I’ll continue to serve on the GNOME Foundation board of directors and contribute to both GNOME and Flathub as much as I can; there may be a bit of a transition time as I get settled into this role, but my open source contributions—both to trust and safety and the desktop Linux world—are super important to me. I’ll see you around!

Aryan Kaushik

@aryan20

Balancing Work and Open Source

Work pressure + Burnout == Low contributions?

Over the past few months, I’ve been struggling with a tough question. How do I balance my work commitments and personal life while still contributing to open source?

On the surface, it looks like a weird question. Like I really enjoy contributing and working with contributors, and when I was in college, I always thought... "Why do people ever step back? It is so fun!". It was the thing that brought a smile to my face and took off any "stress". But now that I have graduated, things have taken a turn.

Now, when work pressure mounts, you use the little time you get not to write code but to pursue some hobby, learn something new, or spend time with family. Or just endlessly scroll videos and sleep.

This has led me to my lowest contribution streak, unable to work on all those cool things I imagined: reworking the Pitivi timeline in Rust, finishing that one MR in GNOME Settings that has been stuck for ages, fixing some issues in the GNOME Extensions website, working on my own extension's feature requests, or contributing to the committees I am a part of.

It’s reached a point where I’m genuinely unsure how to balance things anymore. Hence, I wanted to give an update to everyone I might not have been able to reply to, or who hasn't seen me around for a long time: I'm still here, just in a dilemma about how to return.

I believe I'm not the only one who faces this. After guiding my juniors for a long while on how to contribute and study at the same time and still manage time for other things, I now find myself at the same crossroads. So, if anyone has any insights on how they manage their time, or keep up the motivation and juggle between tasks, do let me know (akaushik [at] gnome [dot] org), I'd really appreciate any insights :)

One of them would probably be to take fewer things on my plate?

Perhaps this is just a new phase of learning? Not about code, but about balance.

Flathub Blog

@flathubblog

Enhanced License Compliance Tools for Flathub

tl;dr: Flathub has improved tooling to make license compliance easier for developers. Distros should rebuild OS images with updated runtimes from Flathub; app developers should ensure they're using up-to-date runtimes and verify that licenses and copyright notices are properly included.

In early August, a concerned community member brought to our attention that copyright notices and license files were being omitted when software was bundled as Flatpaks and distributed via Flathub. This was a genuine oversight across multiple projects, and we're glad we've been able to take the opportunity to correct and improve this for runtimes and apps across the Flatpak ecosystem.

Over the past few months, we've been working to enhance our tooling and infrastructure to better support license compliance. With the support of the Flatpak, freedesktop-sdk, GNOME, and KDE teams, we've developed and deployed significant improvements that make it easier than ever for developers to ensure their applications properly include license and copyright notices.

What's New

In coordination with maintainers of the freedesktop-sdk, GNOME, and KDE runtimes, we've implemented enhanced license handling that automatically includes license and copyright notice files in the runtimes themselves, deduplicated to be as space-efficient as possible. This improvement has been applied to all supported freedesktop-sdk, GNOME, and KDE runtimes, plus backported to freedesktop-sdk 22.08 and newer, GNOME 45 and newer, KDE 5.15-22.08 and newer, and KDE 6.6 and newer. These updated runtimes cover over 90% of apps on Flathub and have already rolled out to users as regular Flatpak updates.

We've also worked with the Flatpak developers to add new functionality to flatpak-builder 1.4.5 that automatically recognizes and includes common license files. This enhancement, now deployed to the Flathub build service, helps ensure apps' own licenses as well as the licenses of any bundled libraries are retained and shipped to users along with the app itself.

These improvements represent an important milestone in the maturity of the Flatpak ecosystem, making license compliance easier and more automatic for the entire community.

App Developers

We encourage you to rebuild your apps with flatpak-builder 1.4.5 or newer to take advantage of the new automatic license detection. You can verify that license and copyright notices are properly included in your Flatpak's /app/share/licenses, both for your app and any included dependencies. In most cases, simply rebuilding your app will automatically include the necessary licenses, but you can also fine-tune which license files are included using the license-files key in your app's Flatpak manifest if needed.
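One quick way to spot-check this from the command line, with org.example.App standing in for your own app ID:

$ flatpak run --command=ls org.example.App /app/share/licenses

You should see license and copyright files for your app as well as any bundled dependencies.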

For apps with binary sources (e.g. debs or rpms), we encourage app maintainers to explicitly include relevant license files in the Flatpak itself for consistency and auditability.

End-of-life runtime transition: To focus our resources on maintaining high-quality, up-to-date runtimes, we'll be completing the removal of several end-of-life runtimes in January 2026. Apps using runtimes older than freedesktop-sdk 22.08, GNOME 45, KDE 5.15-22.08 or KDE 6.6 will be marked as EOL shortly. Once these older runtimes are removed, the apps will need to be updated to use a supported runtime to remain available on Flathub. While this won't affect existing app installations, after this date, new users will be unable to install these apps from Flathub until they're rebuilt against a current runtime. Flatpak manifests of any affected apps will remain on the Flathub GitHub organization to enable developers to update them at any time.

If your app currently targets an end-of-life runtime that did receive the backported license improvements, we still strongly encourage you to upgrade to a newer, supported runtime to benefit from ongoing security updates and platform improvements.

Distributors

If you redistribute binaries from Flathub, such as pre-installed runtimes or apps, you should rebuild your distributed images (ISOs, containers, etc.) with the updated runtimes and apps from Flathub. You can verify that appropriate licenses are included with the Flatpaks in the runtime filesystem at /usr/share/licenses inside each runtime.
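The same kind of spot-check works against a runtime ref; org.gnome.Platform//49 below is only an example, so substitute whichever runtimes you actually ship:

$ flatpak run --command=ls org.gnome.Platform//49 /usr/share/licenses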

Get in Touch

App developers, distributors, and community members are encouraged to connect with the team and other members of the community in our Discourse forum and Matrix chat room. If you are an app developer or distributor and have any questions or concerns, you may also reach out to us at admins@flathub.org.

Thank You!

We are grateful to Jef Spaleta from Fedora for his care and confidentiality in bringing this to our attention and working with us collaboratively throughout the process. Special thanks to Boudhayan Bhattacharya (bbhtt) for his tireless work across Flathub, Flatpak and freedesktop-sdk, on this as well as many other important areas. And thank you to Abderrahim Kitouni (akitouni), Adrian Vovk (AdrianVovk), Aleix Pol Gonzalez (apol), Bart Piotrowski (barthalion), Ben Cooksley (bcooksley), Javier Jardón (jjardon), Jordan Petridis (alatiera), Matthias Clasen (matthiasc), Rob McQueen (ramcq), Sebastian Wick (swick), Timothée Ravier (travier), and any others behind the scenes for their hard work and timely collaboration across multiple projects to deliver these improvements.

Our Linux app ecosystem is truly strongest when individuals from across companies and projects come together to collaborate and work towards shared goals. We look forward to continuing to work together to ensure app developers can easily ship their apps to users across all Linux distributions and desktop environments. ♥

Matthias Clasen

@mclasen

SVG in GTK

GTK has been using SVG for symbolic icons since essentially forever. It hasn’t been a perfect relationship, though.

Pre-History

For the longest time (all through the GTK 3 era, and until recently), we’ve used librsvg indirectly, through gdk-pixbuf, to obtain rendered icons, and then we used some pixel tricks to recolor the resulting image according to the theme.

Symbolic icon, with success color

This works, but it gives up on the defining feature of SVG: its scalability.

Once you’ve rasterized your icon at a given size, all you’re left with is pixels. In the GTK 3 era, this wasn’t a problem, but in GTK 4, we have a scene graph and fractional scaling, so we could do *much* better if we don’t rasterize early.

Symbolic icon, pixellated

Unfortunately, librsvg’s API isn’t set up to let us easily translate SVG into our own render nodes. And its rust nature makes for an inconvenient dependency, so we held off on depending on it for a long time.

History

Early this year, I grew tired of this situation, and decided to improve our story for icons, and symbolic ones in particular.

So I set out to see how hard it would be to parse the very limited subset of SVG used in symbolic icons myself. It turned out to be relatively easy. I quickly managed to parse 99% of the Adwaita symbolic icons, so I decided to merge this work for GTK 4.20.

There were some detours and complications along the way. Since my simple parser couldn’t handle 100% of Adwaita (let alone all of the SVGs out there), a fallback to a proper SVG parser was needed. So we added a librsvg dependency after all. Since our new Android backend has an even more difficult time with rust than our other backends, we needed to arrange for a non-rust librsvg branch to be used when necessary.

One thing that this hand-rolled SVG parser improved upon is that it allows stroking, in addition to filling. I documented the format for symbolic icons here.

Starting over

A bit later, I was inspired by Apple’s SF Symbols work to look into how hard it would be to extend my SVG parser with a few custom attributes to enable dynamic strokes.

It turned out to be easy again. With a handful of attributes, I could create plausible-looking animations and transitions. And it was fun to play with. When I showed this work to Jakub and Lapo at GUADEC, they were intrigued, so I decided to keep pushing this forward, and it landed in early GTK 4.21, as GtkPathPaintable.

To make experimenting with this easier, I made a quick editor.  It was invaluable to have Jakub as an early adopter play with the editor while I was improving the implementation. Some very good ideas came out of this rapid feedback cycle, for example dynamic stroke width.

You can get some impression of the new stroke-based icons Jakub has been working on here.

Recent happenings

As summer was turning to fall, I felt that I should attempt to support SVG more completely, including grouping and animations. GTK’s rendering infrastructure has most of the pieces that are required for SVG after all: transforms, filters, clips, paths, gradients are all supported.

This was *not* easy.

But eventually, things started to fall into place. And this week, I’ve replaced  GtkPathPaintable with GtkSvg, which is a GdkPaintable that supports SVG. At least, the subset of SVG that is most relevant for icons. And that includes animations.


This is still a subset of full SVG, but converting a few random lottie files to SVG animations gave me a decent success rate for getting things to display mostly ok.

The details are spelled out here.

Summary

GTK 4.22 will natively support SVG, including SVG animations.

If you’d like to help improve this further, here are some suggestions.

If you would like to support the GNOME Foundation, whose infrastructure and hosting GTK relies on, please donate.

❤

Crosswords 0.3.16: 2025 Internship Results

Time for another GNOME Crosswords release! This one highlights the features our interns did this past summer. We had three fabulous interns — two through GSoC and one through Outreachy. While this release really only has three big features — one from each — they were all fantastic.

Thanks go to my fellow GSoC mentors Federico and Tanmay. In addition, Tilda and the folks at Outreachy were extremely helpful. Mentorship is a lot of work, but it’s also super-rewarding. If you’re interested in participating as a mentor in the future and have any questions about the process, let me know. I’ll be happy to speak with you about them.

Dictionary pipeline improvements

First, our Outreachy intern Nancy spent the summer improving the build pipeline we use to generate the internal dictionaries. These dictionaries are used to provide autofill capabilities and add definitions to the Editor, as well as providing near-instant completions for both the Editor and Player. The old pipeline was buggy and hard to maintain. Once we had cleaned it up, Nancy was able to use it to effortlessly produce a dictionary in her native tongue: Swahili.

A grid in Swahili

We have no distribution story yet, but it’s exciting that it’s now so much easier to create dictionaries in other languages. It opens the door to the Editor being more universally useful (and fulfills a core GNOME tenet).

You can read more details in Nancy’s final report.

Word List

Victor did a ton of research for Crosswords, almost acting like a Product Manager. He installed every crossword editor he could find and did a competitive analysis, noting possible areas for improvement. One of the areas he flagged was the word list in our editor. This list suggests words that could be used in a given spot in the grid. We started with a simplistic implementation that listed every possible word in our dictionary that could fit. This approach— while fast — provided a lot of dead words that would make the grid unsolvable. So he set about trying to narrow down that list.

New Word List showing possible options

It turns out that there are a lot of tradeoffs to be made here (Victor’s post). It’s possible to find a really good set of words, at the cost of a lot of computational power. A much simpler list is quick but has dead words. In the end, we found a happy medium that lets us get results fast and keeps the list stable across a clue. He’ll be blogging about this shortly.

Victor also cleaned up our development docs, and researched SAT-solving algorithms for the grid. He’s working on a lovely doc on the AC-3 algorithm, and we can use it to add additional functionality to the editor in the future.

Printing

Toluwaleke implemented printing support for GNOME Crosswords.

This was a tour de force, and a phenomenal addition to the Crosswords codebase. When I proposed it as a GSoC project, I had no idea how much work it would involve. We already had code to produce an SVG of the grid — I thought that we could just quickly add support for the clues and call it a day. Instead, we ended up going on a wild ride resulting in a significantly stronger feature and code base than we had going in.

His blog has more detail and it’s really quite cool (go read it!). But from my perspective, we ended up with a flexible and fast rendering system that can be used in a lot more places. Take a look:

The resulting PDFs are really high quality — they seem to look better than some of the newspaper puzzles I’ve seen. We’ll keep tweaking them as there are still a lot of improvements we’d like to add, such as taking the High Contrast / Large Text A11Y options into account. But it’s a tremendous basis for future work.

Increased Polish

There were a few other small things that happened:

  • I hooked Crosswords up to Damned Lies. This led to an increase in our translation quality and count
  • This included a Polish translation, which came with a new downloader!
  • I ported all the dialogs to AdwDialog, and moved away from (most of) the deprecated GTK 4 widgets
  • A lot of code cleanups and small fixes

Now that these big changes have landed, it’s time to go back to working on the rest of the changes proposed for GNOME Circle.

Until next time, happy puzzling!

Toluwaleke Ogundipe

@toluwalekeog

GSoC Final Report: Printing in GNOME Crosswords

A few months ago, I introduced my GSoC project: Adding Printing Support to GNOME Crosswords. Since June, I’ve been working hard on it, and I’m happy to share that printing puzzles is finally possible!

The Result

GNOME Crosswords now includes a Print option in its menu, which opens the system’s print dialog. After adjusting printer settings and page setup, the user is shown a preview dialog with a few crossword-specific options, such as ink-saving mode and whether (and how) to include the solution. The options are intentionally minimal, keeping the focus on a clean and straightforward printing experience.

Below is a short clip showing the feature in action:

The resulting file: output.pdf

Crosswords now also ships with a standalone command-line tool, ipuz2pdf, which converts any IPUZ puzzle file into a print-ready PDF. It offers a similarly minimal set of layout and crossword-specific options.

The Process

  • Studied and profiled the existing code and came up with an overall approach for the project.
  • Built a new grid rendering framework, resulting in a 10× speedup in rendering. Dealt with a ton of details around text placement and rendering, colouring, shapes, and more.
  • Designed and implemented a print layout engine with a templating system, adjusted to work with different puzzle kinds, grid sizes, and paper sizes.
  • Integrated the layout engine with the print dialog and added a live print preview.
  • Bonus: Created ipuz2pdf, a standalone command-line utility (originally for testing) that converts an IPUZ file into a printable PDF.

The Challenges

Working on a feature of this scale came with plenty of challenges. Getting familiar with a large codebase took patience, and understanding how everything fit together often meant careful study and experimentation. Balancing ideas with the project timeline and navigating code reviews pushed me to grow both technically and collaboratively.

On the technical side, rendering and layout had their own hurdles. Handling text metrics, scaling, and coordinate transformations required a mix of technical knowledge, critical thinking, and experimentation. Even small visual glitches could lead to hours of debugging. One notably difficult part was implementing the box layout system that powers the dynamic print layout engine.

The Lessons

This project taught me a lot about patience, focus, and iteration. I learned to approach large problems by breaking them into small, testable pieces, and to value clarity and simplicity in both code and design. Code reviews taught me to communicate ideas better, accept feedback gracefully, and appreciate different perspectives on problem-solving.

On the technical side, working with rendering and layout systems deepened my understanding of graphics programming. I also learned how small design choices can ripple through an entire codebase, and how careful abstraction and modularity can make complex systems easier to evolve.

Above all, I learned the value of collaboration, and that progress in open source often comes from many small, consistent improvements rather than big leaps.

The Conclusion

In the end, I achieved all the goals set out for the project, and even more. It was a long and taxing journey, but absolutely worth it.

The Gratitude

I’m deeply grateful to my mentors, Jonathan Blandford and Federico Mena Quintero, for their guidance, patience, and support throughout this project. I’ve learned so much from working with them. I’m also grateful to the GNOME community and Google Summer of Code for making this opportunity possible and for creating such a welcoming environment for new contributors.

What Comes After

No project is ever truly finished, and this one is no exception. There’s still plenty to be done, and some already have tracking issues. I plan to keep improving the printing system and related features in GNOME Crosswords.

I also hope to stay involved in the GNOME ecosystem and open-source development in general. I’m especially interested in projects that combine design, performance, and system-level programming. More importantly, I’m a recent CS graduate looking for a full-time role in the field of interest stated earlier. If you have or know of any opportunities, please reach out at feyidab01@gmail.com.

Finally, I plan to write a couple of follow-up posts diving into interesting parts of the process in more detail. Stay tuned!

Thank you!

Jussi Pakkanen

@jpakkane

CapyPDF 1.8.0 released

I have just released CapyPDF 1.8. It's mostly minor fixes and tweaks but there are two notable things. The first one is that CapyPDF now supports variable axis fonts. The other one is that CapyPDF will now produce PDF version 2.0 files instead of 1.7 by default. This might seem like a big leap but really isn't. PDF 2.0 is pretty much the same as 1.7, just with documentation updates and deprecating (but not removing) a bunch of things. People using PDF have a tendency to be quite conservative in their versions, but PDF 2.0 has been out since 2017 with most of it being PDF 1.7 from 2008.

It is still possible to create documents targeting older PDF specs. If you specify, say, PDF/X3, CapyPDF will output PDF 1.3, as the spec requires that version and no other, even though, for example, Adobe's PDF tools accept PDF/X3 files whose version is later than 1.3.

The PDF specification is currently undergoing major changes and future versions are expected to have backwards incompatible features such as HDR imaging. But 2.0 does not have those yet.

Things CapyPDF supports

CapyPDF has implemented a fair chunk of the various PDF specs:

  • All paint and text operations
  • Color management
  • Optional content groups
  • PDF/X and PDF/A support
  • Tagged PDF (i.e. document structure and semantic information)
  • TTF, OTF, TTC and CFF fonts
  • Forms (preliminary)
  • Annotations
  • File attachments
  • Outlines
  • Page naming

In theory this should be enough to support things like XRechnung and documents with full accessibility information as per PDF/UA. These have not been actually tested as I don't have personal experience in German electronic invoicing or document accessibility.

Dorothy Kabarozi

@dorothyk

Laravel Mix “Unable to Locate Mix File” Error: Causes and Fixes

If you’re working with Laravel and using Laravel Mix to manage your CSS and JavaScript assets, you may have come across an error like this:

Spatie\LaravelIgnition\Exceptions\ViewException  
Message: Unable to locate Mix file: /assets/vendor/css/rtl/core.css

Or in some cases:

Illuminate\Foundation\MixFileNotFoundException
Unable to locate Mix file: /assets/vendor/fonts/boxicons.css

This error can be frustrating, especially when your project works perfectly on one machine but fails on another. Let’s break down what’s happening and how to solve it.


What Causes This Error?

Laravel Mix is a wrapper around Webpack, designed to compile the assets in your resources/ directory (CSS, JS, images) into the public/ directory. The mix() helper in Blade templates references these compiled assets using a special file: mix-manifest.json.

This error occurs when Laravel cannot find the compiled asset. Common reasons include:

  1. Assets are not compiled yet
    If you’ve just cloned a project, the public/assets folder might be empty. Laravel is looking for files that do not exist yet.
  2. mix-manifest.json is missing or outdated
    This file maps original asset paths to compiled paths. If it’s missing, Laravel Mix won’t know where to find your assets.
  3. Incorrect paths in Blade templates
    If your code is like: <link rel="stylesheet" href="{{ asset(mix('assets/vendor/css/rtl/core.css')) }}" /> but the RTL folder or the file doesn’t exist, Mix will throw an exception.
  4. Wrong configuration
    Some themes use variables like $configData['rtlSupport'] to toggle right-to-left CSS. If it’s set incorrectly, Laravel will try to load files that don’t exist.

How to Fix It

Here’s a step-by-step solution:

1. Install NPM dependencies

Make sure you have Node.js installed, then run:

npm install

2. Compile your assets

  • Development mode (fast, unminified):
npm run dev

  • Production mode (optimized, minified):
npm run build

This will generate your CSS and JS files in the public folder and update mix-manifest.json.

3. Check mix-manifest.json

Ensure the manifest contains the file Laravel is looking for:

"/assets/vendor/css/rtl/core.css": "/assets/vendor/css/rtl/core.css"

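If you want to double-check from the terminal, a quick sanity check along these lines (using the asset path from the error above as an example) confirms both the compiled file and its manifest entry exist:

ls -l public/assets/vendor/css/rtl/core.css
grep "rtl/core.css" public/mix-manifest.json
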
4. Adjust Blade template paths

If you don’t use RTL, you can set:

$configData['rtlSupport'] = '';

so the code doesn’t try to load /rtl/core.css unnecessarily.

5. Clear caches

Laravel may cache old views and configs. Clear them:

php artisan view:clear
php artisan config:clear
php artisan cache:clear


Pro Tips

  • Always check if the file exists in public/assets/... after compiling.
  • If you move your project to another machine or server, you must run npm install and npm run dev again.
  • For production, make sure your server has Node.js and NPM installed, otherwise Laravel Mix cannot build the assets.

Conclusion

The “Unable to locate Mix file” error is not a bug in Laravel, but a result of missing compiled assets or misconfigured paths. Once you:

  1. Install dependencies,
  2. Compile assets,
  3. Correct Blade paths, and
  4. Clear caches,

your Laravel project should load CSS and JS files without issues.

GNOME Tour in openSUSE and welcome app

As a follow-up to the Hackweek 24 project, I've continued working on the gnome-tour fork for openSUSE, with custom pages, to replace the welcome application for openSUSE distributions.

GNOME Tour modifications

All the modifications are on top of upstream gnome-tour and stored in the openSUSE/gnome-tour repo

  • Custom initial page

  • A new donations page. In openSUSE we remove the popup from GNOME shell for donations, so it's fair to add it in this place.

  • Last page with custom openSUSE links; this one is the one used for the opensuse-welcome app.

opensuse-welcome package

The original opensuse-welcome is a Qt application used for all desktop environments, but it's more or less unmaintained. While looking for a replacement, we realized we can use the gnome-tour fork as the default welcome app for all desktops, without needing a custom app.

To make a minimal, desktop-agnostic opensuse-welcome application, I've modified gnome-tour to also generate a second binary with just the last page.

The new opensuse-welcome rpm package is built as a subpackage of gnome-tour. This new application is minimal and doesn't have lots of requirements, but as it's a GTK 4 application it requires gtk and libadwaita, and it also depends on gnome-tour-data to get the resources of the app.

To improve this welcome app we need to review the translations, because I added three new pages to gnome-tour and those specific pages are not translated, so I should regenerate the .po files for all languages and upload them to openSUSE Weblate for translation.

Where are we on X Chat security?

AWS had an outage today and Signal was unavailable for some users for a while. This has confused some people, including Elon Musk, who are concerned that having a dependency on AWS means that Signal could somehow be compromised by anyone with sufficient influence over AWS (it can't). Which means we're back to the richest man in the world recommending his own "X Chat", saying The messages are fully encrypted with no advertising hooks or strange “AWS dependencies” such that I can’t read your messages even if someone put a gun to my head.

Elon is either uninformed about his own product, lying, or both.

As I wrote back in June, X Chat is genuinely end-to-end encrypted, but ownership of the keys is complicated. The encryption key is stored using the Juicebox protocol, sharded between multiple backends. Two of these are asserted to be HSM backed - a discussion of the commissioning ceremony was recently posted here. I have not watched the almost 7 hours of video to verify that this was performed correctly, and I also haven’t been able to verify that the public keys included in the post were the keys generated during the ceremony, although that may be down to me just not finding the appropriate point in the video (sorry, Twitter’s video hosting doesn’t appear to have any skip feature and would frequently just sit spinning if I tried to seek too far, and I should probably just download them and figure it out, but I’m not doing that now). With enough effort it would probably also have been possible to fake the entire thing - I have no reason to believe that this has happened, but it’s not externally verifiable.

But let's assume these published public keys are legitimately the ones used in the HSM Juicebox realms[1] and that everything was done correctly. Does that prevent Elon from obtaining your key and decrypting your messages? No.

On startup, the X Chat client makes an API call called GetPublicKeysResult, and the public keys of the realms are returned. Right now when I make that call I get the public keys listed above, so there's at least some indication that I'm going to be communicating with actual HSMs. But what if that API call returned different keys? Could Elon stick a proxy in front of the HSMs and grab a cleartext portion of the key shards? Yes, he absolutely could, and then he'd be able to decrypt your messages.

(I will accept that there is a plausible argument that Elon is telling the truth in that even if you held a gun to his head he's not smart enough to be able to do this himself, but that'd be true even if there were no security whatsoever, so it still says nothing about the security of his product)

The solution to this is remote attestation - a process where the device you're speaking to proves its identity to you. In theory the endpoint could attest that it's an HSM running this specific code, and we could look at the Juicebox repo and verify that it's that code and hasn't been tampered with, and then we'd know that our communication channel was secure. Elon hasn't done that, despite it being table stakes for this sort of thing (Signal uses remote attestation to verify the enclave code used for private contact discovery, for instance, which ensures that the client will refuse to hand over any data until it's verified the identity and state of the enclave). There's no excuse whatsoever to build a new end-to-end encrypted messenger which relies on a network service for security without providing a trustworthy mechanism to verify you're speaking to the real service.

We know how to do this properly. We have done for years. Launching without it is unforgivable.

[1] There are three Juicebox realms overall, one of which doesn't appear to use HSMs, but you need at least two in order to obtain the key so at least part of the key will always be held in HSMs


Dorothy Kabarozi

@dorothyk

Deploying a Simple HTML Project on Linode Using Nginx: My Journey and Lessons Learned

Deploying web projects can seem intimidating at first, especially when working with a remote server like Linode. Recently, I decided to deploy a simple HTML project (index.html) on a Linode server using Nginx. Here’s a detailed account of the steps I took, the challenges I faced, and the solutions I applied.


Step 1: Accessing the Linode Server

The first step was to connect to my Linode server via SSH:

ssh root@<your-linode-ip>

Initially, I encountered a timeout issue, which reminded me to check network settings and ensure SSH access was enabled for my Linode instance. Once connected, I had access to the server terminal and could manage files and services.


Step 2: Preparing the Project

My project was simple—it only contained an index.html file. I uploaded it to the server under:

/var/www/hng13-stage0-devops

I verified the project folder structure with:

ls -l /var/www/hng13-stage0-devops

Since there was no public folder or PHP files, I knew I needed to adjust the Nginx configuration to serve directly from this folder.


Step 3: Setting Up Nginx

I opened the Nginx configuration for my site:

sudo nano /etc/nginx/sites-available/hng13

Initially, I mistakenly pointed root to a non-existent folder (public), which caused a 404 Not Found error. The correct configuration looked like this:

server {
    listen 80;
    server_name <your_linode-ip>;

    root /var/www/hng13-stage0-devops;  # points to folder containing index.html
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }
}


Step 4: Enabling the Site and Testing

After creating the configuration file, I enabled the site:

sudo ln -s /etc/nginx/sites-available/hng13 /etc/nginx/sites-enabled/

I also removed the default site to avoid conflicts:

sudo rm /etc/nginx/sites-enabled/default

Then I tested the configuration:

sudo nginx -t

If the syntax was OK, I reloaded Nginx:

sudo systemctl reload nginx


Step 5: Checking Permissions

Nginx must have access to the project files. I ensured the correct permissions:

sudo chown -R www-data:www-data /var/www/hng13-stage0-devops
sudo chmod -R 755 /var/www/hng13-stage0-devops


Step 6: Viewing the Site

Finally, I opened my browser and navigated to

http://<your-linode-ip>

And there it was—my index.html page served perfectly via Nginx. ✅


Challenges and Lessons Learned

  1. Nginx server_name Error
    • Error: "server_name" directive is not allowed here
    • Lesson: Always place server_name inside a server { ... } block.
  2. 404 Not Found
    • Cause: Nginx was pointing to a public folder that didn’t exist.
    • Solution: Update root to the folder containing index.html.
  3. Permissions Issues
    • Nginx could not read files initially.
    • Solution: Ensure ownership by www-data and proper read/execute permissions.
  4. SSH Timeout / Connection Issues
    • Double-check firewall rules and Linode network settings.

Key Takeaways

  • For static HTML projects, Nginx is simple and effective.
  • Always check the root folder matches your project structure.
  • Testing the Nginx config (nginx -t) before reload saves headaches.
  • Proper permissions are crucial for serving files correctly.

Deploying my project was a learning experience. Even small mistakes like pointing to the wrong folder or placing directives in the wrong context can break the site—but step-by-step debugging and understanding the errors helped me fix everything quickly. This has kick-started my DevOps journey, and I truly loved the challenge.

Status update, 17/10/2025

Greetings readers. I’m writing to you from a hotel room in Manchester which I’m currently sharing with a variant of COVID 19. We are listening to disco funk music.

This virus prevents me from working or socializing, but at least I have time to do some cyber-janitorial tasks like updating my “dotfiles” (which hold configuration for all the programs I use on Linux, stored in Git… for those who aren’t yet converts).

I also caught up with some big upcoming changes in the GNOME 50 release cycle — more on that below.

nvim

I picked up Vim as my text editor ten years ago while working on a very boring project. This article by Jon Beltran de Heredia, “Why, oh WHY, do those #?@! nutheads use vi?” sold me on the key ideas: you use “normal mode” for everything, which gives you powerful and composable edit operations. I printed out this Vim quick reference card by Michael Goerz and resolved to learn one new operation every day.

It worked and I’ve been a convert ever since. Doing consultancy work makes you a nomad: often working via SSH or WSL on other people’s computers. So I never had the luxury of setting up an IDE like GNOME Builder, or using something that isn’t packaged in 99% of distros. Luckily Vim is everywhere.

Over the years, I read a newsletter named Vimtricks and I picked up various Vim plugins like ALE, ctrlp, and sideways. But there’s a problem: some of these depend on extra Vim features like Python support. If a required feature is missing, you get an error message that appears on like… every keystroke:

In this case, on a Debian 12 build machine, I could work around it by installing the vim-gtk3 package. But it’s frustrating enough that I decided it was time to try Neovim.

The Neovim project began around the time I was switching to Vim, and is based on the premise that “Vim is, without question, the worst C codebase I have seen.”.

So far it’s been painless to switch and everything works a little better. The :terminal feels better integrated. I didn’t need to immediately disable mouse mode. I can link to online documentation! The ALE plugin (which provides language server integration) is even already packaged in Fedora.

I’d send a screenshot but my editor looks… exactly the same as before. Boring!

I also briefly tried out Helix, which appears to take the good bits of Vim (modal editing) and run in a different direction (visible selection and multiple cursors). I need a more boring project before I’ll be able to learn a completely new editor. Give me 10 years.

Endless OS 7

I’ve been working flat out on Endless OS 7, as last month. Now that the basics work and the system boots, we were mainly looking at integrating Endless-specific Pay as you Go functionality that they use for affordable laptop programs.

I learned more than I wanted to about the Linux early boot process, particularly the dracut-ng initramfs generator (one of many Linux components that seem to be named after a town in Massachusetts).

GNOME OS actually dropped Dracut altogether, in “vm-secure: Get rid of dracut and use systemd’s ukify” by Valentin David, and now uses a simple Python script. A lot of Dracut’s features aren’t necessary for building atomic, image-based distros. For EOS we decided to stick with Dracut, at least for now.

So we get to deal with fun changes such as the initramfs growing from 90MB to 390MB after we updated to latest Dracut. Something which is affecting Fedora too (LWN: “Last-minute /boot boost for Fedora 43”).

I requested time after the contract finishes to write up a technical article on the work we did, so I won’t go into more details yet. Watch this space!

GNOME 50

I haven’t had a minute to look at upstream GNOME this month, but there are some interesting things cooking there.

Jordan merged the GNOME OS openQA tests into the main gnome-build-meta repo. This is a simple solution to a number of basic questions we had around testing, such as, “how do we target tests to specific versions of GNOME?”.

We separated the tests out of gnome-build-meta because, at the time, each new CI pipeline would track new versions of each GNOME module. This meant, firstly that pipelines could take anywhere from 10 minutes to 4 hours rebuilding a disk image before the tests even started, and secondly that the system under test would change every time you ran the pipeline.

While that sounds dumb, it worked this way for historical reasons: GNOME OS has been an under-resourced ad-hoc project ongoing since 2011, whose original goal was simply to continuously build: already a huge challenge if you remember GNOME in the early 2010s. Of course, such a CI pipeline is highly counterproductive if you’re trying to develop and review changes to the tests, and not the system: so the separate openqa-tests repo was a necessary step.

Thanks to Abderrahim’s work in 2022 (“Commit refs to the repository” and “Add script to update refs”), plus my work on a tool to run the openQA tests locally before pushing to CI (ssam_openqa), I hope we’re not going to have those kinds of problems any more. We enter a brave new world of testing!

The next thing the openQA tests need, in my opinion, is dedicated test infrastructure. The shared Gitlab CI runners we have are in high demand. The openQA tests have timeouts, as they ultimately are doing this in a loop:

  • Send an input event
  • Wait for the system under test to react

If a VM is running on a test runner with overloaded CPU or IO then tests will start to time out in unhelpful ways. So, if you want to have better testing for GNOME, finding some dedicated hardware to run tests would be a significant help.

There are also some changes cooking in Localsearch thanks to Carlos Garnacho:

The first of these is a nicely engineered way to allow searching files on removable disks like external HDs. This should be opt-in: so you can opt in to indexing your external hard drive full of music, but your machine wouldn’t be vulnerable to an attack where someone connects a malicious USB stick while your back is turned. (The sandboxing in localsearch makes it non-trivial to construct such an attack, but it would require a significantly greater level of security auditing before I’d make any guarantees about that).

The second of these changes is pretty big: in GNOME 50, localsearch will now consider everything in your homedir for indexing.

As Carlos notes in the commit message, he has spent years working on performance optimisations and bug fixes in localsearch to get to a point where he considers it reasonable to enable by default. From a design point of view, discussed in the issue “Be more encompassing about what get indexed“, it’s hard to justify a search feature that only surfaces a subset of your files.

I don’t know if it’s a great time to do this, but nothing is perfect and sometimes you have to take a few risks to move forwards.

There’s a design, testing and user support element to all of this, and it’s going to require help from the GNOME community and our various downstream distributors, particularly around:

  • Widely testing the new feature before the GNOME 50 release.
  • Making sure users are aware of the change and how to manage the search config.
  • Handling an expected increase in bug reports and support requests.
  • Highlighting how privacy-focused localsearch is.

I never got time to extend the openQA tests to cover media indexing; it’s not a trivial job. We will rely on volunteers and downstream testers to try out the config change as widely as possible over the next 6 months.

One thing that makes me support this change is that the indexer in Android devices already works like this: everything is scanned into a local cache, unless there’s a .nomedia file. Unfortunately Google don’t document how the Android media scanner works. But it’s not like this is GNOME treading a radical new path.

The localsearch index lives in the same filesystem as the data, and never leaves your PC. In a world where Microsoft Windows can now send your boss screenshots of everything you looked at, GNOME is still very much on your side. Let’s see if we can tell that story.

Mid-October News

Misc news about the gedit text editor, mid-October edition! (Some sections are a bit technical).

Rework of the file loading and saving (continued)

The refactoring continues in the libgedit-gtksourceview module, this time to tackle a big class that takes on too many responsibilities. A utility is in development which will make it possible to delegate a part of the work.

The utility is about character encoding conversion, with support for invalid bytes. It takes as input a single GBytes (the file content), and transforms it into a list of chunks. A chunk contains either valid (successfully converted) bytes, or invalid bytes. The output format - the "list of chunks" - is subject to change to improve memory consumption and performance.

Note that invalid bytes are allowed, to be able to open really any kind of file with gedit.

I must also note that this is quite sensitive work, at the heart of document loading for gedit. Hopefully, all these refactorings and improvements will be worth it!

Progress in other modules

There has been some progress on other modules:

  • gedit: version 48.1.1 has been released with a few minor updates.
  • The Flatpak on Flathub: update to gedit 48.1.1 and the GNOME 49 runtime.
  • gspell: version 1.14.1 has been released, mainly to pick up the updated translations.

GitHub Sponsors

In addition to Liberapay, you can now support the work that I do on GitHub Sponsors. See the gedit donations page.

Thank you ❤️

Victor Ma

@victorma

This is a test post

Over the past few weeks, I’ve been working on improving some test code that I had written.

Refactoring time!

My first order of business was to refactor the test code. There was a lot of boilerplate, which made it difficult to add new tests, and also created visual clutter.

For example, have a look at this test case:

static void
test_egg_ipuz (void)
{
  g_autoptr (WordList) word_list = NULL;
  IpuzGrid *grid;
  g_autofree IpuzClue *clue = NULL;
  g_autoptr (WordArray) clue_matches = NULL;

  word_list = get_broda_word_list ();
  grid = create_grid (EGG_IPUZ_FILE_PATH);
  clue = get_clue (grid, IPUZ_CLUE_DIRECTION_ACROSS, 2);
  clue_matches = word_list_find_clue_matches (word_list, clue, grid);

  g_assert_cmpint (word_array_len (clue_matches), ==, 3);
  g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                               word_array_index (clue_matches, 0)),
                   ==,
                   "EGGS");
  g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                               word_array_index (clue_matches, 1)),
                   ==,
                   "EGGO");
  g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                               word_array_index (clue_matches, 2)),
                   ==,
                   "EGGY");
}

That’s an awful lot of code just to say:

  1. Use the EGG_IPUZ_FILE_PATH file.
  2. Run the word_list_find_clue_matches() function on the 2-Across clue.
  3. Assert that the results are ["EGGS", "EGGO", "EGGY"].

And this was repeated in every test case, and needed to be repeated in every new test case I added. So, I knew that I had to refactor my code.

Fixtures and functions

My first step was to extract all of this setup code:

g_autoptr (WordList) word_list = NULL;
IpuzGrid *grid;
g_autofree IpuzClue *clue = NULL;
g_autoptr (WordArray) clue_matches = NULL;

word_list = get_broda_word_list ();
grid = create_grid (EGG_IPUZ_FILE_PATH);
clue = get_clue (grid, IPUZ_CLUE_DIRECTION_ACROSS, 2);
clue_matches = word_list_find_clue_matches (word_list, clue, grid);

To do this, I used a fixture:

typedef struct {
 WordList *word_list;
 IpuzGrid *grid;
} Fixture;

static void fixture_set_up (Fixture *fixture, gconstpointer user_data)
{
 const gchar *ipuz_file_path = (const gchar *) user_data;

 fixture->word_list = get_broda_word_list ();
 fixture->grid = create_grid (ipuz_file_path);
}

static void fixture_tear_down (Fixture *fixture, gconstpointer user_data)
{
 g_object_unref (fixture->word_list);
}

My next step was to extract all of this assertion code:

g_assert_cmpint (word_array_len (clue_matches), ==, 3);
g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                             word_array_index (clue_matches, 0)),
                 ==,
                 "EGGS");
g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                             word_array_index (clue_matches, 1)),
                 ==,
                 "EGGO");
g_assert_cmpstr (word_list_get_indexed_word (word_list,
                                             word_array_index (clue_matches, 2)),
                 ==,
                 "EGGY");

To do this, I created a new function that runs word_list_find_clue_matches() and asserts that the result equals an expected_words parameter.

static void
test_clue_matches (WordList          *word_list,
                   IpuzGrid          *grid,
                   IpuzClueDirection  clue_direction,
                   guint              clue_index,
                   const gchar       *expected_words[])
{
  const IpuzClue *clue = NULL;
  g_autoptr (WordArray) clue_matches = NULL;
  g_autoptr (WordArray) expected_word_array = NULL;

  clue = get_clue (grid, clue_direction, clue_index);
  clue_matches = word_list_find_clue_matches (word_list, clue, grid);
  expected_word_array = str_array_to_word_array (expected_words, word_list);

  g_assert_true (word_array_equals (clue_matches, expected_word_array));
}

After all that, here’s what my test case looked like:

static void
test_egg_ipuz (Fixture *fixture, gconstpointer user_data)
{
  test_clue_matches (fixture->word_list,
                     fixture->grid,
                     IPUZ_CLUE_DIRECTION_ACROSS,
                     2,
                     (const gchar*[]){"EGGS", "EGGO", "EGGY", NULL});
}

Much better!

Macro functions

But as great as that was, I knew that I could take it even further, with macro functions.

I created a macro function to simplify test case definitions:

#define ASSERT_CLUE_MATCHES(DIRECTION, INDEX, ...) \
  test_clue_matches (fixture->word_list, \
                     fixture->grid, \
                     DIRECTION, \
                     INDEX, \
                     (const gchar*[]){__VA_ARGS__, NULL})

Now, test_egg_ipuz() looked like this:

static void
test_egg_ipuz (Fixture *fixture, gconstpointer user_data)
{
 ASSERT_CLUE_MATCHES (IPUZ_CLUE_DIRECTION_ACROSS, 2, "EGGS", "EGGO", "EGGY");
}

I also made a macro function for the test case declarations:

#define ADD_IPUZ_TEST(test_name, file_name) \
  g_test_add ("/clue_matches/" #test_name, \
              Fixture, \
              "tests/clue-matches/" #file_name, \
              fixture_set_up, \
              test_name, \
              fixture_tear_down)

Which turned this:

g_test_add ("/clue_matches/test_egg_ipuz",
            Fixture,
            EGG_IPUZ,
            fixture_set_up,
            test_egg_ipuz,
            fixture_tear_down);

Into this:

ADD_IPUZ_TEST (test_egg_ipuz, egg.ipuz);

An unfortunate bug

So, picture this: You’ve just finished refactoring your test code. You add some finishing touches, do a final test run, look over the diff one last time…and everything seems good. So, you open up an MR and start working on other things.

But then, the unthinkable happens—the CI pipeline fails! And apparently, it’s due to a test failure? But you ran your tests locally, and everything worked just fine. (You run them again just to be sure, and yup, they still pass.) And what’s more, it’s only the Flatpak CI tests that failed. The native CI tests succeeded.

So…what, then? What could be the cause of this? I mean, how do you even begin debugging a test failure that only happens in a particular CI job and nowhere else? Well, let’s just try running the CI pipeline again and see what happens. Maybe the problem will go away. Hopefully, the problem goes away.

Nope. Still fails.

Rats.

Well, I’ll spare you the gory details that it took for me to finally figure this one out. But the cause of the bug was me accidentally freeing an object that I should never have freed.

This meant that the corresponding memory segment could be—but, importantly, did not necessarily have to be—filled with garbage data. And this is why only the Flatpak job’s test run failed…well, at first, anyway. By changing around some of the test cases, I was able to get the native CI tests and local tests to fail. And this is what eventually clued me into the true nature of this bug.

So, after spending the better part of two weeks, here is the fix I ended up with:

@@ -94,7 +94,7 @@ test_clue_matches (WordList *word_list,
 guint clue_index,
 const gchar *expected_words[])
 {
- g_autofree IpuzClue *clue = NULL;
+ const IpuzClue *clue = NULL;
 g_autoptr (WordArray) clue_matches = NULL;
 g_autoptr (WordArray) expected_word_array = NULL;

Jordan Petridis

@alatiera

Nightly Flatpak CI gets a cache

Recently I got around to tackling a long-standing issue for good. There were multiple attempts in the past 6 years to cache flatpak-builder artifacts with Gitlab, but none had worked so far.

On the technical side of things, flatpak-builder relies heavily on extended attributes (xattrs) on files to do cache validation. Using Gitlab’s built-in cache or artifacts mechanisms results in a plain zip archive, which strips all the attributes from the files, causing the cache to always be invalid once restored. Additionally, the hardlinks/symlinks in the cache break. One workaround for this is to always tar the directories and then manually extract them after they are restored.
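
For illustration, that workaround looks roughly like this (the .flatpak-builder directory is flatpak-builder’s default state directory; the archive name is arbitrary):

# Pack the cache with extended attributes and hard/symlinks preserved
tar --xattrs --xattrs-include='*' -cpf cache.tar .flatpak-builder/
# ...and unpack it manually after the CI cache has been restored
tar --xattrs -xpf cache.tar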

On the infrastructure side of things we stumble once again into Gitlab. When a cache or artifact is created, it’s uploaded into the Gitlab instance’s storage so it can later be reused/redownloaded by any runner. While this is great, it also quickly ramps up the network egress bill we have to pay, along with storage. And since it’s a public Gitlab instance where anyone can make requests against repositories, it gets out of hand fast.

A couple of weeks ago Bart pointed me to Flathub’s workaround for this same problem. It comes down to making it someone else’s problem, ideally someone who is willing to fund FOSS infrastructure. We can use ORAS to wrap files and directories into an OCI image and publish it to public registries. And it worked. Quite handy! OCI images are the new tarballs.
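
As a rough sketch of the idea (the registry path and tag here are made up, not the ones the templates actually use), publishing and later fetching such a cache artifact with ORAS looks something like:

# Publish the tarred cache as an OCI artifact on a public registry
oras push registry.example.org/gnome/flatpak-builder-cache:main cache.tar
# In a later pipeline, pull it back and extract it
oras pull registry.example.org/gnome/flatpak-builder-cache:main
tar --xattrs -xpf cache.tar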

Now when a pipeline runs against your default branch (and assuming it’s protected), it will create a cache artifact and upload it to the currently configured OCI registry. Afterwards, any build, including Merge Request pipelines, will download the image, extract the artifacts and check how much of it is still valid.

From some quick tests and numbers, GNOME Builder went from a ~16 minute build to 6 minutes on our x86_64 runners, while on the AArch64 runner the impact was even bigger, going from 50 minutes to 16 minutes. Not bad. The more modules you are building in your manifest, the more noticeable it is.

Unlike Buildstream, there is no Content Addressable Server, and flatpak-builder itself isn’t aware of the artifacts we publish, nor can it associate them with the cache keys. The OCI/ORAS cache artifacts are a manual and somewhat hacky solution, but it works well in practice until we have better tooling. To optimize for fewer cache misses, consider building modules from pinned commits/tags/tarballs and building modules from moving branches as late as possible.
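
For example, in a flatpak-builder manifest a module pinned like the following (the name, URL and commit are invented) produces a stable cache key, whereas a module tracking a moving branch gets rebuilt, and invalidates everything after it, whenever the branch moves:

{
    "name": "libfoo",
    "sources": [
        {
            "type": "git",
            "url": "https://example.org/libfoo.git",
            "tag": "v1.2.0",
            "commit": "0123456789abcdef0123456789abcdef01234567"
        }
    ]
}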

If you are curious in the details, take a look at the related Merge Request in the templates repository and the follow up commits.

Free Palestine ✊

Bilal Elmoussaoui

@belmoussaoui

Testing a Rust library - Code Coverage

It has been a couple of years since I started working on a Rust library called oo7, a Secret Service client implementation. The library ended up also having support for per-sandboxed-app keyrings using the Secret portal, with a seamless API for end-users that makes usage from the application side straightforward.

The project, with time, grew support for various components:

  • oo7-cli: A secret-tool replacement, but much better, as it allows interacting not only with the Secret service on the DBus session bus but also with any keyring. oo7-cli --app-id com.belmoussaoui.Authenticator list, for example, allows you to read the keyring of the sandboxed app with app-id com.belmoussaoui.Authenticator and list its contents, something that is not possible with secret-tool.
  • oo7-portal: A server-side implementation of the Secret portal mentioned above. Straightforward, thanks to my other library ASHPD.
  • cargo-credential-oo7: A cargo credential provider built using oo7 instead of libsecret.
  • oo7-daemon: A server-side implementation of the Secret service.

The last component was kickstarted by Dhanuka Warusadura, as we already had the foundation for that in the client library, especially the file backend reimplementation of gnome-keyring. The project is slowly progressing, but it is almost there!

The problem with replacing such a sensitive component as gnome-keyring-daemon is that you have to make sure the very sensitive user data is not corrupted, lost, or made inaccessible. For that, we need to ensure that both the file backend implementation in the oo7 library and the daemon implementation itself are well tested.

That is why I spent my weekend, as well as a whole day off, working on improving the test suite of the wannabe core component of the Linux desktop.

Coverage Report

One metric that can give the developer some insight into which lines of code or functions of the codebase are executed when running the test suite is code coverage.

In order to get the coverage of a Rust project, you can use a project like Tarpaulin, which integrates with the Cargo build system. For a simple project, a command like this, after installing Tarpaulin, can give you an HTML report:

cargo tarpaulin \
  --package oo7 \
  --lib \
  --no-default-features \
  --features "tracing,tokio,native_crypto" \
  --ignore-panics \
  --out Html \
  --output-dir coverage

Except in our use case, it is slightly more complicated. The client library supports switching between Rust native cryptographic primitives crates or using OpenSSL. We must ensure that both are tested.

For that, we can export our report in LCOV for native crypto and do the same for OpenSSL, then combine the results using a tool like grcov.

mkdir -p coverage-raw
cargo tarpaulin \
  --package oo7 \
  --lib \
  --no-default-features \
  --features "tracing,tokio,native_crypto" \
  --ignore-panics \
  --out Lcov \
  --output-dir coverage-raw
mv coverage-raw/lcov.info coverage-raw/native-tokio.info

cargo tarpaulin \
  --package oo7 \
  --lib \
  --no-default-features \
  --features "tracing,tokio,openssl_crypto" \
  --ignore-panics \
  --out Lcov \
  --output-dir coverage-raw
mv coverage-raw/lcov.info coverage-raw/openssl-tokio.info

and then combine the results with

cat coverage-raw/*.info > coverage-raw/combined.info

grcov coverage-raw/combined.info \
  --binary-path target/debug/ \
  --source-dir . \
  --output-type html \
  --output-path coverage \
  --branch \
  --ignore-not-existing \
  --ignore "**/portal/*" \
  --ignore "**/cli/*" \
  --ignore "**/tests/*" \
  --ignore "**/examples/*" \
  --ignore "**/target/*"

To make things easier, I added a bash script to the project repository that generates coverage for both the client library and the server implementation, as both are very sensitive and require intensive testing.

With that script in place, I also used it on CI to generate and upload the coverage reports at https://bilelmoussaoui.github.io/oo7/coverage/. The results were pretty bad when I started.

Testing

For the client side, most of the tests are straightforward to write; you just need to have a secret service implementation running on the DBus session bus. Things get quite complicated when the methods you have to test require a Prompt, a mechanism used in the spec to define a way for the user to be prompted for a password to unlock the keyring, create a new collection, and so on. The prompter is usually provided by a system component. For now, we just skipped those tests.

For the server side, it was mostly about setting up a peer-to-peer connection between the server and the client:

let guid = zbus::Guid::generate();
let (p0, p1) = tokio::net::UnixStream::pair().unwrap();

let (client_conn, server_conn) = tokio::try_join!(
    // Client
    zbus::connection::Builder::unix_stream(p0).p2p().build(),
    // Server
    zbus::connection::Builder::unix_stream(p1)
        .server(guid)
        .unwrap()
        .p2p()
        .build(),
)
.unwrap();

Thanks to the design of the client library, we keep the low-level APIs under oo7::dbus::api, which allowed me to straightforwardly write a bunch of server-side tests already.

There are still a lot of tests that need to be written and a few missing bits to ensure oo7-daemon is in an acceptable shape to be proposed as an alternative to gnome-keyring.

Don't overdo it

The coverage report is not meant to be targeted at 100%. It’s not a video game. You should focus only on the critical parts of your code that must be tested. Testing a Debug impl or a From trait (if it is straightforward) is not really useful, other than giving you a small dose of dopamine from "achieving" something.

Till then, may your coverage never reach 100%.

Dev Log September 2025

Not as much as I wanted to do was done in September.

libopenraw

Extracting more of the calibration values for colour correction on DNG. Currently working on fixing the purple colour cast.

Added Nikon ZR and EOS C50.

ExifTool

Submitted some metadata updates to ExifTool. Because it's nice to have, and also because libopenraw uses some of these as autogenerated code: I have a Perl script to generate Rust code from it (it used to generate C++).

Niepce

Finally merged the develop branch with all the import dialog work, after having requested that it be removed from Damned Lies so as not to strain the translators, as there is a long way to go before we can freeze the strings.

Supporting cast

Among the number of packages I maintain / update on Flathub, LightZone is a digital photo editing application written in Java [1]. Updating to the latest runtime, 25.08, caused it to ignore the HiDPI setting. It will honour the GDK_SCALE environment variable, but this isn't set. So I wrote the small command line tool gdk-scale to output the value. See gdk-scale on gitlab. And another patch in the wrapper script.

HiDPI support remains a mess across the board. Fltk just recently gained support for it (it's used by a few audio plugins).

[1] Don't try this at home.

SO_PEERPIDFD Gets More Useful

A while ago I wrote about the limited usefulness of SO_PEERPIDFD for authenticating sandboxed applications. The core problem was simple: while pidfds gave us a race-free way to identify a process, we still had no standardized way to figure out what that process actually was - which sandbox it ran in, what application it represented, or what permissions it should have.

The situation has improved considerably since then.

cgroup xattrs

Cgroups now support user extended attributes. This feature allows arbitrary metadata to be attached to cgroup inodes using standard xattr calls.

We can change flatpak (or snap, or any other container engine) to create a cgroup for application instances it launches, and attach metadata to it using xattrs. This metadata can include the sandboxing engine, application ID, instance ID, and any other information the compositor or D-Bus service might need.
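
As a minimal sketch (the cgroup path and attribute names below are invented for illustration, not an agreed-upon convention), a sandbox engine could tag an instance's cgroup like this:

# Attach sandbox metadata to the app instance's cgroup as user xattrs
setfattr -n user.sandbox-engine -v flatpak /sys/fs/cgroup/user.slice/user-1000.slice/app-instance-42.scope
setfattr -n user.app-id -v org.gnome.TextEditor /sys/fs/cgroup/user.slice/user-1000.slice/app-instance-42.scope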

Every process belongs to a cgroup, and you can query which cgroup a process belongs to through its pidfd - completely race-free.

Standardized Authentication

Remember the complexity from the original post? Services had to implement different lookup mechanisms for different sandbox technologies:

  • For flatpak: look in /proc/$PID/root/.flatpak-info
  • For snap: shell out to snap routine portal-info
  • For firejail: no solution

All of this goes away. Now there’s a single path:

  1. Accept a connection on a socket
  2. Use SO_PEERPIDFD to get a pidfd for the client
  3. Query the client’s cgroup using the pidfd
  4. Read the cgroup’s user xattrs to get the sandbox metadata

This works the same way regardless of which sandbox engine launched the application.
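
In shell terms, the lookup side boils down to something like the sketch below; a real service would do this in-process starting from the pidfd (and re-check that the process is still alive afterwards), and the attribute names reuse the invented ones from the earlier example:

# $CLIENT_PID resolved from the pidfd returned by SO_PEERPIDFD
CGROUP_PATH=$(cut -d: -f3 /proc/"$CLIENT_PID"/cgroup)   # cgroup v2 format: "0::<path>"
getfattr -d /sys/fs/cgroup"$CGROUP_PATH"                # dumps the user.* xattrs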

A Kernel Feature, Not a systemd One

It’s worth emphasizing: cgroups are a Linux kernel feature. They have no dependency on systemd or any other userspace component. Any process can manage cgroups and attach xattrs to them. The process only needs appropriate permissions and is restricted to a subtree determined by the cgroup namespace it is in. This makes the approach universally applicable across different init systems and distributions.

To support non-Linux systems, we might even be able to abstract away the cgroup details, by providing a varlink service to register and query running applications. On Linux, this service would use cgroups and xattrs internally.

Replacing Socket-Per-App

The old approach - creating dedicated wayland, D-Bus, etc. sockets for each app instance and attaching metadata to the service which gets mapped to connections on that socket - can now be retired. The pidfd + cgroup xattr approach is simpler: one standardized lookup path instead of mounting special sockets. It works everywhere: any service can authenticate any client without special socket setup. And it’s more flexible: metadata can be updated after process creation if needed.

For compositor and D-Bus service developers, this means you can finally implement proper sandboxed client authentication without needing to understand the internals of every container engine. For sandbox developers, it means you have a standardized way to communicate application identity without implementing custom socket mounting schemes.

Jiri Eischmann

@jeischma

Fedora & CentOS at LinuxDays 2025

Another edition of LinuxDays took place in Prague last weekend – the country’s largest Linux event, drawing more than 1200 attendees – and as every year we had a Fedora booth there, this time also representing CentOS.

I was really glad that Tomáš Hrčka helped me staff the booth. I’m focused on the desktop part of Fedora and don’t follow the rest of the project in such detail. As a member of FESCo and the Fedora infra team, he has a great overview of what is going on in the project, and our knowledge complemented each other very well when answering visitors’ questions. I’d also like to thank Adellaide Mikova, who helped us tremendously despite not being a technical person.

This year I took our heavy 4K HDR display and showcased HDR support in Fedora Linux whose implementation was a multi-year effort for our team. I played HDR videos in two different video players (one that supports HDR and one that doesn’t), so that people could see a difference, and explained what needed to be implemented to make it work.

Another highlight of our booth were the laptops that run Fedora exceptionally well: Slimbook and especially Framework Laptop. Visitors were checking them out and we spoke about how the Fedora community works with the vendors to make sure Fedora Linux runs flawlessly on their laptops.

We also got a lot of questions about CentOS. We met quite a few people who were surprised that CentOS still exists. We explained to them that it lives on in the form of CentOS Stream and tried to dispel some of the common misconceptions surrounding it.

Exhausting as it is, I really enjoy going to LinuxDays; it’s also a great opportunity to explain things and get direct feedback from the community.

Servo GTK

I just checked and it seems that it has been 9 years since my last post in this blog :O

As part of my job at Amazon I started working on a GTK widget which will allow embedding a Servo webview inside a GTK application. This was mostly a research project, just to understand the current state of Servo and whether it is at a good enough state to migrate from WebKitGTK to it. I have to admit that it is always a pleasure to work with Rust and the great gtk-rs bindings. Servo, on the other hand, is not yet ready for production, or at least not for what we need in our product, but it was simple to embed and to get something running in just a few days. The community is also amazing: I had some problems along the way and they provided good suggestions to get me unblocked in no time.

This project can be found in the following git repo: https://github.com/nacho/servo-gtk

I also created some Issues with some tasks that can be done to improve the project in case that anyone is interested.

Finally, I leave you here the usual mandatory screenshot:

Debarshi Ray

@rishi

Ollama on Fedora Silverblue

I found myself dealing with various rough edges and questions around running Ollama on Fedora Silverblue for the past few months. These arise from the fact that there are a few different ways of installing Ollama, /usr is a read-only mount point on Silverblue, people have different kinds of GPUs or none at all, the program that’s using Ollama might be a graphical application in a Flatpak or part of the operating system image, and so on. So, I thought I’d document a few different use-cases in one place for future reference, and maybe someone will find it useful.

Different ways of installing Ollama

There are at least three different ways of installing Ollama on Fedora Silverblue. Each of those have their own nuances and trade-offs that we will explore later.

First, there’s the popular single command POSIX shell script installer:

$ curl -fsSL https://ollama.com/install.sh | sh

There is a manual step-by-step variant for those who are uncomfortable with running a script straight off the Internet. They both install Ollama in the operating system’s /usr/local or /usr or / prefix, depending on which one comes first in the PATH environment variable, and attempt to enable and activate a systemd service unit that runs ollama serve.

Second, there’s a docker.io/ollama/ollama OCI image that can be used to put Ollama in a container. The container runs ollama serve by default.

Finally, there’s Fedora’s ollama RPM.

Surprise

Astute readers might be wondering why I mentioned the shell script installer in the context of Fedora Silverblue at all, given that /usr is a read-only mount point. Won’t that break the script? Not really; or rather, the script does break, but not in the way one might expect.

Even though /usr is read-only on Silverblue, /usr/local is not, because it’s a symbolic link to /var/usrlocal, and Fedora defaults to putting /usr/local/bin earlier in the PATH environment variable than the other prefixes that the installer attempts to use, as long as pkexec(1) isn’t being used. This happy coincidence allows the installer to place the Ollama binaries in their right places.

The script does fail eventually, when attempting to create the systemd service unit to run ollama serve, because it tries to create an ollama user with /usr/share/ollama as its home directory, which can’t be created on the read-only /usr. However, this half-baked installation works surprisingly well as long as nobody is trying to use an AMD GPU.

NVIDIA GPUs work if the proprietary driver and nvidia-smi(1) are present in the operating system, which are provided by the kmod-nvidia and xorg-x11-drv-nvidia-cuda packages from RPM Fusion; and so does the CPU fallback.

Unfortunately, the results would be the same if the shell script installer were used inside a Toolbx container. It will fail to create the systemd service unit because it can’t connect to the system-wide instance of systemd.

Using AMD GPUs with Ollama is an important use-case. So, let’s see if we can do better than trying to manually work around the hurdles faced by the script.

OCI image

The docker.io/ollama/ollama OCI image requires the user to know what processing hardware they have or want to use. To use it only with the CPU without any GPU acceleration:

$ podman run \
    --name ollama \
    --publish 11434:11434 \
    --rm \
    --security-opt label=disable \
    --volume ~/.ollama:/root/.ollama \
    docker.io/ollama/ollama:latest

This will be used as the baseline to enable different kinds of GPUs. Port 11434 is the default port on which the Ollama server listens, and ~/.ollama is the default directory where it stores its SSH keys and artificial intelligence models.
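
As a quick sanity check, not part of the set-up itself, the server can be poked on the published port and a model can be run through the bundled CLI; the model name below is only an example:

$ curl http://localhost:11434/
Ollama is running
$ podman exec -it ollama ollama run llama3.2 "Hello"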

To enable NVIDIA GPUs, the proprietary driver and nvidia-smi(1) must be present on the host operating system, as provided by the kmod-nvidia and xorg-x11-drv-nvidia-cuda packages from RPM Fusion. The user space driver has to be injected into the container from the host using NVIDIA Container Toolkit, provided by the nvidia-container-toolkit package from Fedora, for Ollama to be able to use the GPUs.

The first step is to generate a Container Device Interface (or CDI) specification for the user space driver:

$ sudo nvidia-ctk cdi generate --output /etc/cdi/nvidia.yaml
…
…

Then the container needs to be run with access to the GPUs, by adding the --gpus option to the baseline command above:

$ podman run \
    --gpus all \
    --name ollama \
    --publish 11434:11434 \
    --rm \
    --security-opt label=disable \
    --volume ~/.ollama:/root/.ollama \
    docker.io/ollama/ollama:latest
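
To sanity-check that the GPUs are actually visible inside the container, one option is to run nvidia-smi(1) in it; the tool should be available there once the user space driver has been injected:

$ podman exec -it ollama nvidia-smi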

AMD GPUs don’t need the driver to be injected into the container from the host, because it can be bundled with the OCI image. Therefore, instead of generating a CDI specification for them, an image that bundles the driver must be used. This is done by using the rocm tag for the docker.io/ollama/ollama image.

Then the container needs to be run with access to the GPUs. However, the --gpus option only works for NVIDIA GPUs, so the specific devices need to be spelled out by adding --device options to the baseline command above:

$ podman run \
    --device /dev/dri \
    --device /dev/kfd \
    --name ollama \
    --publish 11434:11434 \
    --rm \
    --security-opt label=disable \
    --volume ~/.ollama:/root/.ollama \
    docker.io/ollama/ollama:rocm

However, because of how AMD GPUs are programmed with ROCm, it’s possible that some decent GPUs might not be supported by the docker.io/ollama/ollama:rocm image. The ROCm compiler needs to explicitly support the GPU in question, and Ollama needs to be built with such a compiler. Unfortunately, the binaries in the image leave out support for some GPUs that would otherwise work. For example, my AMD Radeon RX 6700 XT isn’t supported.

This can be verified with nvtop(1) in a Toolbx container. If there’s no spike in GPU usage or its memory, then it’s not being used.
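
For reference, a rough way to get nvtop(1) going in a Toolbx container, assuming the default Fedora image for the container:

$ toolbox create
$ toolbox run sudo dnf install --assumeyes nvtop
$ toolbox run nvtop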

It would be good to support as many AMD GPUs as possible with Ollama. So, let’s see if we can do better.

Fedora’s ollama RPM

Fedora offers a very capable ollama RPM, as far as AMD GPUs are concerned, because Fedora’s ROCm stack supports a lot more GPUs than other builds out there. It’s possible to check if a GPU is supported either by using the RPM and keeping an eye on nvtop(1), or by comparing the name of the GPU shown by rocminfo with those listed in the rocm-rpm-macros RPM.

For example, according to rocminfo, the name for my AMD Radeon RX 6700 XT is gfx1031, which is listed in rocm-rpm-macros:

$ rocminfo
ROCk module is loaded
=====================    
HSA System Attributes    
=====================    
Runtime Version:         1.1
Runtime Ext Version:     1.6
System Timestamp Freq.:  1000.000000MHz
Sig. Max Wait Duration:  18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model:           LARGE                              
System Endianness:       LITTLE                             
Mwaitx:                  DISABLED
DMAbuf Support:          YES

==========               
HSA Agents               
==========               
*******                  
Agent 1                  
*******                  
  Name:                    AMD Ryzen 7 5800X 8-Core Processor 
  Uuid:                    CPU-XX                             
  Marketing Name:          AMD Ryzen 7 5800X 8-Core Processor 
  Vendor Name:             CPU                                
  Feature:                 None specified                     
  Profile:                 FULL_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        0(0x0)                             
  Queue Min Size:          0(0x0)                             
  Queue Max Size:          0(0x0)                             
  Queue Type:              MULTI                              
  Node:                    0                                  
  Device Type:             CPU                                
…
…
*******                  
Agent 2                  
*******                  
  Name:                    gfx1031                            
  Uuid:                    GPU-XX                             
  Marketing Name:          AMD Radeon RX 6700 XT              
  Vendor Name:             AMD                                
  Feature:                 KERNEL_DISPATCH                    
  Profile:                 BASE_PROFILE                       
  Float Round Mode:        NEAR                               
  Max Queue Number:        128(0x80)                          
  Queue Min Size:          64(0x40)                           
  Queue Max Size:          131072(0x20000)                    
  Queue Type:              MULTI                              
  Node:                    1                                  
  Device Type:             GPU
…
…
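
One rough way to do the comparison against rocm-rpm-macros from the command line, assuming the package is installed, is to grep its macro files for the gfx name reported by rocminfo:

$ rpm -ql rocm-rpm-macros | xargs grep -ls gfx1031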

The ollama RPM can be installed inside a Toolbx container, or it can be layered on top of the base registry.fedoraproject.org/fedora image to replace the docker.io/ollama/ollama:rocm image:

FROM registry.fedoraproject.org/fedora:42
RUN dnf --assumeyes upgrade
RUN dnf --assumeyes install ollama
RUN dnf clean all
ENV OLLAMA_HOST=0.0.0.0:11434
EXPOSE 11434
ENTRYPOINT ["/usr/bin/ollama"]
CMD ["serve"]
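
A minimal sketch of building and running that image, assuming the snippet above is saved as a Containerfile (the localhost/ollama-rocm-fedora tag is arbitrary):

$ podman build --file Containerfile --tag localhost/ollama-rocm-fedora .
$ podman run \
    --device /dev/dri \
    --device /dev/kfd \
    --name ollama \
    --publish 11434:11434 \
    --rm \
    --security-opt label=disable \
    --volume ~/.ollama:/root/.ollama \
    localhost/ollama-rocm-fedora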

Unfortunately, for obvious reasons, Fedora’s ollama RPM doesn’t support NVIDIA GPUs.

Conclusion

From the purist perspective of not touching the operating system’s OSTree image, and of being able to easily remove or upgrade Ollama, using an OCI container is the best option for running Ollama on Fedora Silverblue. Tools like Podman offer a suite of features for managing OCI containers and images that goes far beyond what the POSIX shell script installer can hope to offer.

It seems that the realities of GPUs from AMD and NVIDIA prevent the use of a single OCI image, if we want to maximize our hardware support, and force the use of slightly different Podman commands and associated set-up. We have to create our own image using Fedora’s ollama RPM for AMD, and use the docker.io/ollama/ollama:latest image with the NVIDIA Container Toolkit for NVIDIA.

Hans de Goede

@hansdg

Fedora 43 will ship with FOSS Meteor, Lunar and Arrow Lake MIPI camera support

Good news: the just-released 6.17 kernel has support for the IPU7 CSI2 receiver, and the missing USBIO drivers have recently landed in linux-next. I have backported the USBIO drivers plus a few other camera fixes to the Fedora 6.17 kernel.

I’ve also prepared an updated libcamera 0.5.2 Fedora package with support for IPU7 (Lunar Lake) CSI2 receivers, as well as a set of backported upstream SwStats and AGC fixes, which fix various crashes and the bad flicker MIPI camera users have been hitting with libcamera 0.5.2.

Together these 2 updates should make Fedora 43's FOSS MIPI camera support work on most Meteor Lake, Lunar Lake and Arrow Lake laptops!

If you want to give this a try, install or upgrade to the Fedora 43 beta and install all updates. If you’ve installed RPM Fusion’s binary IPU6 stack, please run:

sudo dnf remove akmod-intel-ipu6 'kmod-intel-ipu6*'

to remove it, as it may interfere with the FOSS stack, and finally reboot. Please first try with qcam:

sudo dnf install libcamera-qcam
qcam

which only tests libcamera. After that, give apps which use the camera through PipeWire a try, like GNOME’s "Camera" app (snapshot) or video-conferencing in Firefox.

Note that snapshot on Lunar Lake triggers a bug in the LNL Vulkan code; to avoid this, start snapshot from a terminal with:

GSK_RENDERER=gl snapshot

If you have a MIPI camera which still does not work, please file a bug following these instructions and drop me an email with the bugzilla link at hansg@kernel.org.

Investigating a forged PDF

I had to rent a house for a couple of months recently, which is long enough in California that it pushes you into proper tenant protection law. As landlords tend to do, they failed to return my security deposit within the 21 days required by law, having already failed to provide the required notification that I was entitled to an inspection before moving out. Cue some tedious argumentation with the letting agency, and eventually me threatening to take them to small claims court.

This post is not about that.

Now, under Californian law, the onus is on the landlord to hold and return the security deposit; the agency has no role in this. The only reason I was talking to them is that my lease didn't mention the name or address of the landlord (another legal violation, but the outcome is just that you get to serve the landlord via the agency). So it was a bit surprising when I received an email from the owner of the agency informing me that they did not hold the deposit and so were not liable; I already knew this.

The odd bit about this, though, is that they sent me another copy of the contract, asserting that it made it clear that the landlord held the deposit. I read it, and instead found a clause reading "SECURITY: The security deposit will secure the performance of Tenant’s obligations. IER may, but will not be obligated to, apply all portions of said deposit on account of Tenant’s obligations. Any balance remaining upon termination will be returned to Tenant. Tenant will not have the right to apply the security deposit in payment of the last month’s rent. Security deposit held at IER Trust Account.", where IER is International Executive Rentals, the agency in question. Why send me a contract that says you hold the money while you're telling me you don't? And then I read further down and found this:
Text reading "ENTIRE AGREEMENT: The foregoing constitutes the entire agreement between the parties and may be modified only in writing signed by all parties. This agreement and any modifications, including any photocopy or facsimile, may be signed in one or more counterparts, each of which will be deemed an original and all of which taken together will constitute one and the same instrument. The following exhibits, if checked, have been made a part of this Agreement before the parties’ execution: ۞ Exhibit 1: Lead-Based Paint Disclosure (Required by Law for Rental Property Built Prior to 1978) ۞ Addendum 1: The security deposit will be held by (name removed) and applied, refunded, or forfeited in accordance with the terms of this lease agreement."
Ok, fair enough, there's an addendum that says the landlord has it (I've removed the landlord's name, it's present in the original).

Except. I had no recollection of that addendum. I went back to the copy of the contract I had and discovered:
The same text as the previous picture, but addendum 1 is empty
Huh! But obviously I could just have edited that to remove it (there's no obvious reason for me to, but whatever), and then it'd be my word against theirs. However, I'd been sent the document via RightSignature, an online document signing platform, and they'd added a certification page that looked like this:
A Signature Certificate, containing a bunch of data about the document including a checksum of the original
Interestingly, the certificate page was identical in both documents, including the checksums, despite the content being different. So, how do I show which one is legitimate? You'd think given this certificate page this would be trivial, but RightSignature provides no documented mechanism whatsoever for anyone to verify any of the fields in the certificate, which is annoying but let's see what we can do anyway.

First up, let's look at the PDF metadata. pdftk has a dump_data command that dumps the metadata in the document, including the creation date and the modification date. My file had both set to identical timestamps in June, both listed in UTC, corresponding to the time I'd signed the document. The file containing the addendum? The same creation time, but a modification time of this Monday, shortly before it was sent to me. This time, the modification timestamp was in Pacific Daylight Time, the timezone currently observed in California. In addition, the data included two ID fields, ID0 and ID1. In my document both were identical; in the one with the addendum, ID0 matched mine but ID1 was different.

These ID tags are intended to be some form of representation (such as a hash) of the document. ID0 is set when the document is created and should not be modified afterwards; ID1 is initially identical to ID0, but changes when the document is modified. This is intended to allow tooling to identify whether two documents are modified versions of the same document. The identical ID0 indicated that the document with the addendum was originally identical to mine, and the different ID1 indicated that it had been modified.
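
For anyone wanting to repeat the metadata check, it boils down to something like this (the file names here are made up; pdftk reports the ID fields as PdfID0 and PdfID1), with the dates and IDs then compared between the two dumps:

$ pdftk signed-by-me.pdf dump_data | grep -E 'Info|PdfID'
$ pdftk sent-by-agency.pdf dump_data | grep -E 'Info|PdfID'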

Well, ok, that seems like a pretty strong demonstration. I had the "I have a very particular set of skills" conversation with the agency, pointed out that these facts were an extremely strong indication that my copy was authentic and theirs wasn't, and they responded that the document was "re-sealed" every time it was downloaded from RightSignature and that this would explain the modifications. This doesn't seem plausible, but it's an argument. Let's go further.

My next move was pdfalyzer, which allows you to pull a PDF apart into its component pieces. This revealed that the documents were identical, other than page 3, the one with the addendum. This page included tags entitled "touchUp_TextEdit", evidence that the page had been modified using Acrobat. But in itself, that doesn't prove anything - obviously it had been edited at some point to insert the landlord's name, it doesn't prove whether it happened before or after the signing.

But in the process of editing, Acrobat appeared to have renamed all the font references on that page into a different format. Every other page had a consistent naming scheme for the fonts, and they matched the scheme in the page 3 I had. Again, that doesn't tell us whether the renaming happened before or after the signing. Or does it?

You see, when I completed my signing, RightSignature inserted my name into the document, and did so using a font that wasn't otherwise present in the document (Courier, in this case). That font was named identically throughout the document, except on page 3, where it was named in the same manner as every other font that Acrobat had renamed. Given the font wasn't present in the document until after I'd signed it, this is proof that the page was edited after signing.

But eh this is all very convoluted. Surely there's an easier way? Thankfully yes, although I hate it. RightSignature had sent me a link to view my signed copy of the document. When I went there it presented it to me as the original PDF with my signature overlaid on top. Hitting F12 gave me the network tab, and I could see a reference to a base.pdf. Downloading that gave me the original PDF, pre-signature. Running sha256sum on it gave me an identical hash to the "Original checksum" field. Needless to say, it did not contain the addendum.
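
In other words, the final check is a single command against the file fetched from the network tab (assuming it was saved as base.pdf), with the output compared against the "Original checksum" field on the certificate page:

$ sha256sum base.pdf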

Why do this? The only explanation I can come up with (and I am obviously guessing here, I may be incorrect!) is that International Executive Rentals realised that they'd sent me a contract which could mean that they were liable for the return of my deposit, even though they'd already given it to my landlord, and after realising this added the addendum, sent it to me, and assumed that I just wouldn't notice (or that, if I did, I wouldn't be able to prove anything). In the process they went from an extremely unlikely possibility of having civil liability for a few thousand dollars (even if they were holding the deposit it's still the landlord's legal duty to return it, as far as I can tell) to doing something that looks extremely like forgery.

There's a hilarious followup. After this happened, the agency offered to do a screenshare with me, showing them logging into RightSignature and opening the signed file with the addendum, and then proceeded to do so. One minor problem: the "Send for signature" button was still there, just below a field saying "Uploaded: 09/22/25". I asked them to search for my name, and it popped up two hits: one marked draft, one marked completed. The one marked completed? Didn't contain the addendum.

Arun Raghavan

@arunsr

Asymptotic on hiatus

Asymptotic was started 6 years ago, when I wanted to build something that would be larger than just myself.

We’ve worked with some incredible clients in this time, on a wide range of projects. I would be remiss to not thank all the teams that put their trust in us.

In addition to working on interesting challenges, our goal was to make sure we were making a positive impact on the open source projects that we are part of. I think we truly punched above our weight class (pardon the boxing metaphor) on this front: all the upstream work we have done stands testament to that.

Of course, the biggest single contributor to what we were able to achieve is our team. My partner, Deepa, was instrumental in shaping how the company was formed and run. Sanchayan (who took a leap of faith in joining us first) and Taruntej were stellar colleagues and friends on this journey.

It’s been an incredibly rewarding experience, but the time has come to move on to other things, and we have now paused operations. I’ll soon write about some recent work and what’s next.

Alley Chaggar

@AlleyChaggar

Final Report

Intro:

Hi everyone, it’s the end of GSoC! I had a great experience throughout this whole process and I’ve learned so much. This is essentially the ‘final report’ for GSoC, but not my final report for this project in general by a long shot. I still have so much more I want to do, but here is what I’ve done so far.

Project:

JSON, YAML, and/or XML emitting and parsing integration into Vala’s compiler.

Mentor:

I would like to thank Lorenz Wildberg for being my mentor for this project, as well as the Vala community.

Description:

The main objective of this project is to integrate direct syntax support for parsing and emitting JSON, XML, and/or YAML formats in Vala. This will cut back on the boilerplate code, making it more user-friendly and efficient for developers working with these formatting languages.

What I’ve done:

Research

  • I’ve done significant research in both JSON and YAML parsing and emitting in various languages like C#, Java, Rust and Python.
  • Looked into how Vala currently handles JSON using JSON GLib classes, and I then modelled the C code after the examples I collected.
  • Modelled the JSON module after other modules in the codegen, mainly the DBus, GVariant, GObject, and GTK ones.

Custom JSON Overrides and Attribute

  • Created Vala syntactic sugar, specifically a [JSON] attribute to do serialization.
  • Built support for custom overrides, that is, mapping JSON keys to differently named fields/properties.
  • Reduced boilerplate by generating C code behind the scenes.

Structs

  • I’ve created both Vala functions to deserialize and serialize structs using JSON boxed functions.
  • I created a Vala generate_struct_serialize_func function to create a C code function called _%s_serialize_func to serialize fields.
  • I then created a Vala function generate_struct_to_json to create a C code function called _json_%s_serialize_mystruct to fully serialize the struct by using boxed serialize functions.

  • I created a Vala generate_struct_deserialize_func function to create a C code function called _%s_deserialize_func to deserialize fields.
  • I then created a Vala function generate_struct_from_json to create a C code function called _json_%s_deserialize_mystruct to fully deserialize the struct by using boxed deserialize functions.

GObjects

  • I’ve created both Vala functions to deserialize and serialize GObjects using json_gobject_serialize and JSON generator.
  • I then created a Vala function generate_gclass_to_json to create a C code function called _json_%s_serialize_gobject_myclass to fully serialize GObjects.

  • I created a Vala generate_gclass_from_json function to create a C code function called _json_%s_deserialize_class to deserialize fields.

Non-GObjects

  • I’ve done serializing of non-GObjects using JSON GLib’s builder functions.
  • I then created a Vala function generate_class_to_json to create a C code function called _json_%s_serialize_myclass to fully serialize non-GObject classes that don’t inherit from Object or Json.Serializable.

Future Work:

Research

  • Research still needs to be put into integrating XML and determining which library to use.
  • Integrating YAML, and eventually other formatting languages beyond JSON, YAML, and XML.

Custom Overrides and Attributes

  • I want to create more specialized attributes for JSON that only do serialization or deserialization, such as [JsonDeserialize] and [JsonSerialize] or something similar.
  • The [JSON] attribute needs to do both deserializing and serializing, and at the moment the deserializing code has problems.
  • XML, YAML, and other formatting languages will follow very similar attribute patterns: [Yaml], [Xml], [Json].

Bugs

  • The unref functions in the generated C code are being called on NULLs, which shouldn’t be the case. They need the proper types going through.
  • Deserializing prompts a redefinition that needs to be corrected.
  • Overridden GObject properties need to have setters made to be able to get the values.

Links