Looking back at this month, it’s been very busy indeed. In fact, so busy that I’m going to split this status update into two parts.
Possibly this is the month where I worked the whole time yet did almost no actual software development. I guess it’s part of getting old that you spend more time organising people and sharing knowledge than actually making new things yourself. But I did still manage one hack I’m very proud of, which I go into below.
This month we wrapped up the Outreachy internship on the GNOME OS end-to-end tests. I published a writeup here on the Codethink blog. You can also read first hand from Dorothy and Tanju. We had a nice end-of-internship party on meet.gnome.org last week, and we are trying to arrange US visas & travel sponsorship so we can all meet up at GUADEC in Denver this summer.
There are still a few loose ends from the internship. Firstly, if you need people for Linux QA testing work, or anything else, please contact Dorothy and Tanju who are both now looking for jobs, remote or otherwise.
Secondly, the GNOME OS openQA tests currently fail about 50% of the time, so they’re not very useful. This is due to a race condition that causes the initial setup process to fail so the machine doesn’t finish booting.
It’s the sort of problem that takes days rather than hours to diagnose, so I haven’t made much of a dent in it so far. The good news is that, as announced on Friday, there is now a team from Codethink working full time on GNOME OS, funded partly by the STF grant and partly by Codethink, and this issue is high on the list of priorities to fix.
I’m not directly involved in this work, as I am tied up on a multi-year client project that has nothing to do with open source desktops, but of course I am helping where I can, and hopefully the end-to-end tests will be back on top form soon.
If you’ve looked at the GNOME openQA tests you’ll see that I wrote a small CLI tool to drive the openQA test runner and egotistically named it after myself. (The name also makes it clear that it’s not something built or supported by the main openQA project).
I wrote ssam_openqa in Rust, so it’s already pretty reliable, but until now it lacked proper integration tests. The blocker was this: openQA is for testing whole operating systems, which are large and slow to deploy. We need to use the real openQA test runner in the ssam_openqa tests, otherwise the tests don’t really prove that things are working. But how can we get a suitably minimal openQA scenario?
During some downtime on my last trip to Manchester I got all the pieces in place. First, a Buildroot project to build a minimal Linux ISO image, with just four components:
The output is 6MB – small enough to commit straight to Git. (By the way, using the default GNU libc the image size increases to 15MB!)
The next step was to make a minimal test suite. This is harder than it sounds because there isn’t documentation on writing openQA test suites “from the ground up”. By copying liberally from the openSUSE tests I got it down to the following pieces:
- config/scenario_definitions.yaml, with QEMU and machine config
- lib/minimaldistribution.pm, defining two consoles that use QEMU’s virtio terminal interface
- lib/serial_terminal.pm, which implements a helper function to log in as root on the virtio terminal
- tests/minimal.pm, a simple test that runs a command and asserts that it exits with code 0 (success)
- main.pm, the entry point for the test runner

That’s it – a fully working test that can boot Linux and run commands.
The whole point is that we don’t need the openQA web UI in this workflow, so it’s much more comfortable for commandline-driven development. You can of course run the same test suite with the web UI when you want.
The final piece for ssam_openqa is a set of integration tests that trigger these tests and assert that they run and that we get suitable messages from the frontend, for now implemented in tests/real.rs.
I’m happy that ssam_openqa is less likely to break now when I hack on it, but I’m actually more excited about having figured out how to make a super minimal openQA test.
The openQA test runner, isotovideo, does something similar in its test suite using TinyCore Linux. I didn’t reuse this, firstly because it uses the Linux framebuffer console instead of the virtio terminal, which makes it impossible to read the output of commands in tests; and secondly because it’s got a lot of pieces that I’m not interested in. You can see it here anyway.
Have you used ssam_openqa yet? Let me know if you have! It’s fun to be able to interact with and debug an entire GNOME OS VM using this tool, and I have a few more feature ideas to make this even more fun in future.
BPF Performance Tools author and all around profiling expert Brendan Gregg wrote a blog post that sums up what was in my Fedora Magazine article quite well.
Though he has this to say about Fedora, who made this groundbreaking change, and Ubuntu, who followed along afterwards:
The main users of this change are enterprise Linux. Back-end servers.
Which is true in the sense of absolute numbers. But I must say it’s been extremely valuable on the desktop.
I can’t imagine having contributed to making VTE (a code-base I was unfamiliar with) twice as fast without it, especially when that work happened over the course of about two weeks. It’s so much easier to do performance work when one monitor shows usable profiler flamegraphs and the other shows code.
The wash/rinse/repeat cycle has gotten really good on Fedora and our performance future is bright.
We hit a major milestone this week with the long-worked-on adoption of PipeWire camera support finally starting to land!
Not long ago Firefox was released with experimental PipeWire camera support thanks to the great work by Jan Grulich.
Then this week OBS Studio shipped with PipeWire camera support thanks to the great work of Georges Stavracas, who cleaned up the patches and pushed to get them merged, based on earlier work by himself, Wim Taymans and columbarius. This means we now have two major applications out there that can use PipeWire for camera handling, and thus two applications whose video streams can be interacted with through patchbay applications like Helvum and qpwgraph.
These applications are important and central enough that having them use PipeWire is in itself useful, but they will now also provide two examples of how to do it for application developers looking to add PipeWire camera support to their own applications; there is no better documentation than working code.
The PipeWire support is also paired with camera portal support. The use of the portal also means we are getting closer to being able to fully sandbox media applications in Flatpaks which is an important goal in itself. Which reminds me, to test out the new PipeWire support be sure to grab the official OBS Studio Flatpak from Flathub.
For those wondering, work is also underway to bring this into the Chromium and Google Chrome browsers, where Michael Olbrich from Pengutronix has been pushing to get patches written and merged. He gave a talk about this work at FOSDEM last year, as you can see from these slides, and this patch is the last step to get this working there too.
The move to PipeWire also prepared us for the new generation of MIPI cameras being rolled out in new laptops, and helps push work on supporting those cameras towards libcamera, the new library for dealing with the new generation of complex cameras. This of course ties in well with the work that Hans de Goede and Kate Hsuan have been doing recently, along with Bryan O’Donoghue from Linaro, on providing an open source driver for MIPI cameras, and of course the incredible work by Laurent Pinchart and Kieran Bingham from Ideas on Board on libcamera itself.
The PipeWire support is of course fresh, and I am sure we will find bugs and corner cases that need fixing as more people test out the functionality in both Firefox and OBS Studio, and there are some interface annoyances we are working to resolve. For instance, since PipeWire supports both V4L2 and libcamera as backends, you currently get double entries in your selection dialogs for most of your cameras. WirePlumber has implemented de-duplication code which will ensure that only the libcamera listing shows for cameras supported by both V4L2 and libcamera, but it is only part of the development version of WirePlumber and thus will land in Fedora Workstation 40, so until that is out you will have to deal with the duplicate options.
Another good recent PipeWire tidbit: with the PipeWire 1.0.4 release, PipeWire maintainer Wim Taymans also fixed up the FireWire FFADO support. The FFADO support had been in there for some time, but after seeing Venn Stone do some thorough tests and find issues, we decided it was time to bite the bullet and buy some second-hand FireWire hardware so Wim could test and verify it himself.
So all in all it’s been a great few weeks for PipeWire and for Linux audio AND video. If you are an application maintainer, be sure to look at how you can add PipeWire camera support to your application, and of course get that application packaged up as a Flatpak for people using Fedora Workstation and other distributions to consume.
Well, another cycle has passed.
This one was fairly slow, but nevertheless has a major new feature.
The biggest feature this time is the new dialog widgetry.
Traditionally, dialogs have been separate windows. While this approach generally works, we never figured out how to reasonably support that on mobile. There was a downstream patch for auto-maximizing dialogs, which in turn required them to be resizable, which is not great on desktop, and the patch was hacky and never really supported upstream.
Another problem is close buttons – without them you’d need to go to the overview to close every dialog, which is why mobile gnome-shell doesn’t hide close buttons at all at the moment. Ideally we want to keep them in dialogs, but be able to remove them everywhere else.
While it would be possible to have shell present dialogs differently, another approach is to move them to the client instead. That’s not a new approach, here are some existing examples:
This has both upsides and downsides. One upside is that the toolkit/app has much more control over them. For example, it’s very easy to ensure their size doesn’t exceed the parent window. While this is possible with windows (AdwMessageDialog does this), it’s hacky and can still break fairly easily with e.g. maximize – in fact, I’m not confident it works across compositors and in both Wayland and X11.
Having dialogs not exceed the parent’s size means not needing to limit their size quite so aggressively – previously it was needed so that the dialog doesn’t get ridiculously large on top of a small window.
The dimming behind the dialog can also vary between light and dark styles – shell cannot do that because it doesn’t know if this particular window is light or dark, only what the whole system prefers.
In future this should also allow to support per-tab dialogs. For apps like web browsers, a background tab spawning a dialog that takes over the whole window is not great.
Meanwhile the main downside is the same thing as was listed in upsides: these dialogs cannot exceed the parent window’s size. Sometimes it’s still needed, e.g. if the parent window is really small.
So, how does that help on mobile? Well, aside from just implementing the existing size constraints on AdwMessageDialog more cleanly, it allows to present these dialogs as bottom sheets on mobile, instead of centered floating sheets.
A previous design presented dialogs as pages with back buttons, but that had many other problems, especially in small windows on desktop. For example, what happens if you close the window? A dialog and a “regular” subpage would look identical, so you’d probably expect the close button to close the entire window. But what if it’s floating above a larger window?
Bottom sheets avoid this issue – you still see the parent window with its own close button, so it’s obvious that they are closed separately – while still being allowed to take full width like a subpage.
They can also be swiped down, though because of GTK limitations this does not work together with scrolling content. It’s still possible to swipe down from header bar or the empty space above the sheet.
And the fact they are attached to the bottom edge makes them easier to reach on huge phones.
Meanwhile, AdwHeaderBar always shows a close button within dialogs, regardless of the system layout. The only hint it takes from the system is whether to display the close button on the right or left side.
For the most part they are used similarly to GtkWindow. The main differences are with presenting and closing dialogs.
The :transient-for property has been replaced with a parameter in adw_dialog_present(). It also doesn’t necessarily take a window anymore, but can accept any widget within that window as well. Currently it just fetches the root widget, but once we have per-tab dialogs, that can be controlled with a simple flag instead of needing a new variant of adw_dialog_present() that would take a tab page instead of a window.
The ::close-request signal has been replaced as well. Because the dialogs can be swiped down on mobile, we need to know whether they can be closed before the gesture starts. So, instead there’s a :can-close property that apps set ahead of time if there’s unsaved data or some other reason to prevent closing.

For close confirmation, there’s a ::close-attempt signal, which will be fired when trying to close a dialog using a close button or a shortcut while :can-close is set to FALSE (or when calling adw_dialog_close()). For actual closing, there’s ::closed instead.
Finally, adw_dialog_force_close() closes the dialog while ignoring :can-close. It can be used to close the dialog after confirmation without needing to fiddle with :can-close or repeat ::close-attempt emissions.
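Putting those pieces together, here is a minimal sketch of the close-confirmation flow in Python with PyGObject, assuming libadwaita 1.5; the widget names and strings are illustrative rather than taken from the release:

import gi
gi.require_version("Adw", "1")
gi.require_version("Gtk", "4.0")
from gi.repository import Adw, Gtk

def present_editor_dialog(parent: Gtk.Widget):
    # :can-close is FALSE, so close buttons, shortcuts and swipe gestures
    # emit ::close-attempt instead of closing the dialog outright
    dialog = Adw.Dialog(title="Editor", can_close=False)

    def on_close_attempt(dlg):
        alert = Adw.AlertDialog(heading="Save changes?")
        alert.add_response("discard", "Discard")
        alert.add_response("save", "Save")

        def on_response(_alert, response):
            # a real app would save first for the "save" response;
            # force_close() ignores :can-close, so there is no need
            # to flip the property before closing
            dlg.force_close()

        alert.connect("response", on_response)
        alert.present(dlg)  # dialogs can be parented to other dialogs

    dialog.connect("close-attempt", on_close_attempt)
    dialog.present(parent)  # any widget within the parent window works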
If this works well, AdwWindow may have something similar in future.
The rest is fairly straightforward and is modelled after GtkWindow. See the AdwDialog docs and migration guide for more details.
Since AdwPreferencesWindow and other widgets can’t be ported to new dialogs without a significant API break, they have been replaced:

- AdwPreferencesWindow with AdwPreferencesDialog
- AdwAboutWindow with AdwAboutDialog
- AdwMessageDialog with AdwAlertDialog
For the most part they are identical, with a few differences:

- AdwPreferencesDialog has search disabled by default, and gets rid of the deprecated subpage API
- AdwAlertDialog can scroll its contents, so apps that add their own scrolled windows may want to remove them

Since the new widgets landed right at the end of the cycle, the old widgets are not deprecated yet. However, they will be deprecated next cycle, so it’s recommended to migrate your apps anyway.
Standalone bottom sheets (like in audio players) are not available yet either, but will be in future.
Traditionally, dialogs have been done via GtkDialog, which handled this automatically. But for the last few years apps have been steadily moving away from GtkDialog, and by now it’s deprecated. While that’s not really a problem on its own, one thing that GtkDialog was doing automatically and custom dialogs don’t is closing when pressing Esc. While it’s pretty easy to add that manually, a lot of apps forget to do so.
But since we have dedicated dialog API again, Esc to close is once again automatic.
Some dialogs don’t have a parent window. Those are still presented as a window. Note that it still doesn’t work well on mobile: while there will be a close button, the sizing will work just as badly as before, so it’s recommended to avoid them.
Dialogs will also be presented as a window if you try to add them to a parent that can’t host dialogs (anything that’s not an AdwWindow or AdwApplicationWindow), or if the parent is not resizable. The reason for the last one is to accommodate apps like Emblem, which has a small non-resizable window where dialogs wouldn’t fully fit, and since it’s non-resizable, it doesn’t work on mobile anyway.
Since we have the window-backed mode, it would be fairly easy to support that preference… except there’s no way to read it from sandboxed apps.
This approach obviously doesn’t work for portals, since they are in a separate process. We do have a plan for them, involving a private protocol in mutter, but it didn’t make it for 46. So, next time.
Those will be replaced as well, but it takes time. For now, yes, GtkShortcutsWindow etc. won’t match other dialogs.
As usual, there are some smaller changes.

- Added the :text-length property to AdwEntryRow.
- AdwMessageDialog now has remove_response(). While this widget is to be deprecated, AdwAlertDialog has an equivalent as well.
- AdwBreakpointBin now allows to programmatically remove breakpoints.
- AdwSwipeTracker now has a flag to allow swiping over header bars – used in bottom sheets.

As always, thanks to all the contributors who helped to make this release happen.
It's that time again, a new GNOME release is just around the corner.
The vector map still needs to be explicitly enabled via the “layers menu” (the second headerbar button from the left). This also requires the backing installation of libshumate to be built with vector renderer support (which is the case when using the Flatpak from Flathub; libshumate also defaults to building the vector renderer as of the 1.2.0 release, so distributions should likely have it enabled in their 46 installations).
The current plan looks like we're leaning towards flipping it on by default after the 46 release, so by 47 it will probably mean the old raster tiles from openstreetmap.org will be retired.
Also icons on the map (such as POIs) are now directly clickable. And labels should be localized to the user's language (when the appropriate language tags are available in the OpenStreetMap data).
The favorites menu has also gotten a revamp. Instead of just showing a greyed-out inactive button when there are no favorite places, it now has an “empty state” hinting at the ability to “star” places.
And favorites can be removed directly from the list without having to open them (and animate to that place to show the bubble).
Update on what happened across the GNOME project in the week from March 08 to March 15.
Sonny announces
As part of the GNOME STF (Sovereign Tech Fund) initiative, a number of community members are working on infrastructure related projects.
Here are the highlights for the past week:
We have been working hard on helping with and solving last-minute issues for GNOME 46. This is the first GNOME release since we started the GNOME STF initiative, and we are very excited about our work rolling out to millions of users.
Sophie opened a PR to support git dependencies in the Cargo buildstream plugin. This will make it much easier to work with GNOME core applications written in Rust.
Julian drafted images/sound support for notification portal V2.
Matt now has an end-to-end prototype for the Wayland-native accessibility stack he’s been working on. He published an update and instructions to run it.
Jonas landed New gestures (part 2): Introduce ClutterGesture. This is one of the building blocks present in the GNOME Shell mobile project that we are working on upstreaming.
Alice released libadwaita 1.5
Sam made a website for Orca to replace the wiki page.
This week we welcome and thank Codethink for partnering with us. Codethink has been a long time supporter of the GNOME project and will be helping us improve developer and quality assurance tooling; with a focus on immutable / image based operating systems.
Building blocks for modern GNOME apps using GTK4.
Alice (she/her) announces
Libadwaita 1.5.0 is out! See the announcement blog post for details
A simple calendar application.
Hari Rana | TheEvilSkeleton (any/all) announces
Daniel Garcia Moreno submitted several merge requests to GNOME Calendar that allowed us to close 25 timezone-related issues! All of these changes are expected to land in GNOME 46.
Convert and manipulate images.
Khaleel Al-Adhami says
Switcheroo now supports exporting multiple images into one PDF file in update 2.1.0!
Keep your data safe.
Sophie (she/her) announces
Pika Backup 0.7.1 is out. It fixes a bug that prevented backup processes from lowering their CPU priority. A UI issue with scheduled backups was fixed as well.
If you missed the 0.7 release because we missed posting it on TWIG, you can learn more about it in my blog post. There is also a great video by Dreams of Autonomy that gives a wonderful introduction to Pika Backup.
You can support Pika’s development on Open Collective. Note that we are not affected by the Open Collective Foundation shutting down since our financial host is the Open Source Collective. The same is the case for almost all other open source projects. So please continue supporting them.
Create bootable drives.
Khaleel Al-Adhami reports
Impression has received a new update, 3.1.0, adding support for the .xz compressed file format and fixing a bug that was causing slow download speeds.
Arjan says
This week @lazka (Christoph Reiter) released PyGObject 3.48.1.
This release contains a couple of noteworthy changes:
- This is the first release using meson-python, and thus meson, instead of setuptools for PEP-517 installations. I.e. when installing via pip or similar.
- PyGObject finally has proper support for fundamental types. That means that you can now work with things like GSK nodes directly from Python (see the sketch after this list).
- The documentation for PyGObject is now hosted on our GNOME hosting environment at https://gnome.pages.gitlab.gnome.org/pygobject/. We aim to have all PyGObject related documentation in one place.
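As a quick illustration of what the fundamental-type support enables, here is a hedged sketch of constructing a GSK render node directly from Python; it assumes PyGObject 3.48 and GTK 4, and the particular node type is just an example:

import gi
gi.require_version("Gdk", "4.0")
gi.require_version("Gsk", "4.0")
from gi.repository import Gdk, Graphene, Gsk

# GskRenderNode is a fundamental type, previously unusable from Python
color = Gdk.RGBA()
color.parse("#3584e4")
rect = Graphene.Rect().init(0, 0, 100, 50)
node = Gsk.ColorNode.new(color, rect)
print(node.get_node_type())  # Gsk.RenderNodeType.COLOR_NODE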
Nokse says
I have released a new version of ASCII Draw with many improvements:
- Greatly improved performance, now you can use bigger canvases
- Improved design to better match the GNOME style
- Added a stepped line and merged all lines and arrows into one tool
- Added a move tool to easily move parts of your drawings
- Improved the default character list by dividing it into palettes
- Added custom palettes
- Added primary and secondary characters
Guido says
I’ve released livi 0.1.0. Thanks to Robert Mader, the mobile-focused video player now supports DMABuf import and can use GTK’s new GraphicsOffload widget to render videos more efficiently (given that all other components in the stack already support this properly).
Aaron Erhardt announces
Version 0.8 of Relm4, an idiomatic GUI library based on gtk4-rs, was released on Wednesday with many improvements. The release includes several unifications in our API, more idiomatic abstractions and updated gtk-rs dependencies. Find out more details in our release blog post.
Martín Abente Lahaye says
Gameeky 0.6.0 is out! This new release comes with improved compatibility with other platforms, several usability additions and improvements like:
- An integrated development environment for Python.
- An easier way to share projects.
- New desktop icon thanks to @jimmac and @bertob.
- Improved compatibility with other platforms.
- And more…
Check the release blog post to learn more.
Rosanna announces
This week, in between the minutiae of everyday things, I have been reviewing some of our policies and looking into updating them. Things like the employee handbook and travel policy are high on the list of things to update, to keep them both aligned with regulations and best practices and streamlined for practicality.
I am currently in Pasadena, California attending SCaLE. I am going to be on a panel (https://www.socallinuxexpo.org/scale/21x/presentations/where-does-linux-desktop-go-here) on Saturday at 2:30PM. It’s going to be a great time! I will also be staffing the GNOME booth there. Drop on by to discuss all things GNOME.
We also posted an opening for an Administrative Support Contractor (https://foundation.gnome.org/careers/). This person would be working with me to keep GNOME running and I am very much looking forward to reading all the applications!
See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
After a busy month, a new release is out! This new release comes with improved compatibility with other platforms, several usability additions and improvements.
It’s no longer necessary to run terminal commands. The most noticeable change in this release is the addition of a properly integrated development environment for Python. With this, the LOGO-like user experience was greatly improved.
The LOGO-like programming interface is also a bit richer. A new Rotate action was added, and the general interface was simplified to further improve the user experience.
It’s easier to share projects. A simple dialog to export and import projects was added, available through the redesigned project cards in the launcher.
As shown above, Gameeky now has a cute desktop icon thanks to @jimmac and @bertob.
It should be easier to run Gameeky on other platforms now. Under the hood, many things have changed to support other platforms, e.g., macOS. The sound backend was changed to GStreamer, the communication protocol was simplified, and the use of WebKit is now optional.
There are no installers for other platforms yet, but if anyone is experienced and interested in making these, that would be an awesome contribution.
As a small addition, it’s now possible to select a different entity as the user’s character. Recently, my nephews decided they wanted their character to be a small boulder. They had a blast with their boulder-hero narrative, and it convinced me there should be more additions like that.
There’s more, so check the full list of changes.
On the community side of things, I have already started building alliances with different organizations. For example, the first-ever Gameeky workshop is planned for March 23 in Encarnación, Paraguay, and it’s being organized by the local Python community.
If you’re in Paraguay or nearby in Argentina, feel free to contact me to participate!
Embarking on an Outreachy internship is a great start into the heart of open source, a journey I’ve longed to undertake. December 2023 to March 2024 marked this exhilarating chapter of my life, where I had the honor of diving deep into the GNOME world as an Outreachy intern. In this blog, I’m happy to share my experiences, painting a vivid picture of the growth, challenges, and invaluable experiences that have shaped my journey.
Discovering GNOME: A Gateway to Open-Source Excellence
At its core, GNOME (GNU Network Object Model Environment) is a graphical user interface (GUI) and set of computer desktop applications for users of the Linux operating system. GNOME brings companies, volunteers, professionals, and non-profits together from around the world.
We make GNOME, a completely free software solution for everyone.
Why GNOME Captured My Heart
The Outreachy internship presented a couple of projects to choose from, but my fascination with operating system functionalities—booting, scheduling, memory management, user interfaces and beyond—drew me irresistibly to GNOME. My mission? To work on the implementation of end-to-end tests, a challenge I embraced head-on as I dived into the project documentation to understand the project better.
From the moment I introduced myself on the GNOME community channel in the first days of contribution phase, the warmth and promptness of their welcome were unmatched, shattering the myth of the “busy, distant mentor.” This immediate sense of belonging fueled my determination, despite the initial difficulties of setup procedures and technical trials.
My advice to future Outreachy aspirants
From my experience: start early, zero in on a project, and try to get your setup working early, as it took me almost two weeks to finally make a merge request to the project.
Secondly, ask questions publicly, as this helps you get unblocked faster when your mentor is busy.
Milestones and Mastery: The GNOME Journey
Our collective goal for the internship was to implement tests for accessibility features on the GNOME desktop and also to test some core apps on mobile. The creation of the gnome_accessibility test suite marked our first victory, followed by the genesis of the gnome-locales and gnome_mobile test suites. Daily stand-ups and weekly mentor meetings became our compass, guiding our efforts and honing our focus on the different tasks. Check out more details here and share any feedback with us on Discourse.
Technically, I learned a lot about version control and Git workflows, how to contribute to a project with a large code base, writing clean, readable and efficient code, and ensuring code is thoroughly tested for bugs and errors before pushing it. Some of the soft skills I learned were collaboration, communication, and the continuous desire to learn new things and be teachable.
Overcoming Obstacles: Hardware Hurdles and Beyond
The revelation that my macOS-based machine was ill-equipped for the task at hand was a stark challenge. The lesson was clear: understanding project specifications is crucial, and adaptability is key. This obstacle, while daunting, taught me the value of preparation and the importance of choosing the right tools for the task.
Beyond Coding: Community, Engagement, and Impact
I have not only interacted with my mentors on the project but also shared the work we have done on TWIG, where I highlighted the tests we wrote for accessibility features, i.e. the High Contrast, Large Text, Overlay Scrollbars, Screen Reader, Zoom, Over-Amplification, Visual Alerts and On-Screen Keyboard features, and added more details on the Discourse channel too.
I have had public engagements on contributing to Outreachy over Twitter Spaces in my community, where I shared how to apply to Outreachy and how to prepare for the contribution phase. I also shared more about my internship with GNOME during the GNOME Africa Preparatory Boot Camp for GSoC & Outreachy; check out my presentation here, where I talked about how to stand out as an Outreachy applicant and my experience working with GNOME. These experiences have not only boosted my technical skills but have also embedded in me a sense of community and the courage to tackle the unknown.
A Heartfelt Thank You
As this chapter of my journey with GNOME and Outreachy draws to a close, I am overwhelmed with gratitude. To my selfless mentors, Sam Thursfield and Sonny Piers: thank you for the guidance and mentorship; I appreciate you all for what you have planted in us. To Tanjuate: you have been the most amazing co-intern I could ever ask for. To Kristi Progri and Felipe Borges: thank you for coordinating this internship with Outreachy and the GNOME community.
To Outreachy, thank you for this opportunity. And to every soul who has walked this path with me: your support has been amazing. As I look forward to converging paths at GUADEC in July and beyond, I carry with me not just skills and knowledge, but a heart full of memories, ready to embark on new adventures in the open-source world.
Here’s to infinite learning, enduring friendships, and the unwavering spirit of contribution. May the journey continue to unfold, with success, learning, and boundless possibilities.
Here are some of the accessibility tests from the gnome_accessibility test suite that we added during the internship with GNOME.
Click here to take a more detailed look.
Touchscreens are quite prevalent by now but one of the not-so-hidden secrets is that they're actually two devices: the monitor and the actual touch input device. Surprisingly, users want the touch input device to work on the underlying monitor which means your desktop environment needs to somehow figure out which of the monitors belongs to which touch input device. Often these two devices come from two different vendors, so mutter needs to use ... */me holds torch under face* .... HEURISTICS! :scary face:
Those heuristics are actually quite simple: same vendor/product ID? same dimensions? is one of the monitors a built-in one? [1] But unfortunately in some cases those heuristics don't produce the correct result. In particular external touchscreens seem to be getting more common again and plugging those into a (non-touch) laptop means you usually get that external screen mapped to the internal display.
Luckily mutter does have a configuration for this, though it is not exposed in GNOME Settings (yet). But you, my $age $jedirank, can access it via a commandline interface to at least work around the immediate issue. But first: we need to know the monitor details, and you need to know about gsettings relocatable schemas.
Finding the right monitor information is relatively trivial: look at $HOME/.config/monitors.xml and get your monitor's vendor, product and serial from there. e.g. in my case this is:
<monitors version="2">
  <configuration>
    <logicalmonitor>
      <x>0</x>
      <y>0</y>
      <scale>1</scale>
      <monitor>
        <monitorspec>
          <connector>DP-2</connector>
          <vendor>DEL</vendor>               <--- this one
          <product>DELL S2722QC</product>    <--- this one
          <serial>59PKLD3</serial>           <--- and this one
        </monitorspec>
        <mode>
          <width>3840</width>
          <height>2160</height>
          <rate>59.997</rate>
        </mode>
      </monitor>
    </logicalmonitor>
    <logicalmonitor>
      <x>928</x>
      <y>2160</y>
      <scale>1</scale>
      <primary>yes</primary>
      <monitor>
        <monitorspec>
          <connector>eDP-1</connector>
          <vendor>IVO</vendor>
          <product>0x057d</product>
          <serial>0x00000000</serial>
        </monitorspec>
        <mode>
          <width>1920</width>
          <height>1080</height>
          <rate>60.010</rate>
        </mode>
      </monitor>
    </logicalmonitor>
  </configuration>
</monitors>

Well, so we know the monitor details we want. Note there are two monitors listed here; in this case I want to map the touchscreen to the external Dell monitor. Let's move on to gsettings.
gsettings is of course the configuration storage wrapper GNOME uses (and the CLI tool with the same name). GSettings follow a specific schema, i.e. a description of a schema name and possible keys and values for each key. You can list all those, set them, look up the available values, etc.:
$ gsettings list-recursively
... lots of output ...
$ gsettings set org.gnome.desktop.peripherals.touchpad click-method 'areas'
$ gsettings range org.gnome.desktop.peripherals.touchpad click-method
enum
'default'
'none'
'areas'
'fingers'

Now, schemas work fine as-is as long as there is only one instance. Where the same schema is used for different devices (like touchscreens) we use a so-called "relocatable schema" and that requires also specifying a path - and this is where it gets tricky. I'm not aware of any functionality to get the specific path for a relocatable schema so often it's down to reading the source. In the case of touchscreens, the path includes the USB vendor and product ID (in lowercase), e.g. in my case the path is:
/org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/

In your case you can get the touchscreen details from lsusb, libinput record, /proc/bus/input/devices, etc. Once you have it, gsettings takes a schema:path argument like this:
$ gsettings list-recursively org.gnome.desktop.peripherals.touchscreen:/org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/
org.gnome.desktop.peripherals.touchscreen output ['', '', '']

Looks like the touchscreen is bound to no monitor. Let's bind it with the data from above:
$ gsettings set org.gnome.desktop.peripherals.touchscreen:/org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/ output "['DEL', 'DELL S2722QC', '59PKLD3']"

Note the quotes so your shell doesn't misinterpret things.
And that's it. Now I have my internal touchscreen mapped to my external monitor which makes no sense at all but shows that you can map a touchscreen to any screen if you want to.
[1] Probably the one that most commonly takes effect since it's the vast vast majority of devices
Just catching up on a talk I gave twice this past year. I’m very proud of the work, but I never shared it here nor on my Wikidata User:Olea page.
As a brief introduction: for some time I did significant work importing the CDDA database of European protected areas into Wikidata, as I found they were completely underrepresented. I have previous experience with historical heritage, but this proved to be harder work. I had collected some thoughts about lessons learned and a potential standardizing proposal for natural protected areas, but never structured them in a comprehensive way until being invited to give a couple of talks about this.
The first talk was in Lisboa, invited and sponsored by our friends at Wikimedia Portugal, at their Wikidata Days 2023. To be honest, my talk was a little disaster because I didn’t prepare it with enough time, but at least I could present a complete draft of the idea.
Then I had the opportunity to talk about the same issue again at the subsequent Data Modelling Days 2023 virtual event.
I shared the session with VIGNERON: he talked about historical heritage and I talked about natural heritage/natural protected areas. For this session I was able to rewrite my proposal with the quality a presentation requires. Now you have the video recording of the full session available:
And my slides, which would be the final text of my intended proposal:
As a conclusion: yes, I should promote this in Wikidata, but the amount of work it requires (edits and discussions) is, for the moment, more than I want to spend my free time on.
Related to the previous post, I also still have to publish my notes about my experience importing, sorting and cleaning the descriptions of the rich Andalusian historical heritage. Our friends at Wikimedia Portugal invited me and sponsored my travel to Lisboa to share my, sometimes sad, practical experience at their excellent Wikidata Days 2023 meeting.
I used this invitation as a motivation to finally compile all the experience I learned when importing the Digital Guide to the Cultural Heritage of Andalusia into Wikidata.
The presentation format is really suboptimal. To be honest, I’m a bit tired of working a lot on slides with a very short life (many times just one occasion). But I still think the contents are useful enough.
GTK 4.14 brings various improvements on the accessibility front, especially for applications showing complex, formatted text; for WebKitGTK; and for notifications.
The accessibility rewrite for 4.0 provided an implementation for complex, selectable, and formatted text in widgets provided by GTK, like GtkTextView, but out-of-tree widgets would not be able to do the same, as the API was kept private while we discussed what ATs (assistive technologies) actually needed, and while we were looking at non-Linux implementations. For GTK 4.14 we finally have a public interface that out-of-tree widgets can implement to provide complex, formatted text to ATs: GtkAccessibleText.
GtkAccessibleText allows widgets to provide the text contents at given offsets; the text attributes applied to the contents; and to notify assistive technologies of changes in the text, caret position, or selection boundaries.
Text widgets implementing GtkAccessibleText should notify ATs in these cases:

- gtk_accessible_text_update_caret_position() every time the caret moves
- gtk_accessible_text_update_selection_bound() every time the selection changes
- gtk_accessible_text_update_contents() with the description of what changed, and the boundaries of the change

Text attributes are mainly left to applications to implement—both in naming and serialization; GTK provides support for common text attributes already in use by various toolkits and assistive technologies, and they are available as constants under the GTK_ACCESSIBLE_ATTRIBUTE_* prefix in the API reference.
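To make the notification flow concrete, here is a hedged sketch in Python with PyGObject of how a custom text widget might call these functions (GTK 4.14); MyTextWidget and its fields are illustrative, and a real implementation would also need to provide the interface’s getter virtual functions:

import gi
gi.require_version("Gtk", "4.0")
from gi.repository import Gtk

class MyTextWidget(Gtk.Widget, Gtk.AccessibleText):
    text = ""

    def move_caret(self, offset):
        self.caret = offset
        # gtk_accessible_text_update_caret_position()
        self.update_caret_position()

    def select_range(self, start, end):
        self.selection = (start, end)
        # gtk_accessible_text_update_selection_bound()
        self.update_selection_bound()

    def insert_text(self, offset, new_text):
        self.text = self.text[:offset] + new_text + self.text[offset:]
        # gtk_accessible_text_update_contents() takes the kind of change
        # and the boundaries of the changed range
        self.update_contents(Gtk.AccessibleTextContentChange.INSERT,
                             offset, offset + len(new_text))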
The GtkAccessibleText interface is a requirement for implementing the accessibility of virtual terminals; the most common GTK-based library for virtual terminals, VTE, has been ported to GTK4 thanks to the efforts of Christian Hergert, and in GNOME 46 it will support accessibility through the new GTK interface.
There are cases when a library or an application implements its own accessible tree using AT-SPI, whether in the same process or out of process. One such library is WebKitGTK, which generates the accessible object tree from the web tree inside separate processes. These processes do not use GTK, so they cannot use the GtkAccessible API to describe their contents.
Thanks to the work of Georges Stavracas, GTK can now bridge those accessibility object trees under the GTK widget’s own, allowing ATs to navigate into a web page using WebKit from the UI.
Currently, like the rest of the accessibility API in GTK, this is specific to the AT-SPI protocol on Linux, which means it requires libraries and applications that wish to take advantage of it to ensure that the API is available at compile time, through the use of a pkg-config file and a separate C header, similarly to how the printing API is exposed.
Applications using in-app notifications that are decoupled from the current widget’s focus, like AdwToast in libadwaita, can now raise the notification message to ATs via the gtk_accessible_announce() method, thanks to Lukáš Tyrychtr, in a way that is respectful of the current ATs output.
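For example, a minimal hedged sketch in Python with PyGObject, assuming GTK 4.14; the message and priority are illustrative:

import gi
gi.require_version("Gtk", "4.0")
from gi.repository import Gtk

def announce_toast(widget: Gtk.Widget, message: str):
    # Raises the message to ATs without moving focus; higher priorities
    # may interrupt the current AT output
    widget.announce(message, Gtk.AccessibleAnnouncementPriority.MEDIUM)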
GTK 4.12 ensured that the computed accessible labels and descriptions were up to date with the ARIA specification; GTK 4.14 iterates on those improvements, by removing special cases and duplicates.
Thanks to the work of Michael Weghorn from The Document Foundation, there are new roles for text-related accessible objects, like paragraphs and comments, as well as various fixes in the AT-SPI implementation of the accessibility API.
The accessibility support in GTK4 is incrementally improving with every cycle, thanks to the contributions of many people; ideally, these improvements should also lead to a better, more efficient protocol for toolkits and assistive technologies to share.
We are still exploring the possibility of adding backends for other accessibility platforms, like UIAutomation; and for other libraries, like AccessKit.
Update on what happened across the GNOME project in the week from March 01 to March 08.
Sonny reports
As part of the GNOME STF (Sovereign Tech Fund) project, a number of community members are working on infrastructure related projects.
Here are the highlights of the last 2 weeks.
Accessibility
Dorotha joined the team to work on the global shortcuts portal for GNOME and better screen reader support on Wayland.
Andy landed Spiel support in Orca #182
Spiel is a speech synthesis (TTS) API and framework
Orca is the screen reader of the Linux desktop
Hardware Support
Ivan published experiments on the extra frame of latency in GNOME Shell
Ivan benchmarked and measured the latency of VTE (with Ptyxis), comparing GTK’s three renderers (current GL, new GL, new Vulkan)
Jonas landed h264 (software) encoding for screencasts - recordings in GNOME 46 will be smoother on slow hardware and have better compatibility on the web
Dor landed variable refresh rate support
Platform
Tobias submitted a mockup for extending background apps with
- Dynamic actions that can change at runtime
- Show background apps in the dash with a dimmed dot indicator
- Show the status string and actions in the dash menu
Julian opened a draft for notifications API/portal v2
Julian landed expandable notifications in calendar drawer - coming to GNOME 46
Julian solved unnecessary cases of “<Application> is ready” notifications 1, 2, 3, 4

Flatpak
Hub landed support for the settings portal in libportal
Hub opened a draft to implement fallback devices in Flatpak
Georges submitted a patch to support geolocation in sandboxed Flatpak WebKit applications
Georges submitted a patch to support Drag’n Drop in Flatpak WebKit applications
Home Encryption
Adrian landed Freeze user sessions for all types of sleep in systemd
Adrian landed session fields for user-record in systemd
Use the GNOME platform libraries in your JavaScript programs. GJS powers GNOME Shell, Polari, GNOME Documents, and many other apps.
ptomato says
This week GJS 1.79.90 was released, the release candidate for GNOME 46. In this release we have a crash fix and some preparations for performance improvements.
Also, have you ever tried using WeakRef or FinalizationRegistry in GJS and noticed … that they don’t actually work? Due to not realizing that we had to do something on our end to enable these when Mozilla added them to the JS engine, it turns out the WeakRef would actually create a strong ref, and the FinalizationRegistry’s callbacks would never be called. This is fixed now and you can use them with confidence because the functionality is now covered by tests! We also wrote documentation for Mozilla’s JS embedders repo to prevent problems like this in the future.
Hugo Olabera announces
I just published version 3 of Wike, which has been renewed to adapt to the new design styles in GNOME applications. It also adds some new features and the usual bunch of improvements and fixes.
- Redesigned side panel, now located on the left.
- New left bar that provides quick access to all side panel elements.
- Search moved to the side panel.
- New design for the languages selection window.
- Selection mode added to the bookmarks and history lists.
- New option to hide the tab bar in desktop mode.
- New option to show all languages in language links.
- Added match counter for text searches.
- Switching to libsoup for Wikipedia queries.
- New and updated translations.
Thanks to all contributors and translators!
Fast and secure file transfer.
Fina announces
Warp 0.7 beta 1 was released to Flathub beta. It includes experimental support for QR code scanning via PipeWire and the camera portal. This feature allows initiating a file transfer just by scanning a code on the receiving device. Any feedback is much appreciated. 📸️
To install the beta, follow the instructions in the beta announcement.
A distraction free Markdown editor.
Manu says
Alice has done a bunch of work on Apostrophe to port widgets to their new libadwaita counterparts, as well as improving the overall styling
slomo says
The GStreamer team is thrilled to announce a new major feature release of your favourite cross-platform multimedia framework!
The 1.24 release series adds new features on top of the 1.22 series and is part of the API and ABI-stable 1.x release series.
As always, this release is again packed with new features, bug fixes and many other improvements.
Highlights
- New Discourse forum and Matrix chat space
- New Analytics and Machine Learning abstractions and elements
- Playbin3 and decodebin3 are now stable and the default in gst-play-1.0, GstPlay/GstPlayer
- The va plugin is now preferred over gst-vaapi and has higher ranks
- GstMeta serialization/deserialization and other GstMeta improvements
- New GstMeta for SMPTE ST-291M HANC/VANC Ancillary Data
- New unixfd plugin for efficient 1:N inter-process communication on Linux
- cudaipc source and sink for zero-copy CUDA memory sharing between processes
- New intersink and intersrc elements for 1:N pipeline decoupling within the same process
- Qt5 + Qt6 QML integration improvements including qml6glsrc, qml6glmixer, qml6gloverlay, and qml6d3d11sink elements
- DRM Modifier Support for dmabufs on Linux
- OpenGL, Vulkan and CUDA integration enhancements
- Vulkan H.264 and H.265 video decoders
- RTP stack improvements including new RFC7273 modes and more correct header extension handling in depayloaders
- WebRTC improvements such as support for ICE consent freshness, and a new webrtcsrc element to complement webrtcsink
- WebRTC signallers and webrtcsink implementations for LiveKit and AWS Kinesis Video Streams
- WHIP server source and client sink, and a WHEP source
- Precision Time Protocol (PTP) clock support for Windows and other additions
- Low-Latency HLS (LL-HLS) support and many other HLS and DASH enhancements
- New W3C Media Source Extensions library
- Countless closed caption handling improvements including new cea608mux and cea608tocea708 elements
- Translation support for awstranscriber
- Bayer 10/12/14/16-bit depth support
- MPEG-TS support for asynchronous KLV demuxing and segment seeking, plus various new muxer features
- Capture source and sink for AJA capture and playout cards
- SVT-AV1 and VA-API AV1 encoders, stateless AV1 video decoder
- New uvcsink element for exporting streams as UVC camera
- DirectWrite text rendering plugin for Windows
- Direct3D12-based video decoding, conversion, composition, and rendering
- AMD Advanced Media Framework AV1 + H.265 video encoders with 10-bit and HDR support
- AVX/AVX2 support and NEON support on macOS on Apple ARM64 CPUs via new liborc
- GStreamer C# bindings have been updated
- Rust bindings improvements and many new and improved Rust plugins
- Rust plugins now shipped in packages for all major platforms including Android and iOS
- Lots of new plugins, features, performance improvements and bug fixes
Full release notes can be found at: https://gstreamer.freedesktop.org/releases/1.24/
Can Lehmann announces
This week, Owlkettle 3.0.0 has been released! Owlkettle is a declarative GUI framework for the Nim programming language based on GTK 4. This release wraps 27 new widgets and improves on the documentation:
- 27 new GTK & libadwaita widgets
- Support for custom CSS classes & inline stylesheets
- Generate interactive widget examples with the owlkettle/playground module
- private and onlyState modifiers
- Documentation website with guides on installation, application architecture and wrapping new widgets
See the full changelog here. This is a major release which contains breaking changes. A migration guide can be found here.
Thanks to all contributors!
rdbende reports
After almost two years, we released Cozy 1.3 last week! This release brings an updated user interface along with numerous bug fixes and improved performance.
The user interface has been ported to GTK4 and Libadwaita. Thus, Cozy benefits from the new style sheet, automatic dark mode, and utilizes the latest and greatest UI elements throughout the application.
Other changes include:
- Improved mobile support
- Smaller visual refinements to match the state of the art of GNOME apps
- Dozens of bug fixes and performance improvements
- Significant cleanup and improvements to the codebase
- As always, updated translations thanks to all translators!
Many thanks to Julian Geywitz for the great app and codebase, and to all the contributors who helped make this release happen!
A pure wayland shell for mobile devices.
Guido says
Phosh 0.37.0 is out:
- Wi-Fi networks can now be selected from quick settings
- Add your own custom quick settings via plugins
- There’s a new caffeine quick setting using that
- Support cutouts and notches of 16 more phones
There’s more. Check the full details here
A tweak tool to customize the GNOME Shell and to disable UI elements.
Just Perfection reports
The Just Perfection extension has been ported to GNOME Shell 46. We have a new feature in this version, “window maximized by default”, which opens all windows maximized automatically. This version is named after the English artist Edward Lear. https://www.youtube.com/watch?v=FBMM8s2J2zI
See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!
GTK 4.14 will be released very soon, with new renderers that were introduced earlier this year.
The new renderers have much improved support for fractional scaling—on my system, I now use 125% scaling instead of the ‘Large Text’ setting, and I find that works fine for my needs.
Ever since 4.0, GTK has been advocating for linear layout.
The idea is that we just place glyphs where the coordinates tell us, and if that is a fractional position somewhere between pixels, so be it, we can render the outline at that offset just fine. This approach works—if your output device has a high-enough resolution (anything above 240 dpi should be ok). Sadly, we don’t live in a world where most laptop screens have that kind of resolution, so we can’t just ignore pixels.
Consequently, we added the gtk-hint-font-metrics setting that forces text layout to round things to integer positions. This is not a great fit for fractional scaling, since the rounding happens in application pixels, and we really need integral device pixel positions to produce crisp results.
The common fractional scales are 125%, 150%, 175%, 200% and 225%. At these scales (with the exception of 200%), most application pixel boundaries do not align with device pixel boundaries.
The new renderers gave us an opportunity to revisit the topic of font rendering and do some research on the mechanics of hinting options, and how they get passed down the stack from GTK through Pango and cairo, and then end up in freetype as a combination of render target + load flags.
The new renderers recognize that there are two basic modes of operation when it comes to glyphs:
The former leads to subpixel positioning and unhinted rendering, the latter to hinted rendering and glyphs that are placed at integral pixel positions (since that is what the autohinter expects).
We determine which case we’re in by looking at the font options. If they tell us to do hinting, we round the glyph position to an integral device pixel in the y direction. Why only y? The autohinter only applies hinting in the vertical direction and the horizontal direction is where the increased resolution of subpixel positions helps most. If we are not hinting, then we use subpixel positions for both x and y, just like the old renderer (with the notable difference that the new renderer uses subpixel positions in device pixels).
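To make that rule concrete, here is a hedged pseudo-sketch in Python of the positioning logic described above; the function and names are illustrative, not GTK’s actual code:

def position_glyph(x, y, device_scale, hinted):
    # Work in device pixels: rounding in application pixels would not
    # produce integral device positions under fractional scaling
    dx, dy = x * device_scale, y * device_scale
    if hinted:
        # The autohinter only hints vertically, so only y is rounded;
        # x keeps its subpixel position for better spacing
        dy = round(dy)
    return dx, dy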
Text rendering differences are always subtle and, to some degree, a matter of taste and preference. So these screenshots should be taken with a grain of salt—it is much better to try the new renderers for yourself.
Both of these renderings were done at a scale of 125%, with hinting enabled (but note that the old renderer handles 125% by rendering at 200% and relying on the compositor to scale things down).
Here is a look at some details: the horizontal bars of T and e are consistent across lines, even though we still allow the glyphs to shift by subpixel positions horizontally.
The new renderers in GTK 4.14 should produce more crisp font rendering, in particular with fractional scaling.
Please try it out and tell us what you think.
I should have anticipated that this question would come up, so here is a quick answer:
We are not using subpixel rendering (aka ClearType, or RGB antialiasing) in GTK 4, since our compositing does not have component alpha. Our antialiasing for fonts is always grayscale. Note that subpixel rendering is something separate from subpixel positioning.
GNOME 46 is on its final stretch to be released. It’s been a custom to blog a little about the wallpaper selection, which is a big part of GNOME’s visual identity.
The first notable change in 46 is that we’re finally delivering on the promise of bringing you a next generation image file format. Lots of performance issues had to be addressed first, apologies for the delay. While efficiency and filesize requirements might not be too high on the list outside of the geek crowd, there is one aspect of JPEG-XL that I am very excited about.
JPEG-XL allows the use of client-side synthesized grain, a method pioneered by Netflix/AV1, I believe. Compression algorithms struggle with high-frequency detail, which often introduces visible artifacts. JPEG-XL makes it possible to decouple the grain component from the actual image data. This allows for significantly more efficient compression of images that inherently require noise, such as those in gnome-backgrounds — smooth gradients that would otherwise be susceptible to color banding. To achieve similar fidelity with the grain baked in, a classic format like JPEG would need an order of magnitude larger filesize. Having the grain in the format itself also makes it possible to skip various techniques in the rendering or compositing in the 3D software.
Instead of compressing a noisy image, JPEG-XL can generate film-like grain as part of the decoding process. This synthesized grain combats issues like color banding while allowing much more efficient compression of the original image data.
In essence, client-side grain in JPEG-XL isn’t simply added noise, but a sophisticated strategy for achieving both efficient compression and visually pleasing image quality, especially for images that would otherwise require inherent noise.
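As a toy illustration of the idea (not JPEG-XL’s actual algorithm), here is a hedged numpy sketch showing how grain synthesized at decode time can perceptually mask the banding of a coarsely quantized gradient:

import numpy as np

# A smooth gradient stored coarsely, as an aggressive encoder might keep it
gradient = np.linspace(0.0, 1.0, 1024)
banded = np.round(gradient * 16) / 16            # visible banding steps

# "Client-side" grain: noise generated on decode, never stored in the file
rng = np.random.default_rng(0)
grain = rng.normal(0.0, 1 / 32, size=banded.shape)
result = np.clip(banded + grain, 0.0, 1.0)       # banding is masked by grain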
The fresh batch of wallpapers includes evolutions of the existing assets as well as new additions. A few material/shape studies have been added as well as simple 2D shape textures. Thanks to the lovely JPEG-XL grain described earlier, it’s not just Inkscape and Blender that were used.
I hope you’re going to pick at least one of the wallpapers as your favorite when GNOME 46 releases next week. Let me know on the fediverse!
I have just released CapyPDF 0.9.0. It can be obtained either via Github or PyPI.
There is no single big feature in this release. The most notable is probably the ability to create structured (or "tagged") PDF files. The code supports both the built-in tags and defining your own. Feel free to try it, just know that the API is guaranteed to change.
As a visual example, here is the full Python source code for one of the unit tests.
When run it creates a tagged PDF file. Adobe Acrobat reports that the document has the following logical structure.
As you can (hopefully) tell, structure and content are the same in both of them.
Recently I was looking at a VTE performance issue so I added a bunch of Sysprof timing marks to be picked up by the profiler. I combined that with GTK frame timing information and GNOME Shell timing information because Sysprof will just do that for you. I noticed a curious thing in that almost every ClutterFrameClock.dispatch() callback was roughly 1 millisecond late.
A quick look at the source code shows that ClutterFrameClock uses g_source_set_ready_time() to specify its next deadline to awaken. That is in µsec using the synchronized monotonic clock (CLOCK_MONOTONIC).
Except, for various reasons, GLib still uses poll() internally, which only provides 1 millisecond timeout resolution. So whatever µsec deadline was requested by the ClutterFrameClock doesn't really matter if nothing else wakes up around the same time. And since the GLib GSource code will always round up (to avoid spinning the CPU), that means a decent amount late.
With the use of ppoll() out of the question, the next thing to use on Linux would be a timerfd(2).
Here is a patch to make GLib do that. I don't know if that is something we should have there, as it will create an extra timerfd for every GMainContext you have, but it doesn't seem insane to do it there either.
If that isn't to be, then here is a patch to ClutterFrameClock which does the same thing there.
And finally, here is a graph of how the jitter looks when not using timerfd and when using timerfd.
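To make the mechanism concrete, here is a minimal sketch of the timerfd trick in Rust, using the libc crate (my own illustration with hypothetical names, assuming a 64-bit Linux target; this is not the actual GLib or Clutter patch). The idea: arm a CLOCK_MONOTONIC timerfd for an absolute microsecond deadline, and let poll() wait on the descriptor rather than rely on its millisecond timeout argument.

use std::io;

// Sleep until an absolute CLOCK_MONOTONIC deadline given in µsec.
// poll()'s timeout argument only has 1 ms resolution, but a timerfd armed
// with TFD_TIMER_ABSTIME fires at the requested time, so we poll the fd
// with an infinite timeout instead.
fn wait_until_monotonic_usec(deadline_usec: i64) -> io::Result<()> {
    unsafe {
        let fd = libc::timerfd_create(libc::CLOCK_MONOTONIC, libc::TFD_CLOEXEC);
        if fd < 0 {
            return Err(io::Error::last_os_error());
        }
        let spec = libc::itimerspec {
            it_interval: libc::timespec { tv_sec: 0, tv_nsec: 0 }, // one-shot
            it_value: libc::timespec {
                tv_sec: deadline_usec / 1_000_000,
                tv_nsec: (deadline_usec % 1_000_000) * 1_000,
            },
        };
        // TFD_TIMER_ABSTIME: it_value is an absolute CLOCK_MONOTONIC time.
        if libc::timerfd_settime(fd, libc::TFD_TIMER_ABSTIME, &spec, std::ptr::null_mut()) < 0 {
            let err = io::Error::last_os_error();
            libc::close(fd);
            return Err(err);
        }
        let mut pfd = libc::pollfd { fd, events: libc::POLLIN, revents: 0 };
        libc::poll(&mut pfd, 1, -1); // precision comes from the timerfd, not the timeout
        let mut expirations: u64 = 0; // read() consumes the expiration count
        libc::read(fd, &mut expirations as *mut u64 as *mut libc::c_void, 8);
        libc::close(fd);
    }
    Ok(())
}

Conceptually this is what both patches do: the µsec resolution comes from the kernel timer rather than from poll()'s rounded-up millisecond timeout.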
Pika Backup is an app focused on backups of personal data. It’s internally based on BorgBackup and provides fast incremental backups.
Last year, Pika Backup crossed the mark of 100,000 downloads on Flathub. These are numbers I couldn't have imagined when submitting Pika Backup to Flathub only about three years ago. Thanks to everyone who has supported the project along the way, be it with incredibly kind feedback, helpful issue reports, or financial contributions on Open Collective. It has been a blast so far. A special thanks goes to BorgBase, who has generously been giving financial support to the project's development for over a year now.
While we still have a bunch of features planned for Pika Backup, our focus remains stability and keeping the codebase maintainable. The project was started over five years ago. Since these were still the early ages of Rust as a programming language within GNOME, a lot has changed in the way app code is commonly structured. This means that we are also planning some refactoring work to help with the maintainability and readability of the code for future contributors.
After being blocked by a nasty bug for a while, we are finally releasing Pika Backup 0.7 today. Like the previous release, this one has been substantially driven by Fina, since I have been busy with other projects, including moving flats. I'm thrilled that the project has two maintainers who are familiar with the codebase. The new release contains over 20 significant changes and fixes. The most noticeable new features are:
You can financially support development on Open Collective or GitHub. If you want to support my general GNOME endeavors and get some behind-the-scenes updates, you can support me on my new Patreon.
If you want to try out BorgBase for hosting your backup you can get 10 GB storage for free on borgbase.com. A guide for setting up Pika Backup with BorgBase is available as well.
This is the second instalment of my 2023 retrospective series on Toolbx.
One very important thing that we did behind the scenes was to make Toolbx a release blocker for Fedora 39 and onwards. This means that the registry.fedoraproject.org/fedora-toolbox OCI image is considered a release-blocking deliverable, and there are release-blocking test criteria to ensure that the toolbox RPM is usable.
Earlier, there was no formal requirement for Toolbx to be usable when a new Fedora was released. That was a problem for a tool that’s so popular and provides something as fundamental as an interactive command line environment for software development and troubleshooting the host operating system. Everybody expects their CLI environment to just work even under very adverse conditions, and Toolbx should be no different. Except that Toolbx is slightly more complicated than running Bash or Z shell directly on the host OS, and, therefore, requires a bit more diligence.
Toolbx has two parts — an OCI image, which defaults to registry.fedoraproject.org/fedora-toolbox on Fedora hosts, and the toolbox RPM. The OCI image is pulled by the RPM to set up a containerized interactive CLI environment.
Let’s look at each separately.
First, we wanted to ensure that there is an up-to-date fedora-toolbox OCI image published on registry.fedoraproject.org as a release-blocking deliverable at critical points in the development schedule, just like the installation ISOs for the Editions from download.fedoraproject.org. For example, when an upcoming Fedora release is branched from Rawhide, and for the Beta and Final releases.
One of the recurring complaints that we used to get was from users of Fedora Rawhide Toolbx containers, when Rawhide gets branched in preparation for the Beta of the next Fedora release. At this point, the previous Rawhide version becomes the Branched version, and the current Rawhide version increases by one. If the fedora-toolbox images aren't part of the mass branching performed by Fedora Release Engineering, then someone has to quickly step in after they have finished to refresh the images to ensure consistency. This sort of ad hoc manual co-ordination rarely works, and it left users in the lurch.
With this change, the fedora-toolbox image is part of the nightly Fedora composes, and the branching is handled by Fedora Release Engineering just like any other release-blocking deliverable. This makes the image as readily available and updated as the fedora and fedora-minimal OCI images or any other deliverable, and we hope that it will improve the user experience for Rawhide Toolbx containers.
If someone installs the Fedora Beta or the Final on their host, and creates a Toolbx container using the default image, then, barring exceptions, the host and the container now have the same RPM versions for all packages. Just like Fedora Silverblue and Workstation are released with the same versions. This ensures greater consistency in terms of bug-fixes, features and pending updates.
In the past, this wasn't the case and it led to occasional surprises. For example, the change to make RPM use a Sequoia-based OpenPGP parser made it impossible to install third party RPMs in the fedora-toolbox image, even long after the actual bug was fixed.
Second, we wanted to have release-blocking test criteria to ensure that the toolbox RPM is usable at critical points in the development schedule. This is to ensure that changes in the Toolbx stack, and future changes in other parts of the operating system, do not break Toolbx — at least not for the Beta and Final releases. It's good to have the fedora-toolbox image be more readily available and updated, but it's better if Toolbx works more reliably as a whole.
Examples of changes in the Toolbx stack causing breakage are FUSE preventing RPMs with file capabilities from being installed inside Toolbx containers, and Toolbx bind mounts preventing RPMs with %attr() from being installed or causing systemd-tmpfiles(8) to throw errors. Examples of changes in other parts of the OS are changes to Fedora's Kerberos stack causing Kerberos to stop working inside Toolbx containers, changes to the sysctl(8) configuration breaking ping(8), and changes in Mutter breaking graphical applications.
The test criteria for the toolbox RPM also implicitly test the fedora-toolbox image, and co-ordinate several disparate groups of developers to ensure that the containerized interactive command line Toolbx environments on Fedora are just as reliable as those running directly on the host OS.
This does come with a significant tooling change that isn't obvious at first. The fedora-toolbox OCI image is no longer defined as a layered image through a Container/Dockerfile. Instead, it's built as a base image through Kickstarts and Pungi, just like the fedora and fedora-minimal images.
This was necessary because the nightly Fedora composes work with Kickstarts and Pungi, not Container/Dockerfiles. Moreover, building Fedora OCI images from a Dockerfile with fedpkg container-build uses an ancient unmaintained version of OpenShift Build Service that requires equally unmaintained ancient versions of Fedora to run, and the fedora-toolbox image was the only thing using Container/Dockerfiles in Fedora.
We either had to update the Fedora infrastructure to use OpenShift Build Service 2.x, or use Kickstarts and Pungi, which use Image Factory, to build the fedora-toolbox image. We chose the latter, because updating the infrastructure would be a significant effort, and by using Kickstarts and Pungi we get to stay close to the fedora and fedora-minimal images and simplify the infrastructure.
The Fedora Flatpaks were also being built using the same ancient and unmaintained version of OpenShift Build Service, and they too are in the process of being migrated. However, that's outside the scope of this post.
One big benefit of fedora-toolbox not being a layered image based on top of the fedora image is that it removes the constant fight against the efforts to minimize the size of the latter. The fedora-toolbox image is designed for interactive command line use in long-lived containers, and not for deploying server-side applications and services in ephemeral ones. This means that dictionaries, documentation, locales, iconv converter modules, translations, etc. are more important than reducing the size of the images. Now that the image is built from scratch, it has full control over what goes into it.
Unfortunately, Image Factory is weakly maintained and setting it up on one's local machine is a lot more complicated than using podman build. One can do scratch builds on the Fedora infrastructure with koji image-build --scratch, but only if they have been explicitly granted permissions, and then they have to download the tarball and use skopeo copy to place it in containers-storage so that Podman can see it. All that is again more complicated than doing a podman build.
Due to this difficulty of untangling the image build from the Fedora infrastructure, we haven't published the sources of the fedora-toolbox image for recent Fedora versions upstream. We do have a fedora-toolbox:39 image defined through a Container/Dockerfile, but that was done purely as a contingency during the Fedora 39 development cycle.
This does degrade the developer experience of working on the fedora-toolbox image, but, given all the other advantages, we think that it's worth it.
As of this writing, there's a Fedora 40 Change to switch to using KIWI to build the OCI images, including fedora-toolbox, instead of Image Factory. KIWI seems more strongly maintained and a lot easier to set up locally, which is fantastic. So, it should be all rainbows and unicorns, once we soldier through another port of the fedora-toolbox image to a different tooling and source language.
Last but not least, getting all this done on time required a good deal of co-ordination and help from several different individuals. I must thank Sumantro for leading the effort; Kevin, Tomáš and Samyak for all the infrastructure and release engineering work; and Adam and Kamil for all the testing and validation.
While poking the other day at making a Guile binding for Harfbuzz, I remembered why I don’t much do this any more: it is impossible to compose GC with explicit ownership.
Allow me to illustrate with an example. Harfbuzz has a concept of blobs, which are refcounted sequences of bytes. It uses these in a number of places, for example when loading OpenType fonts. You can peek at the blob's contents with hb_blob_get_data, which gives you a pointer and a length.
Say you are in LuaJIT. (To think that for a couple years, I wrote LuaJIT all day long; now I can hardly remember.) You get a blob from somewhere and want to get its data. You define a wrapper for hb_blob_get_data:
local hb = ffi.load("harfbuzz")

ffi.cdef [[
typedef struct hb_blob_t hb_blob_t;
const char * hb_blob_get_data (hb_blob_t *blob, unsigned int *length);
]]
Presumably you then arrange to release LuaJIT’s reference on the blob when GC collects a Lua wrapper for a blob:
ffi.cdef [[
void hb_blob_destroy (hb_blob_t *blob);
]]

function adopt_blob(ptr)
   return ffi.gc(ptr, hb.hb_blob_destroy)
end
OK, so let’s say we get a blob from somewhere, and want to copy out its contents as a byte string.
function blob_contents(blob)
   -- out-param: a one-element array to receive the length
   local len_out = ffi.new('unsigned int[1]')
   local contents = hb.hb_blob_get_data(blob, len_out)
   local len = len_out[0]
   return ffi.string(contents, len)
end
The thing is, this code is as correct as you can get it, but it’s not correct enough. In between the call to hb_blob_get_data and, well, anything else, GC could run, and if blob is not used in the future of the program execution (the continuation), then it could be collected, causing the hb_blob_destroy finalizer to release the last reference on the blob, freeing contents: we would then be accessing invalid memory.
Among GC implementors, it is a truth universally acknowledged that a program containing finalizers must be in want of a segfault. The semantics of LuaJIT do not prescribe when GC can happen and what values will be live, so the GC and the compiler are not constrained to extend the liveness of blob to, say, the entirety of its lexical scope. It is perfectly valid to collect blob after its last use, and so at some point a GC will evolve to do just that.
I chose LuaJIT not to pick on it, but rather because its FFI is very straightforward. All other languages with GC that I am aware of have this same issue. There are but two work-arounds, and neither are satisfactory: either develop a deep and correct knowledge of what the compiler and run-time will do for a given piece of code, and then pray that knowledge does not go out of date, or attempt to manually extend the lifetime of a finalizable object, and then pray the compiler and GC don’t learn new tricks to invalidate your trick.
This latter strategy takes the form of “remember-this” procedures that are designed to outsmart the compiler. They have mostly worked for the last few decades, but I wouldn’t bet on them in the future.
Another way to look at the problem is that once you have a system working—though, how would you know it’s correct?—then you either never update the compiler and run-time, or you become fast friends with whoever maintains your GC, and probably your compiler too.
For more on this topic, as always Hans Boehm has the first and last word; see for example the 2002 Destructors, finalizers, and synchronization. These considerations don’t really apply to destructors, which are used in languages with ownership and generally run synchronously.
Happy hacking, and be safe out there!
We're gearing up to launch curated banners on the Flathub home page! However, before we can do that there's one more blocker: banners need a background color for each app, and many apps don't provide this metadata yet. This is why today we're expanding our MetaInfo quality guidelines and quality checks on the website. If you haven't yet, please add these colors to your app's MetaInfo file using the <branding/> AppStream tag, and read on to learn more about brand colors.
App brand colors are an easy and effective way for app developers to give their listing a bit more personality in app stores. In combination with the app icon and name, they allow setting a tone for the app without requiring a lot of extra work, unlike e.g. creating and maintaining additional image assets.
This idea was first implemented in elementary AppCenter, and later standardized as part of the AppStream specification.
While it has been in AppStream itself for a few years, it was unfortunately not possible for Flathub's backend to pick it up until the recent port to libappstream. This is why many apps are still not providing this metadata—even if it was available from the app side we were unable to display it until now.
Now that we can finally pick these colors up from AppStream MetaInfo files, we want to make use of them—and they are essential for the new banners.
Apps are expected to provide two different brand colors for light and dark. Here's an example of a MetaInfo file in the wild including brand colors.
This is the snippet you need to include in your MetaInfo file:
<branding>
<color type="primary" scheme_preference="light">#faa298</color>
<color type="primary" scheme_preference="dark">#7f2c22</color>
</branding>
In choosing the colors, try to make sure the colors work well in their respective context (e.g. don't use a light yellow for the dark color scheme), and look good as a background behind the app icon (e.g. avoid using exactly the same color to maintain contrast). In most cases it's recommended to pick a lighter tint of a main color from the icon for the light color scheme, and a darker shade for the dark color scheme. Alternatively you can also go with a complementary color that goes well with the icon's colors.
Today we've updated the MetaInfo quality guidelines with a new section on app brand colors. Going forward, brand colors will be required as part of the MetaInfo quality review.
If you have an app on Flathub, check out the guidelines and update your MetaInfo with brand colors as soon as possible. This will help your app look as good as possible, and will make it eligible to be featured when the new banners ship. Let's make Flathub a more colorful, exciting place to find new apps!
There are many open source PDF generators available. Unfortunately they all have some limitations when it comes to generating tagged PDFs. There does not seem to be a library that provides for all of this with a "plain C" API that can be used to easily generate tagged PDFs using almost any programming language.
There still isn't, but at least now CapyPDF can generate simple tagged PDF documents. A sample document can be downloaded via this link. Here is a picture showing the document structure in Acrobat Pro.
It should also work in things like screen readers and other accessibility tools, but I have not tested it.
None of this is exposed in the C API, because this has a fairly large API surface and I have not yet come up with a good way to represent it.
Open source embodies collaboration, innovation, and accessibility within the technological realm. Seeking personal insights behind the collaborative efforts, I engaged in conversations with individuals integral to the open source community, revealing the diversity, challenges, and impacts of their work.
Venturing beyond my comfort zone, I connected with seasoned open source contributors, each offering unique perspectives and experiences. Their roles varied from project maintainers to mentors, working on everything from essential libraries to innovative technologies.
These interactions showcased the wide-ranging backgrounds and motivations within the open source community, and they have deepened my respect for that community and its contributors. I have some homework to do with my resume and the links to opportunities that were shared with me. Open source welcomes contributors at all levels, offering a platform for innovation and collective achievement.
Feel free to apply as an Outreachy intern in an upcoming cohort to start your journey.
Best of luck.
Flathub's automatic build validation is more thorough now, and includes checks for issues we previously would have only flagged manually. There is a chance that if your app has been passing the continuous integration checks previously, it will fail now; here's why, and what to do about it.
If your application no longer passes the build validation stage in either Buildbot (for apps maintained on GitHub) or flat-manager (for direct uploads), make sure to look for specific messages in the log. Explanations for various error messages can be found in the documentation. If you are interested in running the linter locally or integrating it with your own CI, please refer to the project page.
We have also started moderating all permission changes and some critical MetaInfo changes. For example, if a build adds or removes a static permission (as seen in the finish-args array in the manifest) or changes the app's user-facing name, it will be withheld for manual human review. Reviewers may reject a build and reach out for clarification about the change.
Flathub has also switched to a modern, well-maintained AppStream library, known as libappstream. This enables developers to use all features described in the AppStream 1.0 specification, including specifying supported screen sizes for mobile devices, or video snippets to accompany static screenshots. It also improves the validation of AppStream metadata. Many thanks to Philip Withnall, Luna Dragon and Hubert Figuière for their work on this across the Flatpak stack, and Matthias Klumpp for implementing knobs needed by Flathub in the library itself.
This work has been ongoing since 2021. At one point along the way we briefly switched over to libappstream, but had to revert due to unexpected breakage; however, today we are finally ready with all blocking issues addressed! While we were focused on closing the gaps to prevent potentially broken builds from being published, we regret that we failed to provide a heads-up about the coming validation changes. Any future breaking changes will be properly announced on this blog, and going forward we will also inform maintainers of affected apps about required changes in advance.
In the previous post I talked about the plans of the WebKit ports currently using Cairo to switch to Skia for 2D rendering. Apple ports don’t use Cairo, so they won’t be switching to Skia. I understand the post title was confusing, I’m sorry about that. The original post has been updated for clarity.
In recent years we have had an ongoing effort to improve graphics performance of the WebKit GTK and WPE ports. As a result of this we shipped features like threaded rendering, the DMA-BUF renderer, or proper vertical retrace synchronization (VSync). While these improvements have helped keep WebKit competitive, and even perform better than other engines in some scenarios, it has been clear for a while that we were reaching the limits of what can be achieved with a CPU based 2D renderer.
There was an attempt at making Cairo support GPU rendering, which did not work particularly well due to the library being designed around stateful operation based upon the PostScript model—resulting in a convenient and familiar API, great output quality, but hard to retarget and with some particularly slow corner cases. Meanwhile, other web engines have moved more work to the GPU, including 2D rendering, where many operations are considerably faster.
We checked all the available 2D rendering libraries we could find, but none of them met all our requirements, so we decided to try writing our own library. At the beginning it worked really well, with impressive results in performance even compared to other GPU based alternatives. However, it proved challenging to find the right balance between performance and rendering quality, so we decided to try other alternatives before continuing with its development. Our next option had always been Skia. The main reason why we didn’t choose Skia from the beginning was that it didn’t provide a public library with API stability that distros can package and we can use like most of our dependencies. It still wasn’t what we wanted, but now we have more experience in WebKit maintaining third party dependencies inside the source tree like ANGLE and libwebrtc, so it was no longer a blocker either.
In December 2023 we made the decision to give Skia a try internally and see if it would be worth the effort of maintaining the project as a third party module inside WebKit. In just one month we had implemented enough features to be able to run all MotionMark tests. The results on the desktop were quite impressive, doubling the global MotionMark score. We still had to do more tests on embedded devices, which are the actual target of WPE, but it was clear that, at least on the desktop, even with this very initial, unoptimized implementation (we kept our current architecture, which is optimized for CPU rendering), we got much better results. We decided that Skia was the option, so we continued working on it and doing more tests on embedded devices. On the boards that we tried we also got better results than CPU rendering, but the difference was not as big, which means that with less powerful GPUs and with our current architecture designed for CPU rendering, CPU rendering was not that far behind. That's the reason why we managed to keep WPE competitive on embedded devices; but Skia will not only bring performance improvements, it will also simplify the code and allow us to implement new features. So, we had enough data to make the final decision of going with Skia.
In February 2024 we reached a point where our Skia internal branch was in an "upstreamable" state, so there was no reason to continue working privately. We met with several teams from Google, Sony, Apple and Red Hat to discuss our intention to switch from Cairo to Skia, upstreaming what we had as soon as possible. We got really positive feedback from all of them, so we sent an email to the WebKit developers mailing list to make it public. And again we only got positive feedback, so we started to prepare the patches to import Skia into WebKit, add the CMake integration and the initial Skia implementation for the WPE port, which has already landed in main.
We will continue working on the Skia implementation in upstream WebKit, and we also have plans to change our architecture to better support the GPU rendering case in a more efficient way. We don't have a deadline; it will be ready when we have implemented everything currently supported by Cairo, as we don't plan to switch with regressions. We are focused on the WPE port for now, but at some point we will start working on GTK too, and other ports using Cairo will eventually start getting Skia support as well.
It has been, what, like four years since librsvg got fully rustified, and now it is time to move another piece of critical infrastructure to a memory-safe language.
I'm talking about libipuz, the GObject-based C library that GNOME Crosswords uses underneath. This is a library that parses the ipuz file format and is able to represent various kinds of puzzles.
Libipuz is an interesting beast. The ipuz format is JSON with a lot of hair: it needs to represent the actual grid of characters and their solutions, the grid's cells' numbers, the puzzle's clues, and all the styling information that crossword puzzles can have (it's more than you think!).
{
"version": "http://ipuz.org/v2",
"kind": [ "http://ipuz.org/crossword#1", "https://libipuz.org/barred#1" ],
"title": "Mephisto No 3228",
"styles": {
"L": {"barred": "L" },
"T": {"barred": "T" },
"TL": {"barred": "TL" }
},
"puzzle": [ [ 1, 2, 0, 3, 4, {"cell": 5, "style": "L"}, 6, 0, 7, 8, 0, 9 ],
[ 0, {"cell": 0, "style": "L"}, {"cell": 10, "style": "TL"}, 0, 0, 0, 0, {"cell": 0, "style": "T"}, 0, 0, {"cell": 0, "style": "T"}, 0 ]
# the rest is omitted
],
"clues": {
"Across": [ {"number":1, "clue":"Having kittens means losing heart for home day", "enumeration":"5", "cells":[[0,0],[1,0],[2,0],[3,0],[4,0]] },
{"number":5, "clue":"Mostly allegorical poet on writing companion poem, say", "enumeration":"7", "cells":[[5,0],[6,0],[7,0],[8,0],[9,0],[10,0],[11,0]] },
]
# the rest is omitted
}
}
Libipuz uses json-glib, which works fine to ingest the JSON into memory, but then it is a complete slog to distill the JSON nodes into C data structures. You need to iterate through each node in the JSON tree and try to fit its data into yours.
Get me the next node. Is the node an array? Yes? How many elements? Allocate my own array. Iterate the node's array. What's in this element? Is it a number? Copy the number to my array. Or is it a string? Do I support that, or do I throw an error? Oh, don't forget the code to meticulously free the partially-constructed thing I was building.
This is not pleasant code to write and test.
Ipuz also has a few mini-languages within the format, which live inside string properties. Parsing these in C is unpleasant at best.
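For contrast, here is the kind of distillation code the Rust side can get essentially for free with serde and serde_json (a hypothetical sketch with made-up type and field names, assuming serde's derive feature; this is not the actual libipuz port):

use serde::Deserialize;

// A hypothetical mirror of a small slice of the ipuz format. The derive
// macro generates the tree-walking, type-checking, and error propagation
// that the json-glib version spells out by hand.
#[derive(Deserialize)]
struct PuzzleHeader {
    version: String,
    kind: Vec<String>,
    title: Option<String>, // optional in the format, so optional here
}

fn parse_header(json: &str) -> Result<PuzzleHeader, serde_json::Error> {
    serde_json::from_str(json)
}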
While librsvg has a very small GObject-based API, and a medium-sized library underneath, libipuz has a large API composed of GObjects, boxed types, and opaque and public structures. Using libipuz involves doing a lot of calls to its functions, from loading a crossword to accessing each of its properties via different functions.
I want to use this rustification as an exercise in porting a moderately large C API to Rust. Fortunately, libipuz does have a good test suite that is useful from the beginning of the port.
Also, I want to see what sorts of idioms appear when exposing things from Rust that are not GObjects. Mutable, opaque structs can just be passed as a pointer to a heap allocation, i.e. a Box<T>. I want to take the opportunity to make more things in libipuz immutable; currently it has a bunch of reference-counted, mutable objects, which are fine in single-threaded C, but decidedly not what Rust would prefer. For librsvg it was very beneficial to be able to notice parts of objects that remain immutable after construction, and to distinguish those parts from the mutable ones that change when the object goes through its lifetime.
In the ipuz format, crosswords have a character set or charset: it is the set of letters that appear in the puzzle's solution. Internally, GNOME Crosswords uses the charset as a histogram of letter counts for a particular puzzle. This is useful information for crossword authors.
Crosswords uses the histogram of letter counts in various important algorithms, for example, the one that builds a database of words usable in the crosswords editor. That database has a clever format which allows answering questions like the following quickly: what words in the database match ?OR?? — WORDS and CORES will match.
IPuzCharset is one of the first pieces of code I worked on in Crosswords, and it later got moved to libipuz. Originally it didn't even keep a histogram of character counts; it was just an ordered set of characters that could answer the question, "what is the index of the character ch within the ordered set?".
I implemented that ordered set with a GTree, a balanced binary tree. The keys in the key/value tree were the characters, and the values were just unused.
Later, the ordered set was turned into an actual histogram with character counts: keys are still characters, but each value is now a count of the corresponding character.
Over time, Crosswords started using IPuzCharset for different purposes. It is still used while building and accessing the database of words; but now it is also used to present statistics in the crosswords editor, and as part of the engine in an acrostics generator.
In particular, the acrostics generator has been running into some performance problems with IPuzCharset. I wanted to take the port to Rust as an opportunity to change the algorithm and make it faster.
IPuzCharset started out with these basic operations:
/* Construction; memory management */
IPuzCharset *ipuz_charset_new (void);
IPuzCharset *ipuz_charset_ref (IPuzCharset *charset);
void ipuz_charset_unref (IPuzCharset *charset);
/* Mutation */
void ipuz_charset_add_text (IPuzCharset *charset,
const char *text);
gboolean ipuz_charset_remove_text (IPuzCharset *charset,
const char *text);
/* Querying */
gint ipuz_charset_get_char_index (const IPuzCharset *charset,
gunichar c);
guint ipuz_charset_get_char_count (const IPuzCharset *charset,
gunichar c);
gsize ipuz_charset_get_n_chars (const IPuzCharset *charset);
gsize ipuz_charset_get_size (const IPuzCharset *charset);
All of those are implemented in terms of the key/value binary tree that stores a character in each node's key, and a count in the node's value.
I read the code in Crosswords that uses the ipuz_charset_*() functions and noticed that in every case, the code first constructs and populates the charset using ipuz_charset_add_text(), and then doesn't modify it anymore — it only does queries afterwards. The only place that uses ipuz_charset_remove_text() is the acrostics generator, but that one doesn't do any queries later: it uses the remove_text() operation as part of another algorithm, but only that.
So, I thought of doing this:
Split things into a mutable IPuzCharsetBuilder that has the add_text/remove_text operations, and also has a build() operation that consumes the builder and produces an immutable IPuzCharset.

IPuzCharset is immutable; it can only be queried.

IPuzCharsetBuilder can work with a hash table, which turns the "add a character" operation from O(log n) to O(1) amortized. build() is O(n) on the number of unique characters and is only done once per charset.

Make IPuzCharset work with a different hash table that also allows for O(1) operations.
IPuzCharsetBuilder
IPuzCharsetBuilder is mutable, and it can live on the Rust side as a Box<T> so it can present an opaque pointer to C.
#[derive(Default)]
pub struct CharsetBuilder {
histogram: HashMap<char, u32>,
}
// IPuzCharsetBuilder *ipuz_charset_builder_new (void);
#[no_mangle]
pub unsafe extern "C" fn ipuz_charset_builder_new() -> Box<CharsetBuilder> {
Box::new(CharsetBuilder::default())
}
For extern "C", Box<T> marshals as a pointer. It's nominally what one would get from malloc().
Then, simple functions to create the character counts:
impl CharsetBuilder {
/// Adds `text`'s character counts to the histogram.
fn add_text(&mut self, text: &str) {
for ch in text.chars() {
self.add_character(ch);
}
}
/// Adds a single character to the histogram.
fn add_character(&mut self, ch: char) {
self.histogram
.entry(ch)
.and_modify(|e| *e += 1)
.or_insert(1);
}
}
The C API wrappers:
use std::ffi::CStr;
use std::os::raw::c_char; // for the `*const c_char` parameter below
// void ipuz_charset_builder_add_text (IPuzCharsetBuilder *builder, const char *text);
#[no_mangle]
pub unsafe extern "C" fn ipuz_charset_builder_add_text(
builder: &mut CharsetBuilder,
text: *const c_char,
) {
let text = CStr::from_ptr(text).to_str().unwrap();
builder.add_text(text);
}
CStr is our old friend that takes a char * and can wrap it as a Rust &str after validating it for UTF-8 and finding its length. Here, the unwrap() will panic if the passed string is not UTF-8, but that's what we want; it's the equivalent of an assertion that what was passed in is indeed UTF-8.
// void ipuz_charset_builder_add_character (IPuzCharsetBuilder *builder, gunichar ch);
#[no_mangle]
pub unsafe extern "C" fn ipuz_charset_builder_add_character(builder: &mut CharsetBuilder, ch: u32) {
let ch = char::from_u32(ch).unwrap();
builder.add_character(ch);
}
Somehow, the glib-sys crate doesn't have gunichar, which is just a guint32 for a Unicode code point. So, we take in a u32, and check that it is in the appropriate range for Unicode code points with char::from_u32(). Again, a panic in the unwrap() means that the passed number is out of range; equivalent to an assertion.
IPuzCharset
pub struct Charset {
/// Histogram of characters and their counts plus derived values.
histogram: HashMap<char, CharsetEntry>,
/// All the characters in the histogram, but in order.
ordered: String,
/// Sum of all the counts of all the characters.
sum_of_counts: usize,
}
/// Data about a character in a `Charset`. The "value" in a key/value pair where the "key" is a character.
#[derive(PartialEq)]
struct CharsetEntry {
/// Index of the character within the `Charset`'s ordered version.
index: u32,
/// How many of this character in the histogram.
count: u32,
}
impl CharsetBuilder {
fn build(self) -> Charset {
// omitted for brevity; consumes `self` and produces a `Charset` by adding
// the counts for the `sum_of_counts` field, and figuring out the sort
// order into the `ordered` field.
}
}
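For illustration, here is one way the omitted body could look, reconstructed purely from the description above (my own sketch; the actual libipuz code may differ):

impl CharsetBuilder {
    fn build(self) -> Charset {
        // Sort the unique characters to fix the charset's order.
        let mut chars: Vec<char> = self.histogram.keys().copied().collect();
        chars.sort_unstable();

        let mut histogram = HashMap::new();
        let mut ordered = String::new();
        let mut sum_of_counts = 0;

        for (index, ch) in chars.into_iter().enumerate() {
            let count = self.histogram[&ch];
            sum_of_counts += count as usize;
            ordered.push(ch);
            histogram.insert(ch, CharsetEntry { index: index as u32, count });
        }

        Charset { histogram, ordered, sum_of_counts }
    }
}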
Now, on the C side, IPuzCharset is meant to also be immutable and reference-counted. We'll use Arc<T> for such structures. One cannot return an Arc<T> to C code; it must first be converted to a pointer with Arc::into_raw():
// IPuzCharset *ipuz_charset_builder_build (IPuzCharsetBuilder *builder);
#[no_mangle]
pub unsafe extern "C" fn ipuz_charset_builder_build(
builder: *mut CharsetBuilder,
) -> *const Charset {
let builder = Box::from_raw(builder); // get back the Box from a pointer
let charset = builder.build(); // consume the builder and free it
Arc::into_raw(Arc::new(charset)) // Wrap the charset in Arc and get a pointer
}
Then, implement ref() and unref():
// IPuzCharset *ipuz_charset_ref (IPuzCharset *charset);
#[no_mangle]
pub unsafe extern "C" fn ipuz_charset_ref(charset: *const Charset) -> *const Charset {
Arc::increment_strong_count(charset);
charset
}
// void ipuz_charset_unref (IPuzCharset *charset);
#[no_mangle]
pub unsafe extern "C" fn ipuz_charset_unref(charset: *const Charset) {
Arc::decrement_strong_count(charset);
}
The query functions need to take a pointer to what really is the Arc<Charset> on the Rust side. They reconstruct the Arc with Arc::from_raw() and wrap it in ManuallyDrop so that the Arc doesn't lose a reference count when the function exits:
// gsize ipuz_charset_get_n_chars (const IPuzCharset *charset);
#[no_mangle]
pub unsafe extern "C" fn ipuz_charset_get_n_chars(charset: *const Charset) -> usize {
let charset = ManuallyDrop::new(Arc::from_raw(charset));
charset.get_n_chars()
}
The C tests remain intact; these let us test all the #[no_mangle] wrappers.

The Rust tests can just be for the internals, similar to this:
#[test]
fn supports_histogram() {
let mut builder = CharsetBuilder::default();
let the_string = "ABBCCCDDDDEEEEEFFFFFFGGGGGGG";
builder.add_text(the_string);
let charset = builder.build();
assert_eq!(charset.get_size(), the_string.len());
assert_eq!(charset.get_char_count('A').unwrap(), 1);
assert_eq!(charset.get_char_count('B').unwrap(), 2);
assert_eq!(charset.get_char_count('C').unwrap(), 3);
assert_eq!(charset.get_char_count('D').unwrap(), 4);
assert_eq!(charset.get_char_count('E').unwrap(), 5);
assert_eq!(charset.get_char_count('F').unwrap(), 6);
assert_eq!(charset.get_char_count('G').unwrap(), 7);
assert!(charset.get_char_count('H').is_none());
}
Libipuz uses meson, which is not particularly fond of cargo. Still, cargo can be used from meson with a wrapper script and a bit of easy hacks. See the merge request for details.
I've left the original C header file ipuz-charset.h intact, but ideally I'd like to automatically generate the headers from Rust with cbindgen. Doing it that way lets me check that my assumptions about the extern "C" ABI are correct ("does foo: &mut Foo appear as Foo *foo on the C side?"), and it's one fewer C-ism to write by hand. I need to see what to do about inline documentation; gi-docgen can consume C header files just fine, but I'm not yet sure about how to make it work with generated headers from cbindgen.
I still need to modify the CI's code coverage scripts to work with the mixed C/Rust codebase. Fortunately I can copy those incantations from librsvg.
Maybe! I haven't benchmarked the acrostics generator yet. Stay tuned!
Hi, I am happy to announce Cambalache’s Gtk4 port has a beta release!
Version 0.17.2 features minor improvements and a brand new UI ported to Gtk 4!
The port was easier than expected, but there were still lots of changes, as you can see here…
64 files changed, 2615 insertions(+), 2769 deletions(-)
I especially like the new GtkDialog API and the removal of gtk_dialog_run().
With so many changes I expect some new bugs, so if you find any, please file them here.
You can get it from Flathub Beta
flatpak remote-add --if-not-exists flathub-beta https://flathub.org/beta-repo/flathub-beta.flatpakrepo
flatpak install flathub-beta ar.xjuan.Cambalache
or checkout main branch at gitlab
git clone https://gitlab.gnome.org/jpu/cambalache.git
Have any question? come chat with us at #cambalache:gnome.org
Follow me on Mastodon @xjuan to get news related to Cambalache development.
Happy coding!
Some months you work very hard and there is little to show for any of it… so far this is one of those very draining months. I’m looking forward to spring, and spending less time online and at work.
Rather than ranting I want to share a couple of things from elsewhere in the software world to show what we should be aiming towards in the world of GNOME and free operating systems.
Firstly, from “Is something bugging you?”:
Before we even started writing the database, we first wrote a fully-deterministic event-based network simulation that our database could plug into. … if one particular simulation run found a bug in our application logic, we could run it over and over again with the same random seed, and the exact same series of events would happen in the exact same order. That meant that even for the weirdest and rarest bugs, we got infinity “tries” at figuring it out, and could add logging, or do whatever else we needed to do to track it down.
The text is hyperbolic, and for some reason they think it's ok to work with Palantir, but it's an inspiring read.
Secondly, a 15 minute video which you should watch in its entirety, and then consider how the game development world got so far ahead of everyone else. Here’s a link to the video.
This is the world that’s possible for operating systems, if we focus on developer experience and integration testing across the whole OS. And GNOME OS + openQA is where I see some of the most promising work happening right now.
Of course we’re a looooong way from this promised land, despite the progress we’ve made in the last 10+ years. Automated testing of the OS is great, but we don’t always have logs (bug), the image randomly fails to start (bug), the image takes hours to build, we can’t test merge requests (bug), testing takes 15+ minutes to run, etc. Some of these issues seem intractable when occasional volunteer effort is all we have.
Imagine a world where you can develop and live-deploy changes to your phone and your laptop OS, exhaustively test them in CI, and step backwards with a debugger when problems arise – this is what we should be building in the tech industry. A few teams are chipping away at this vision – in the Linux world, GNOME Builder and the Fedora Atomic project spring to mind, and I'm sure there are more.
Anyway, what happened last month?
This is the final month of the Outreachy internship that I'm running around end-to-end testing of GNOME. We already had some wins: there are now 5 separate testsuites running against GNOME OS, though they're unfortunately rather useless at present due to random startup failures.
I spent a *lot* of time working with Tanju on a way to usefully test GNOME on Mobile. I hadn't been able to follow this effort closely beyond seeing a few demos and old blog posts, so this week was something of a crash course in what there is. Along the way I got pretty confused about scaling in GNOME Shell – it turns out there's currently a hardcoded minimum screen size, and upstream Mutter will refuse to scale the display below a certain size. In fact, upstream GNOME Shell doesn't have any of the necessary adaptations for use in a mobile form factor. We really need a "GNOME OS Mobile" VM image – here's an open issue – but it's unlikely to be done within the last 2 weeks of the current internship. The best we can do for now is test the apps on a regular desktop screen, but with the window resized to 360×720 pixels.
On the positive side, hopefully this has been a useful journey for Tanju and Dorothy into the inner workings of GNOME. On a separate note, we submitted a workshop on openQA testing to GUADEC in Denver, and if all goes well with travel sponsorship and US visa applications, we hope to actually meet in person there in July.
I went to FOSDEM 2024 and had a great time; it was one of my favourite FOSDEM trips. I managed to avoid the 'flu – I think wearing a mask on the plane is the secret. From Codethink we were 41 people this year – probably a new record.
I went a day early to join in the end of the GTK hackfest, and did a little work on the Tiny SPARQL database, formerly known as Tracker SPARQL. Together with Carlos we fixed breakage in the CLI, improved HTTP support and prototyped a potential internship project to add a web based query editor.
My main goal at FOSDEM was to make contact with other openQA users and developers, and we had some success there. Since then I've hashed out a wishlist for openQA for GNOME's use cases, and we're aiming to set up an open, monthly call where different QA teams can get together and collaborate on a realistic roadmap.
I saw some great talks too; the Outreachy "1000 Interns" talk and the Fairphone "Sustainable and Longlasting Phones" talk were particular highlights. I went to the Bytenight event for the first time and found an incredible 1970s Wurlitzer transistor organ in the "smoking area" of the HSBXL hackspace, and also beat Javi, Bart and Adam at SuperTuxKart several times.
I got a new laptop! It’s a Framework 13 AMD: 8 cores, 2 threads per core, 64 GB RAM, 3:2 2256×1504 matte screen. It kicks my 5-year-old Dell XPS 13 in the pants, and I am so relieved to be back to a matte screen. I just got it up and running with Guix, which though easier than past installation experiences was not without some wrinkles, so here I wanted to share a recipe for what worked for me.
(I swear this isn’t going to become a product review blog, but when I went to post something like this on the Framework forum I got an error saying that new users could only post 2 links. I understand how we got here but hoo, that is a garbage experience!)
Upstream Guix works on the Framework 13 AMD, but only with software rendering and no wifi, and I wasn’t able to install from upstream media. This is mainly because Guix uses a modified kernel and doesn’t include necessary firmware. There is a third-party nonguix repository that defines packages for the vanilla Linux kernel and the linux-firmware collection; we have to use that repo if we want all functionality.
Of course having the firmware be user-hackable would be better, and it would be better if the framework laptop used parts with free firmware. Something for a next revision, hopefully.
As an aside, I think the official Free Software Foundation position on firmware is bad praxis. To recall, the idea is that if a device has embedded software (firmware) that can be updated, but that software is in a form that users can’t modify, then the system as a whole is not free software. This is technically correct but doesn’t logically imply that the right strategy for advancing free software is to forbid firmware blobs; you have a number of potential policy choices and you have to look at their expected results to evaluate which one is most in line with your goals.
Bright lines are useful, of course; I just think that with respect to free software, drawing that line around firmware is not interesting. To illustrate this point, I believe the current FSF position is that if you can run e.g. a USB ethernet adapter without installing non-free firmware, then it is kosher, otherwise it is haram. However many of these devices have firmware; it's just that you aren't updating it. So for example the USB Ethernet adapter I got with my Dell system many years ago has firmware, therefore it has bugs, but I have never updated that firmware because that's not how we roll. Or, on my old laptop, I never updated the CPU microcode, despite Spectre and Meltdown and all the rest.
“Firmware, but never updated” reminds me of the wires around some New York neighborhoods that allow orthodox people to leave the house on Sabbath; useful if you are of a given community and enjoy the feeling of belonging, but I think even the faithful would see it as a hack. It is like how Richard Stallman wouldn’t use travel booking web sites because they had non-free JavaScript, but would happily call someone on the telephone to perform the booking for him, using those same sites. In that case, the net effect on the world of this particular bright line is negative: it does not advance free software in the least and only adds overhead. Privileging principle over praxis is generally a losing strategy.
Firstly I had to turn off secure boot in the bios settings; it’s in “security”.
I wasn't expecting wifi to work out of the box, but for some reason the upstream Guix install media was not able to configure the network via the Ethernet expansion card nor an external USB-C ethernet adapter that I had; it got stuck at the DHCP phase. So my initial installation attempt failed.
Then I realized that the nonguix repository has installation media, which is the same as upstream but with the vanilla kernel and linux-firmware. So on another machine where I had Guix installed, I added the nonguix channel and built the installation media, via guix system image -t iso9660 nongnu/system/install.scm. That gave me a file that I could write to a USB stick.
Using that installation media, installing was a breeze.
However upon reboot, I found that I had no wifi and I was using software rendering; clearly, installation produced an OS config with the Guix kernel instead of upstream Linux. Happily, at this point the ethernet expansion card was able to work, so connect to wired ethernet, open /etc/config.scm, add the needed lines as described in the operating-system part of the nonguix README, reconfigure, and reboot. Building Linux takes a little less than an hour on this machine.
At that point you have wifi and graphics drivers. I use GNOME, and things seem to work. However the screen defaults to 200% resolution, which makes everything really big. Crisp, pretty, but big. Really you would like something in between? Or that the Framework ships a higher-resolution screen so that 200% would be a good scaling factor; this was the case with my old Dell XPS 13, and it worked well. Anyway with the Framework laptop, I wanted 150% scaling, and it seems these days that the way you have to do this is to use Wayland, which Guix does not yet enable by default.
So you go into config.scm again, and change where it says %desktop-services to be:
(modify-services %desktop-services
  (gdm-service-type config =>
                    (gdm-configuration
                     (inherit config)
                     (wayland? #t))))
Then when you reboot you are in Wayland. Works fine, it seems. But then you have to go and enable an experimental mutter setting; install dconf-editor, run it, search for keys with “mutter” in the name, find the “experimental settings” key, tell it to not use the default setting, then click the box for “scale-monitor-framebuffer”.
Then! You can go into GNOME settings and get 125%, 150%, and so on. Great.
HOWEVER, and I hope this is a transient situation, there is a problem: in GNOME, applications that aren’t native Wayland apps don’t scale nicely. It’s like the app gets rendered to a texture at the original resolution, which then gets scaled up in a blurry way. There aren’t so many of these apps these days as most things have been ported to be Wayland-capable, Firefox included, but Emacs is one of them :( However however! If you install the emacs-pgtk package instead of emacs, it looks better. Not perfect, but good enough. So that’s where I am.
The laptop hangs on reboot due to this bug, but that seems a minor issue at this point. There is an ongoing tracker discussion on the community forum; like other problems in that thread, I hope that this one resolves itself upstream in Linux over time.
I didn’t mention the funniest thing about this laptop: it comes in pieces that you have to put together :) I am not so great with hardware, but I had no problem. The build quality seems pretty good; not a MacBook Air, but then it’s also user-repairable, which is a big strong point. It has these funny extension cards that slot into the chassis, which I have found to be quite amusing.
I haven’t had the machine for long enough but it seems to work fine up to now: suspend, good battery use, not noisy (unless it’s compiling on all 16 threads), graphics, wifi, ethernet, good compilation speed. (I should give compiling LLVM a go; that’s a useful workload.) I don’t have bluetooth or the fingerprint reader working yet; I give it 25% odds that I get around to this during the lifetime of this laptop :)
Until next time, happy hacking!
I know that some blog posts on this topic have already been published, but nevertheless I have decided to share my real-life experience of bisecting regressions in Fedora Silverblue.
My work laptop, Dell XPS 13 Plus, has an Intel IPU6 webcam. It still lacks upstream drivers, so I have to use RPMFusion packages containing Intel’s software stack to ensure the webcam’s functionality.
However, on Monday, I discovered that the webcam was not working. Just last week, it was functioning, and I had no idea what could have broken it. I don’t pay much attention to updates; they’re staged automatically and applied with the next boot. Fortunately, I use Fedora Silverblue, where problems can be identified using the bisection method.
Silverblue utilizes OSTree, essentially described as Git for the operating system. Each Fedora version is a branch, and individual snapshots are commits. You update the system by moving to newer commits and upgrade by switching to the branch with the new version. Silverblue creates daily snapshots, and since OSTree allows you to revert to any commit in the repository, you can roll back the system to any previous date.
I decided to revert to previous states and determine when the regression occurred. I downloaded commit metadata for the last 7 days:
sudo ostree pull --commit-metadata-only --depth 7 fedora fedora/39/x86_64/silverblue
Then, I listed the commits to get their labels and hashes:
sudo ostree log fedora:fedora/39/x86_64/silverblue
Subsequently, I returned the system to the beginning of the previous week:
rpm-ostree deploy 39.20240205.0
After rebooting, I found that the webcam was working, indicating that something had broken it last week. I decided to halve the interval and return to Wednesday:
rpm-ostree deploy 39.20240207.0
In the Wednesday snapshot, the webcam was no longer functioning. Now, I only needed to determine whether it was broken by Tuesday’s or Wednesday’s update. I deployed the Tuesday snapshot:
rpm-ostree deploy 39.20240206.0
I found out that the webcam was still working in this snapshot. It was broken by Wednesday's update, so I needed to identify which packages had changed. I used the hashes from one of the outputs above:
rpm-ostree db diff ec2ea04c87913e2a69e71afbbf091ca774bd085530bd4103296e4621a98fc835 fc6cf46319451122df856b59cab82ea4650e9d32ea4bd2fc5d1028107c7ab912
ostree diff commit from: ec2ea04c87913e2a69e71afbbf091ca774bd085530bd4103296e4621a98fc835
ostree diff commit to: fc6cf46319451122df856b59cab82ea4650e9d32ea4bd2fc5d1028107c7ab912
Upgraded:
kernel 6.6.14-200.fc39 -> 6.7.3-200.fc39
kernel-core 6.6.14-200.fc39 -> 6.7.3-200.fc39
kernel-modules 6.6.14-200.fc39 -> 6.7.3-200.fc39
kernel-modules-core 6.6.14-200.fc39 -> 6.7.3-200.fc39
kernel-modules-extra 6.6.14-200.fc39 -> 6.7.3-200.fc39
llvm-libs 17.0.6-2.fc39 -> 17.0.6-3.fc39
mesa-dri-drivers 23.3.4-1.fc39 -> 23.3.5-1.fc39
mesa-filesystem 23.3.4-1.fc39 -> 23.3.5-1.fc39
mesa-libEGL 23.3.4-1.fc39 -> 23.3.5-1.fc39
mesa-libGL 23.3.4-1.fc39 -> 23.3.5-1.fc39
mesa-libgbm 23.3.4-1.fc39 -> 23.3.5-1.fc39
mesa-libglapi 23.3.4-1.fc39 -> 23.3.5-1.fc39
mesa-libxatracker 23.3.4-1.fc39 -> 23.3.5-1.fc39
mesa-va-drivers 23.3.4-1.fc39 -> 23.3.5-1.fc39
mesa-vulkan-drivers 23.3.4-1.fc39 -> 23.3.5-1.fc39
qadwaitadecorations-qt5 0.1.3-5.fc39 -> 0.1.4-1.fc39
uresourced 0.5.3-5.fc39 -> 0.5.4-1.fc39
The kernel upgrade from 6.6 to 6.7 was the logical suspect. I informed a colleague who maintains the RPMFusion packages with IPU6 webcam support that the 6.7 kernel broke the webcam for me. He asked for the model and informed me that it had an Intel VSC chip, for which a driver was newly added in the 6.7 kernel. However, that driver conflicts with Intel’s software stack in the RPMFusion packages, so he asked the Fedora kernel maintainer to temporarily disable the Intel VSC driver. This change will come with the next kernel update.
Until the update arrives, I decided to stay on the last functional snapshot from the previous Tuesday. To achieve this, I pinned it in OSTree. Because the system was currently running on that snapshot, all I had to do was:
sudo ostree admin pin 0
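For completeness: once the fixed kernel arrives and I update again, the pin can presumably be dropped the same way, using the unpin option:
sudo ostree admin pin --unpin 0  # 0 = index of the deployment to unpin, as listed by rpm-ostree status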
And that’s it. Just a small real-life example of how to identify when and what caused a regression in Silverblue and how to easily revert to a version before the regression and stay on it until a fix arrives.
I love words, and have as far back as I can remember. Mom says I started reading the newspaper religiously in kindergarten, I proudly show off a small percentage of my books in my videoconferencing background, and of course the web is (still) junk straight into my veins.
So I’ve long wanted an e-ink device that could do multi-vendor ebooks, pdfs, and a feed reader. (I still consume a lot of RSS, mostly via the excellent feedbin.) But my 8yo Kindle kept not dying, and my last shot at this (a Sony) was expensive and frustrating.
Late last year, though, I found out about the Boox Page. It is… pretty great? Some notes and tips:
Added, Feb. 15: An old friend reading this said it surprised him, because my software aesthetics at this point are very much “it had better work out of the box, I have no time or patience for DIY”, and this sounded… pretty DIY.
It’s a great observation. For lots of other devices, I have no patience for something that doesn’t Just Work. But apparently (surprising me as much as anyone!) I have a lot of patience for getting my e-reading experience just so. So this did (still does) take patience, but also so far has rewarded that patience. We’ll see if it sticks.
As there was some interest and questions about my trip to Brussels and FOSDEM on Mastodon, I thought I should write down some notes and observations from the trip. This will not really be about FOSDEM itself, as there are numerous other reports from the conference.
I had some questions about how and where to buy tickets for a train journey like this.
For the first connection with the commuter train to Stockholm C, I just used my regular 30-day pass for the regional public transport in Stockholm and Uppsala, as it was already valid for the entire trip.
For the rest of the trip I made two separate reservations. One for a round-trip journey from Stockholm C to Hamburg Hbf in a shared couchette (six-bed compartment) on the EuroNight service, booked via sj.se. Leaving in the evening on Feb 1 (Thursday) at 17:34, with the return trip from Hamburg on Feb 5 (Monday) at 22:03. I then separately booked tickets on the ICE from Hamburg Hbf to Brussel Nord via bahn.de.
For the second part I chose the option with a seat reservation, but bound to specific trains. Specifically, departing from Hamburg at 10:45 on Feb 2. This gave me almost 5 hours of margin, which is perhaps a bit on the safe side, and adds to the total journey time. On the other hand it gives some extra time to have some breakfast and walk around a bit (though it's a bit early in the morning).
The return trip from Brussels to Hamburg was scheduled to arrive at 17:18, giving plenty of time (almost five hours) to get some dinner and visit some sights. Here it's more crucial to set aside time for any hiccups, as missing the night train service would be pretty awkward…
Another option would have been to get the InterRail pass. But bear in mind that night trains still require buying a reservation. And reserving seats on German ICE trains might still be a good idea to ensure getting a fixed seat (which would be especially beneficial if you intend to work on the train).
A side note: as my camera has started acting up, taking several attempts to start up after being off for a while (probably the lens mechanism getting worn after taking around 21000 pictures), I haven't taken as many pictures as usual…
I left home after lunch on the Thursday, first going for the commuter train.
Trains at Uppsala C
Arriving at Stockholm C (technically Stockholm City, the underground station connected to the metro system) I left my suitcase in a baggage locker at the station. Now the plan was to take the tram out to Djurgården (this is also where the Vasa and ABBA museums in Stockholm are located) to have some fika and enjoy the weather, as it was one of those sunny days. But since the next tram was a bus replacement, I decided to instead take a walk.
View over Sergels torg in Stockholm
Finally some fika at the cafe Lilla Hasselbacken
Then I headed back towards the central station, picked up my suitcase and went to the restaurant Belgobaren to have dinner and a couple of beers. Also a good way to warm up for a bit of Belgian spirit 😎
The nice bar at Belgobaren. This place is also the hotel restaurant for Freys Hotel
Half an hour or so before the night train was about to leave I headed back to the station.
The departure board
At track 10, next train is ours…
These are old German sleeper cars
Arriving in Hamburg the morning after, around half an hour or so behind schedule.
Had a bit of breakfast at one of the cafe places in the station.
Coffee and a sandwich
Taking a morning walk and visiting the exhibition at the city hall (which was thankfully opening at this early time of day).
Hamburg Rathaus (city hall)
Arriving back at the station awaiting the departure of the next train towards Köln (Cologne).
Shortly before the train was supposed to depart it was announced as being cancelled…
Asked some staff from Deutsche Bahn at the platform and got a suggestion to instead take a train to Düsseldorf and then on to Liège, and from there to Brussel Nord.
At Düsseldorf Hbf
Arriving in Düsseldorf and boarding the train towards Liège (destination Paris Nord), it turned out this was a Eurostar. And they did not accept my ticket, even though I think this was the train the staff had told me to take (unless I misunderstood the German). So I had to pay for a new ticket on board.
I later filed a claim for a refund for this. And as I had not registered an account at bahn.de with my e-mail address beforehand, this had to be done via a printed form and old-school mail… So this is something to keep in mind: registering your booking beforehand could be a good idea.
Eventually arrived in Brussels, a bit more than an hour delayed.
Not so much from the actual FOSDEM in this post, but OK, a few pictures from ULB…
Started off the morning on Monday Feb 5 by walking around a bit in central Brussels, saying hi to Manneken Pis, and buying some beer and chocolate.
The next thing that happened was a bit eventful though…
Headed back towards the northern station (Bruxelles Nord), as this was what I had booked a seat from (it probably wouldn't have been a problem getting on the train at the central station, but still). I took the tram from De Brouckère towards the northern station, with about an hour to spare. These lines (3 and 4) run in a tunnel (this is what could be called a pre-metro). After the stop at Rogier (one stop from where I was about to get off) there was a sudden stop, and a power outage! (ouch!)
After maybe half an hour we had to evacuate, walking through the tunnel back to Rogier. Then walked to the station. Still in quite good time for the train (but I was getting quite nervous for a while).
Later on, the ICE towards Köln was some 20 min delayed. This train was supposed to continue on to Frankfurt am Main, but turned back to Brussels at Köln. Fortunately (for me) this didn't affect me, as I was getting off anyway.
At Köln Hbf
Got a little less time than planned to get some snack at Köln Hbf.
The old Soviet submarine, now the U-434 submarine museum.
And after that a nice dinner at Blockbräu.
Blockbräu restaurant
And then a quick look at the old tunnel under the river Elbe, where you take elevators down. There is still construction going on and one of the tunnel lines is closed. It seems to only be open to pedestrian and bike traffic. This was also the case when I visited last year.
The old tunnel
Elevators taking cars. There are also smaller „regular” elevators. But I guess those are much newer…
Back at the station awaiting the train.
The night train to Stockholm
And the morning after looking out from the last car of the train on the long straight around Mjölby
And then a bit later, after lunch, I arrived home after a nice weekend in Brussels attending FOSDEM, listening to interesting FOSS talks and hanging out a bit with GNOME folks.
(yes, you read that right: the ‘f’ is not capitalized anymore)
This release doesn’t introduce groundbreaking new functionality. But there are quite a few quality of life improvements worth checking out.
Marking all articles as read is now more predictable and can be undone. It takes into account whether the article list is filtered by a search term or showing only starred articles: only those articles will get marked as read. If you still accidentally mark more articles as read than intended, there is a grace period that allows you to undo your action.
Subscribing to a new feed used to add it to the sidebar, but no articles were fetched until the next sync. No longer: Newsflash will now fetch articles of that particular feed right after it is added.
Speaking of fetching articles for feeds: if a feed fails to download or parse for some reason it will now display an icon in the sidebar. This works for local RSS and all services that propagate their errors via their API.
My personal highlight in this release is the miniflux implementation supporting enclosures. This means more thumbnails and attachments for users if they run a recent version of miniflux (>= 2.0.49).
Other than that, a few more smaller changes and bug fixes can be found in Newsflash 3.1.
Get it on flathub
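Or from the terminal; a minimal sketch, assuming the Flathub application ID io.gitlab.news_flash.NewsFlash (the ID is not spelled out in this post, so check the Flathub page):
flatpak install flathub io.gitlab.news_flash.NewsFlash  # app ID assumed, verify on Flathub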
3D Printing Slicers
I recently replaced my Flashforge Adventurer 3 printer that I had been using for a few years as my first printer with a BambuLab X1 Carbon, wanting a printer that was not a “project” so I could focus on modelling and printing. It's an investment, but my partner convinced me that I was using the printer often enough to warrant it, and told me to look out for Black Friday sales, which I did.
The hardware-specific slicer, Bambu Studio, was available for Linux, but only as an AppImage, with many people reporting crashes on startup, non-working video live view, and other problems that the hardware maker tried to work around by shipping separate AppImage variants for Ubuntu and Fedora.
After close to 150 patches to the upstream software (which, in hindsight, I could probably have avoided by compiling the C++ code with LLVM), I managed to “flatpak” the application and make it available on Flathub. It's reached 3k installs in about a month, which is quite a bit for a niche piece of software.
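If you want to give it a try, it should be installable straight from Flathub; a minimal sketch, assuming the application ID com.bambulab.BambuStudio (check the Flathub page for the exact ID):
flatpak install flathub com.bambulab.BambuStudio  # app ID assumed, verify on Flathub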
Note that if you click the “Donate” button on the Flathub page, it will take you to a page where you can feed my transformed fossil fuel addiction (that is, buy filament) for repairs and for printing perfectly fitting everyday items, rather than bulk importing them from the other side of the planet.
Preparing a Game Gear consoliser shell
I will continue to maintain the FlashPrint slicer for FlashForge printers, installed by nearly 15k users, although I have now enabled automated updates and will no longer be updating the release notes, which required manual intervention.
FlashForge have unfortunately never answered my queries about making this distribution of their software official (and fixing the crash when using a VPN...).
Rhythmbox
As I was updating the Rhythmbox Flatpak on Flathub, I realised that it just reached 250k installs, which puts the number of installations of those 3D printing slicers above into perspective.
The updated screenshot used on Flathub
Congratulations, and many thanks, to all the developers that keep on contributing to this very mature project, especially Jonathan Matthew who's been maintaining the app since 2008.
After three months of development, Gameeky reaches its first public release. This project is the result of nearly fifteen years of experience contributing to education projects and mentoring young learners.
I am happy to share it with everyone!
Although this project can still be considered in the early stages of development, it’s not far from being feature complete. This first release comes with the following:
Gameeky is available in English and Spanish, including the beginner’s guide.
The recommended installation method is through GNOME Software from Flathub. Alternatively, it can also be installed from the terminal:
flatpak --user install flathub dev.tchx84.Gameeky
The first thematic pack can also be installed from the terminal:
flatpak --user install flathub dev.tchx84.Gameeky.ThematicPack.FreedomValley
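Once installed, it can be launched like any other Flatpak app, either from GNOME Shell or directly from the terminal:
flatpak run dev.tchx84.Gameeky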
Moving forward, these are the next steps for Gameeky:
To achieve some of these goals I plan to take Gameeky to its users, by organizing workshops and hackathons with students and teachers here in Paraguay.
If you know any organization or individual that might be interested in supporting these workshops financially or otherwise, please feel free to contact me.