January 22, 2022

Builder + podman adventures

I switched to Fedora Silverblue not too long ago and was quite surprised by how well everything works out of the box. One day we got a question about CMake and the backend we use for our CMake plugin. This is the start of a long adventure:

Why is GLFW not resolving correctly?

The demo project I got my hands on was some example code for the GLFW library (an OpenGL library). I spun up my podman development container and installed the library.

toolbox enter
sudo dnf install glfw-devel

I thought this should work. Then I realized that our clang machinery (symbol resolution, include validation, the hover provider, and the diagnostic provider) was acting weird. Everything was covered in red squiggles and nothing worked. I was puzzled, because my experience with toolbox/podman had been smooth all along.

The fool who stays inside

I had never used anything relevant from outside of my own universe. I was developing on librest, and there everything worked out of the box. Of course, all the libraries I use there are available in the Builder flatpak, which is why I never had any problems. I had just been getting wrong information from a totally different development environment.

How does Builder resolve symbols?

Builder has its own little clang server. When Builder starts we also start gnome-builder-clang, which acts as our “language server”. Technology-wise it's different, but the memory footprint is low and it's fast. So it works for us.

builder-clang.png

By default this server only has access to the GNOME Builder flatpak environment (when Builder runs as a flatpak; the native build is different). So it will provide everything from there. In order to access the host, we translate files (includes etc.) to /var/host/ so we can also escape the flatpak environment (this is, by the way, a reason we can probably never be sandboxed). In the case of Fedora Silverblue, neither option is enough. My host is immutable and does not even contain a compiler or any development files. I need access to podman's environments.

Enter the pod

Podman relies on the same techniques as LXC. A running container is initially based on an image, which gets mutable layers on top. So we get a filesystem with all the changes in there.

podman-layers.png

In order to find the file now (for example GLFW/glfw3.h, the header file of the library I installed), I have to find my container, reconstruct the layer hierarchy, and search for my header from the top layer down to the image layer. Once I have found it, I can build a host-compatible path and feed this into the clang machinery.
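
As a rough sketch of what that lookup boils down to (this is not the actual Builder code; the layer list, helper name, and paths are made up for illustration), you walk the ordered layer directories from the topmost mutable layer down to the base image layers and return the first hit as a host-side path:

#include <glib.h>

static char *
find_header_on_host (const char * const *layer_dirs,    /* ordered: top layer -> image layers */
                     const char          *relative_path) /* e.g. "usr/include/GLFW/glfw3.h"    */
{
  /* Check each layer directory in turn; the first match wins, because an
   * upper layer shadows the same path in the layers below it. */
  for (gsize i = 0; layer_dirs[i] != NULL; i++)
    {
      g_autofree char *candidate = g_build_filename (layer_dirs[i], relative_path, NULL);

      if (g_file_test (candidate, G_FILE_TEST_EXISTS))
        return g_steal_pointer (&candidate);
    }

  return NULL; /* not found in any layer */
}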

The actual change to Builder’s code is particularly small, but the effort to understand how to build correct translations was huge. I want to thank Christian Hergert for his help. He had a similar problem with debug symbols in sysprof, and his preliminary work made resolving this problem way easier.

Creating GtkSourceView style schemes

GtkSourceView has the concept of “style schemes” which map language features (types, keywords, strings, etc) to various colors and font properties. Creating them can be a bit laborious even if you’re starting with a color palette prepared. The artistic process is iterative so reducing the time between iterations is paramount.

Furthermore, I have aphantasia which means I need to be able to see things visually because I lack the wiring to simply imagine it in my head.

With that in mind, I spent a little time over the holiday season creating a standalone application to create or modify style schemes. It provides live preview and targets the new features in GtkSourceView 5.x. It’s not a Circle app or anything yet, but you can always grab it from the CI pipeline artifacts which include a Flatpak bundle.

It is a simple application with humble requirements.

  • Edit attributes about a style scheme.
  • Manage the color palette and allow importing color palettes from external formats (such as GIMP color palettes).
  • Visually modify styles for global, common, and per-language override.

My hope is that this encourages people to create art, even if you have neurological wiring similar to mine.

January 21, 2022

Twitch: GNOME live coding streaming

This year I've started with something that I had wanted to do since the first COVID lockdown, but never did. So as a New Year's resolution I decided to start streaming, and I created my Twitch channel, Abentogil.

I've been thinking about streaming since lockdown, when a group of Spanish people created a nice initiative to teach kids how to code and other tech stuff. I never participated in that initiative, but it was the seed of this.

This year I've seen other GNOME streamers doing live coding, like ebassi and Georges, and in the end I plucked up the courage to start.

The plan

I've started with one hour at the end of my day, so I'm trying to stream from Monday to Thursday, from 20:00 to 21:00 (CET).

This routine will also help me to spend more time on free software development regularly, so not just on weekends or holidays, but a bit of work every day.

Right now I've started working on my own projects, like Timetrack and Loop. I've fixed some issues that I had with Gtk4 in Timetrack and I've started to work on the Loop app to add a MIDI keyboard.

The other day I discovered the Music Grid app idea on GitLab, and it's related to the MIDI implementation I've been working on, so the next thing I'll do on my streams is create this music grid widget and add it to the Loop app, to have a nice music creation app.

The language: Live coding in Spanish

I've decided to do the streaming mainly in Spanish. The main reason is that nowadays you can find documentation and a lot of videos in English, but for people who haven't mastered English, it's harder to follow this content and even harder to participate, to ask, or to say something.

Spanish is also my main language, and the idea is not just to create tutorials or something like that; this is just for fun, and if I'm able to create a small community or influence someone, I want to show that language is not an unbreakable barrier.

I want to make this for the Spanish community, but that doesn't mean I cannot talk in English for some streams. It all depends on the audience; of course, if a non-Spanish speaker comes and asks something, I'll answer in English and try to do a multilingual stream.

International English is the language to use in free software, but it's really important not to turn that into a barrier, because there are a lot of different communities and people who don't know the language, and the first contact with something new is always better in your main language. Everything is easier with help from people who are close to your cultural background.

The future

Right now I'm working on Loop, and every stream is similar: live coding on this Gtk4 music app. But in the future I'll do different streams, doing different things on each weekday. These are the current ideas that could become a day on my stream:

  • Maintenance day: Work on my apps, code review, fixes, releases, and development.
  • GNOME Love day: Give some love to the GNOME project, look for simple bugs to solve in different projects, help with translations, initiatives, and other possible one-hour tasks to give some love to the project in general.
  • Learning day: Pick some technology and learn by doing, for example, I can explore new programming languages writing a simple app, learn about Linux writing a driver, learn how to animate with blender, anything new to play and discover.
  • Newcomers day: Create simple tutorials or introductions to some technology: how to write a GNOME Shell extension, how to create a Gtk4 app in Python, teach coding from zero, etc.

This is just a list of ideas; if I discover any other interesting use for this time and stream, I can do that. The final goal is to have fun, and if someone learns something along the journey, that's even better.

I'll also upload any interesting stream to YouTube, so the good content can be viewed on demand.

So if you like this, don't hesitate to watch some of the streams and say "Hola".

Further investments in desktop Linux

This was originally posted on the GNOME Foundation news feed

The GNOME Foundation was supported during 2020-2021 by a grant from Endless Network which funded the Community Engagement Challenge, strategy consultancy with the board, and a contribution towards our general running costs. At the end of last year we had a portion of this grant remaining, and after the success of our work in previous years directly funding developer and infrastructure work on GTK and Flathub, we wanted to see whether we could use these funds to invest in GNOME and the wider Linux desktop platform.

We’re very pleased to announce that we got approval to launch three parallel contractor engagements, which started over the past few weeks. These projects aim to improve our developer experience, make more applications available on the GNOME platform, and move towards equitable and sustainable revenue models for developers within our ecosystem. Thanks again to Endless Network for their support on these initiatives.

Flathub – Verified apps, donations and subscriptions (Codethink and James Westman)

This project is described in detail on the Flathub Discourse, but the goal is to add a process to verify first-party apps on Flathub (i.e. uploaded by a developer or an authorised representative) and then make it possible for those developers to collect donations or subscriptions from users of their applications. We also plan to publish a separate repository that contains only these verified first-party uploads (without any of the community-contributed applications), as well as providing a repository with only free and open source applications, allowing users to choose what they are comfortable installing and running on their system.

Creating the user and developer login system to manage your apps will also set us up well for future enhancements, such as managing tokens for direct binary uploads (e.g. from a CI/CD system hosted elsewhere, as is already done with Mozilla Firefox and OBS) and making it easier to publish apps from systems such as Electron which can be hard to use within a flatpak-builder sandbox. For updates on this project you can follow the Discourse thread, check out the work board on GitHub or join us on Matrix.

PWAs – Integrating Progressive Web Apps in GNOME (Phaedrus Leeds)

While everyone agrees that native applications can provide the best experience on the GNOME desktop, the web platform, and particularly PWAs (Progressive Web Apps) which are designed to be downloadable as apps and offer offline functionality, makes it possible for us to offer equivalent experiences to other platforms for app publishers who have not specifically targeted GNOME. This allows us to attract and retain users by giving them the choice of using applications from a wider range of publishers than are currently directly targeting the Linux desktop.

The first phase of the GNOME PWA project involves adding back support to Software for web apps backed by GNOME Web, and making this possible when Web is packaged as a Flatpak.  So far some preparatory pull requests have been merged in Web and libportal to enable this work, and development is ongoing to get the feature branches ready for review.

Discussions are also in progress with the Design team on how best to display the web apps in Software and on the user interface for web apps installed from a browser. There has also been discussion among various stakeholders about what web apps should be included as available with Software, and how they can provide supplemental value to users without taking priority over apps native to GNOME.

Finally, technical discussion is ongoing in the portal issue tracker to ensure that the implementation of a new dynamic launcher portal meets all security and robustness requirements, and is potentially useful not just to GNOME Web but Chromium and any other app that may want to install desktop launchers. Adding support for the launcher portal in upstream Chromium, to facilitate Chromium-based browsers packaged as a Flatpak, and adding support for Chromium-based web apps in Software are stretch goals for the project should time permit.

GTK4 / Adwaita – To support the adoption of Gtk4 by the community (Emmanuele Bassi)

With the release of GTK4 and renewed interest in GTK as a toolkit, we want to continue improving the developer experience and ease of use of GTK and ensure we have a complete and competitive offering for developers considering using our platform. This involves identifying missing functionality or UI elements that applications need to move to GTK4, as well as informing the community about the new widgets and functionality available.

We have been working on documentation and bug fixes for GTK in preparation for the GNOME 42 release and have also started looking at the missing widgets and API in Libadwaita, in preparation for the next release. The next steps are to work with the Design team and the Libadwaita maintainers and identify and implement missing widgets that did not make the cut for the 1.0 release.

In the meantime, we have also worked on writing a beginners tutorial for the GNOME developers documentation, including GTK and Libadwaita widgets so that newcomers to the platform can easily move between the Interface Guidelines and the API references of various libraries. To increase the outreach of the effort, Emmanuele has been streaming it on Twitch, and published the VOD on YouTube as well. 

Aditi’s Open Source Journey

Now to the BIG question, How did I get into open source? I read about Google Summer of Code and Outreachy internships as a part of my research. That’s when I came across Priyanka Saggu. She was a previous Outreachy intern with GNOME. She guided me on how to get started with open source. One…

#27 Borderless

Update on what happened across the GNOME project in the week from January 14 to January 21.

Core Apps and Libraries

GNOME Shell

Core system user interface for things like launching apps, switching windows, system search, and more.

Sam Hewitt announces

The desktop Shell is getting a big visual refresh for GNOME 42! In addition to a palette update, elements throughout the shell have been given a rounder appearance. Panel menus have also gotten a major redesign, with a new style for sub-menus. The on-screen keyboard is getting big improvements to key visual feedback and word suggestions. Not to mention a tonne of other smaller fixes.

Settings

Configure various aspects of your GNOME desktop.

Georges Stavracas (feaneron) says

This week I ported the Online Accounts panel to GTK4, and landed redesigns of the Display and Applications panels in Settings.

WebKitGTK

GTK port of the WebKit rendering engine.

adrian reports

We have released WebKitGTK 2.34.4, which includes a number of security fixes. While the release notes are sparse, it is worth mentioning that it includes an important patch for the recently disclosed Safari IndexedDB leak vulnerability.

Software

Lets you install and update applications and system extensions.

Philip Withnall announces

Milan Crha has improved the display of permissions needed by Flatseal in GNOME Software

GJS

Use the GNOME platform libraries in your JavaScript programs. GJS powers GNOME Shell, Polari, GNOME Documents, and many other apps.

ptomato announces

In GJS this week:

  • GJS upgraded its underlying JS engine to SpiderMonkey 91, bringing lots of modern JS conveniences. This upgrade was done by Evan Welsh, Chun-wei Fan, and myself. Here’s a sampler of what we get:
    • #privateFields and #methods()
    • The ??=, &&=, and ||= operators
    • The at() method for arrays and strings, allowing indexing with negative numbers
    • Promise.any()
    • Error causes
    • WeakRefs
    • More locale-aware formatting features
  • Evan also added standards-compliant setTimeout() and setInterval() to GJS; these can now be used as in web browsers, while still integrating with GLib’s main loop.
  • Evan also added overrides for GObject.Object.new() and GObject.Object.new_with_properties() to make them work with properties.
  • Previously, pressing Ctrl+D at the debugger prompt would print an error message instead of quitting. I fixed this.
  • I added column numbers to SyntaxError messages, to go along with the line number.
  • Yet more thanks to Evan for various other contributions.

Circle Apps and Libraries

gtk-rs

Safe bindings to the Rust language for fundamental libraries from the GNOME stack.

Bilal Elmoussaoui reports

After months of working on the gtk-rs bindings, we finally made a new release! 🎉 The release comes with support for various new APIs, like

  • BuilderScope support in gtk4-rs, which means you can finally set function names in the UI file and define the callbacks in your Rust code
  • gdk3 wayland API bindings
  • A release of almost all the gir based Rust bindings in World/Rust
  • A brand new GStreamer plugin that allows you to “stream” your pipeline to a GdkPaintable. You can find more details in the release blog post and in the GStreamer bindings/plugins release blog post

Third Party Projects

Romain reports

I wrote UI Shooter, a tool to make screenshots of GTK4 widgets from a UI file.

It allows loading CSS, resources and translations, setting scale and dark color scheme, and using libadwaita’s stylesheet. It’s mainly intended to be used in headless environments, so I provide a container image running the Weston compositor that can be used as is or extended at will.

I use it in Metadata Cleaner’s CI pipeline to automatically take screenshots of various widgets for the help pages when a translation is added or updated.

Doomsdayrs announces

Announcing, gtk-kt https://gitlab.com/gtk-kt/gtk-kt

gtk-kt is a Kotlin binding of the GTK API, allowing developers who are familiar with Java or Kotlin to easily write GTK applications.

It is also an easy, safe way for new programmers to start creating GTK applications: a single window needs only 10 lines and 154 characters, compared to 26 lines and 602 characters in C. That is a whopping 75% fewer characters for a simple window; imagine the difference for larger projects with more complex components.

It is nearing completion, with 97.49% of GTK classes wrapped in Kotlin, which led me to release the first alphas to https://maven.org .

Also being developed/planned is libadwaita (https://gitlab.com/gtk-kt/libadwaita-kt) support and xdg-portal (https://gitlab.com/gtk-kt/libportal-kt) support.

Aaron Erhardt reports

Relm4 0.4 was released this week with many improvements! The highlights include many macro improvements, type-safe actions, more flexibility at runtime and updated dependencies. The full release announcement can be found here.

Phosh

A pure wayland shell for mobile devices.

Guido says

phosh got a VPN quicksetting last week that toggles the last used VPN connection. On the compositor side (phoc) we updated to a newer wlroots, which allowed us to enable the xdg-foreign and viewporter Wayland protocols (which help flatpaks position file dialogs better and help some video workloads, respectively).

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

January 17, 2022

Status update, 17/01/2022

Happy 2022 everyone! Hope it was a safe one. I managed to travel a bit and visit my family while somehow dodging Omicron each step of the way. I guess you can't ask for much more than that.

I am keeping busy at work integrating BuildStream with a rather large, tricky set of components, Makefiles and a custom dependency management system. I am appreciating how much flexibility BuildStream provides. As an example, some internal tools expect builds to happen at a certain path. BuildStream makes it easy to customize the build path by adding this stanza to the .bst file:

variables:
    build-root: /magic/path

I am also experimenting with committing whole build trees as artifacts, as a way to distribute tests which are designed to run within a build tree. I think this will be easier in Bst 2.x, but it’s not impossible in Bst 1.6 either.

Besides that I have been mostly making music, stay tuned for a new Vladimir Chicken EP in the near future.

Boot Guard and PSB have user-hostile defaults

Compromising an OS without it being detectable is hard. Modern operating systems support the imposition of a security policy or the launch of some sort of monitoring agent sufficiently early in boot that even if you compromise the OS, you're probably going to have left some sort of detectable trace[1]. You can avoid this by attacking the lower layers - if you compromise the bootloader then it can just hotpatch a backdoor into the kernel before executing it, for instance.

This is avoided via one of two mechanisms. Measured boot (such as TPM-based Trusted Boot) makes a tamper-proof cryptographic record of what the system booted, with each component in turn creating a measurement of the next component in the boot chain. If a component is tampered with, its measurement will be different. This can be used to either prevent the release of a cryptographic secret if the boot chain is modified (for instance, using the TPM to encrypt the disk encryption key), or can be used to attest the boot state to another device which can tell you whether you're safe or not. The other approach is verified boot (such as UEFI Secure Boot), where each component in the boot chain verifies the next component before executing it. If the verification fails, execution halts.

In both cases, each component in the boot chain measures and/or verifies the next. But something needs to be the first link in this chain, and traditionally this was the system firmware. Which means you could tamper with the system firmware and subvert the entire process - either have the firmware patch the bootloader in RAM after measuring or verifying it, or just load a modified bootloader and lie about the measurements or ignore the verification. Attackers had already been targeting the firmware (Hacking Team had something along these lines, although this was pre-secure boot so just dropped a rootkit into the OS), and given a well-implemented measured and verified boot chain, the firmware becomes an even more attractive target.

Intel's Boot Guard and AMD's Platform Secure Boot attempt to solve this problem by moving the validation of the core system firmware to an (approximately) immutable environment. Intel's solution involves the Management Engine, a separate x86 core integrated into the motherboard chipset. The ME's boot ROM verifies a signature on its firmware before executing it, and once the ME is up it verifies that the system firmware's bootblock is signed using a public key that corresponds to a hash blown into one-time programmable fuses in the chipset. What happens next depends on policy - it can either prevent the system from booting, allow the system to boot to recover the firmware but automatically shut it down after a while, or flag the failure but allow the system to boot anyway. Most policies will also involve a measurement of the bootblock being pushed into the TPM.

AMD's Platform Secure Boot is slightly different. Rather than the root of trust living in the motherboard chipset, it's in AMD's Platform Security Processor which is incorporated directly onto the CPU die. Similar to Boot Guard, the PSP has ROM that verifies the PSP's own firmware, and then that firmware verifies the system firmware signature against a set of blown fuses in the CPU. If that fails, system boot is halted. I'm having trouble finding decent technical documentation about PSB, and what I have found doesn't mention measuring anything into the TPM - if this is the case, PSB only implements verified boot, not measured boot.

What's the practical upshot of this? The first is that you can't replace the system firmware with anything that doesn't have a valid signature, which effectively means you're locked into firmware the vendor chooses to sign. This prevents replacing the system firmware with either a replacement implementation (such as Coreboot) or a modified version of the original implementation (such as firmware that disables locking of CPU functionality or removes hardware allowlists). In this respect, enforcing system firmware verification works against the user rather than benefiting them.
Of course, it also prevents an attacker from doing the same thing, but while this is a real threat to some users, I think it's hard to say that it's a realistic threat for most users.

The problem is that vendors are shipping with Boot Guard and (increasingly) PSB enabled by default. In the AMD case this causes another problem - because the fuses are in the CPU itself, a CPU that's had PSB enabled is no longer compatible with any motherboards running firmware that wasn't signed with the same key. If a user wants to upgrade their system's CPU, they're effectively unable to sell the old one. But in both scenarios, the user's ability to control what their system is running is reduced.

As I said, the threat that these technologies seek to protect against is real. If you're a large company that handles a lot of sensitive data, you should probably worry about it. If you're a journalist or an activist dealing with governments that have a track record of targeting people like you, it should probably be part of your threat model. But otherwise, the probability of you being hit by a purely userland attack is so ludicrously high compared to you being targeted this way that it's just not a big deal.

I think there's a more reasonable tradeoff than where we've ended up. Tying things like disk encryption secrets to TPM state means that if the system firmware is measured into the TPM prior to being executed, we can at least detect that the firmware has been tampered with. In this case nothing prevents the firmware being modified, there's just a record in your TPM that it's no longer the same as it was when you encrypted the secret. So, here's what I'd suggest:

1) The default behaviour of technologies like Boot Guard or PSB should be to measure the firmware signing key and whether the firmware has a valid signature into PCR 7 (the TPM register that is also used to record which UEFI Secure Boot signing key is used to verify the bootloader).
2) If the PCR 7 value changes, the disk encryption key release will be blocked, and the user will be redirected to a key recovery process. This should include remote attestation, allowing the user to be informed that their firmware signing situation has changed.
3) Tooling should be provided to switch the policy from merely measuring to verifying, and users at meaningful risk of firmware-based attacks should be encouraged to make use of this tooling.

This would allow users to replace their system firmware at will, at the cost of having to re-seal their disk encryption keys against the new TPM measurements. It would provide enough information that, in the (unlikely for most users) scenario that their firmware has actually been modified without their knowledge, they can identify that. And it would allow users who are at high risk to switch to a higher security state, and for hardware that is explicitly intended to be resilient against attacks to have different defaults.

This is frustratingly close to possible with Boot Guard, but I don't think it's quite there. Before you've blown the Boot Guard fuses, the Boot Guard policy can be read out of flash. This means that you can drop a Boot Guard configuration into flash telling the ME to measure the firmware but not prevent it from running. But there are two problems remaining:

1) The measurement is made into PCR 0, and PCR 0 changes every time your firmware is updated. That makes it a bad default for sealing encryption keys.
2) It doesn't look like the policy is measured before being enforced. This means that an attacker can simply reflash modified firmware with a policy that disables measurement and then make a fake measurement that makes it look like the firmware is ok.

Fixing this seems simple enough - the Boot Guard policy should always be measured, and measurements of the policy and the signing key should be made into a PCR other than PCR 0. If an attacker modified the policy, the PCR value would change. If an attacker modified the firmware without modifying the policy, the PCR value would also change. People who are at high risk would run an app that would blow the Boot Guard policy into fuses rather than just relying on the copy in flash, and enable verification as well as measurement. Now if an attacker tampers with the firmware, the system simply refuses to boot and the attacker doesn't get anything.

Things are harder on the AMD side. I can't find any indication that PSB supports measuring the firmware at all, which obviously makes this approach impossible. I'm somewhat surprised by that, and so wouldn't be surprised if it does do a measurement somewhere. If it doesn't, there's a rather more significant problem - if a system has a socketed CPU, and someone has sufficient physical access to replace the firmware, they can just swap out the CPU as well with one that doesn't have PSB enabled. Under normal circumstances the system firmware can detect this and prompt the user, but given that the attacker has just replaced the firmware we can assume that they'd do so with firmware that doesn't decide to tell the user what just happened. In the absence of better documentation, it's extremely hard to say that PSB actually provides meaningful security benefits.

So, overall: I think Boot Guard protects against a real-world attack that matters to a small but important set of targets. I think most of its benefits could be provided in a way that still gave users control over their system firmware, while also permitting high-risk targets to opt-in to stronger guarantees. Based on what's publicly documented about PSB, it's hard to say that it provides real-world security benefits for anyone at present. In both cases, what's actually shipping reduces the control people have over their systems, and should be considered user-hostile.

[1] Assuming that someone's both turning this on and actually looking at the data produced


January 16, 2022

Announce new release 0.9.0 of librest

I’m pleased to announce the release of librest 0.9.0, a library meant to interact with “RESTful” web services. This library is very old and not really big, but it handles interaction with REST APIs in a convenient fashion. After a long period in maintenance mode, I picked it up and brought it into 2022. Most of the deprecated API calls are gone now, and it should now be possible to parallel-install librest with the previous release.

So what is this in detail and how does it work?

Basically, it's an abstraction over libsoup, which is an HTTP client. If you use libsoup directly, you usually end up creating some kind of abstraction for your use case anyway, in order to encapsulate all the REST functions with their corresponding HTTP calls. Here `librest` tries to help by providing an abstraction that fits the REST use case in general.

Getting started

Typically a REST interface consists of the host from which data needs to be fetched (or to which it is posted) and a function.

rest01.png

librest encapsulates the host in a RestProxy. Every call to the REST interface starts from there. The creation is as simple as:

RestProxy *proxy = rest_proxy_new ("https://www.gitlab.com/api/v4/", FALSE);

It is possible to make the host parametrizable. For example, if we want to support different API versions, we can do something like this:

RestProxy *proxy = rest_proxy_new ("https://www.gitlab.com/api/%s/", TRUE);

I will describe later how to fill in the %s placeholder in order to choose which API version we want to target.

Calling a function on the host

In order to call a function on the host, we create a new RestProxyCall object.

RestProxyCall *call = rest_proxy_new_call (proxy);

Of course, this object now needs some configuration in order to call the correct function.

rest_proxy_call_set_function (call, "version");

If you created a parametrizable proxy, you should bind it before creating a new call:

rest_proxy_bind (proxy, "v4");

OK, now we have everything configured, so we can execute the call:

// sync variant
GError *error = NULL;
rest_proxy_call_sync (call, &error);

// or async variant
rest_proxy_call_invoke_async (call, cancellable, my_callback_when_done, user_data);

As we can see, it is possible to call the REST interface in a blocking or in a non-blocking variant (GUI tools should use the latter).
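
For the async variant, the callback you pass (my_callback_when_done above) is a regular GAsyncReadyCallback. A minimal sketch could look like the following; I'm assuming the conventional GIO-style rest_proxy_call_invoke_finish() counterpart and that the call comes back as the async source object, so check the API reference for the exact names and signatures:

static void
my_callback_when_done (GObject      *source,
                       GAsyncResult *result,
                       gpointer      user_data)
{
  RestProxyCall *call = REST_PROXY_CALL (source);
  g_autoptr(GError) error = NULL;

  /* Assumed _finish counterpart of rest_proxy_call_invoke_async() */
  if (!rest_proxy_call_invoke_finish (call, result, &error))
    {
      g_warning ("REST call failed: %s", error->message);
      return;
    }

  /* The payload is not null-terminated, so print it using its length */
  g_print ("%.*s\n",
           (int) rest_proxy_call_get_payload_length (call),
           rest_proxy_call_get_payload (call));
}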

Additional configuration options

By default, a RestProxyCall calls the REST interface via GET in order to fetch data. Of course, it's possible to use other HTTP methods too, via:

rest_proxy_call_set_method (call, "POST");

If you want to add query parameters or a payload to your call, just use the params interface:

rest_proxy_call_add_param (call, "key", "value");

Depending on the HTTP method, this will be transformed into a query string or a payload.

Get resources after call

When a call is successful, you can inspect the response from the server via:

const gchar *payload = rest_proxy_call_get_payload (call);
goffset payload_length = rest_proxy_call_get_payload_length (call);

NOTE: the payload string is not null-terminated, as it's contained in a GBytes structure. Therefore it is always recommended to use the length too. Transforming it into a null-terminated string can be done via:

gchar *payload_null_terminated = g_strndup (payload, payload_length);

What about authentication?

Most REST APIs need some sort of authentication. Sadly, the classic HTTP authentication workflow (unauthenticated call -> response 401 -> try basic/digest auth -> response 200) does not work, because it's possible to call some REST APIs as an unauthorized user as well.

Besides that problem, this is most often not perfect security, so there are many more options an API provider can choose from. For example, it is possible to secure the API with OAuth or OAuth2. Both are possible with librest, but it is necessary to use the dedicated proxies for these authentication methods.

// OAuth 1.0 or 1.0a
oauth_proxy_new (consumer_key, consumer_secret, url, binding_required);

// OAuth 2
rest_oauth2_proxy_new (authurl, tokenurl, redirect, client_id, client_secret, url);

The former was already present in earlier versions of librest. I wanted to keep the namespace intact, which is the reason the OAuth2 proxy uses the rest_ prefix.

Summary

I will continue working on that nifty little library, as I think it's an important part of today's communication possibilities. I will try to reduce the code in GNOME Online Accounts, because it currently has to maintain the same functionality, which would fit perfectly into librest.

January 15, 2022

Pulling on a thread

I’m attending the https://linux.conf.au/ conference online this weekend, which is always a good opportunity for some sideline hacking.

I found something boneheaded doing that today.

There have been a few times while inventing the OpenHMD Rift driver where I’ve noticed something strange and followed the thread until it made sense. Sometimes that leads to improvements in the driver, sometimes not.

In this case, I wanted to generate a graph of how long the computer vision processing takes – from the moment each camera frame is captured until poses are generated for each device.

To do that, I have some logging branches that output JSON events to log files, and I write scripts to process those. I used that data and produced:

Pose recognition latency.
dt = interpose spacing, delay = frame to pose latency

Two things caught my eye in this graph. The first is the way the baseline latency (pink lines) increases from ~20ms to ~58ms. The 2nd is the quantisation effect, where pose latencies are clearly moving in discrete steps.

Neither of those should be happening.

Camera frames are being captured from the CV1 sensors every 19.2ms, and it takes 17-18ms for them to be delivered across the USB. Depending on how many IR sources the cameras can see, figuring out the device poses can take a different amount of time, but the baseline should always hover around 17-18ms because the fast “device tracking locked” case takes as little as 1ms.

Did you see me mention 19.2ms as the interframe period? Guess what the spacing of those quantisation levels is in the graph? I recognised it as implying that something in the processing is tied to frame timing when it should not be.

OpenHMD Rift CV1 tracking timing

This 2nd graph helped me pinpoint what exactly was going on. This graph is cut from the part of the session where the latency has jumped up. What it shows is a ~1 frame delay between when the frame is received (frame-arrival-finish-local-ts) and when the initial analysis even starts!

That could imply that the analysis thread is just busy processing the previous frame and doesn’t get to start working on the new one yet – but the graph says that fast analysis is typically done in 1-10ms at most. It should rarely be busy when the next frame arrives.

This is where I found the boneheaded code – a rookie mistake I made when putting the image analysis threads in place early on in the driver development, and never noticed since.

There are 3 threads involved:

  • USB service thread, reading video frame packets and assembling pixels in framebuffers
  • Fast analysis thread, that checks tracking lock is still acquired
  • Long analysis thread, which does brute-force pose searching to reacquire / match unknown IR sources to device LEDs

These 3 threads communicate using frame worker queues passing frames between each other. Each analysis thread does this pseudocode:

while driver_running:
    Pop a frame from the queue
    Process the frame
    Sleep for new frame notification

The problem is in the 3rd line. If the driver is ever still processing the frame in line 2 when a new frame arrives – say because the computer got really busy – the thread sleeps anyway and won’t wake up until the next frame arrives. At that point, there’ll be 2 frames in the queue, but it only still processes one – so the analysis gains a 1 frame latency from that point on. If it happens a second time, it gets later by another frame! Any further and it starts reclaiming frames from the queues to keep the video capture thread fed – but it only reclaims one frame at a time, so the latency remains!

The fix is simple:

while driver_running:
   Pop a frame
   Process the frame
   if queue_is_empty():
     sleep for new frame notification
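
In C, that "only sleep when there is nothing left to do" pattern is typically expressed with a mutex and a condition variable. A minimal sketch (hypothetical types and names, not the actual OpenHMD code) might look like this:

#include <pthread.h>
#include <stddef.h>

typedef struct frame {
  struct frame *next;
  /* ... pixel data, timestamps, etc. ... */
} frame_t;

typedef struct {
  pthread_mutex_t lock;
  pthread_cond_t  cond;   /* signalled by the USB thread when a frame is queued */
  frame_t        *head;   /* NULL when the queue is empty */
  int             running;
} frame_queue_t;

static frame_t *
frame_queue_pop_wait (frame_queue_t *q)
{
  frame_t *frame = NULL;

  pthread_mutex_lock (&q->lock);

  /* Only wait while the queue is genuinely empty. A frame that arrived
   * while we were busy processing is picked up immediately, so the
   * analysis thread never falls a whole frame behind just by sleeping. */
  while (q->running && q->head == NULL)
    pthread_cond_wait (&q->cond, &q->lock);

  if (q->head != NULL)
    {
      frame = q->head;
      q->head = frame->next;
    }

  pthread_mutex_unlock (&q->lock);
  return frame;
}

The analysis loop then becomes "pop, process, repeat", with the waiting pushed into the pop itself.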

Doing that for both the fast and long analysis threads changed the profile of the pose latency graph completely.

Pose latency and inter-pose spacing after fix

This is a massive win! To be clear, this has been causing problems in the driver for at least 18 months but was never obvious from the logs alone. A single good graph is worth a thousand logs.

What does this mean in practice?

The way the fusion filter I’ve built works, in between pose updates from the cameras, the position and orientation of each device are predicted / updated using the accelerometer and gyro readings. Particularly for position, using the IMU for prediction drifts fairly quickly. The longer the driver spends ‘coasting’ on the IMU, the less accurate the position tracking is. So, the sooner the driver can get a correction from the camera to the fusion filter the less drift we’ll get – especially under fast motion. Particularly for the hand controllers that get waved around.

Before: Left Controller pose delays by sensor
After: Left Controller pose delays by sensor

Poses are now being updated up to 40ms earlier and the baseline is consistent with the USB transfer delay.

You can also visibly see the effect of the JPEG decoding support I added over Christmas. The ‘red’ camera is directly connected to USB3, while the ‘khaki’ camera is feeding JPEG frames over USB2 that then need to be decoded, adding a few ms delay.

The latency reduction is nicely visible in the pose graphs, where the ‘drop shadow’ effect of pose updates tailing fusion predictions largely disappears and there are fewer large gaps in the pose observations when long analysis happens (visible as straight lines jumping from point to point in the trace):

Before: Left Controller poses
After: Left Controller poses

January 14, 2022

Lifetimes, Clones, and Closures: Explaining the “glib::clone!()” Macro

One thing that I’ve seen confuse newcomers to writing GObject-based Rust code is the glib::clone!() macro. It’s foreign to people coming from normal Rust code who are trying to write GObject-based code, and it’s foreign to many people used to writing GObject-based code in other languages (e.g. C, Python, JavaScript, and Vala). Over the years I’ve explained it a few times, and I figure that now I should write a blog post that I can point people to, describing in detail what the clone!() macro is, what it does, and why we need it.

Closures and Clones in Plain Rust

Rust has a nifty thing called a closure. To quote the official Rust book:

…closures are anonymous functions you can save in a variable or pass as arguments to other functions. You can create the closure in one place and then call the closure to evaluate it in a different context. Unlike functions, closures can capture values from the scope in which they’re defined.

Simply put, a closure is a function you can use as a variable or an argument to another function. Closures can “capture” variables from the environment, meaning that you can easily pass variables within your scope without needing to pass them as arguments. Here’s an example of capturing:

let num = 1;
let num_closure = move || {
    println!("Num times 2 is {}", num * 2); // `num` captured here
};

num_closure();

num is an i32, or a signed 32-bit integer. Integers are cheap, statically sized primitives, and they don’t require any special behavior when they are dropped. Because of this, it’s safe to keep using them after a move – so the type can and does implement the Copy trait. In practice, that means we can use our integer after the closure captures it, as it captures a copy. So we can have:

// Everything above stays the same
num_closure();
println!("Num is {}", num);

And the compiler will be happy with us. What happens if you need something dynamically sized and stored on the heap, like the data from a String? If we try this pattern with a String:

let string = String::from("trust");
let string_closure = move || {
    println!("String contains \"rust\": {}", string.contains("rust"));
};

string_closure();
println!("String is \"{}\"", string); 

We get the following error:

error[E0382]: borrow of moved value: `string`
  --> src/main.rs:10:34
   |
4  |     let string = String::from("trust");
   |         ------ move occurs because `string` has type `String`, which does not implement the `Copy` trait
5  |     let string_closure = move || {
   |                          ------- value moved into closure here
6  |         println!("String contains \"rust\": {}", string.contains("rust"));
   |                                                  ------ variable moved due to use in closure
...
10 |     println!("String is \"{}\"", string); 
   |                                  ^^^^^^ value borrowed here after move

Values of the String type cannot be copied, so the compiler instead “moves” our string, giving the closure ownership. In Rust, only one thing can have ownership of a value. So when the closure captures string, our outer scope no longer has access to it. That doesn’t mean we can’t use string in our closure, though. We just need to be more explicit about how it should be handled.

Rust provides the Clone trait that we can implement for objects like this. Clone provides the clone() method, which explicitly duplicates an object. Types that implement Clone but not Copy are generally types that can be of an arbitrary size and are stored on the heap. Values of the String type can vary in size, which is why it falls into this category. When you call clone(), usually you are creating a new full copy of the object’s data on the heap. So, we want to create a clone, and only pass that clone into the closure:

let s = string.clone();
let string_closure = move || {
    println!("String contains \"rust\": {}", s.contains("rust"));
};

The closure will only capture our clone, and we can still use the original in our original scope.

If you need more information on cloning and ownership, I recommend reading the “Understanding Ownership” chapter of the official Rust book.

Reference Counting, Abbreviated

When working with types of an arbitrary size, we may have types that are too large to efficiently clone(). For these types, we can use reference counting. In Rust, there are two types for this you’re likely to use: Rc<T> for single-threaded contexts, and Arc<T> for multi-threaded contexts. For now let’s focus on Rc<T>.

When working with reference-counted types, the reference-counted object is kept alive for as long as anything holds a “strong” reference. Rc<T> creates a new Rc<T> instance when you call .clone() and increments the number of strong references instead of creating a full copy. The number of strong references is decreased when an instance of Rc<T> goes out of scope. An Rc<T> can often be used in contexts where the reference &T is used. In particular, calling a method that takes &self on an Rc<T> will call the method on the underlying T. For example, some_string.as_str() would work the same if some_string were a String or an Rc<String>.

For our example, we can simply wrap our String constructor with Rc::new():

let string = Rc::new(String::from("trust"));
let s = string.clone();
let string_closure = move || {
    println!("String contains \"rust\": {}", s.contains("rust"));
};

string_closure();
println!("String is \"{}\"", string); 

With this, we can capture and use larger values without creating expensive copies. There are some consequences to naively using clone(), and we’ll get into those below, but in a slightly different context.

Closures and Copies in GObject-based Rust

When working with GObject-based Rust, particularly gtk-rs, closures come up most often when working with signals. Signals are a GObject concept. To (over)simplify, signals are used to react to and modify object-specific events. For more detail I recommend reading the “Signals” section in the “Type System Concepts” documentation. Here’s what you need to know:

  • Signals are emitted by objects.
  • Signals can carry data in the form of parameters that connections may use.
  • Signals can expect their handlers to have a return type that’s used elsewhere.

Let’s take a look at how this works with a C example. Say we have a GtkButton, and we want to react when the button is clicked. Most code will use the g_signal_connect () function macro to register a signal handler. g_signal_connect () takes 4 parameters:

  • The GObject that we expect to emit the signal
  • The name of the signal
  • A GCallback that is compatible with the signal’s parameters
  • data, which is a pointer to a struct.

The object here is our GtkButton instance. The signal we want to connect to is the “clicked” signal. The signal expects a callback with the signature of void clicked (GtkButton *self, gpointer user_data). So we need to write a function that has that signature. user_data here corresponds to the data parameter that we give g_signal_connect (). With all of that in mind, here’s what connecting to the signal would typically look like in C:

void
button_clicked_cb (GtkButton *button,
                   gpointer   user_data)
{
    MyObject *self = MY_OBJECT (user_data);
    my_object_do_something_with_button (self, button);
}


static void
my_object_some_setup (MyObject *self)
{
    GtkWidget *button = gtk_button_new_with_label ("Do Something");
    g_signal_connect (button, "clicked",
                      G_CALLBACK (button_clicked_cb), self);
    
    my_object_add_button (self, button); // Assume this does something to keep button alive
}

This is the simplest way to handle connecting to the signal. But we have an issue with this setup: what if we want to pass multiple values to the callback that aren’t necessarily a part of MyObject? You would need to create a custom struct that houses each value you want to pass, use that struct as data, and read each field of that struct within your callback.
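
To make that concrete, here is a hypothetical C sketch of the struct-as-user_data approach (ClickedData, on_button_clicked, and my_object_more_setup are invented names for illustration, not real API):

/* Bundle every value the callback needs into one struct passed as user_data */
typedef struct {
  MyObject *object;
  GtkLabel *status_label;
} ClickedData;

static void
on_button_clicked (GtkButton *button,
                   gpointer   user_data)
{
  ClickedData *data = user_data;

  my_object_do_something_with_button (data->object, button);
  gtk_label_set_text (data->status_label, "Clicked!");
}

static void
my_object_more_setup (MyObject *self,
                      GtkLabel *status_label)
{
  GtkWidget *button = gtk_button_new_with_label ("Do Something");
  ClickedData *data = g_new0 (ClickedData, 1);

  data->object = self;
  data->status_label = status_label;

  /* g_signal_connect_data() lets us free the struct when the handler is
   * disconnected; managing that lifetime by hand is part of the pain. */
  g_signal_connect_data (button, "clicked",
                         G_CALLBACK (on_button_clicked), data,
                         (GClosureNotify) g_free, 0);

  my_object_add_button (self, button);
}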

Instead of having to create a struct for each callback that needs to take multiple arguments, in Rust we can and do use closures. The gtk-rs bindings are nice in that they have generated functions for each signal a type can emit. So for gtk::Button we have connect_clicked (). These generated functions take a closure as an argument, with the closure taking the same arguments that the signal expects – except user_data. However, because Rust closures can capture variables, we don’t need user_data – the closure essentially becomes a struct containing captured variables, and the pointer to it becomes user_data. So, let’s try to do a direct port of the functions above, and condense them down to one function with a closure inside:

impl MyObject {
    pub fn some_setup(&self) {
        let button = gtk::Button::with_label("Do Something");

        button.connect_clicked(move |btn| {
            self.do_something_with_button(btn);
        });

        self.add_button(button);
    }
}

This looks pretty nice, right? The catch is, it doesn’t compile:

error[E0759]: `self` has an anonymous lifetime `'_` but it needs to satisfy a `'static` lifetime requirement
  --> src/lib.rs:33:36
   |
30 |           pub fn some_setup(&self) {
   |                             ----- this data with an anonymous lifetime `'_`...
...
33 |               button.connect_clicked(move |btn| {
   |  ____________________________________^
34 | |                 self.do_something_with_button(btn);
35 | |             });
   | |_____________^ ...is captured here...
   |
note: ...and is required to live as long as `'static` here
  --> src/lib.rs:33:20
   |
33 |             button.connect_clicked(move |btn| {
   |                    ^^^^^^^^^^^^^^^

Lifetimes can be a bit confusing, so I’ll try to simplify. &self is a reference to our object. It’s like the C pointer MyObject *self, except it has guarantees that C pointers don’t have: notably, it must always be valid where it is used. The compiler is telling us that by the time our closure runs – which could be any point where button is alive – our reference may not be valid, because our &self method argument (by declaration) only lives to the end of the method. There are a few ways to solve this: change the lifetime of our reference and ensure it matches the closure’s lifetime, or find a way to pass an owned object to the closure.

Lifetimes are complex – I don’t recommend worrying about them unless you really need the extra performance from using references everywhere. There’s a big complication with trying to work with lifetimes here: our closure has a specific lifetime bound. If we take a look at the function signature for connect_clicked():

fn connect_clicked<F: Fn(&Self) + 'static>(&self, f: F) -> SignalHandlerId

We can see that the closure (and thus everything captured by the closure) has the 'static lifetime. This can mean different things in different contexts, but here that means that the closure needs to be able to hold onto the type for as long as it wants. For more detail, see “Rust by Example”’s chapter on the static lifetime. So, the only option is for the closure to own the objects it captures.

The trick to giving ownership to something you don’t necessarily own is to duplicate it. Remember clone()? We can use that here. You might think it’s expensive to clone your object, especially if it’s a large and complex widget, like your main window. There’s something very nice about GObjects though: all GObjects are reference-counted. So, cloning a GObject instance is like cloning an Rc<T> instance. Instead of making a full copy, the number of strong references increases. So, we can change our code to use clone() just like we did in our original String example:

pub fn some_setup(&self) {
    let button = gtk::Button::with_label("Do Something");

    let s = self.clone();
    button.connect_clicked(move |btn| {
        s.do_something_with_button(btn);
    });

    self.add_button(button);
}

All good, right? Unfortunately, no. This might look innocent, and in some programs cloning like this might not cause any issues. But what if button wasn’t owned by MyObject? Take this version of the function:

pub fn some_setup(&self, button: &gtk::Button) {
    let s = self.clone();
    button.connect_clicked(move |btn| {
        s.do_something_with_button(btn);
    });
}

button is now merely passed to some_setup(). It may be owned by some other widget that may be alive for much longer than we want MyObject to be alive. Think back to the description of reference counting: objects are kept alive for as long as a strong reference exists. We’ve given a strong reference to the closure we attached to the button. That means MyObject will be forcibly kept alive for as long as the closure is alive, which is potentially as long as button is alive. MyObject and the memory associated with it may never be cleaned up, and that gets more problematic the bigger MyObject is and the more instances we have.

Now, we can structure our program differently to avoid this specific case, but for now let’s continue using it as an example. How do we keep our closure from controlling the lifetime of MyObject when we need to be able to use MyObject when the closure runs? Well, in addition to “strong” references, reference counting has the concept of “weak” references. The number of weak references an object has is tracked, but it doesn’t need to be 0 in order for the object to be dropped. With an Rc<T> instance we’d use Rc::downgrade() to get a Weak<T>, and with a GObject we use ObjectExt::downgrade() to get a WeakRef<T>. In order to turn a weak reference back into a usable instance of an object we need to “upgrade” it. Upgrading a weak reference can fail, since weak references do not keep the referenced object alive. So Weak<T>::upgrade() returns an Option<Rc<T>>, and WeakRef<T>::upgrade() returns an Option<T>. Because it’s optional, we should only move forward if T still exists.

Let’s rework our example to use weak references. Since we only care about doing something when the object still exists, we can use if let here:

pub fn some_setup(&self, button: &gtk::Button) {
    let s = self.downgrade();
    button.connect_clicked(move |btn| {
        if let Some(obj) = s.upgrade() {
            obj.do_something_with_button(btn);
        }
    });
}

Only two more lines, but a little more annoying than just calling clone(). Now, what if we have another widget we need to capture?

pub fn some_setup(&self, button: &gtk::Button, widget: &OtherWidget) {
    let s = self.downgrade();
    let w = widget.downgrade();
    button.connect_clicked(move |btn| {
        if let (Some(obj), Some(widget)) = (s.upgrade(), w.upgrade()) {
            obj.do_something_with_button(btn);
            widget.set_visible(false);
        }
    });
}

That’s getting harder to parse. Now, what if the closure needed a return value? Let’s say it should return a boolean. We need to handle our intended behavior when MyObject and OtherWidget still exist, and we need to handle the fallback for when they don’t:

pub fn some_setup(&self, button: &gtk::Button, widget: &OtherWidget) {
    let s = self.downgrade();
    let w = widget.downgrade();
    button.connect_clicked(move |btn| {
        if let (Some(obj), Some(widget)) = (s.upgrade(), w.upgrade()) {
            obj.do_something_with_button(btn);
            widget.visible()
        } else {
            false
        }
    });
}

Now we have something pretty off-putting. If we want to avoid keeping around unwanted objects or potential reference cycles, this will get worse for every object we want to capture. Thankfully, we don’t have to write code like this.

Enter the glib::clone!() Macro

The glib crate provides a macro to solve all of these cases. The macro takes the variables you want to capture as @weak or @strong, and the capture behavior corresponds to upgrading/downgrading and calling clone(), respectively. So, starting with the example behavior that kept MyObject around, if we really wanted that we would write the function like this:

pub fn some_setup(&self, button: &gtk::Button) {
    button.connect_clicked(clone!(@strong self as s => move |btn| {
        s.do_something_with_button(btn);
    }));
}

We use self as s because self is a keyword in Rust. We don’t need to rename a variable unless it’s a keyword or some field (e.g. foo.bar as bar). Here, glib::clone!() doesn’t prevent us from holding onto s forever, but it does provide a nicer way of doing it should we want to. If we want to use a weak reference instead, it would be:

button.connect_clicked(clone!(@weak self as s => move |btn| {
    s.do_something_with_button(btn);
}));

Just one word and we no longer have to worry about MyObject sticking around when it shouldn’t. For the example with multiple captures, we can use comma separation to pass multiple variables:

pub fn some_setup(&self, button: &gtk::Button, widget: &OtherWidget) {
    button.connect_clicked(clone!(@weak self as s, @weak widget => move |btn| {
        s.do_something_with_button(btn);
        widget.set_visible(false);
    }));
}

Very nice. It’s also simple to provide a fallback for return values:

button.connect_clicked(clone!(@weak self as s, @weak widget => @default-return false, move |btn| {
    s.do_something_with_button(btn);
    widget.visible()
}));

Now instead of spending time and code on using weak references and fall back correctly, we can rely on glib::clone!() to handle it for us succinctly.

There are a few caveats to using glib::clone!(). Errors in your closures may be harder to spot, as the compiler may point to the site of the macro instead of the exact site of the error. rustfmt also can’t format the contents inside the macro. For that reason, if your closure is getting too long, I would recommend separating the behavior into a proper function and calling that.

Overall, I recommend using glib::clone!() when working on gtk-rs codebases. I hope this post helps you understand what it’s doing when you come across it, and that you know when you should use it.

#26 Contact Me

Update on what happened across the GNOME project in the week from January 07 to January 14.

Core Apps and Libraries

GNOME Contacts

Keep and organize your contacts information.

nielsdg says

GNOME Contacts has been ported to GTK 4 and libadwaita, making sure it nicely fits in with a lot of other core apps in GNOME 42.

Mutter

A Wayland display server and X11 window manager and compositor library.

robert.mader says

Thanks to Jonas Ådahl we now support the new Wayland dmabuf feedback protocol. The protocol (for communication between clients and Mutter), together with some improvements to Mutter’s native backend (communication between Mutter and the kernel), allows a number of optimizations. In GNOME 42, for example, this will allow us to use direct scanout with most fullscreen OpenGL or Vulkan clients, something we already supported in recent versions, but only in very select cases. You can think of this as a more sophisticated version of X11 unredirect, notably without tearing.

What does this mean for users? The obvious part is that it will squeeze some more FPS out of GPUs when running games. To me, the even more important part is that it will help reduce energy consumption and thus increase battery life for e.g. video players. When playing a fullscreen video, doing a full-size extra copy of every frame takes up a significant part of GPU time, and skipping that allows the hardware to clock down.

What does this mean for developers? Fortunately, support for this protocol is built into OpenGL and Vulkan drivers. I personally spent a good chunk of time over the last two years helping to make Firefox finally use OpenGL by default, so now I’m very pleased to get this efficiency boost for free. Similarly, if you are considering porting your app from GTK3 to GTK4 (the latter uses OpenGL by default), this might be a further incentive to do so.

What next? In future versions of GNOME we plan to support scanout for non-fullscreen windows. Also, users with multi-GPU devices can expect to benefit significantly from further improvements.

Libadwaita

Building blocks for modern GNOME apps using GTK4.

Alexander Mikhaylenko reports

Builder and Logs now support the upcoming dark style preference.

GJS

Use the GNOME platform libraries in your JavaScript programs. GJS powers GNOME Shell, Polari, GNOME Documents, and many other apps.

ptomato announces

In GJS this week:

  • Evan Welsh made GObject interfaces enumerable, so you can now do things like Object.keys(Gio.File.prototype) and get a list of the methods, like you can with other GObject types.
  • Evan also fixed a memory leak with callbacks.
  • Marco Trevisan and I landed a large refactor involving type safety.
  • Chun-wei Fan kept everything buildable on Windows.
  • Thanks to Sonny Piers, Sergei Trofimovich, and Eli Schwartz for various other contributions.

Cantarell

Jakub Steiner says

GNOME’s UI typeface, Cantarell, has gotten a new minisite at cantarell.gnome.org. We finally have a canonical place for the font binary downloads, but the site also demos the extensive weight coverage of the variable font. I’m happy the typeface now has a respectable home for the amount of work Nikolaus Waxweiler has poured into it in the past few years. Thank you!

Circle Apps and Libraries

Secrets

A password manager which makes use of the KeePass v.4 format.

Maximiliano says

Secrets, formerly known as Password Safe, just released version 6.0, featuring the recent GTK 4 port, libadwaita, and OTP support. Due to the rename, it is now available under org.gnome.World.Secrets on Flathub.

gtk-rs

Safe bindings to the Rust language for fundamental libraries from the GNOME stack.

Bilal Elmoussaoui announces

gtk4-rs now has a Windows MSVC CI pipeline. This will ensure the bindings build just fine and avoid regressions for Windows users who want to build applications using GTK4 and Rust.

Gaphor

A simple UML and SysML modeling tool.

Arjan announces

In our upcoming Gaphor release, based on popular demand, we now support diagram types! If you create an activity diagram, for example, it adds diagram info to the upper left of the diagram and collapses the toolbox to only show the relevant tools for that diagram.

Fragments

Easy to use BitTorrent client.

Felix announces

I added context menus to Fragments to make common actions easier and faster to perform. These are primarily intended for desktop users, but can also be activated on touchscreens by long pressing and holding.

Commit

An editor that helps you write better Git and Mercurial commit messages.

sonnyp announces

Commit’s message editor now uses GtkSourceView, which allows for new features and improvements. It’s also now available to translate on Weblate.

Third Party Projects

sonnyp announces

Tobias Bernard and I started working on Playhouse, an HTML/CSS/JavaScript playground for GNOME.

There is no release yet, but contributions and feedback are welcome.

Powered by GTK 4, GJS, libadwaita, GtkSourceView, and WebKitGTK!

Corentin Noël announces

We are happy to announce the first public alpha release of libshumate, the GTK4 map widget library announced in 2019. This first unstable release contains everything one needs to embed a minimal map view. The library completely replaces libchamplain, which was based on Clutter, and provides a native way to control maps in GTK4. Application developers are encouraged to use libshumate and to report any issues they encounter or any features they find missing.

flxzt announces

I have been working on it for a while, but now it is ready for an announcement: Rnote is a vector-based drawing app to create handwritten notes and to annotate pictures and PDFs. It features an endless sheet, different pen types with stylus pressure support, shapes, and tools. It also has an integrated workspace browser, and you can choose between different background colors and patterns. It can be downloaded as a Flatpak from Flathub.

dabrain34 announces

GstPipelineStudio aims to provide a graphical user interface to the GStreamer framework. From a first step into the framework with a simple pipeline to debugging a complex one, the tool provides a friendly interface for adding elements to a pipeline and debugging it.

Phosh

A pure wayland shell for mobile devices.

Guido says

Panzer Sajt added support for non-numeric passwords to phosh. Some bits of Sam Hewitt’s ongoing style refresh are also already visible in the video, as is the new VPN indicator in the top bar:

Documentation

Emmanuele Bassi announces

I merged the initial batch of beginner tutorials for the GNOME Developer Documentation website. They are meant to be used as a bridge between the HIG and API references, providing useful information about UI elements with code examples in multiple programming languages. More to come in the future!

Miscellaneous

Sophie Herold announces

The app pages on apps.gnome.org are now coming with a more exciting header design. Further, page rendering times have been optimized and a few issues with right-to-left scripts have been fixed. The latter surfaced with the newly added Hebrew translation.

That’s all for this week!

See you next week, and be sure to stop by #thisweek:gnome.org with updates on your own projects!

January 13, 2022

Human Interface Guidelines, libadwaita 1.0 edition

After a lot of hard work, libadwaita 1.0 was released on the last day of 2021. If you haven’t already, check out Alexander’s announcement, which covers a lot of what’s in the new release.

When we rewrote the HIG back in May 2021, the new version expected and recommended libadwaita. However, libadwaita evolved between then and 1.0, so changes were needed to bring the HIG up to date.

Therefore, over the last two or three weeks, I’ve been working on updating the HIG to cover libadwaita 1.0. Hopefully this will mean that developers who are porting to GTK 4 and libadwaita have everything that they need in terms of design documentation but, if anything isn’t clear, do reach out using the usual GNOME design channels.

In the rest of this post, I’ll review what’s changed in the HIG, compared with the previous version.

What’s changed

There’s a bunch of new content in the latest HIG version, which reflects additional capabilities that are present in libadwaita 1.0. This includes material on:

There have also been updates to existing content: all screenshots have been updated to use the latest UI style from libadwaita, and the guidelines on UI styling have been updated, to reflect the flexibility that comes with libadwaita’s new stylesheet.

As you might expect, there have been some general improvements to the HIG, which are unrelated to libadwaita. The page on navigation has been improved, to make it more accessible. A page on selection mode has also been added (we used to have this documented, then dropped the documentation while the pattern was updated). There has also been a large number of small style and structure changes, which should make the HIG an easier read.

If you spot any issues, the HIG issue tracker is open, and you can send merge requests too!

Moving librsvg's documentation to gi-docgen

Librsvg's documentation tooling is pretty ancient. The man page for rsvg-convert is written by hand in troff, and the C library's reference documentation still uses the venerable gtk-doc.

As part of the modernization effort, I have turned the man page into a reStructuredText document, and the C API documentation into gi-docgen. This post describes how I did that.

You can read librsvg's new documentation here.

From man to rst

The man page for rsvg-convert was written in troff, which is pretty cumbersome. The following gunk defines a little paragraph and a table:

.P
You can also specify dimensions as CSS lengths, for example
.B 10px
or \"
.BR 8.5in .
The unit specifiers supported are as follows:
.RS
.TS
tab (@);
l lx.
px@T{
pixels (the unit specifier can be omitted)
T}
in@T{
inches
T}
cm@T{
centimeters
T}
mm@T{
millimeters
T}
pt@T{
points, 1/72 inch
T}
pc@T{
picas, 1/6 inch
T}
.TE

Yeah, nope. We have better tools now, like rst2man, which takes a reStructuredText document — fancy plain text — and turns it into a troff man page. To convert the existing man page to reStructuredText, I just had to use a command line like

pandoc --from=man --to=rst rsvg-convert.1 > rsvg-convert.rst

and then tweak the output a little:

You can also specify dimensions as CSS lengths, for example ``10px`` or
``8.5in``. The unit specifiers supported are as follows:

   == ==========================================
   px pixels (the unit specifier can be omitted)
   in inches
   cm centimeters
   mm millimeters
   pt points, 1/72 inch
   pc picas, 1/6 inch
   == ==========================================

Much better, right?
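
To go back the other way at build time, the reStructuredText source gets fed through rst2man from docutils. The invocation is roughly the following one-liner (depending on the docutils version the script may be named rst2man.py instead):

rst2man rsvg-convert.rst rsvg-convert.1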

I've learned that Pandoc is awesome. Pure magic, highly recommended.

I hope to integrate the man page into a prettier user manual for rsvg-convert at some point. It's no longer a trivial program, and its options allow for some interesting combinations that could use some illustrations and generally more detail than a man page.

From gtk-doc to gi-docgen

I highly recommend that you read Emmanuele's initial description of gi-docgen, which includes the history of gtk-doc, a description of its shortcomings, and how gi-docgen is a simpler tool that leverages the fact that GObject Introspection already slurps documentation from source code and so obviates most of gtk-doc already.

Summary of how gi-docgen works:

  • The C code has documentation comments in Markdown format, with annotations for GObject Introspection. (Note: librsvg has no C code for the library, so those documentation comments actually live in the .h header files that it installs for the benefit of C programs.)

  • The library gets compiled and introspected. In this step, g-ir-scanner(1) extracts annotations and documentation from the source code and puts them in the MyLibrary.gir XML file.

  • You write a small configuration file to tell gi-docgen about the structure of your documentation. Unlike gtk-doc, you don't need to write a DocBook skeleton or anything complicated. Stand-alone chapters can be individual Markdown files, and the configuration file just lists them in the order you want them to appear. Gi-docgen automatically includes all the classes, types, functions, etc. from your code into the docs. There is a small sketch of such a configuration file after this list.

  • ... it runs very fast. Gtk-doc was always slow due to xsltproc and complicated stylesheets to turn a DocBook document into browsable HTML documentation. Gi-docgen is much leaner.
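
As a rough sketch, the configuration file can look something like this (the keys shown are common gi-docgen options, but the values here are illustrative rather than copied from librsvg's real configuration):

[library]
description = "SVG rendering library"
version = "2.0"
website_url = "https://wiki.gnome.org/Projects/LibRsvg"

[extra]
# Stand-alone Markdown chapters, in the order they should appear
content_files = [
  "overview.md",
  "recommendations.md",
]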

Doing the conversion

Unlike the mostly automatic pandoc step for the man page, I converted the documentation comments from DocBook to Markdown by hand. For librsvg this took me a moderately caffeinated afternoon; it's a little fiddly business, but nothing out of this world.

You can look forward to having good error messages from gi-docgen when something goes wrong, unlike gtk-doc, whose errors I always tended to ignore until the last minute because they were so hard to discern and diagnose.

Some hints:

  • DocBook hyperlinks that looked like <ulink url="blahblah.html">blah blah</ulink> get turned into [blah blah](blahblah.html) Markdown.

  • Gi-docgen allows references to methods like [method@Gtk.Button.set_child] - see the linking documentation for other kinds of links.

  • You can get progressively fancy with introspection attributes.

  • There is no direct mapping between DocBook's extremely granular semantic markup and Markdown conventions, so for example I'd replace both <literal>foobar</literal> and <filename>/foo/bar</filename> with `foobar` and `/foo/bar`, respectively (i.e. the text I wanted to show, between backticks, to indicate verbatim text).

Librsvg seemed to include verbatim text blocks in gtk-doc delimited like this:

/**
 * blah_blah():
 *
 * For example:
 *
 * |[
 * verbatim text goes here
 * ]|
 *
 * Etc. etc.
 */

Those can go between ``` triple backticks in Markdown:

/**
 * blah_blah():
 *
 * For example:
 *
 * ```
 * verbatim text goes here
 * ```
 *
 * Etc. etc.
 */

Errors I found

My first manual run of gi-docgen looked like this:

$ gi-docgen check Rsvg-2.0.gir 
INFO: Loading config file: None
INFO: Search paths: ['/home/federico/src/librsvg/gi-docgen/_build', '/home/federico/.local/share/gir-1.0', '/home/federico/.local/share/flatpak/exports/share/gir-1.0', '/var/lib/flatpak/exports/share/gir-1.0', '/usr/local/share/gir-1.0', '/usr/share/gir-1.0']
INFO: Elapsed time 1.601 seconds
WARNING: Symbol 'Rsvg.HandleFlags' at <unknown>:0 is not documented
WARNING: Return value for symbol 'Rsvg.Handle.get_dimensions_sub' is not documented
WARNING: Return value for symbol 'Rsvg.Handle.get_geometry_for_element' is not documented
WARNING: Return value for symbol 'Rsvg.Handle.get_geometry_for_layer' is not documented
WARNING: Return value for symbol 'Rsvg.Handle.get_position_sub' is not documented
WARNING: Return value for symbol 'Rsvg.Handle.render_document' is not documented
WARNING: Return value for symbol 'Rsvg.Handle.render_element' is not documented
WARNING: Return value for symbol 'Rsvg.Handle.render_layer' is not documented
WARNING: Return value for symbol 'Rsvg.Handle.set_stylesheet' is not documented
WARNING: Symbol 'Rsvg.Handle.base-uri' at <unknown>:0 is not documented
WARNING: Symbol 'Rsvg.Handle.dpi-x' at <unknown>:0 is not documented
WARNING: Symbol 'Rsvg.Handle.dpi-y' at <unknown>:0 is not documented
WARNING: Symbol 'Rsvg.cleanup' at include/librsvg/rsvg.h:447 is not documented
WARNING: Symbol 'Rsvg.DEPRECATED_FOR' at include/librsvg/rsvg.h:47 is not documented
WARNING: Parameter 'f' of symbol 'Rsvg.DEPRECATED_FOR' is not documented

The warnings like WARNING: Return value ... is not documented are easy to fix; the comment blocks had their descriptions, but they were missing the Returns: part.

The warnings like WARNING: Symbol 'Rsvg.Handle.base-uri' at <unknown>:0 is not documented are different. Those are GObject properties, which previously were documented like this:

/**
 * RsvgHandle::base-uri:
 *
 * Base URI, to be used to resolve relative references for resources.  See the section
 * "Security and locations of referenced files" for details.
 */

There is a syntax error there! The symbol line should use a single colon between the class name and the property name, e.g. RsvgHandle:base-uri instead of RsvgHandle::base-uri. This one, plus the other properties that showed up as not documented, had the same kind of typo.
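
With the typo fixed, the comment looks like this (same text, just a single colon between the class name and the property name):

/**
 * RsvgHandle:base-uri:
 *
 * Base URI, to be used to resolve relative references for resources.  See the section
 * "Security and locations of referenced files" for details.
 */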

The first warning, WARNING: Symbol 'Rsvg.HandleFlags' at <unknown>:0 is not documented, turned out to be that there were two documentation comments with the same title for RsvgHandleFlags, and the second one was empty — and the last one wins. I left a single one with the actual docs.

Writing standalone chapters

Librsvg had a few chapters like doc/foo.xml, doc/bar.xml that were included in the reference documentation; those were a DocBook <chapter> each. I was able to convert them to Markdown with pandoc individually, and then add a Title: heading in the first line of each .md file — that's what gi-docgen uses to build the table of contents in the documentation's starting page.

Title: Overview of Librsvg

# Overview of Librsvg

Librsvg is a library for rendering Scalable Vector Graphics files (SVG).
Blah blah blah blah.

Build scripts

There are plenty of examples for using gi-docgen with meson; you can look at how it is done in gtk.

However, librsvg is still using Autotools! You can steal the following bits:

Publishing the documentation

Gtk-doc assumed that magic happened somewhere in developer.gnome.org to generate the documentation and publish it. Gi-docgen assumes that your project publishes it with Gitlab pages.

Indeed, the new documentation is published there — you can see how it is generated in .gitlab-ci.yml. Note that there are two jobs: the reference job generates gi-docgen's HTML in a public/Rsvg-2.0 directory, and the pages job integrates it with the Rust API documentation and publishes both together.

Linking the docs to the main developer's site

Finally, librsvg's docs are linked from the GNOME Platform Introduction. I submitted a merge request to the developer-www project to update it.

That's all! I hope this is useful for someone who wants to move from gtk-doc to gi-docgen, which is a much more pleasant tool!

January 12, 2022

Italy welcomes Linux App Summit 2022

We’re happy to announce that Linux App Summit will take place in Rovereto, Italy between the 29th and 30th of April.

Linux App Summit (LAS) is a conference focused on building a Linux application ecosystem. LAS aims to encourage the creation of quality applications, seek opportunities for compensation for FOSS developers, and foster a thriving market for the Linux operating system.

This year LAS will be held as a hybrid event and attendees will be able to join virtually or in person at our venue in Rovereto.

Everyone is invited to attend! Companies, journalists, and individuals who are interested in learning more about the Linux desktop application space and growing their user base are especially welcome.

The call for papers and registration will be open soon. Please check linuxappsummit.org for more updates in the upcoming weeks.

Rovereto Italy

About Rovereto

Rovereto, Italy is an old fortress town located in the autonomous province of Trento in Northern Italy, near the southern edge of the Italian Alps, and is the main city of the Vallagarina district.

The city has several interesting sites including:

    • The Ancient War Museum
    • A castle built by the counts of Castelbarco
    • The Museum of Modern and Contemporary Art of Trento

The biggest businesses of Rovereto include wine, coffee, rubber, and chocolate. The town was acknowledged as a “Peace town” in the 20th century. Dinosaur footprints have also been found in the area.

We hope to see you in Rovereto, Italy.

*The image “Rovereto” by barnyz is licensed under CC BY-NC-ND 2.0.

About the Linux App Summit

The Linux App Summit (LAS) brings the global Linux community together to learn, collaborate, and help grow the Linux application ecosystem. Through talks, panels, and Q&A sessions, we encourage attendees to share ideas, make connections, and join our goal of building a common app ecosystem.

Previous iterations of the Linux App Summit have been held in the United States in Portland, Oregon and Denver, Colorado, as well as in Barcelona, Spain.

Learn more by visiting linuxappsummit.org.

January 10, 2022

December of Rust 2021, Part 1: A Little Computer

In the beginning of December I read Andrei Ciobanu’s Writing a simple 16-bit VM in less than 125 lines of C. Now, I’ve been interested in virtual machines and emulators for a long time, and I work tangential to VMs as part of my day job as a JavaScript engine developer for Igalia. I found this post really interesting because, well, it does what it says on the tin: A simple VM in less than 125 lines of C.

Readers of this blog, if I have any left at this point, might remember that in December 2020 I did a series of posts solving that year’s Advent of Code puzzles in order to try to teach myself how to write programs in the Rust programming language. I did say last year that if I were to do these puzzles again, I would do them at a slower pace and wouldn’t blog about them. Indeed, I started again this year, but it just wasn’t as interesting, having already gone through the progression from easy to hard puzzles and now having some idea already of the kinds of twists that they like to do in between the first and second halves of each puzzle.

So, instead of puzzles, I decided to see if I could write a similarly simple VM in Rust this December, as a continuation of my Rust learning exercise last year1.

Andrei Ciobanu’s article, like some of the other articles he cites, implements a VM that simulates the LC-3 architecture2. What I liked about this one is that it was really concise and no-nonsense, and did really well at illustrating how a VM works. There are already plenty of other blog posts and GitHub repositories that implement an LC-3 VM in Rust, and I can’t say I didn’t incorporate any ideas from them, but I found many of them to be a bit verbose. I wanted to see if I could create something in the same concise spirit as Andrei’s, but still idiomatic Rust.

Over the course of a couple of weeks during my end-of-year break from work, I think I succeeded somewhat at that, and so I’m going to write a few blog posts about it.

About the virtual machine

This post is not a tutorial about virtual machines. There are already plenty of those, and Andrei’s article is already a great one, so it doesn’t make sense for me to duplicate it. Instead, in this section I’ll note some things about the LC-3 architecture before we start.

First of all, it has a very spartan instruction set. Each instruction is 16 bits, and there are no variable length instructions. The opcode is packed in the topmost 4 bits of each word. That means there are at most 16 instructions. And one opcode (1101) is not even used!

Only three instructions are arithmetic-type ones: addition, bitwise AND, and bitwise NOT. If you’re used to x86 assembly language you’ll notice that other operations like subtraction, multiplication, bitwise OR, are missing. We only need these three to do all the other operations in 2’s-complement arithmetic, although it is somewhat tedious! As I started writing some LC-3 assembly language to test the VM, I learned how to implement some other arithmetic operations in terms of ADD, AND, and NOT.3 I’ll go into this in a following post.
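
As a small taste of what that looks like, subtraction can be built from NOT and ADD using the usual two’s-complement identity a - b = a + !b + 1; here is the same trick written out in Rust, just as an illustration rather than code from the VM:

fn sub_via_not_and_add(a: u16, b: u16) -> u16 {
    // a - b == a + (!b + 1) in two's-complement arithmetic
    let neg_b = (!b).wrapping_add(1);
    a.wrapping_add(neg_b)
}

assert_eq!(sub_via_not_and_add(10, 3), 7);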

The LC-3 does not have a stack. All the operations take place in registers. If you are used to thinking in terms of a stack machine (for example, SpiderMonkey is one), this takes some getting used to.

First steps

I started out by trying to port Andrei’s C code to Rust code in the most straightforward way possible, not worrying about whether it was idiomatic or not.

The first thing I noticed is that whereas in C it’s customary to use mutable globals, such as reserving storage for the VM’s memory and registers at the global level with declarations like uint16_t mem[UINT16_MAX] = {0};, the Rust compiler makes this very difficult. You can use a mutable static variable, but accessing it needs to be marked as unsafe. In this way, the Rust compiler nudges you to encapsulate the storage inside a struct:

struct VM {
    mem: [u16; MEM_SIZE],
    reg: [u16; NREGS],
    running: bool,
}

Next we write functions to access the memory. In the C code these are:

static inline uint16_t mr(uint16_t address) { return mem[address];  }
static inline void mw(uint16_t address, uint16_t val) { mem[address] = val; }

In Rust, we have to cast the address to a usize, since usize is the type that we index arrays with:

#[inline]
fn ld(&mut self, addr: u16) -> u16 {
    self.mem[addr as usize]
}

#[inline]
fn st(&mut self, addr: u16, val: u16) {
    self.mem[addr as usize] = val;
}

(I decide to name them ld and st for “load” and “store” instead of mr and mw, because the next thing I do is write similar functions for reading and writing the VM’s registers, which I’ll call r and rw for “register” and “register write”. These names look less similar, so I find that makes the code more readable yet still really concise.)
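
The register accessors themselves are not shown in this post, but they mirror ld and st almost exactly; a sketch (the actual code in the repository may differ slightly):

#[inline]
fn r(&self, idx: u16) -> u16 {
    self.reg[idx as usize]
}

#[inline]
fn rw(&mut self, idx: u16, val: u16) {
    self.reg[idx as usize] = val;
}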

The next thing in the C code is a bunch of macros that do bit-manipulation operations to unpack the instructions. I decide to turn these into #[inline] functions in Rust. For example,

#define OPC(i) ((i)>>12)
#define FIMM(i) ((i>>5)&1)

from the C code, become, in Rust,

#[inline] #[rustfmt::skip] fn opc(i: u16) -> u16 { i >> 12 }
#[inline] #[rustfmt::skip] fn fimm(i: u16) -> bool { (i >> 5) & 1 != 0 }

I put #[rustfmt::skip] because I think it would be nicer if the rustfmt tool would allow you to put super-trivial functions on one line, so that they don’t take up more visual space than they deserve.

You might think that the return type of opc should be an enum. I originally tried making it that way, but Rust doesn’t make it very easy to convert between enums and integers. The num_enum crate provides a way to do this, but I ended up not using it, as you will read below.

We also need a way to load and run programs in LC-3 machine code. I made two methods of VM patterned after the ld_img() and start() functions from the C code.

First I’ll talk about ld_img(). What I really wanted to do is read the bytes of the file directly into the self.mem array, without copying, as the C code does. This is not easy to do in Rust. Whereas in C all pointers are essentially pointers to byte arrays in the end, this is not the case in Rust. It’s surprisingly difficult to express that I want to read u16s into an array of u16s! I finally found a concise solution, using both the byteorder and bytemuck crates. For this to work, you have to import the byteorder::ReadBytesExt trait into scope.

pub fn ld_img(&mut self, fname: &str, offset: u16) -> io::Result<()> {
    let mut file = fs::File::open(fname)?;
    let nwords = file.metadata()?.len() as usize / 2;
    let start = (PC_START + offset) as usize;
    file.read_u16_into::<byteorder::NetworkEndian>(bytemuck::cast_slice_mut(
        &mut self.mem[start..(start + nwords)],
    ))
}

What this does is read u16s, minding the correct byte order, into an array of u8. But we have an array of u16 that we want to store it in, not u8. So bytemuck::cast_slice_mut() treats the &mut [u16] slice as a &mut [u8] slice, essentially equivalent to casting it as (uint8_t*) in C. It does seem like this ought to be part of the Rust standard library, but the only similar facility is std::mem::transmute(), which does the same thing. But it also does much more powerful things, and is therefore marked unsafe. (I’m trying to avoid having any code that needs to be marked unsafe in this project.)
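
In isolation, the cast looks like this (a tiny illustration of bytemuck::cast_slice_mut, separate from the VM code):

let mut words = [0u16; 4];
// View the same storage as bytes; no copying takes place.
let bytes: &mut [u8] = bytemuck::cast_slice_mut(&mut words);
assert_eq!(bytes.len(), 8);
bytes[0] = 0xff; // writes through to the storage of words[0]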

For running the loaded machine code, I wrote this method:

pub fn start(&mut self, offset: u16) {
    self.running = true;
    self.rw(RPC, PC_START + offset);
    while self.running {
        let i = self.ld(self.r(RPC));
        self.rw(RPC, self.r(RPC) + 1);
        self.exec(i);
    }
}

I’ll talk more about what happens in self.exec() in the next section.

The basic execute loop

In the C code, Andrei cleverly builds a table of function pointers and indexes it with the opcode, in order to execute each instruction:

typedef void (*op_ex_f)(uint16_t instruction);
op_ex_f op_ex[NOPS] = { 
    br, add, ld, st, jsr, and, ldr, str, rti, not, ldi, sti, jmp, res, lea, trap 
};

// ...in main loop:
op_ex[OPC(i)](i);

Each function, such as add(), takes the instruction as a parameter, decodes it, and mutates the global state of the VM. In the main loop, at the point where I have self.exec(i) in my code, we have op_ex[OPC(i)](i) which decodes the opcode out of the instruction, indexes the table, and calls the function with the instruction as a parameter. A similar technique is used to execute the trap routines.

This approach of storing function pointers in an array and indexing it by opcode or trap vector is great in C, but is slightly cumbersome in Rust. You would have to do something like this in order to be equivalent to the C code:

type OpExF = fn(&mut VM, u16) -> ();

// in VM:
const OP_EX: [OpExF; NOPS] = [
    VM::br, VM::add, ..., VM::trap,
];

// ...in main loop:
OP_EX[opc(i) as usize](self, i);

Incidentally, this is why I decided above not to use an enum for the opcodes. Not only would you have to create it from a u16 when you unpack it from the instruction, you would also have to convert it to a usize in order to index the opcode table.

In Rust, a match expression is a much more natural fit:

match opc(i) {
    BR => {
        if (self.r(RCND) & fcnd(i) != 0) {
            self.rw(RPC, self.r(RPC) + poff9(i));
        }
    }
    ADD => {
        self.rw(dr(i), self.r(sr(i)) +
            if fimm(i) {
                sextimm(i)
            } else {
                self.r(sr2(i))
            });
        self.uf(dr(i));
    }
    // ...etc.
}

However, there is an even better alternative that makes the main loop much more concise, like the one in the C code! We can use the bitmatch crate to simultaneously match against bit patterns and decode parts out of them.

#[bitmatch]
fn exec(&mut self, i: u16) {
    #[bitmatch]
    match i {
        "0000fffooooooooo" /* BR */ => {
            if (self.r(RCND) & f != 0) {
                self.rw(RPC, self.r(RPC) + sext(o, 9));
            }
        }
        "0001dddsss0??aaa" /* ADD register */ => {
            self.rw(d, self.r(s) + self.r(a));
            self.uf(d);
        }
        "0001dddsss1mmmmm" /* ADD immediate */ => {
            self.rw(d, self.r(s) + sext(m, 5));
            self.uf(d);
        }
        // ...etc.
    }
}

This actually gets rid of the need for all the bit-manipulation functions that I wrote in the beginning, based on the C macros, such as opc(), fimm(), and poff9(), because bitmatch automatically does all the unpacking. The only bit-manipulation we still need to do is sign-extension when we unpack immediate values and offset values from the instructions, as we do above with sext(o, 9) and sext(m, 5).
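
The sign-extension helper itself is not spelled out in the post, but it is short; something along these lines works on plain u16 values (a sketch, written before the later switch to the Wrapping type):

#[inline]
fn sext(val: u16, bits: u32) -> u16 {
    // If the sign bit of the `bits`-wide field is set, fill the upper bits with ones.
    if val & (1u16 << (bits - 1)) != 0 {
        val | (0xffffu16 << bits)
    } else {
        val
    }
}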

I was curious what kind of code the bitmatch macros generate under the hood and whether it’s as performant as writing out all the bit-manipulations by hand. For that, I wrote a test program that matches against the same bit patterns as the main VM loop, but with the statements in the match arms just replaced by constants, in order to avoid cluttering the output:

#[bitmatch]
pub fn test(i: u16) -> u16 {
    #[bitmatch]
    match i {
        "0000fffooooooooo" => 0,
        "0001dddsss0??aaa" => 1,
        "0001dddsss1mmmmm" => 2,
        // ...etc.
    }
}

There is a handy utility for viewing expanded macros that you can install with cargo install cargo-expand, and then run with cargo expand --lib test (I put the test function in a dummy lib.rs file.)

Here’s what we get!

pub fn test(i: u16) -> u16 {
    match i {
        bits if bits & 0b1111000000000000 == 0b0000000000000000 => {
            let f = (bits & 0b111000000000) >> 9usize;
            let o = (bits & 0b111111111) >> 0usize;
            0
        }
        bits if bits & 0b1111000000100000 == 0b0001000000000000 => {
            let a = (bits & 0b111) >> 0usize;
            let d = (bits & 0b111000000000) >> 9usize;
            let s = (bits & 0b111000000) >> 6usize;
            1
        }
        bits if bits & 0b1111000000100000 == 0b0001000000100000 => {
            let d = (bits & 0b111000000000) >> 9usize;
            let m = (bits & 0b11111) >> 0usize;
            let s = (bits & 0b111000000) >> 6usize;
            2
        }
        // ...etc.
        _ => // ...some panicking code
    }
}

It’s looking a lot like what I’d written anyway, but with all the bit-manipulation functions inlined. The main disadvantage is that you have to AND the value with a bitmask at each arm of the match expression. But maybe that isn’t such a problem? Let’s look at the generated assembly to see what the computer actually executes. There is another Cargo tool for this, which you can install with cargo install cargo-asm and run with cargo asm --lib lc3::test4. In the result, there are actually only three AND instructions, because there are only three unique bitmasks tested among all the arms of the match expression (0b1111000000000000, 0b1111100000000000, and 0b1111000000100000). So it seems like the compiler is quite able to optimize this into something good.

First test runs

By the time I had implemented all the instructions except for TRAP, at this point I wanted to actually run a program on the LC-3 VM! Andrei has one program directly in his blog post, and another one in his GitHub repository, so those seemed easiest to start with.

Just like in the blog post, I wrote a program (in my examples/ directory so that it could be run with cargo r --example) to output the LC-3 machine code. It looked something like this:

let program: [u16; 7] = [
    0xf026,  // TRAP 0x26
    0x1220,  // ADD R1, R0, 0
    0xf026,  // TRAP 0x26
    0x1240,  // ADD R1, R1, R0
    0x1060,  // ADD R0, R1, 0
    0xf027,  // TRAP 0x27
    0xf025,  // TRAP 0x25
];
let mut file = fs::File::create(fname)?;
for inst in program {
    file.write_u16::<byteorder::NetworkEndian>(inst)?;
}
Ok(())

In order for this to work, I still needed to implement some of the TRAP routines. I had left those for last, and at that point my match expression for TRAP instructions looked like "1111????tttttttt" => self.trap(t), and my trap() method looked like this:

fn trap(&mut self, t: u8) {
    match t {
        _ => self.crash(&format!("Invalid TRAP vector {:#02x}", t)),
    }
}

For this program, we can see that three traps need to be implemented: 0x25 (HALT), 0x26 (INU16), and 0x27 (OUTU16). So I was able to add just three arms to my match expression:

0x25 => self.running = false,
0x26 => {
    let mut input = String::new();
    io::stdin().read_line(&mut input).unwrap_or(0);
    self.rw(0, input.trim().parse().unwrap_or(0));
}
0x27 => println!("{}", self.r(0)),

With this, I could run the sample program, type in two numbers, and print out their sum.

The second program sums an array of numbers. In this program, I added a TRAP 0x27 instruction right before the HALT in order to print out the answer, otherwise I couldn’t see if it was working! This also required changing R1 to R0 so that the sum is in R0 when we call the trap routine, and adjusting the offset in the LEA instruction.5

When I tried running this program, it crashed the VM! This is due to the instruction ADD R4, R4, x-1 which decrements R4 by adding -1 to it. R4 is the counter for how many array elements we have left to process, so initially it holds 10, and when we get to that instruction for the first time, we decrement it to 9. But if you look at the implementation of the ADD instruction that I wrote above, we are actually doing an unsigned addition of the lowest 5 bits of the instruction, sign-extended to 16 bits, so we are not literally decrementing it. We are adding 0xffff to 0x000a and expecting it to wrap around to 0x0009 like it does in C. But integer arithmetic doesn’t wrap in Rust!
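
To see the difference in isolation (just an illustration, not code from the VM):

let r4: u16 = 0x000a;
// let dec = r4 + 0xffff;          // a debug build would panic: attempt to add with overflow
let dec = r4.wrapping_add(0xffff); // wraps around to 0x0009, like the C code does
assert_eq!(dec, 0x0009);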

Unless you specifically tell it to, that is. So we could use u16::wrapping_add() instead of the + operator to do the addition. But I got what I thought was a better idea, to use std::num::Wrapping! I rewrote the definition of VM at the top of the file:

type Word = Wrapping<u16>;

struct VM {
    mem: [Word; MEM_SIZE],
    reg: [Word; NREGS],
    running: bool,
}

This did require adding Wrapping() around some integer literals and adding .0 to unwrap to the bare u16 in some places, but on the whole it made the code more concise and readable. As an added bonus, this way, we are using the type system to express that the LC-3 processor does wrapping unsigned arithmetic. (I do wish that there were a nicer way to express literals of the Wrapping type though.)

And with that, the second example program works. It outputs 16, as expected. In a following post I’ll go on to explain some of the other test programs that I wrote.

Other niceties

At this point I decided to do some refactoring to make the Rust code more readable and hopefully more idiomatic as well. Inspired by the type alias for Word, I added several more, one for addresses and one for instructions, as well as a function to convert a word to an address:

type Addr = usize;
type Inst = u16;
type Reg = u16;
type Flag = Word;
type TrapVect = u8;

#[inline] fn adr(w: Word) -> Addr { w.0.into() }

Addr is convenient to alias to usize because that’s the type that we use to index the memory array. And Inst is convenient to alias to u16 because that’s what bitmatch works with.

In fact, using types in this way actually allowed me to catch two bugs in Andrei’s original C code, where the index of the register was used instead of the value contained in the register.

I also added a few convenience methods to VM for manipulating the program counter:

#[inline] fn pc(&self) -> Word { self.r(RPC) }
#[inline] fn jmp(&mut self, pc: Word) { self.rw(RPC, pc); }
#[inline] fn jrel(&mut self, offset: Word) { self.jmp(self.pc() + offset); }
#[inline] fn inc_pc(&mut self) { self.jrel(Wrapping(1)); }

With these, I could write a bunch of things to be more expressive and concise. For example, the start() method now looked like this:

pub fn start(&mut self, offset: u16) {
    self.running = true;
    self.jmp(PC_START + Wrapping(offset));
    while self.running {
        let i = self.ld(adr(self.pc())).0;
        self.inc_pc();
        self.exec(i);
    }
}

I also added an iaddr() method to load an address indirectly from another address given as an offset relative to the program counter, to simplify the implementation of the LDI and STI instructions.
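
For reference, iaddr() can be a thin wrapper over the helpers above; roughly like this (an assumed sketch only: the exact signatures of ld and sext at this point of the refactor are my guesses, not copied from the repository):

#[inline]
fn iaddr(&self, offset: u16) -> Addr {
    // The word stored at PC + offset is the address we actually want to access.
    adr(self.ld(adr(self.pc() + sext(offset, 9))))
}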

While I was at it, I noticed that the uf() (update flags) method always followed a store into a destination register, and I decided to rewrite it as one method dst(), which stores a value into a destination register and updates the flags register based on that value:

#[inline]
fn dst(&mut self, r: Reg, val: Word) {
    self.rw(r, val);
    self.rw(
        RCND,
        match val.0 {
            0 => FZ,
            1..=0x7fff => FP,
            0x8000..=0xffff => FN,
        },
    );
}

At this point, the VM’s main loop looked just about as concise and simple as the original C code did! The original:

static inline void br(uint16_t i)   { if (reg[RCND] & FCND(i)) { reg[RPC] += POFF9(i); } }
static inline void add(uint16_t i)  { reg[DR(i)] = reg[SR1(i)] + (FIMM(i) ? SEXTIMM(i) : reg[SR2(i)]); uf(DR(i)); }
static inline void ld(uint16_t i)   { reg[DR(i)] = mr(reg[RPC] + POFF9(i)); uf(DR(i)); }
static inline void st(uint16_t i)   { mw(reg[RPC] + POFF9(i), reg[DR(i)]); }
static inline void jsr(uint16_t i)  { reg[R7] = reg[RPC]; reg[RPC] = (FL(i)) ? reg[RPC] + POFF11(i) : reg[BR(i)]; }
static inline void and(uint16_t i)  { reg[DR(i)] = reg[SR1(i)] & (FIMM(i) ? SEXTIMM(i) : reg[SR2(i)]); uf(DR(i)); }
static inline void ldr(uint16_t i)  { reg[DR(i)] = mr(reg[SR1(i)] + POFF(i)); uf(DR(i)); }
static inline void str(uint16_t i)  { mw(reg[SR1(i)] + POFF(i), reg[DR(i)]); }
static inline void res(uint16_t i) {} // unused
static inline void not(uint16_t i)  { reg[DR(i)]=~reg[SR1(i)]; uf(DR(i)); }
static inline void ldi(uint16_t i)  { reg[DR(i)] = mr(mr(reg[RPC]+POFF9(i))); uf(DR(i)); }
static inline void sti(uint16_t i)  { mw(mr(reg[RPC] + POFF9(i)), reg[DR(i)]); }
static inline void jmp(uint16_t i)  { reg[RPC] = reg[BR(i)]; }
static inline void rti(uint16_t i) {} // unused
static inline void lea(uint16_t i)  { reg[DR(i)] =reg[RPC] + POFF9(i); uf(DR(i)); }
static inline void trap(uint16_t i) { trp_ex[TRP(i)-trp_offset](); }

My implementation:

#[bitmatch]
match i {
    "0000fffooooooooo" /* BR   */ => if (self.r(RCND).0 & f) != 0 { self.jrel(sext(o, 9)); },
    "0001dddsss0??aaa" /* ADD1 */ => self.dst(d, self.r(s) + self.r(a)),
    "0001dddsss1mmmmm" /* ADD2 */ => self.dst(d, self.r(s) + sext(m, 5)),
    "0010dddooooooooo" /* LD   */ => self.dst(d, self.ld(adr(self.pc() + sext(o, 9)))),
    "0011sssooooooooo" /* ST   */ => self.st(adr(self.pc() + sext(o, 9)), self.r(s)),
    "01000??bbb??????" /* JSRR */ => { self.rw(R7, self.pc()); self.jmp(self.r(b)); }
    "01001ooooooooooo" /* JSR  */ => { self.rw(R7, self.pc()); self.jrel(sext(o, 11)); }
    "0101dddsss0??aaa" /* AND1 */ => self.dst(d, self.r(s) & self.r(a)),
    "0101dddsss1mmmmm" /* AND2 */ => self.dst(d, self.r(s) & sext(m, 5)),
    "0110dddbbboooooo" /* LDR  */ => self.dst(d, self.ld(adr(self.r(b) + sext(o, 6)))),
    "0111sssbbboooooo" /* STR  */ => self.st(adr(self.r(b) + sext(o, 6)), self.r(s)),
    "1000????????????" /* n/a  */ => self.crash(&format!("Illegal instruction {:#04x}", i)),
    "1001dddsss??????" /* NOT  */ => self.dst(d, !self.r(s)),
    "1010dddooooooooo" /* LDI  */ => self.dst(d, self.ld(self.iaddr(o))),
    "1011sssooooooooo" /* STI  */ => self.st(self.iaddr(o), self.r(s)),
    "1100???bbb??????" /* JMP  */ => self.jmp(self.r(b)),
    "1101????????????" /* RTI  */ => self.crash("RTI not available in user mode"),
    "1110dddooooooooo" /* LEA  */ => self.dst(d, self.pc() + sext(o, 9)),
    "1111????tttttttt" /* TRAP */ => self.trap(t as u8),
}

Of course, you could legitimately complain that both are horrible soups of one- and two-letter identifiers.6 But I think in both of them, if you have the abbreviations close to hand (and you do, since the program is so small!) it’s actually easier to follow because everything fits well within one vertical screenful of text. The Rust version has the added bonus of the bitmatch patterns being very visual, and the reader not having to think about bit shifting in their head.

Here’s the key for abbreviations:

  • i — Instruction
  • r — Register
  • d — Destination register (“DR” in the LC-3 specification)
  • s — Source register (“SR1”)
  • a — Additional source register (“SR2”)
  • b — Base register (“BASER”)
  • m — iMmediate value
  • o — Offset (6, 9, or 11 bits)
  • f — Flags
  • t — Trap vector7
  • pc — Program Counter
  • st — STore in memory
  • ld — LoaD from memory
  • rw — Register Write
  • adr — convert machine word to memory ADdRess
  • jmp — JuMP
  • dst — Destination register STore and update flags
  • jrel — Jump RELative to the PC
  • sext — Sign EXTend
  • iaddr — load Indirect ADDRess

At this point the only thing left to do, to get the program to the equivalent level of functionality as the one in Andrei’s blog post, was to implement the rest of the trap routines.

Two of the trap routines involve waiting for a key press. This is actually surprisingly difficult in Rust, as far as I can tell, and definitely not as straightforward as the reg[R0] = getchar(); which you can do in C. You can use the libc crate, but libc::getchar() is marked unsafe.8 Instead, I ended up pulling in another dependency, the console crate, and adding a term: console::Term member to VM. With that, I could implement a getc() method that reads a character, stores its ASCII code in the R0 register, and returns the character itself:

fn getc(&mut self) -> char {
    let ch = self.term.read_char().unwrap_or('\0');
    let res = Wrapping(if ch.is_ascii() { ch as u16 & 0xff } else { 0 });
    self.rw(R0, res);
    ch
}

This by itself was enough to implement the GETC trap routine (which waits for a key press and stores its ASCII code in the lower 8 bits of R0), and the IN trap routine (which does the same thing but first prints a prompt and echoes the character back to stdout) was not much more complicated:

0x20 => {
    self.getc();
}
0x23 => {
    print!("> ");
    let ch = self.getc();
    print!("{}", ch);
}

Next I wrote the OUT trap routine, which prints the lower 8 bits of R0 as an ASCII character. I wrote an ascii() function that converts the lower 8 bits of a machine word into a char:

#[inline]
fn ascii(val: Word) -> char {
    char::from_u32(val.0 as u32 & 0xff).unwrap_or('?')
}

// In the TRAP match expression:
0x21 => print!("{}", ascii(self.r(R0))),

Now the two remaining traps were PUTS (print a zero-terminated string starting at the address in R0) and PUTSP (same, but the string is packed two bytes per machine word). These two routines are very similar in that they both access a variable-length area of memory, starting at the address in R0 and ending with the next memory location that contains zero. I found a nice solution that feels very Rust-like to me, a strz_words() method that returns an iterator over exactly this region of memory:

fn strz_words(&self) -> impl Iterator<Item = &Word> {
    self.mem[adr(self.r(R0))..].iter().take_while(|&v| v.0 != 0)
}

The two trap routines differ in what they do with the items coming out of this iterator. For PUTS we convert each machine word to a char with ascii():

0x22 => print!("{}", self.strz_words().map(|&v| ascii(v)).collect::<String>()),

(It’s too bad that we have a borrowed value in the closure, otherwise we could just do map(ascii). On the positive side, collect::<String>() is really nice.)

PUTSP is a bit more complicated. It’s a neat trick to use flat_map() to convert our iterator over machine words into a twice-as-long iterator over bytes. However, we still have to collect the bytes into an intermediate vector so we can check if the last byte is zero, because the string might have an odd number of bytes. In that case we’d still have a final zero byte which we have to pop off the end, because the iterator doesn’t finish until we get a whole memory location that is zero.

0x24 => {
    let mut bytes = self
        .strz_words()
        .flat_map(|&v| v.0.to_ne_bytes())
        .collect::<Vec<_>>();
    if bytes[bytes.len() - 1] == 0 {
        bytes.pop();
    };
    print!("{}", String::from_utf8_lossy(&bytes));
}

Conclusion

At this point, what I had was mostly equivalent to what you have if you follow along with Andrei’s blog post until the end, so I’ll end this post here. Unlike the C version, it is not under 125 lines long, but it does clock in at just under 200 lines.9

After I had gotten this far, I spent some time improving what I had, making the VM a bit fancier and writing some tools to use with it. I intend to make this article into a series, and I’ll cover these improvements in following posts, starting with an assembler.

You can find the code in a GitHub repo. I have kept this first version apart, in an examples/first.rs file, but you can browse the other files in that repository if you want a sneak preview of some of the other things I’ll write about.

Many thanks to Federico Mena Quintero who gave some feedback on a draft of this post.


[1] In the intervening time, I didn’t write any Rust code at all ↩

[2] “LC” stands for “Little Computer”. It’s a computer architecture described in a popular textbook, and as such shows up in many university courses on low-level programming. I hadn’t heard of it before, but after all, I didn’t study computer science in university ↩

[3] I won’t lie, mostly by googling terms like “lc3 division” on the lazyweb ↩

[4] This tool is actually really bad at finding functions unless they’re in particular locations that it expects, such as lib.rs, which is the real reason why I stuck the test function in a dummy lib.rs file ↩

[5] Note that the offset is calculated relative to the following instruction. I assume this is so that branching to an offset of 0 takes you to the next instruction instead of looping back to the current one ↩

[6] Especially since bitmatch makes you use one-letter variable names when unpacking ↩

[7] i.e., memory address of the trap routine. I have no idea why the LC-3 specification calls it a vector, since it is in fact a scalar ↩

[8] I’m not sure why. Federico tells me it is possibly because it doesn’t acquire a lock on stdin. ↩

[9] Not counting comments, and provided we cheat a bit by sticking a few trivial functions onto one line with #[rustfmt::skip] ↩

January 09, 2022

One More Trip Around the Sun

It’s been exactly one year since I’ve done the foolish thing and changed my blog backend to write more. And to my own surprise it worked. Let me look back at 2021 from a rather narrow perspective of what I usually write about. Perhaps to your disappointment most of it is personal, not professional.

In 2021 I’ve produced only a fraction of the drone videos I made in past years, and haven’t practiced or raced nearly at all. This void has been fully filled by music and synthesizers. After two decades of hiatus I enjoy making music again. Fully aware of how crude and awful I am at it, there isn’t any other medium where I enjoy my own creations as much as music.

I’ve also come back to pixel art, even though the joy is a lot tainted by the tools I use. Very convenient, very direct, so much fun, very proprietary.

libadwaita

In 2022 I’d like to

  • Replace my reliance on iPad and Apple Pencil. Would be nice to use a small screen tablet on my Fedora instead. Just plug it in when I need it and run GIMP or Aseprite in the same time it takes me with Procreate and Pixaki.
  • Embrace Fedora for music making. While I’m not a heavy Ableton Live user, I should totally embrace Bitwig instead as it’s conveniently available as a Flatpak. The Pipewire revolution also made Renoise usable for me again, so maybe I’ll give it another stab.
  • Continue using the gear I have and not buy any more. I have way more gear than I need. I’m going to sell some I don’t actually enjoy using anymore, but even splitting time between the Digitone, Digitakt, Polyend Tracker and Dirtywave M8 is making me feel unfocused. If I only had a synth room where I could just walk in and jam :)
  • Continue posting on this ancient platform called WWW. Before I figure out a replacement for comments, feel free to tweet at me or toot.

A little late with wishing you a better 2022 than 2020 was! I didn’t even catch 2021 fly by.

scikit-survival 0.17 released

This release adds support for scikit-learn 1.0, which includes support for feature names. If you pass a pandas dataframe to fit, the estimator will set a feature_names_in_ attribute containing the feature names. When a dataframe is passed to predict, it is checked that the column names are consistent with those passed to fit. The example below illustrates this feature.

For a full list of changes in scikit-survival 0.17.0, please see the release notes.

Installation

Pre-built conda packages are available for Linux, macOS, and Windows via

 conda install -c sebp scikit-survival

Alternatively, scikit-survival can be installed from source following these instructions.

Feature Names Support

Prior to scikit-survival 0.17, you could pass a pandas dataframe to estimators’ fit and predict methods, but the estimator was oblivious to the feature names accessible via the dataframe’s columns attribute. With scikit-survival 0.17, and thanks to scikit-learn 1.0, feature names will be considered when a dataframe is passed.

Let’s illustrate feature names support using the Veteran’s Lung Cancer dataset.

from sksurv.datasets import load_veterans_lung_cancer
X, y = load_veterans_lung_cancer()
X.head(3)
   Age_in_years  Celltype  Karnofsky_score  Months_from_Diagnosis Prior_therapy Treatment
0          69.0  squamous             60.0                    7.0            no  standard
1          64.0  squamous             70.0                    5.0           yes  standard
2          38.0  squamous             60.0                    3.0            no  standard

The original data has 6 features, three of which contain strings, which we encode as numeric using OneHotEncoder.

from sksurv.preprocessing import OneHotEncoder
transform = OneHotEncoder()
Xt = transform.fit_transform(X)

Transforms now have a get_feature_names_out() method, which will return the name of features after the transformation.

transform.get_feature_names_out()
array(['Age_in_years', 'Celltype=large', 'Celltype=smallcell',
'Celltype=squamous', 'Karnofsky_score', 'Months_from_Diagnosis',
'Prior_therapy=yes', 'Treatment=test'], dtype=object)

The transformed data returned by OneHotEncoder is again a dataframe, which can be used to fit Cox’s proportional hazards model.

from sksurv.linear_model import CoxPHSurvivalAnalysis
model = CoxPHSurvivalAnalysis().fit(Xt, y)

Since we passed a dataframe, the feature_names_in_ attribute will contain the names of the dataframe used when calling fit.

model.feature_names_in_
array(['Age_in_years', 'Celltype=large', 'Celltype=smallcell',
'Celltype=squamous', 'Karnofsky_score', 'Months_from_Diagnosis',
'Prior_therapy=yes', 'Treatment=test'], dtype=object)

This is used during prediction to check that the data matches the format of the training data. For instance, when passing a raw numpy array instead of a dataframe, a warning will be issued.

pred = model.predict(Xt.values)
UserWarning: X does not have valid feature names, but CoxPHSurvivalAnalysis was fitted with feature names

Moreover, it will also check that the order of columns matches.

import pandas as pd

X_reordered = pd.concat(
    (Xt.drop("Age_in_years", axis=1), Xt.loc[:, "Age_in_years"]),
    axis=1,
)
pred = model.predict(X_reordered)
FutureWarning: The feature names should match those that were passed during fit. Starting version 1.2, an error will be raised.
Feature names must be in the same order as they were in fit.

For more details on feature names support, have a look at the scikit-learn release highlights.

Pluton is not (currently) a threat to software freedom

At CES this week, Lenovo announced that their new Z-series laptops would ship with AMD processors that incorporate Microsoft's Pluton security chip. There's a fair degree of cynicism around whether Microsoft have the interests of the industry as a whole at heart or not, so unsurprisingly people have voiced concerns about Pluton allowing for platform lock-in and future devices no longer booting non-Windows operating systems. Based on what we currently know, I think those concerns are understandable but misplaced.

But first it's helpful to know what Pluton actually is, and that's hard because Microsoft haven't actually provided much in the way of technical detail. The best I've found is a discussion of Pluton in the context of Azure Sphere, Microsoft's IoT security platform. This, in association with the block diagrams on pages 12 and 13 of this slide deck, suggests that Pluton is a general purpose security processor in a similar vein to Google's Titan chip. It has a relatively low-powered CPU core, an RNG, and various hardware cryptography engines - there's nothing terribly surprising here, and it's pretty much the same set of components that you'd find in a standard Trusted Platform Module of the sort shipped in pretty much every modern x86 PC. But unlike Titan, Pluton seems to have been designed with the explicit goal of being incorporated into other chips, rather than being a standalone component. In the Azure Sphere case, we see it directly incorporated into a Mediatek chip. In the Xbox Series devices, it's incorporated into the SoC. And now, we're seeing it arrive on general purpose AMD CPUs.

Microsoft's announcement says that Pluton can be shipped in three configurations: as the Trusted Platform Module; as a security processor used for non-TPM scenarios like platform resiliency; or OEMs can choose to ship with Pluton turned off. What we're likely to see to begin with is the first of these - Pluton will run firmware that exposes a Trusted Computing Group compatible TPM interface. This is almost identical to the status quo. Microsoft have required that all Windows certified hardware ship with a TPM for years now, but for cost reasons this is often not in the form of a separate hardware component. Instead, both Intel and AMD provide support for running the TPM stack on a component separate from the main execution cores on the system - for Intel, this TPM code runs on the Management Engine integrated into the chipset, and for AMD on the Platform Security Processor that's integrated into the CPU package itself.

So in this respect, Pluton changes very little; the only difference is that the TPM code is running on hardware dedicated to that purpose, rather than alongside other code. Importantly, in this mode Pluton will not do anything unless the system firmware or OS ask it to. Pluton cannot independently block the execution of any other code - it knows nothing about the code the CPU is executing unless explicitly told about it. What the OS can certainly do is ask Pluton to verify a signature before executing code, but the OS could also just verify that signature itself. Windows can already be configured to reject software that doesn't have a valid signature. If Microsoft wanted to enforce that they could just change the default today, there's no need to wait until everyone has hardware with Pluton built-in.

The two things that seem to cause people concerns are remote attestation and the fact that Microsoft will be able to ship firmware updates to Pluton via Windows Update. I've written about remote attestation before, so won't go into too many details here, but the short summary is that it's a mechanism that allows your system to prove to a remote site that it booted a specific set of code. What's important to note here is that the TPM (Pluton, in the scenario we're talking about) can't do this on its own - remote attestation can only be triggered with the aid of the operating system. Microsoft's Device Health Attestation is an example of remote attestation in action, and the technology definitely allows remote sites to refuse to grant you access unless you booted a specific set of software. But there are two important things to note here: first, remote attestation cannot prevent you from booting whatever software you want, and second, as evidenced by Microsoft already having a remote attestation product, you don't need Pluton to do this! Remote attestation has been possible since TPMs started shipping over two decades ago.

The other concern is Microsoft having control over the firmware updates. The context here is that TPMs are not magically free of bugs, and sometimes these can have security consequences. One example is Infineon TPMs producing weak RSA keys, a vulnerability that could be rectified by a firmware update to the TPM. Unfortunately these updates had to be issued by the device manufacturer rather than Infineon being able to do so directly. This meant users had to wait for their vendor to get around to shipping an update, something that might not happen at all if the machine was sufficiently old. From a security perspective, being able to ship firmware updates for the TPM without them having to go through the device manufacturer is a huge win.

Microsoft's obviously in a position to ship a firmware update that modifies the TPM's behaviour - there would be no technical barrier to them shipping code that resulted in the TPM just handing out your disk encryption secret on demand. But Microsoft already control the operating system, so they already have your disk encryption secret. There's no need for them to backdoor the TPM to give them something that the TPM's happy to give them anyway. If you don't trust Microsoft then you probably shouldn't be running Windows, and if you're not running Windows Microsoft can't update the firmware on your TPM.

So, as of now, Pluton running firmware that makes it look like a TPM just isn't a terribly interesting change to where we are already. It can't block you running software (either apps or operating systems). It doesn't enable any new privacy concerns. There's no mechanism for Microsoft to forcibly push updates to it if you're not running Windows.

Could this change in future? Potentially. Microsoft mention another use-case for Pluton "as a security processor used for non-TPM scenarios like platform resiliency", but don't go into any more detail. At this point, we don't know the full set of capabilities that Pluton has. Can it DMA? Could it play a role in firmware authentication? There are scenarios where, in theory, a component such as Pluton could be used in ways that would make it more difficult to run arbitrary code. It would be reassuring to hear more about what the non-TPM scenarios are expected to look like and what capabilities Pluton actually has.

But let's not lose sight of something more fundamental here. If Microsoft wanted to block free operating systems from new hardware, they could simply mandate that vendors remove the ability to disable secure boot or modify the key databases. If Microsoft wanted to prevent users from being able to run arbitrary applications, they could just ship an update to Windows that enforced signing requirements. If they want to be hostile to free software, they don't need Pluton to do it.

(Edit: it's been pointed out that I kind of gloss over the fact that remote attestation is a potential threat to free software, as it theoretically allows sites to block access based on which OS you're running. There's various reasons I don't think this is realistic - one is that there's just way too much variability in measurements for it to be practical to write a policy that's strict enough to offer useful guarantees without also blocking a number of legitimate users, and the other is that you can just pass the request through to a machine that is running the appropriate software and have it attest for you. The fact that nobody has actually bothered to use remote attestation for this purpose even though most consumer systems already ship with TPMs suggests that people generally agree with me on that)


January 08, 2022

Portability is not sufficient for portability


A forenote

This blog post shows some examples of questionable quality. It is not meant as an attack on those projects. The issues listed here are fairly widespread; these are just the examples I ran into while doing other work.

What is meant by portability?

Before looking into portable software, let's first examine portability from a hardware perspective. When you ask most people what they consider a "portable computer", they'll probably think of laptops or possibly even a modern smartphone. But what about this:

This is most definitely a computer (the one I'm using to write this blog post, in fact), but not portable. It weighs something on the order of 10 kilos and it is too big to comfortably wrap your hands around.

And yet, I have carried this computer from one end of the Helsinki metropolitan region to another. It took over an hour on a train and a subway. When I finally got it home my arms were so exhausted that for a while I could not even lift them up and all muscles in them were sore for several days. I did use a helper carry strap, but it did not help much.

So in a way, yes, this computer is portable. It's not really designed for it, the actual transport process is a long painful slog and if you make an accidental misstep and bump it against a wall you run the risk of breaking everything inside. But it is a "portable computer" as a single person can carry it from one place to another using nothing but their own muscles.

Tying this to software portability

There is a lot of software out there that claims to be "portable" but can only be said to be that in the same way as the computer shown above is "portable". For the rest of the post we're only going to focus on portability to Windows.

Let's say a project has a parser that is built with Lex and Bison. Thus you need to have those programs during compilation. Building them from source is problematic on Windows (because of, among other things, Autotools) so it would be nice to get some prebuilt binaries for them. After a bit of googling you might find this page which provides Windows binaries. That has last been updated in 2004. So no.

You could also install Msys2 and get the binaries with Pacman. If you are using Visual Studio and just want to build the thing, installing a whole separate userspace system and package manager just to get two executables seems like a bit of overkill. Thinking about it further, you might realize that you could install Msys2 on some other machine, get the executables, copy them and their direct dependency DLLs to your machine and put them in PATH. If you try this, the binaries segfault on run, probably because they can't access their localisation files that are "somewhere".

Is this piece of software portable to Windows? Yes it is, in the "desktop PC is portable" sense, but definitely not in the "a laptop is portable" sense.

As another example, let's look at the ICU project. It claims to be highly portable, and it kind of is; here is a random snippet from their highly portable Makefile.

I don't know about you, but just looking at that … thing gives me a headache. If you ever need to do something the existing functionality does not provide, then trying to decipher what that thing is doing is an exercise in masochism. For Windows this is relevant, because Visual Studio only ships with nmake, which is not at all compatible with Make, so you need to decrypt absolutely everything.

Again, this is portable to Windows: you just need to prebuild it with MinGW or with the provided VS solution files, copy the libraries from one place to another and use them. This is very much "desktop PC portable" again.

Sometimes you don't even get that. Take for example the liblangtag project. It is a fairly typical dependency library that provides a single shared library. It even hides its symbols and only exports those belonging to the public API. Sadly it does this using Libtool magic postprocessing. On Windows you have to annotate exported symbols with magic markers, so it is actually impossible to build a shared library properly with VS without making source code changes[1]. Thus you have to go the MinGW build route here. But that is again "portable" as in: if you spend a ton of time and effort, you can sorta kinda make it work in a rubegoldbergesque way.
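To be concrete about those magic markers: the usual way to make the same source export symbols both with GCC-style visibility and with Visual Studio is a small export macro in the public headers, roughly like this (a generic sketch, not liblangtag's actual code; all names here are made up):

/* export.h -- hypothetical public header */
#if defined(_WIN32) || defined(__CYGWIN__)
  #ifdef BUILDING_MYLIB
    #define MYLIB_API __declspec(dllexport)  /* compiling the library itself */
  #else
    #define MYLIB_API __declspec(dllimport)  /* compiling code that uses it */
  #endif
#else
  #define MYLIB_API __attribute__((visibility("default")))
#endif

/* Every public entry point carries the annotation. */
MYLIB_API int mylib_do_something(void);

A project that relies purely on Libtool's symbol list postprocessing has no such annotations in its source, which is why a plain VS build cannot produce a usable import library without patching.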

Being more specific

Due to various reasons I have had to deal with the innards of libraries of different vintages. Fairly often the experience has been similar to dragging my desktop computer across town: arduous, miserable and exhausting. It is something I would wish upon my worst enemy and also upon most of my lesser enemies. It would serve them right.

In my personal opinion saying that some piece of code is portable should imply some basic ease of use. If you need to spend time fighting with it to make it work on an uncommon platform, toolchain or configuration then the thing is not really portable. It also blocks adoption, because if some library is a massive pain to use, people will prefer to reimplement the functionality or use some other library, just to get away from the pain.

Since changing the generally accepted meaning of a word is unlikely to work, this probably won't happen. So in the meantime, when you are talking about portability with someone else, do be sure to specify whether you mean "portable as in a desktop gaming PC" or "portable as in a laptop".

How this blog post came about

Some years ago I ported a sizable fraction of LibreOffice to build with Meson. It worked only on Linux as it used system dependencies. I rebased it to current trunk and tried to see if it could be built using nothing but Visual Studio by getting dependencies via the WrapDB. This repo contains the code, which now actually does build some code including dependencies like libxml, zlib and icu.

The code that is there is portable in the laptop sense. You only need to do a git checkout and start the build in a VS x64 dev tools prompt. It does cheat in some points, such as using pregenerated flex + bison sources, but it's not meant to be production quality, just an experiment.

[1] The project in question seems to have more preprocessor macro magic definitions than actual code, so it is possible there is some combination of defines that makes this work. If so, I did not manage to find it. This is typical in many old school C projects.

January 05, 2022

Philip Chimento’s Open Source Story

Hi, I'm Philip Chimento! My open source journey started in the early 2000s when I was at university. I wanted to learn how to program games, and I had been told that would require programming in C. Paying money for a compiler seems unbelievable today, but C compilers were quite expensive at the time. […]

January 04, 2022

Small steps towards a GTK 4-based Initial Setup

Over the Christmas holidays, I was mostly occupied with the literal care and feeding of small humans, but I found a bit of time for the metaphorical care and feeding of Initial Setup for GNOME 42 as well. Besides a bit of review and build and CI housekeeping, I wrote some patches to update it for API changes in libgnome-desktop (merged) and libgweather (pending). The net result is an app which looks and works exactly the same, complete with a copy of the widget formerly known as GWeatherLocationEntry (RIP) with its serial numbers filed off.

Of course, my ultimate goal was to port Initial Setup to GTK 4. I made some other tiny steps in that direction, such as removing a redundant use of GtkFrame that becomes actively harmful with the removal of the shadow-type property in GTK 4, and now have a proof-of-concept port of just the final page which both compiles and runs!

Screenshots of "All done!" page of Initial Setup

But, I will not have time to complete this port in time for the GNOME 42 UI freeze on 12th February. If you are reading this and feel inspired to pick this up, even just a page or two, more hands would be much appreciated.

GNOME Nightly maintenance

Quick heads up, the GNOME Nightly Flatpak repository is currently undergoing maintenance, during which you may notice that some applications are currently missing from the repo.

For a couple of months now we have been plagued by a few bugs that have made maintenance of the repo very hard, and CI builds were constantly failing due to a lack of available space. In order to resolve this we had to wipe the majority of the refs/objects in the repository and start again with safeguards in place.

As such, we are currently re-populating the repository with fresh builds of all the applications, but it may take a while. If you want to help with this, make sure your Flatpak manifests are up to date and buildable, and that you have set up a daily or weekly scheduled CI build in your project. Your app may not be changing, but the runtime might, and it's good to be on top of possible API/ABI breaks.

Go to your project, Settings -> CI/CD -> Schedules -> New schedule button -> Select the daily preset.

If you are a user and seeing warnings while updating, don’t worry – you won’t have to do anything and updates will start working again transparently once the applications are available in the repository.

$ flatpak update
Looking for updates…
F: Warning: Treating remote fetch error as non-fatal since runtime/org.gnome.Todo.Devel.Locale/x86_64/master is already installed: No such ref 'runtime/org.gnome.Todo.Devel.Locale/x86_64/master' in remote gnome-nightly
F: Warning: Treating remote fetch error as non-fatal since runtime/org.gnome.TextEditor.Devel.Locale/x86_64/master is already installed: No such ref 'runtime/org.gnome.TextEditor.Devel.Locale/x86_64/master' in remote gnome-nightly
F: Warning: Treating remote fetch error as non-fatal since runtime/org.gnome.TextEditor.Devel.Debug/x86_64/master is already installed: No such ref 'runtime/org.gnome.TextEditor.Devel.Debug/x86_64/master' in remote gnome-nightly
F: Warning: Treating remote fetch error as non-fatal since runtime/org.gnome.Photos.Locale/x86_64/master is already installed: No such ref 'runtime/org.gnome.Photos.Locale/x86_64/master' in remote gnome-nightly
F: Warning: Treating remote fetch error as non-fatal since runtime/org.gnome.Epiphany.Devel.Locale/x86_64/master is already installed: No such ref 'runtime/org.gnome.Epiphany.Devel.Locale/x86_64/master' in remote gnome-nightly
F: Warning: Treating remote fetch error as non-fatal since app/org.gnome.Todo.Devel/x86_64/master is already installed: No such ref 'app/org.gnome.Todo.Devel/x86_64/master' in remote gnome-nightly
F: Warning: Treating remote fetch error as non-fatal since app/org.gnome.TextEditor.Devel/x86_64/master is already installed: No such ref 'app/org.gnome.TextEditor.Devel/x86_64/master' in remote gnome-nightly
F: Warning: Treating remote fetch error as non-fatal since app/org.gnome.Photos/x86_64/master is already installed: No such ref 'app/org.gnome.Photos/x86_64/master' in remote gnome-nightly
F: Warning: Treating remote fetch error as non-fatal since app/org.gnome.Epiphany.Devel/x86_64/master is already installed: No such ref 'app/org.gnome.Epiphany.Devel/x86_64/master' in remote gnome-nightly

Sorry for the inconvenience and happy hacking.

December 31, 2021

Libadwaita 1.0

Libadwaita 1.0 Demo

Libadwaita 1.0 has been released, just at the end of the year.

Libadwaita is a GTK 4 library implementing the GNOME HIG, complementing GTK. For GTK 3 this role has increasingly been played by Libhandy, and so Libadwaita is a direct Libhandy successor.

You can read more in Adrien’s announcement.

What’s New

Since Libadwaita is a Libhandy successor, it includes most of Libhandy's features in one form or another, so the changes below are presented relative to Libhandy.

If you've been following This Week in GNOME, you may have already seen a large part of the changes.

Updated Stylesheet

Probably the most noticeable change is the reworked stylesheet.

For the past 7 years, the Adwaita style has been a part of GTK. Now it’s a part of Libadwaita instead, while the GTK style has been renamed to Default.

Since we have this opportunity, the stylesheet has been completely redesigned with several goals in mind:

Modernizing the style

GNOME designers have long wanted to do this, and the GTK 4 Default style contains a few changes in that direction compared to the GTK 3 version of Adwaita, and GNOME Shell has been using a similar style as well. Libadwaita takes it much further. You can read more about it in Allan Day's blog post.

The changes are not fully compatible with GTK Default and may require changes on the application side when porting. I’ve also blogged in detail about the biggest breaking change: the updated header bar style.

Runtime recoloring

Ever since Adwaita started using SCSS, it couldn’t really be recolored at all without recompiling it. This created big problems for applications that wanted to do that.

For example, GNOME Web makes its header bar blue in incognito mode. This may sound simple, but involves copy-pasting large chunks of Adwaita into the app itself and making many small changes everywhere to adjust it, as well as using SCSS for it because the original style is SCSS. More recently, GNOME Console and Apostrophe started doing the same thing – copy-pasted from Web, as a matter of fact. This approach means the style is messy and extremely hard to keep up to date with Adwaita changes – I have updated this style for the 3.32 style refresh and never want to do this again.

Another approach, used by applications like Contrast (with GTK 3, anyway), is copying the whole stylesheet from GTK and using libsass to recompile it at runtime. This worked – it's much more maintainable than the first approach – but fell apart when libsass got deprecated.

Meanwhile, the elementary OS stylesheet has been doing recoloring just fine with nothing but @define-color – and so Libadwaita does exactly that: it exposes all of the colors it uses (31 at the time of writing) as named colors. The new colors are also documented and will be treated as a proper API.

It also drops all of the formerly used PNG assets, so the colors can affect the elements that used them.

It also reworks the high contrast variant to use the same colors when possible to make sure that changing color for the regular style also works with high contrast.

Another thing the new stylesheet does is simplify how it handles colors in general. The new simplified style comes in very handy here.

For example, many parts of the UI are now derived from the text color and change with it automatically, and widgets that don’t absolutely need to define their own text color don’t do that anymore, so it can propagate. Where possible, transparency is used instead of mixing or hardcoding colors.

All in all it means that simple custom styles like this one:

/* Solarized popovers */
popover > arrow,
popover > contents {
  background-color: #fdf6e3;
  color: #586e75;
}

actually work correctly and with no glitches, in both light and dark variants, as well as high contrast style. Try doing that with GTK 4 Default and compare the results:

Comparison screenshots: a popover recolored to Solarized with GTK 4 Default, in light (half the colors are off, but still legible) and dark (half the colors are off to the point of being illegible), versus the same popover recolored with Adwaita in light and dark, which looks correct.

Dark Variant Contrast

Comparison screenshots: the Blanket app with the GTK 3 dark variant, where the controls are almost invisible, and with the GTK 4 dark variant, where the controls are visible just fine.

The dark variant of Adwaita has historically been intended to be used as a lights-out, low distraction appearance for media apps – video players, image viewers, games – and not as a general purpose dark style. As such, it has pretty low contrast and can be hard to see at times.

A primary example is the accent color – historically, Adwaita has never really had a proper accent color as a named color – and many applications have been using @theme_selected_bg_color – a background color used for selected text and list items – as an accent color for text and icons. Not only does it not have good enough contrast to be used as a text color, but the dark variant dims it even further to make this background color less distracting – so while it's not too bad in the light variant, it falls apart in dark.

Libadwaita fixes that – it makes the accent brighter (made possible by not using it in contexts where it can be distracting), and introduces a second color to be used for cases like this. This second color does vary between the light and dark variants, which allows it to be much brighter in the dark variant and darker in the light variant, so it's suitable for text.

Screenshots: example text in the accent color, in the light and dark variants.

It also changes many other things. The window background is now darker, while elements like buttons and boxed lists are lighter, GtkSwitch and GtkScale sliders are light, etc.

Style Classes

Button styles: regular (no style), flat, suggested action, destructive action, a few custom colored buttons, circular, pill, osd
Various button styles

The updated stylesheet includes many new style classes for app developers to use, in a lot of cases codifying existing patterns that applications have been using via custom styles, but also adding new things.

Some highlights:

  • .pill makes a button large and rounded – in other words, makes it a pill button
  • .flat can now be used with GtkHeaderBar
  • .accent colors a label with the accent color (using the correct color as per the above section)
  • .numeric makes a label use tabular figures
  • .card makes a widget have the same background and shadow as a boxed list.

And speaking of boxed lists, the old .content style class from Libhandy has finally been renamed to .boxed-list, matching the HIG name.

The available style classes (both existing and new) are now documented, and Libadwaita demo now includes a sample of each of them.
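As a concrete illustration, applying one of these classes from a GtkBuilder UI file only takes a <style> element (a minimal sketch; the label is made up):

<object class="GtkButton">
  <property name="label">Get Started</property>
  <style>
    <!-- Turns a regular button into a large, rounded pill button -->
    <class name="pill"/>
  </style>
</object>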

Refactoring and cleanups

Adwaita has historically been a big SCSS file containing most styles, another file containing complex mixins for drawing buttons, entries and other widgets, and a few more files for colors.

Libadwaita splits all of that into small manageable files. It removes the complicated mixins, because the new style is simple enough that they aren’t needed. It removes tons of unused and redundant styles, some of which were leftovers from early GTK 3 days, and so on. And, of course, the new style itself allows making styles significantly simpler.

The end result is a much more maintainable and less arcane stylesheet.

Dark Preference

Screenshots: a work-in-progress dark style preference in Settings, shown in the light and dark versions.

I’ve blogged about this in much more detail a few months ago, but in short, Libadwaita includes API to support the new cross-desktop dark style preference, as well as streamline the high contrast mode handling.

This has also been backported to Libhandy and will be available in the next release.

While the Libhandy version is strictly opt-in, Libadwaita flips the switch and follows the preference by default, unless the application opts out. This means that any new applications will support the preference by default – and that supporting it is an expected step when porting an application from GTK3 and Libhandy. The documentation now also includes a guide on how to handle application styles.
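For reference, interacting with the preference goes through AdwStyleManager; a minimal sketch based on the documented API (whether to opt out is up to the application) might look like this:

#include <adwaita.h>

static void
setup_color_scheme (void)
{
  AdwStyleManager *manager = adw_style_manager_get_default ();

  /* Follow the system dark style preference (the Libadwaita default). */
  adw_style_manager_set_color_scheme (manager, ADW_COLOR_SCHEME_DEFAULT);

  /* An application that isn't ready for dark yet can opt out instead:
   * adw_style_manager_set_color_scheme (manager, ADW_COLOR_SCHEME_FORCE_LIGHT);
   */
}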

Many third party applications have already adopted it by now, and there has been good progress on supporting it in the core GNOME applications – though at the moment it’s unlikely that all of the core applications will support it in GNOME 42. If you maintain a core app, it’s a perfect time to start supporting it in order to avoid that 😉.

Libadwaita GtkInspector page

A new GtkInspector page is also available to help test the style and high contrast preferences.

Documentation

Like GTK 4 itself, Libadwaita features new documentation using the awesome gi-docgen generator by Emmanuele Bassi.

The docs themselves have been reworked and expanded, and feature new generated screenshots, which all come in light and dark versions to match the documentation pages:

Screenshots: the adaptive layouts page of the Libadwaita docs showing a leaflet, in the light and dark styles.

Toasts

Toast saying "'Lorem Ipsum' Deleted", with an Undo button

While in-app notifications aren't a new pattern by any means, we've never really had a ready-to-use widget. Sure, GdNotification exists, but it leaves a lot of decisions to applications, e.g. how to deal with multiple notifications at once, or even what notifications should contain – essentially it only provides the notification style, a close button and a timeout.

A big feature that made GdNotification attractive was the ability to animate its visibility before GTK had a widget for that purpose. Now GtkRevealer exists (which our new widget ironically doesn't use), and most apps currently use that to re-implement in-app notifications from scratch. This has led to major inconsistencies between apps, and situations like this:

GNOME Boxes, two undo notifications, awkwardly stacked
GNOME Boxes, two undo notifications

To help fix this, Maximiliano has implemented a new widget to replace them. Its API is very streamlined and modeled after notifications. The widget part is not a notification, but rather a notification area that toasts (which are just generic objects, not widgets) are added into. If multiple toasts are added in quick succession, they are queued based on their priority.

A big difference from GNotification though is that toasts are mutable – and can be changed after they have been shown. This is useful when using toasts as undo bars, for example.
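For instance, a minimal sketch of posting an undo toast (assuming an AdwToastOverlay already wraps the window content; the label and action name are made up) could look like this:

#include <adwaita.h>

static void
show_undo_toast (AdwToastOverlay *overlay)
{
  AdwToast *toast = adw_toast_new ("'Lorem Ipsum' Deleted");

  /* Optional button wired to an application action, e.g. for undo. */
  adw_toast_set_button_label (toast, "Undo");
  adw_toast_set_action_name (toast, "app.undo-delete");

  /* The overlay queues toasts added in quick succession by priority. */
  adw_toast_overlay_add_toast (overlay, toast);
}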

Animations

Libadwaita animation demo

Manuel Genovés has implemented an animation API as part of his GSoC project. Unfortunately not everything that was planned has been implemented, but we have basic timed animations and spring animations.

Timed animations provide simple transitions from one value to another one in a given time and with a given curve. They can repeat, reverse their direction, and alternate with each iteration.
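As a rough sketch of what that looks like in code (based on the 1.0 documentation; the widget, duration and callback here are just placeholders), fading a widget in with a timed animation could be done like this:

#include <adwaita.h>

static void
fade_cb (double value, gpointer user_data)
{
  /* Apply the animated value, here as the widget's opacity. */
  gtk_widget_set_opacity (GTK_WIDGET (user_data), value);
}

static void
start_fade_in (GtkWidget *widget)
{
  /* The target routes every animation tick to the callback above. */
  AdwAnimationTarget *target =
    adw_callback_animation_target_new (fade_cb, widget, NULL);

  /* Animate from 0 to 1 over 250 ms; the widget provides the frame clock. */
  AdwAnimation *animation =
    adw_timed_animation_new (widget, 0.0, 1.0, 250, target);

  adw_animation_play (animation);
}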

Spring animations don’t have a fixed duration, and instead use physical properties to describe their curve: damping ratio (or optionally just damping), mass, stiffness, an initial velocity and an epsilon to determine when to stop it. The fact they have a variable initial velocity makes them perfect to animate deceleration after performing a gesture:

AdwLeaflet, AdwFlap and AdwCarousel all use spring animations now, and AdwSwipeTracker provides the final velocity after a swipe is finished, instead of a pre-calculated duration.

Unfortunately, due to time constraints, none of the above widgets support overshoot when animating. Since they use a critically damped spring by default (meaning it takes the shortest possible time to reach the end and doesn’t overshoot unless the velocity is very high), it’s not really visible unless you swipe really hard, and it can be fixed after the initial release without any API changes.

Unread Badges

An example of an unread badge

AdwViewSwitcher and related widgets can now display unread badges and not just needs-attention dots. This means they don't use GtkStack anymore, but a new widget called AdwViewStack. For the most part, it's a drop-in replacement, although it does trim away API that isn't necessary for this use case.

Thanks to Frederick Schenk for implementing this!

Application

Nahuel has implemented AdwApplication – a GtkApplication subclass that automatically initializes Libadwaita when used. It also automatically loads styles from GResource relative to the application base path. For example, if your application has the org.example.App application ID, it will automatically load /org/example/App/style.css. It also loads style-dark.css, style-hc.css, and style-hc-dark.css, allowing you to add styles that only apply to the dark or high contrast styles.
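A minimal sketch of what using it looks like (reusing the org.example.App ID from above; the window setup is just a placeholder):

#include <adwaita.h>

static void
on_activate (GtkApplication *app, gpointer user_data)
{
  /* AdwApplicationWindow pairs naturally with AdwApplication. */
  GtkWidget *window = adw_application_window_new (app);
  gtk_window_present (GTK_WINDOW (window));
}

int
main (int argc, char *argv[])
{
  /* Initializes Libadwaita and loads /org/example/App/style.css if present. */
  AdwApplication *app =
    adw_application_new ("org.example.App", G_APPLICATION_FLAGS_NONE);

  g_signal_connect (app, "activate", G_CALLBACK (on_activate), NULL);

  return g_application_run (G_APPLICATION (app), argc, argv);
}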

Helper Widgets

Libadwaita provides a few widgets to simplify common tasks:

  • GtkHeaderBar in GTK4 does not provide a direct way to set a title and a subtitle, and just shows the window title by default. If you want to have a subtitle or to simply display a title that’s different from the window title – for example, for split header bars – the recommended way to do that is to construct two labels manually. That can be tedious and easy to get wrong. AdwWindowTitle aims to help with that. It can be used as follows:

    <object class="GtkHeaderBar">
      <property name="title-widget">
        <object class="AdwWindowTitle">
          <property name="title">Title</property>
          <property name="subtitle">Subtitle</property>
        </object>
      </property>
    </object>
    
  • AdwBin is a widget that uses GtkBinLayout, has one child, provides API to manage it, implements GtkBuildable accordingly, implements GtkWidget.compute_expand(), and unparents the child in GObject.dispose(). Applications can subclass it instead of GtkWidget without worrying about those things. It can also be used directly without subclassing it.
  • AdwSplitButton provides an easy way to create a, well, split button that will use the correct appearance in a header bar or a toolbar.
  • AdwButtonContent can be used to create a button with an icon and a label without needing to manually set up the button mnemonic:

    <object class="GtkButton">
      <property name="child">
        <object class="AdwButtonContent">
          <property name="label">_Open</property>
          <property name="icon-name">document-open-symbolic</property>
          <property name="use-underline">True</property>
        </object>
      </property>
    </object>
    

API Cleanups

Large parts of the API have been streamlined. Check out the migration guide for more details.

Some highlights:

  • AdwHeaderBar provides separate properties for controlling window buttons at the 2 ends of the header bar, instead of one controlling both sides. This can be used to implement split header bar layouts and removes the need for HdyHeaderGroup.
  • Ever since HdyWindow we've had an easy way to support leaflet swipes spanning both the window and the titlebar without HdySwipeGroup. All of the features HdyWindow and HdyWindowHandle provided have been added into GTK itself, and have been available since 4.0. In acknowledgement of that, Libadwaita doesn't include a HdySwipeGroup equivalent.

    It still includes AdwWindow and AdwApplicationWindow as helpers, but it’s very easy to achieve the same result with a GtkWindow.

    <object class="GtkWindow">
      <property name="titlebar">
        <object class="GtkBox">
          <property name="visible">False</property>
        </object>
      </property>
      <property name="child">
        <!-- ... -->
      </property>
    </object>
    
  • AdwComboRow has been completely overhauled and is now basically a GtkDropDown clone. This means it uses the same list item factories, allows setting the models from UI files, etc. One thing it doesn’t provide is binding enums – to replace that, Libadwaita includes AdwEnumListModel.
  • AdwAvatar removes the old complicated image loading API and instead just allows setting a GdkPaintable as a custom image.
  • AdwLeaflet now supports the back/forward mouse buttons and keyboard keys, as well as Alt+arrow shortcuts, in addition to swipe gestures, and the properties controlling that have been renamed to reflect the addition.

What hasn’t made it

About Window

Work-in-progress about window

Even though it was featured in Allan Day’s blog post a few months ago, the new About window Adrien has been working on hasn’t made it into Libadwaita 1.0. There were still unresolved design and API questions, and we decided to wait until the next release to have time to polish it instead of rushing it.

Color API

While overriding colors via @define-color is far simpler than it used to be (when it essentially meant copying the entire style of the widgets you want to change, with different colors), it's still not as easy as it could be.

For example, if an application wants to override its accent color, it needs to override 3 colors. One of them (@accent_color) exists pretty much only for contrast, and also differs between light and dark variants. In an ideal world, this color would be calculated automatically based on @accent_bg_color.
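For illustration, overriding the accent today means something along these lines (a sketch; the color values are made up, and @accent_fg_color is assumed from the documented named colors):

/* A purple accent instead of the default blue */
@define-color accent_bg_color #9141ac;  /* background of accented elements */
@define-color accent_fg_color #ffffff;  /* text and icons on top of the accent */
@define-color accent_color #c061cb;     /* standalone accent, kept legible as text */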

Chris has been working on a programmatic API to manage colors, and that should improve this situation a lot. As with the about window, though, it wasn’t ready in time for 1.0.

Touchpad Swipes

A known regression compared to Libhandy is that swipes in widgets such as AdwLeaflet only work on touchscreen, but not touchpad, if the pointer is above a scrolling view. This is something I really hoped to fix before 1.0, but it’s a surprisingly complex issue. It needs extensive GTK changes in order to be fixable, involving essentially replacing and deprecating the existing API for dealing with scrolling, and it’s not something that should be rushed.

Thanks To

  • Adrien Plazas, Christopher Davis, Frederick Schenk, Manuel Genovés, Maximiliano
  • GNOME design team
  • GTK developers
  • Frederik Feichtmeier, nana-4 and other members of the community

I also want to thank my employer, Purism, for letting me and others work on Libadwaita and GTK to make this happen.

Happy holidays and happy hacking!

December 29, 2021

The icon view is dead, long live the icon view!

Porting Files to GTK 4 has been helping me learn and appreciate even more the legacy of the nautilus software package. Its two-decades-long history is closely entangled with the history of the GNOME project.

As I prepare to merge the removal of more than 20 thousand lines of code, I’ve decided to stop and pay some tribute to the legacy widget that’s about to be decommissioned.

Living legend

Archive.org recording of cvs.gnome.org in 1998

The day is May 27, 1998: the day before the start of the Fourth Annual Linux Expo. Federico Mena, co-founder of the GNOME project, uploads a work-in-progress version of the GnomeCanvas widget. This widget would then be included in an “Initial revision” of nautilus as the basis for its icon view.

Screenshots: an early preview version of nautilus, and the icon captions feature.

Screenshots from Eazel website, preserved by the Internet Archive.

Federico’s 1998 TODO list is still found, more than 23 years later, in the nautilus source code.

Die-hard icons

At some point renamed/forked to FooCanvas and later EelCanvas, this base widget continued to serve as the fundamental base for the GNOME desktop files and file browser’s icon view across major versions.

However, as GNOME 3 no longer featured icons on the desktop, a free-position canvas was no longer required. Various efforts were made to implement a less complex grid view, but the canvas refused to be dethroned easily.

On Jul 22, 2012, Jon McCann renamed the icon-view to canvas-view to pave the way for a new icon view. I recall that around that time there was a nautilus git branch implementing a new icon view using GtkIconView. The branch has since been deleted, and I can't find an archived discussion about it, so I can't say for sure why it was abandoned. I think it was partly due to poor performance for a large number of items.
In any case, GtkIconView, like EelCanvas, didn't employ child widgets for the content items. This kept the content items from taking advantage of newer toolkit features. This was seen as a critical deficiency in the 2013 DX hackfest, which prompted the introduction of GtkFlowBox, based on the earlier EggWrapBox widget.

Fast forward to 2016: Carlos Soriano starts working on a new GtkFlowBox-based view. It was discussed in a hackfest later that year, but it was concluded that performance for large directories was the biggest problem. It was included in releases as an experimental setting, but couldn't replace the old canvas.

Another reason why the canvas stuck was that it was a requirement for the icons on desktop. While GNOME 3 didn’t use this, it was still a feature that was supported in nautilus and enabled in some distributions.

Carlos initially tried to separate the desktop icons into a separate program, but in the end the only viable solution was to drop the desktop icons implementation from nautilus.

Enter GTK 4

In the early days of GTK 4 development, Ernestas Kulik ported Files to the then in-development GTK version. This notably included a GTK 4 port of EelCanvas. It looked like the canvas would survive yet another major transition.

However, GTK 4 would take a few more years to be developed, and the growing API changes would end up making a port of EelCanvas all but unviable.

The limited performance scalability of GtkFlowBox when used as a grid view for content apps has led GTK developers to create scalable view widgets, which ultimately resulted in GtkGridView and its siblings, available in GTK 4.

Now, this left Files development in a sort of chicken-and-egg problem: adopting GtkGridView required porting to GTK 4 first, but porting to GTK 4 required replacing EelCanvas with something first.

Interregnum

So, I've picked up Carlos's experimental GtkFlowBox-based view and completed it, in order to use it as a stand-in for GtkGridView until after the app is ported to GTK 4.

It has reached feature parity with the canvas view, which is finally heading into retirement.

Old and new grid views side by side
Old (EelCanvas-based) grid view on the left. New (GtkFlowBox-based) grid view on the right.
I’m deleting EelCanvas in the git repository of nautilus, but the legacy of GnomeCanvas lives on in other software packages, such as Evolution or nautilus forks nemo and caja.

Merge Request showing the diff.
One does not simply remove 20k LOC. 👌

December 24, 2021

This year receive the gift of a free Meson manual

About two years ago, the Meson manual was published and made available for purchase. The sales were not particularly stellar and the bureaucracy needed to keep the sales channel going took a noticeable amount of time and effort. The same goes for keeping the book continually up to date.

Thus it came to pass that sales were shut down a year ago. At the time there were some questions on whether the book could be made freely available. This was not done, as it would not really have been fair to all the people who paid actual money to get it. So the book has been unavailable since.

However, since an entire year has passed since then, the time has come. I'm making the full PDF manual available for personal use. You can download your own copy via this link. The contents have not been updated in more than a year, so it's not fully up to date on details, but the fundamentals are still valid.

Enjoy, and have a happy whatever-it-is-that-you-call-the-holiday-at-this-time-of-year.

The boring small print

Even though the book is freely downloadable it is not under any sort of an open license. You can download it and read it for personal use, but redistribution of any kind is not permitted.

December 23, 2021

2021-12-23 Thursday

  • Last minute end of year invoicing admin thrash; grateful for turning the bottom of people's budgets into Free Software in 2022.
  • Pleased to see what we've achieved in 2021 with a nice Thank You blog from the marketing team.

December 22, 2021

2021-12-22 Wednesday

  • Calls with various staff; company meet-up in gather.town, nice to see some different faces. Sync. with Andras.
  • Julie & Isaac & David over for dinner - lovely to see them; played games variously in the evening.

Christmas Maps

So it's that time of the year again and about time for an end-of-year post.

Some news in Maps for the upcoming GNOME 42.

Finally, we have added support for running development versions of Maps (such as from the Nightly Flatpak repo) in parallel with the stable ones.

The development version is distinguished by the “cogwheel” background in the headerbar, and also by its “bio-hazard” stripe icon as seen above.



Also, we've had an old feature request lying around about supporting a command-line option to initiate a search.

In the meantime, in Evolution there were discussions about being able to launch a search using a configured map application, rather than just using the OpenStreetMap web-based search. In that issue it was suggested that there is a draft maps: URI scheme that has been used by Apple Maps on iOS.

So I have implemented this in Maps, so that Maps will now register as a MIME handler for the maps: URI scheme. You can then open URIs of a form like:

maps:q=search%20query

This can be tested from the command line using the gio command:

$ gio open maps:q=search%20query

 

I also took the opportunity to add a DBus action to perform a search.

This can be tested using a command-line like the following:

$ gdbus call --session --dest org.gnome.Maps.Devel --object-path /org/gnome/Maps/Devel --method org.freedesktop.Application.ActivateAction 'search' "[<'search query'>]" "{}"

Or by using the d-feet DBus debugger:


(And even though the output states no return value, Maps will actually launch, or activate the currently running instance, with the search performed.)

And I also implemented an old-school command-line argument, -S, with the same semantics (taking the search query as its argument) as per the original feature request.

These will either show the search popover in Maps when there are multiple search results, or just open the place bubble with the place selected when the search was specific enough to have a single result (such as when searching for a contact's address).

Furthermore, as a little refinement, the routing instructions for turn-by-turn based modes now also make use of the u-turn icons:


There was also a corner-case bug introduced by me when refactoring the instruction types to decouple them from the GraphHopper-specific codes, which in some cases with u-turns prevented the route from showing up. This was spotted and fixed by one of our newcomer contributors, Marina Billes.

One thing missing when it comes to the instruction icons, by the way, is that we lack icons for “keep left” and “keep right”. So I've created an issue for that (https://gitlab.gnome.org/GNOME/gnome-maps/-/issues/410). (I first gave Inkscape a go, but I quickly realized that the icons I attempted to draw looked more like deformed sausages :-) )

 

Another thing that's been on my mind for a while is the default zoom levels we use when “going to” a place after selecting a search result. Currently we find a suitable zoom level based on the so-called bounding box of a feature (when it's available in the result item). This means things like buildings and parks can usually be shown initially so that they fit in the present window. For other objects that are mere nodes (just a pair of coordinates), we have used some presets based on the place types as defined by the geocode-glib library. But these are quite limited and only cover a few cases.

So I started playing with a WiP branch to select more fine-grained default zoom levels for more place types with a heuristic based on the types we get from the OSM data.

This way, e.g. the continents (which are represented in OSM as nodes) get a better fit, rather than just defaulting to the fully zoomed-in fallback somewhere in a rural area or such:

 

And also a few more distinct levels for different place “sizes”, such as hamlets, so we don't have to resort to a “one size fits all” level more suitable for larger towns and cities:


 

And I guess that will wrap it up for this time!

Happy Holidays everyone!

December 21, 2021

GTK4ifying Settings

It took a long time, and massive amounts of energy and sweat and blood, but as of last week, Settings is finally ported to GTK4 and uses libadwaita for platform integration.

This was by far the biggest application I’ve ported to GTK4. In total, around 330 files needed to be either rewritten or at least modified as part of the porting process. It also required GTK4 ports of some dependencies, like gnome-desktop, libnma, and colord-gtk.

The Users, Cellular, and Online Accounts panels aren't ported yet. They have dependencies that need porting, and we agreed on porting them afterwards. Some of these ports are tricky, such as the Online Accounts panel, which requires a GTK4 version of WebKit2GTK – which depends on libsoup3, and that conflicts with librest's dependency on libsoup2, so all of a sudden we have a bunch of intertwined dependencies to take care of. Another dependency that needs porting is GCR, but it seems like it's not going to be as complicated as Online Accounts.

By using libadwaita, we will finally be able to standardize rows and preferences groups across different panels. There are contributors working on it already, and updates will be shared in no time, stay tuned!

All in all, I'm pretty satisfied with this port. Settings is one of the largest GNOME applications, so having it ported before GNOME 42 was a small miracle. There are bugs to fix, issues to uncover, and regressions to iron out, but so far it's been working well enough for mundane operations, and I expect it to mature enough by the release.

As always, contributions fixing these regressions will be warmly welcomed.

Let it snow '21

Amidst the holidays that perhaps aren't turning out exactly as hoped, one can take comfort in small tokens of continuity – like the fact that xsnow is still being actively maintained.

Thanks, everyone, for all the good software. Let's extract the best from the year to come.

December 20, 2021

Create an App Information Component in Nuxt

You must have seen apps that show information like the app version and the last-updated-at time in their footers or via a floating action button. In this tutorial, you'll learn to create a component to show this kind of information in a Nuxt app.


Prerequisites

For this tutorial, it is assumed that you have already set up a Nuxt app and have a manual or automated way of adding .version and .last-updated-at files to your project.

We build our Nuxt app using Github Actions. In our workflow, we have set up a system that automatically determines the next release version and adds it to the .version file. At the same time, it also creates a .last-updated-at file and adds to it the build date and time.

Reading App Information Files

The first step is to be able to read the contents of .version and .last-updated-at files. The only place this can be done is the nuxt.config.js.

To read the files on the file system, you can use the fs module. It is built into Node.js, so nothing extra needs to be installed; you can import it directly in nuxt.config.js.

Next, add the following code at the start of the nuxt.config.js file:

import fs from 'fs'

// 1
let appVersion
try {
  appVersion = fs.readFileSync('./.version', 'utf8')
} catch (err) {
  appVersion = 'dev'
}

// 2
let appLastUpdatedAt
try {
  appLastUpdatedAt = fs.readFileSync('./.last-updated-at', 'utf8')
} catch (err) {
  appLastUpdatedAt = new Date()
}

In the above code:

  1. You try to read the .version file using the fs.readFileSync method and assign the result to appVersion. In case the .version file doesn’t exist, an error occurs, so appVersion is set to dev.
  2. In the same way, you try to read the .last-updated-at file using the fs.readFileSync method and assign the result to appLastUpdatedAt. In case the .last-updated-at file doesn't exist, an error occurs, so appLastUpdatedAt is set to the current time (new Date()).

Finally, in the export default object of the nuxt.config.js, add the appVersion and appLastUpdatedAt variables to the publicRuntimeConfig object:

export default {
  ...

  // environment variables used by nuxt
  publicRuntimeConfig: {
    appVersion,
    appLastUpdatedAt,
  },
  ...
}

By adding variables to publicRuntimeConfig, you can access them anywhere - both client and server side - in the Nuxt app using $config.appVersion and $config.appLastUpdatedAt.

Creating an App Information Component

Now that your configuration is set, it’s time to create an app information component.

First, create an AppInfo.vue file in the components directory and add the following template code to it:

<template>
  <div>
    Version: <b>{{ $config.appVersion }}</b>
    <br />
    Updated at: <b>{{ $config.appLastUpdatedAt }}</b>
  </div>
</template>

Next, you can import this component in a layout or in a page by adding the following code:

<template>
  <app-info />
</template>

Finally, restart your Nuxt app by running the following command in your terminal:

npm run dev

Open your app in the browser and you’ll get something like this:

App Information component in a Nuxt app

We have skipped the styling part, as you can style the app information component to your liking.

Conclusion

That’s it! You have successfully implemented an app information component in your Nuxt app. In the same way, you can add things like Changelog, What’s New, and more to your app by taking the help of publicRuntimeConfig in a Nuxt app.

December 19, 2021

Status update, 19/12/2021

It's a time to be thankful for what you can do, rather than be pissed off about things that you can't do because we're in the 3rd year of a global pandemic.

I made it home to Shropshire, in time to cast an important vote, and to spend Christmas with my folks… something I couldn’t do last year.

Work involves a client with software integration difficulties. Our goal is to enable a Python 3 migration in the company, which involves a tangle of dependencies in various languages. The interesting aspect is that we're trialling BuildStream as the solution. We know BuildStream can control the mix of C/C++/Go/Java/etc. dependencies in a way which Python-only tools like virtualenv cannot, and we hope it will involve less friction than introducing a full-blown packaging system like DPKG. The project is challenging for a number of reasons and I am not enjoying working over VPN+SSH to another continent, but I'm sure we will learn a lot.

I was excited to see Your Year in Music available on Listenbrainz this year. I’d love to be able to export the generated playlists with Calliope, but I don’t have the time to implement it myself.

Besides that I am mostly concentrating on relaxing and finishing off some music. Here’s the weather in Wales today – cloud or snow?

December 18, 2021

M8 — Drum and Base

M8

I drafted a post on how the Dirtywave M8 is an amazing synth, but given the time and the growing scope of that post, I'll sum it up in a short blurb instead. For a one-person project, this synth is a miracle. Very geeky, all shortcut driven, standing on the shoulders of tracker giants, particularly LSDJ, it has a solid workflow and most definitely isn't a gimmick. You'll have to do without any visual aid, though – that's something I love about the Elektron boxes, where the display really helps you understand what you're doing when filtering or creating an LFO. All you have here are hex numbers and consistent shortcuts. But it sounds absolutely marvelous and allows you to create music anywhere.

I'd like to share two tracks with which I learned the ropes of the device, but also of the genre itself. I've listened to DNB mainly through the Noisia/Vision radio podcast, which made my runs possible (I hated running all my life, but it's really the best way to combat the negative effects of sitting behind a computer all day). But I've never actually tried producing a track within that DNB realm.

Tengu

Woohan

While I lean on samples for the beats, the bass is all the internal FM (with multiple oscillator types, not just sine) and macrosynth engines.

I’ve also been learning the ropes of Blender’s geometry nodes recently. While only scratching the surface, I created this visualizer for the track. The heavy lifting is done with baking the sound to f-curves, which is then somewhat tweaked to acceptable ranges with f-curve modifiers.

I also have to mention the absolutely bonkers amazing visual identity of the M8 project. It just couldn’t be more hip. This is also my very last gear acquisition. For sure.


geewallet 0.4.300.0 released!

Day 10 of my 21-day quarantine*! And to celebrate, I'm going to release a new version of geewallet. It's not that I blog about geewallet releases often (or blog at all, lately), but this one is a special one for me. We decided to call it 0.4.300.0.


The highlights:

  • We fixed the GTK theme for our snap package. (Long version of the story: ever since we upgraded our snap generation process to take place in Ubuntu 20.04 instead of Ubuntu 18.04, the theme stopped working so the app was not showing anymore with the default theme of the system, but with the default Gtk theme, which is very plain. Even if you might consider this issue important, we haven't had time to look at it because we've been very busy finishing Lightning support. Sorry.)
  • The chart rendering doesn't use SkiaSharp anymore, but good-old Cairo. This fixes some UI glitches that we had in the GTK frontend. (Long version: for this, we didn't just draw the chart using Cairo in our Gtk frontend, we actually wrote an implementation of the Shapes API for the Xamarin.Forms' GTK backend, and we contributed the work upstream: https://github.com/xamarin/Xamarin.Forms/pull/14235 . Hopefully they merge it soon so that we don't need to use our own forked repo/nuget anymore.)
  • Fixed a crash when pairing with a cold-storage wallet. (Long version: user might not know that pairing is only allowed against another geewallet instance; low-hanging fruit bugfix which I shouldn't have neglected for so long, I know.)
  • Fixed a crash when scanning some QR-codes that contained unknown parameters in the bitcoin URI. (Long version: I was actually in El Salvador and when trying to use a BTM, I found this bug! Apparently some BTMs here add an extraneous "chivo" param in the URI's querystring, in case the wallet being used is the one from the government; not sure why. In this case, geewallet was failing fast instead of ignoring the unexpected intruder.)
The less important (not user-facing) work:
  • Our CI now checks that our Android, macOS, and iOS frontends don't break. Previously the only frontends that we built in CI were the Gtk one (Linux) and the Console one (cross-platform, it's just terminal-based).
  • We do snap package generation in GitLab now instead of GitHub. This is good because Microsoft keeps changing the Linux VMs being used in the GitHubActions service, so we cannot keep up fixing things that just break out of the blue (they break independently from what we change in our commits, which is very confusing!). (Long version: we had to use GitHubActions because GitLabCI uses docker under the hood, and snapcraft needs systemd, which conflicts with docker; now we use a "docker in docker" approach to be able to run in GitLabCI. This also allows us to publish the snap package as an artifact in the GitLabCI pipeline, not just publish it to the Snap Store; this way, in case you somehow need a previous version in the future you can grab it from there, something you couldn't do just via snap AFAIU.)
Limitations:
  • Even though this wallet supports two ETH currencies (ETH itself, and DAI), we don't recommend their use at the moment because of the high fees and long confirmation waits these days. This is because the wallet waits for an ETH transaction to be mined (to make sure it didn't run out of gas, and if it did, report the problem to the user), but these days this wait is longer than the time-out. The short-term fix for this is either a) assume it will never run out of gas, since our address is not a contract anyway (so I guess it can never run out of gas, right? feel free to prove me wrong, my ETH knowledge is not top-notch), or b) have some UI indicating that a transaction has been sent but not accepted by the network yet (a rough sketch of this mined/out-of-gas check follows after this list). The long-term fix is to have off-chain (Layer2) technology supported by the wallet, but we don't know which technology we will choose for this, and of course we're giving priority to the first Layer2 technology: Lightning (which is only compatible with BTC and LTC). All this aside, the wallet works well with ETC (an Ethereum-compatible technology). Anyway, this doesn't worry me too much because... what is the ETH blockchain used for these days, mainly? NFTs and DeFi pyramid schemes. In case you didn't get the memo, most of the former (if not all) are scams, and the latter are mainly based on dubious centralized stablecoins (which could suffer fractional reserve and therefore cause bank runs, as Elizabeth Warren has already warned about).
  • Despite this wallet being implemented with .NET (F#), our Windows compatibility story is very poor :'-( In the past we ran into limitations of Microsoft's AOT technology used for UWP apps (required by the official process to publish in the WindowsStore). Nowadays apparently you can publish apps in the WindowsStore without these limitations, but we haven't tried again. By the time we give it another go, we might have moved to MAUI already (which means WinUI instead of UWP under the hood). As always, if this is your cup of tea, we accept MRs!
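For the curious, here is a rough sketch of the "wait for it to be mined and detect out-of-gas" check described in the ETH limitation above. This is not geewallet's actual code (geewallet is F# and doesn't use this library); it's just the general pattern in Python, assuming a recent web3.py, with the node URL and transaction hash as placeholders.

# Sketch only: poll for the receipt with a timeout, then tell apart
# "still pending", "ran out of gas" and "mined fine".
from web3 import Web3
from web3.exceptions import TimeExhausted

w3 = Web3(Web3.HTTPProvider("https://example-eth-node.invalid"))  # placeholder node

def check_mined(tx_hash, timeout=120):
    try:
        receipt = w3.eth.wait_for_transaction_receipt(tx_hash, timeout=timeout)
    except TimeExhausted:
        # The problematic case: confirmation takes longer than the time-out.
        return "pending: sent but not yet accepted by the network"
    if receipt.status == 0:
        tx = w3.eth.get_transaction(tx_hash)
        if receipt.gasUsed == tx["gas"]:
            return "failed: the transaction ran out of gas"
        return "failed"
    return "mined successfully"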
BTW on the topic of F#, I augmented my tiny C#-to-F# tutorial to include Python (so Python devs can see how it feels to switch to a more typed approach without needing to be so verbose, thanks to F# type inference!), as both languages have a very similar style (indentation based, no curly braces!). Check it out.

* And on the topic of quarantine (which was increased from 14 to 21 days for me just because of the omicron panic) I just wanted to share some rambling that is in my head: if the omicron strain is more infectious but at the same time less dangerous than the others (I think it was only yesterday that the first death happened because of it, right? at least the first one covered by the media), then wouldn't this be a good outcome? Or rather, a less bad one. I mean, if this variant becomes the most prevalent one in the pandemic, this coronavirus might actually become just the next flu, right? So: endemic, but with a much lower mortality rate. I don't know, hopefully something along these lines happens, just sharing some positive perspective! Be safe.

NB: if you're looking for this version in Android, please be aware that the validation from Google takes a bit of time, hopefully the update will be available in the Play store in less than 24h.

December 13, 2021

webassembly: the new kubernetes?

I had an "oh, duh, of course" moment a few weeks ago that I wanted to share: is WebAssembly the next Kubernetes?

katers gonna k8s

Kubernetes promises a software virtualization substrate that allows you to solve a number of problems at the same time:

  • Compared to running services on bare metal, Kubernetes ("k8s") lets you use hardware more efficiently. K8s lets you run many containers on one hardware server, and lets you just add more servers to your cluster as you need them.

  • The "cloud of containers" architecture efficiently divides up the work of building server-side applications. Your database team can ship database containers, your backend team ships java containers, and your product managers wire them all together using networking as the generic middle-layer. It cuts with the grain of Conway's law: the software looks like the org chart.

  • The container abstraction is generic enough to support lots of different kinds of services. Go, Java, C++, whatever -- it's not language-specific. Your dev teams can use what they like.

  • The operations team responsible for the k8s servers that run containers doesn't have to trust the containers it runs. There is some sandboxing and security built in.

K8s itself is an evolution of a previous architecture, OpenStack. OpenStack had each container be a full virtual machine, with a whole kernel and operating system and everything. K8s instead uses containers, which generally don't require their own kernel. The result is that they are lighter-weight -- think Docker versus VirtualBox.

In a Kubernetes deployment, you still have the kernel at a central place in your software architecture. The fundamental mechanism of containerization is the Linux kernel process, with private namespaces. These containers are then glued together by TCP and UDP sockets. However, though one or more kernel processes per container does scale better than full virtual machines, it doesn't generally scale to millions of containers. And processes do have some start-up time -- you can't spin up a container for each request to a high-performance web service. These technical constraints lead to certain kinds of system architectures, with generally long-lived components that keep some kind of state.

k8s <=? w9y

Server-side WebAssembly is in a similar space as Kubernetes -- or rather, WebAssembly is similar to processes plus private namespaces. WebAssembly gives you a good abstraction barrier and (can give) high security isolation. It's even better in some ways because WebAssembly provides "allowlist" security -- it has no capabilities to start with, requiring that the "host" that runs the WebAssembly explicitly delegate some of its own capabilities to the guest WebAssembly module. Compare to processes which by default start with every capability and then have to be restricted.
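To make the allowlist point concrete, here is a minimal sketch using the wasmtime Python bindings (just an illustration, not any particular production stack): the guest module starts with no capabilities at all, and the only thing it can do is call the single host function that the host explicitly wires in as an import.

# The guest can only call what the host hands it -- here, one logging function.
from wasmtime import Store, Module, Instance, Func, FuncType, ValType

store = Store()
module = Module(store.engine, """
  (module
    (import "host" "log" (func $log (param i32)))
    (func (export "run")
      (call $log (i32.const 42))))
""")

def host_log(x):
    print("guest says:", x)

log = Func(store, FuncType([ValType.i32()], []), host_log)
instance = Instance(store, module, [log])      # explicit capability delegation
instance.exports(store)["run"](store)          # prints "guest says: 42"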

Like Kubernetes, WebAssembly also gets you Conway's-law-affine systems. Instead of shipping containers, you ship WebAssembly modules -- and some metadata about what kinds of things they need from their environment (the 'imports'). And WebAssembly is generic -- it's a low level virtual machine that anything can compile to.

But, in WebAssembly you get a few more things. One is fast start. Because memory is data, you can arrange to create a WebAssembly module that starts with its state pre-initialized in memory. Such a module can start in microseconds -- fast enough to create one on every request, in some cases, just throwing away the state afterwards. You can run function-as-a-service architectures more effectively on WebAssembly than on containers. Another is that the virtualization is provided entirely in user-space. One process can multiplex between many different WebAssembly modules. This lets one server do more. And, you don't need to use networking to connect WebAssembly components; they can transfer data in memory, sometimes even without copying.
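Here is a sketch of that compile-once, instantiate-per-request pattern, again with the wasmtime Python bindings. It leaves aside memory pre-initialization (tools like Wizer handle that part) and just shows one process creating a cheap, throwaway instance per "request".

# One engine, one compiled module, a fresh instance per request.
from wasmtime import Engine, Store, Module, Instance

engine = Engine()
module = Module(engine, """
  (module
    (func (export "add") (param i32 i32) (result i32)
      (i32.add (local.get 0) (local.get 1))))
""")

def handle_request(a, b):
    store = Store(engine)                   # per-request state, discarded afterwards
    instance = Instance(store, module, [])  # no imports: no capabilities at all
    return instance.exports(store)["add"](store, a, b)

for request in [(1, 2), (3, 4), (5, 6)]:
    print(handle_request(*request))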

(A digression: this lightweight in-process aspect of WebAssembly makes it so that other architectures are also possible, e.g. this fun hack to sandbox a library linked into Firefox. They actually shipped that!)

I compare WebAssembly to K8s, but really it's more like processes and private namespaces. So one answer to the question as initially posed is that no, WebAssembly is not the next Kubernetes; that next thing is waiting to be built, though I know of a few organizations that have started already.

One thing does seem clear to me though: WebAssembly will be at the bottom of the new thing, and therefore the near-term trajectory of WebAssembly is likely to follow that of Kubernetes, which means...

  • Champagne time for analysts!

  • The Gartner ✨✨Magic Quadrant✨✨™®© rides again

  • IBM spins out a new WebAssembly division

  • Accenture starts asking companies about their WebAssembly migration plan

  • The Linux Foundation tastes blood in the waters

And so on. I see turbulent waters in the near-term future. So in the sense that Kubernetes is not essentially a technical piece of software but rather a nexus of frothy commercial jousting, then yes, certainly: we have a fun 5 years or so ahead of us.

December 12, 2021

Can you help with bulk storage firmware updates?

Does anyone have any examples of peripheral devices that can have their firmware upgraded by dropping a new firmware file onto a mounted volume? e.g. insert device, new disk appears, firmware file is copied over, then the firmware update completes?

Could anyone with a device that supports firmware upgrade using bulk storage please fill in my 2-minute questionnaire? I’m trying to create a UF2-compatible plugin to fwupd and need data to make sure it’s suitable for all vendors and devices. The current pull request is here, but I have no idea if it is suitable yet. Thanks!
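To make the flow concrete, here is a hypothetical sketch of what such an update looks like from the host side; the mount point and file name are made up and will differ per device and desktop.

# Hypothetical host-side flow: wait for the device's volume to appear, copy the
# firmware file onto it, and let the device reboot itself to apply the update.
import os
import shutil
import time

MOUNT_POINT = "/run/media/user/DEVICE-BOOT"   # made-up volume label
FIRMWARE = "firmware.uf2"                     # made-up file name

while not os.path.ismount(MOUNT_POINT):       # wait for the disk to appear
    time.sleep(0.5)

shutil.copy(FIRMWARE, MOUNT_POINT)
os.sync()                                     # flush; the device re-enumerates on its own
print("firmware copied; the device should now apply the update")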

December 10, 2021

Loop: A simple music application

In the last year I've seen some really good musicians who perform all the instruments in a song with just a loop machine, recording each instrument one by one into tracks and looping them.

I was thinking that it should be easy to have a desktop application that does exactly the same: just some tracks to record some sounds and play them back with a loop option. That's what I created during this week.

The Loop application is just a simple Gtk4 application that uses gstreamer to record tracks; then you can play each one at the same time, with or without the loop option. With that, a good musician could create the base melody of the song and then sing on top of that. Unfortunately, I'm not a good musician, but I can use this to play around.
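This is not Loop's actual code, but a minimal sketch of the GStreamer pattern behind the loop option: play the recorded file with playbin and seek back to the start whenever end-of-stream is reached (the file path is a placeholder).

# Loop one recorded track: on EOS, seek back to the beginning.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

player = Gst.ElementFactory.make("playbin", "track")
player.set_property("uri", Gst.filename_to_uri("/tmp/track-1.ogg"))  # placeholder file

def on_message(bus, message):
    if message.type == Gst.MessageType.EOS:
        # Instead of stopping, jump back to the start of the track.
        player.seek_simple(Gst.Format.TIME, Gst.SeekFlags.FLUSH, 0)

bus = player.get_bus()
bus.add_signal_watch()
bus.connect("message", on_message)

player.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()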

I've just created the request on flathub to add the application; if everything is okay it will be available there soon, so more people can play with this awesome toy.

Right now it has the basic record, play and loop functionality with just four tracks, so don't expect a professional music app (yet). The recording and playback times are not perfect and there's a delay; that's a known issue, but I'm planning to fix it and add more functionality in the future, like:

  • Make number of tracks configurable
  • Import track from files
  • Add a trim slider per track, to be able to adjust the recorded track for looping
  • Metronome tick and clock to have a visual reference for recording tracks
  • A record button that exports the combination of all tracks and the mic to an mp3 file

And that's all. The logo and design are an initial version done by myself, so if any designer wants to take a look, all contributions are welcome. And of course any code contribution is also welcome.

If you use this application and do some good or fun performance, please, ping me on social networks and let me know.

Cambalache 0.8.0 released!

Cambalache is a new RAD tool for Gtk 4 and 3 with a clear MVC design and data model first philosophy.

Exactly one year ago I made the first commit…

commit 51d4185cd8556f0358cc463578df5a4138ac90b5
Author: Juan Pablo Ugarte
Date: Wed Dec 9 19:14:53 2020 -0300

Initial commit

Supports type system and interface data model.
Basic history (undo/redo stack) implementation with triggers.

It consisted of just 3 files: the data model, a Python script to generate history triggers automatically, and a small SQL script to test it.
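This is not Cambalache's actual data model, but a toy sketch of the idea: every change to an object is recorded by an SQL trigger into a history table, which is what an undo/redo stack can then replay.

# Toy example: an AFTER UPDATE trigger records old and new values.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE object (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE history (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    object_id INTEGER,
    old_name TEXT,
    new_name TEXT
);
CREATE TRIGGER object_update AFTER UPDATE OF name ON object
BEGIN
    INSERT INTO history (object_id, old_name, new_name)
    VALUES (OLD.id, OLD.name, NEW.name);
END;
""")

db.execute("INSERT INTO object (id, name) VALUES (1, 'GtkWindow')")
db.execute("UPDATE object SET name = 'GtkApplicationWindow' WHERE id = 1")

# The history table now holds everything undo needs to restore.
print(db.execute("SELECT object_id, old_name, new_name FROM history").fetchall())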

Fast forward one year and I am starting to believe it can reach feature parity with Glade sooner than expected.

With a pretty solid backend in place, this release focuses on adding all the big missing UX parts, like a good type selector, workspace placeholders and basic clipboard actions like copy/paste.

Type chooser bar

Borrowing the design from Glade, I implemented a type chooser that categorizes object classes to make it easier to find what you need.

Workspace placeholders

Containers now have placeholders to make it easier to add children in a specific position.

It supports the following actions:

  • Double click on a placeholder to create a new widget in place
  • <Control> + Insert to add more placeholders
  • <Control> + Delete to remove placeholders
  • <Shift><Control> + Insert to add a new row
  • <Shift><Control> + Delete to remove a row

Translatable properties

Thanks to the work of Philipp Unger it is possible to mark properties as translatable and add comments for translators.

Clipboard actions

To make life easier, common clipboard actions like Copy, Paste, Cut and Delete were added.

Better unsupported features report

Cambalache will make its best effort to notify the user about unsupported features when importing UI files, and will export to a different filename as a precaution to avoid losing data.

Matrix channel

Have any questions? Come chat with us at #cambalache:gnome.org

Where to get it?

As always, you can get the code on GitLab

git clone https://gitlab.gnome.org/jpu/cambalache.git

or download the bundle from flathub

flatpak remote-add --user --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install --user flathub ar.xjuan.Cambalache

UPDATE: Version 0.8.1 released!


Happy coding!

December 09, 2021

PSA: The 5.17 kernel will require some initrd generator changes for kms drivers

Starting with kernel 5.17, the kernel supports the privacy screens built into the LCD panel of some new laptop models.

This means that the drm drivers will now return -EPROBE_DEFER from their probe() method on models with a builtin privacy screen when the privacy screen provider driver has not been loaded yet.

To avoid any regressions, distros should modify their initrd generation tools to include privacy screen provider drivers in the initrd (at least on systems with a privacy screen), before 5.17 kernels start showing up in their repos.

If this change is not made, then users using a graphical bootsplash (plymouth) will get an extra boot delay of up to 8 seconds (DeviceTimeout in plymouthd.defaults) before plymouth shows; and when using disk encryption where the LUKS password is requested from the initrd, the system will fall back to text mode after these 8 seconds.

I've written a patch with the necessary changes for dracut, which might be useful as an example for how to deal with this in other initrd generators, see: https://github.com/dracutdevs/dracut/pull/1666

I've also filed bugs for tracking this for Fedora, openSUSE, Arch, Debian and Ubuntu.