April 14, 2021

2021-04-14 Wednesday

  • Catch up with Tor, sales call with Eloy - prodded at a nasty mis-feature with web view mode in Online and did a code read-through of a nasty bug with Ash. M's ballet re-starting; fun.

April 13, 2021

2021-04-13 Tuesday

  • Mail chew, sync with Kendy; chased customer bugs, and contractuals; catch-up with Pedro.

April 12, 2021

Rift CV1 – Getting close now…

It’s been a while since my last post about tracking support for the Oculus Rift in February. There’s been big improvements since then – working really well a lot of the time. It’s gone from “If I don’t make any sudden moves, I can finish an easy Beat Saber level” to “You can’t hide from me!” quality.

Equally, there are still enough glitches and corner cases that I think I’ll still be at this a while.

Here’s a video from 3 weeks ago of (not me) playing Beat Saber on Expert+ setting showing just how good things can be now:

Beat Saber – Skunkynator playing Expert+, Mar 16 2021

Strap in. Here’s what I’ve worked on in the last 6 weeks:

Pose Matching improvements

Most of the biggest improvements have come from improving the computer vision algorithm that’s matching the observed LEDs (blobs) in the camera frames to the 3D models of the devices.

I split the brute-force search algorithm into 2 phases. It now does a first pass looking for ‘obvious’ matches. In that pass, it does a shallow graph search of blobs and their nearest few neighbours against LEDs and their nearest neighbours, looking for a match using a “Strong” match metric. A match is considered strong if expected LEDs match observed blobs to within 1.5 pixels.

Coupled with checks on the expected orientation (matching the gravity vector detected by the IMU) and the pose prior (expected position and orientation are within predicted error bounds), this short-circuit on the search is hit a lot of the time, and often completes within 1 frame duration.
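To make the "strong" criterion concrete, here's a minimal sketch of that 1.5 pixel test. It only illustrates the idea described above – the names, the brute-force nearest-blob loop, and the assumption that every LED predicted to be visible must have a nearby blob are mine, not the actual OpenHMD code.

#include <math.h>
#include <stdbool.h>
#include <stddef.h>

#define STRONG_MATCH_THRESHOLD_PX 1.5

typedef struct { double x, y; } point2d;

/* A candidate pose passes the "strong" test if each LED we expect to see
 * projects to within 1.5 pixels of some observed blob. */
static bool strong_match(const point2d *projected_leds, size_t n_leds,
                         const point2d *blobs, size_t n_blobs)
{
    for (size_t i = 0; i < n_leds; i++) {
        double best = INFINITY;
        for (size_t j = 0; j < n_blobs; j++) {
            double dx = projected_leds[i].x - blobs[j].x;
            double dy = projected_leds[i].y - blobs[j].y;
            double d = sqrt(dx * dx + dy * dy);
            if (d < best)
                best = d;
        }
        if (best > STRONG_MATCH_THRESHOLD_PX)
            return false; /* an expected LED has no blob close enough */
    }
    return true;
}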

In the remaining tricky cases, where a deeper graph search is required in order to recover the pose, the initial search reduces the number of LEDs and blobs under consideration, speeding up the remaining search.

I also added an LED size model to the mix – for a candidate pose, it tries to work out how large (in pixels) each LED should appear, and uses that as a bound when matching blobs to LEDs. This helps reduce mismatches as devices move further from the camera.
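As a rough illustration of the size model (not the actual implementation – the pinhole approximation, the names and the tolerance below are assumptions): the expected on-screen diameter of an LED falls off linearly with distance, and a blob is only considered for that LED if its measured size is in the right ballpark.

#include <stdbool.h>

/* Under a simple pinhole model, an LED of physical diameter d metres at
 * distance z metres appears roughly f * d / z pixels across, where f is
 * the focal length in pixels. */
static double expected_led_size_px(double focal_length_px,
                                   double led_diameter_m,
                                   double distance_m)
{
    return focal_length_px * led_diameter_m / distance_m;
}

/* Only match a blob to an LED if its size is within a generous band
 * around the expected size. */
static bool blob_size_plausible(double blob_diameter_px, double expected_px)
{
    const double tolerance = 2.0; /* illustrative factor-of-two band */
    return blob_diameter_px > expected_px / tolerance &&
           blob_diameter_px < expected_px * tolerance;
}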

LED labelling

When a brute-force search for pose recovery completes, the system now knows the identity of various blobs in the camera image. One way it avoids a search next time is to transfer the labels into future camera observations using optical-flow tracking on the visible blobs.

The problem is that, even sped up, the search can still take a few frame-durations to complete. Previously LED labels would be transferred from frame to frame as they arrived, but there's now a unique ID associated with each blob that allows the labels to be transferred even several frames later, once their identity is known.
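The mechanism is roughly the following (a sketch with made-up structure names, not the real OpenHMD types): blobs carry a stable ID that the optical-flow tracker maintains from frame to frame, so when a search that started several frames ago finally reports its LED labels, they can be applied to whichever of those blobs are still visible in the current frame.

#include <stddef.h>

struct blob {
    int id;        /* stable ID assigned when the blob first appears */
    int led_label; /* index of the matched LED, or -1 while unknown */
    float x, y;
};

struct label_result {
    int blob_id;   /* which blob the (possibly old) search identified */
    int led_label;
};

/* Apply labels from a completed search to the current frame's blobs.
 * Blobs that have since disappeared are simply skipped. */
static void apply_labels(struct blob *blobs, size_t n_blobs,
                         const struct label_result *results, size_t n_results)
{
    for (size_t r = 0; r < n_results; r++) {
        for (size_t b = 0; b < n_blobs; b++) {
            if (blobs[b].id == results[r].blob_id) {
                blobs[b].led_label = results[r].led_label;
                break;
            }
        }
    }
}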

IMU Gyro scale

One of the problems with reverse engineering is the guesswork around exactly what different values mean. I was looking into why the controller movement felt “swimmy” under fast motions, and one thing I found was that the interpretation of the gyroscope readings from the IMU was incorrect.

The touch controllers report IMU angular velocity readings directly as a 16-bit signed integer. Previously the code would take the reading and divide by 1024 and use the value as radians/second.

From teardowns of the controller, I know the IMU is an Invensense MPU-6500. From the datasheet, the reported value is actually in degrees per second and appears to be configured for the +/- 2000 °/s range. That yields a calculation of Gyro-rad/s = Gyro-°/s * (2000 / 32768) * (π/180) – or a divisor of 938.734.

The 1024 divisor was under-estimating rotation speed by about 10% – close enough to work until you start moving quickly.
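For reference, here's the corrected conversion spelled out as code. This is just the arithmetic from the paragraph above in standalone form; the function name is mine, not the driver's.

#include <math.h>
#include <stdint.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* MPU-6500 configured for the +/- 2000 deg/s range: one LSB is
 * 2000/32768 deg/s, which we then convert to rad/s. */
static double gyro_raw_to_rad_per_sec(int16_t raw)
{
    return raw * (2000.0 / 32768.0) * (M_PI / 180.0);
}

int main(void)
{
    /* Equivalent divisor: 32768 / 2000 * 180 / pi = ~938.734 */
    printf("divisor = %.3f\n", 32768.0 / 2000.0 * 180.0 / M_PI);
    printf("raw 938 -> %.4f rad/s\n", gyro_raw_to_rad_per_sec(938));
    return 0;
}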

Limited interpolation

If we don’t find a device in the camera views, the fusion filter predicts motion using the IMU readings – but that quickly becomes inaccurate. In the worst case, the controllers fly off into the distance. To avoid that, I added a limit of 500ms for ‘coasting’. If we haven’t recovered the device pose by then, the position is frozen in place and only rotation is updated until the cameras find it again.
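A minimal sketch of that coasting limit (field and function names are hypothetical, not the fusion filter's real API):

#include <stdbool.h>
#include <stdint.h>

#define MAX_COAST_NS (500LL * 1000 * 1000) /* 500 ms */

struct device_track {
    int64_t last_observed_ns; /* last time a camera confirmed the pose */
    bool position_frozen;
};

/* Called each update: if the cameras haven't seen the device for more
 * than 500 ms, freeze its position and keep updating orientation only. */
static void update_coasting(struct device_track *dev, int64_t now_ns)
{
    dev->position_frozen = (now_ns - dev->last_observed_ns) > MAX_COAST_NS;
}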

Exponential filtering

I implemented a 1-Euro exponential smoothing filter on the output poses for each device. This is an idea from the Project Esky driver for Project North Star/Deck-X AR headsets, and almost completely eliminates jitter in the headset view and hand controllers shown to the user. The tradeoff is against introducing lag when the user moves quickly – but there are some tunables in the exponential filter to play with for minimising that. For now I’ve picked some values that seem to work reasonably.
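For anyone unfamiliar with it, the 1-Euro filter is an exponential low-pass whose cutoff frequency rises with the estimated speed of the signal – low cutoff (heavy smoothing) when nearly still, higher cutoff (less lag) during fast motion. Here's a compact single-channel version following the published recipe; it's a generic sketch rather than the code used in the driver, and min_cutoff, beta and d_cutoff are the standard tunables of that recipe, i.e. the kind of values being referred to above.

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

struct one_euro {
    double min_cutoff; /* Hz: smoothing floor when the signal is still */
    double beta;       /* how quickly the cutoff rises with speed */
    double d_cutoff;   /* Hz: cutoff used for the derivative estimate */
    double x_prev, dx_prev;
    int initialised;
};

static double smoothing_alpha(double cutoff_hz, double dt)
{
    double tau = 1.0 / (2.0 * M_PI * cutoff_hz);
    return 1.0 / (1.0 + tau / dt);
}

/* Filter one sample; dt is the time since the previous sample in seconds.
 * For a pose, run one of these per position axis / orientation component. */
static double one_euro_filter(struct one_euro *f, double x, double dt)
{
    if (!f->initialised) {
        f->x_prev = x;
        f->dx_prev = 0.0;
        f->initialised = 1;
        return x;
    }

    /* Estimate and smooth the rate of change of the signal. */
    double dx = (x - f->x_prev) / dt;
    double a_d = smoothing_alpha(f->d_cutoff, dt);
    double dx_hat = a_d * dx + (1.0 - a_d) * f->dx_prev;

    /* Faster motion -> higher cutoff -> less smoothing, less lag. */
    double cutoff = f->min_cutoff + f->beta * fabs(dx_hat);
    double a = smoothing_alpha(cutoff, dt);
    double x_hat = a * x + (1.0 - a) * f->x_prev;

    f->x_prev = x_hat;
    f->dx_prev = dx_hat;
    return x_hat;
}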

Non-blocking radio

Communications with the touch controllers happens through USB radio command packets sent to the headset. The main use of radio commands in OpenHMD is to read the JSON configuration block for each controller that is programmed in at the factory. The configuration block provides the 3D model of LED positions as well as initial IMU bias values.

Unfortunately, reading the configuration block takes a couple of seconds on startup, and blocks everything while it’s happening. Oculus saw that problem and added a checksum in the controller firmware. You can read the checksum first and if it hasn’t changed use a local cache of the configuration block. Eventually, I’ll implement that caching mechanism for OpenHMD but in the meantime it still reads the configuration blocks on each startup.

As an interim improvement I rewrote the radio communication logic to use a state machine that is checked in the update loop – allowing radio communications to be interleaved without blocking the regular processing of events. It still interferes a bit, but no longer causes a full multi-second stall as each hand controller turns on.
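As an illustration of the shape of that change (states and field names invented for the sketch, not the real implementation), each pending radio transfer becomes a little state machine that does a bounded amount of work every time the update loop polls it:

#include <stdbool.h>

enum radio_state {
    RADIO_IDLE,
    RADIO_SEND_COMMAND,
    RADIO_AWAIT_REPLY,
    RADIO_HANDLE_REPLY,
    RADIO_DONE,
};

struct radio_xfer {
    enum radio_state state;
    int offset; /* how much of the config block has been read so far */
    int total;  /* total size of the block being read */
};

/* Called once per update loop iteration; never blocks. Returns true
 * while the transfer is still in progress. */
static bool radio_update(struct radio_xfer *xfer)
{
    switch (xfer->state) {
    case RADIO_SEND_COMMAND:
        /* queue the next read command over USB and move on */
        xfer->state = RADIO_AWAIT_REPLY;
        return true;
    case RADIO_AWAIT_REPLY:
        /* poll for the reply; stay here until it arrives */
        xfer->state = RADIO_HANDLE_REPLY;
        return true;
    case RADIO_HANDLE_REPLY:
        /* copy the received chunk, then continue or finish */
        xfer->offset += 64; /* e.g. one report's worth of payload */
        xfer->state = (xfer->offset >= xfer->total) ? RADIO_DONE
                                                    : RADIO_SEND_COMMAND;
        return true;
    case RADIO_IDLE:
    case RADIO_DONE:
    default:
        return false;
    }
}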

Haptic feedback

The hand controllers have haptic feedback ‘rumble’ motors that really add to the immersiveness of VR by letting you sense collisions with objects. Until now, OpenHMD hasn’t had any support for applications to trigger haptic events. I spent a bit of time looking at USB packet traces with Philipp Zabel and we figured out the radio commands to turn the rumble motors on and off.

In the Rift CV1, the haptic motors have a mode where you schedule feedback events into a ringbuffer – effectively they operate like a low frequency audio device. However, that mode was removed for the Rift S (and presumably in the Quest devices) – and deprecated for the CV1.

With that in mind, I aimed to implement the unbuffered mode, with explicit ‘motor on + frequency + amplitude’ and ‘motor off’ commands sent as needed. Thanks to already having rewritten the radio communications to use a state machine, adding haptic commands was fairly easy.

The big question mark is around what API OpenHMD should provide for haptic feedback. I’ve implemented something simple for now, to get some discussion going. It works really well and adds hugely to the experience. That code is in the https://github.com/thaytan/OpenHMD/tree/rift-haptics branch, with a SteamVR-OpenHMD branch that uses it in https://github.com/thaytan/SteamVR-OpenHMD/tree/controller-haptics-wip

Problem areas

Unexpected tracking losses

I’d say the biggest problem right now is unexpected tracking losses and incorrect pose extractions. In particular, my right controller will suddenly glitch and start jumping around. Looking at a video of the debug feed, it’s not obvious why that’s happening.

To fix cases like those, I plan to add code to log the raw video feed and the IMU information together so that I can replay the video analysis frame-by-frame and investigate glitches systematically. Those recordings will also work as a regression suite to test future changes.

Sensor fusion efficiency

The Kalman filter I have implemented works really nicely – it does the latency compensation, predicts motion and extracts sensor biases all in one place… but it has a big downside: it is quite expensive in CPU time. The cost of an Unscented Kalman Filter grows at O(n^3) with the size of the state, and the state in this case is 43-dimensional – 22 base dimensions, plus 7 per latency-compensation slot. Running 1000 updates per second for the HMD and 500 for each of the hand controllers adds up quickly.
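To put rough numbers on that (assuming three latency-compensation slots, so 22 + 3 × 7 = 43 state dimensions, and treating n³ as the unit of work per update):

#include <stdio.h>

int main(void)
{
    const int base_dims = 22, per_slot = 7, slots = 3; /* 22 + 3*7 = 43 */
    const int n = base_dims + per_slot * slots;
    const long updates_per_sec = 1000 + 500 + 500;     /* HMD + 2 controllers */

    printf("state size n = %d, sigma points per update = %d\n", n, 2 * n + 1);
    printf("relative cost: ~%ld 'n^3' work units per second\n",
           updates_per_sec * (long)n * n * n);
    return 0;
}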

At some point, I want to find a better / cheaper approach to the problem that still provides low-latency motion predictions for the user while still providing the same benefits around latency compensation and bias extraction.

Lens Distortion

To generate a convincing illusion of objects at a distance in a headset that’s only a few centimetres deep, VR headsets use some interesting optics. The images shown on the LCD/OLED panels are heavily distorted by the lenses before they reach the user’s eyes, so the software needs to compensate by applying the right inverse distortion to the video it generates.

Everyone that tests the CV1 notices that the distortion is not quite correct. As you look around, the world warps and shifts annoyingly. Sooner or later that needs fixing. That’s done by taking photos of calibration patterns through the headset lenses and generating a distortion model.

Camera / USB failures

The camera feeds are captured using a custom user-space UVC driver implementation that knows how to set up the special synchronisation settings of the CV1 and DK2 cameras, and then repeatedly schedules isochronous USB packet transfers to receive the video.

Occasionally, some people experience failure to re-schedule those transfers. The kernel rejects them with an out-of-memory error failing to set aside DMA memory (even though it may have been running fine for quite some time). It’s not clear why that happens – but the end result at the moment is that the USB traffic for that camera dies completely and there’ll be no more tracking from that camera until the application is restarted.

Often once it starts happening, it will keep happening until the PC is rebooted and the kernel memory state is reset.

Occluded cases

Tracking generally works well when the cameras get a clear shot of each device, but there are cases like sighting down the barrel of a gun where we expect that the user will line up the controllers in front of one another, and in front of the headset. In that case, even though we probably have a good idea where each device is, it can be hard to figure out which LEDs belong to which device.

If we already have a good tracking lock on the devices, I think it should be possible to keep tracking even down to 1 or 2 LEDs being visible – but the pose assessment code will have to be aware that’s what is happening.

Upstreaming

April 14th marks 2 years since I first branched off OpenHMD master to start working on CV1 tracking. How hard can it be, I thought? I’ll knock this over in a few months.

Since then I’ve accumulated over 300 commits on top of OpenHMD master that eventually all need upstreaming in some way.

One thing people have expressed as a prerequisite for upstreaming is to try and remove the OpenCV dependency. The tracking relies on OpenCV to do camera distortion calculations, and for their PnP implementation. It should be possible to reimplement both of those directly in OpenHMD with a bit of work – possibly using the fast LambdaTwist P3P algorithm that Philipp Zabel wrote, that I’m already using for pose extraction in the brute-force search.

Others

I’ve picked the top issues to highlight here. https://github.com/thaytan/OpenHMD/issues has a list of all the other things that are still on the radar for fixing eventually.

Other Headsets

At some point soon, I plan to put a pin in the CV1 tracking and look at adapting it to more recent inside-out headsets like the Rift S and WMR headsets. I implemented 3DOF support for the Rift S last year, but getting to full positional tracking for that and other inside-out headsets means implementing a SLAM/VIO tracking algorithm to track the headset position.

Once the headset is tracking, the code I’m developing here for CV1 to find and track controllers will hopefully transfer across – the difference with inside-out tracking is that the cameras move around with the headset. Finding the controllers in the actual video feed should work much the same.

Sponsorship

This development happens mostly in my spare time and partly as open source contribution time at work at Centricular. I am accepting funding through Github Sponsorships to help me spend more time on it – I’d really like to keep helping Linux have top-notch support for VR/AR applications. Big thanks to the people that have helped get this far.

April 11, 2021

guile's reader, in guile

Good evening! A brief(ish?) note today about some Guile nargery.

the arc of history

Like many language implementations that started life when you could turn on the radio and expect to hear Def Leppard, Guile has a bottom half and a top half. The bottom half is written in C and exposes a shared library and an executable, and the top half is written in the language itself (Scheme, in the case of Guile) and somehow loaded by the C code when the language implementation starts.

Since 2010 or so we have been working at replacing bits written in C with bits written in Scheme. Last week's missive was about replacing the implementation of dynamic-link from using the libltdl library to using Scheme on top of a low-level dlopen wrapper. I've written about rewriting eval in Scheme, and more recently about how the road to getting the performance of C implementations in Scheme has been sometimes long.

These rewrites have a quixotic aspect to them. I feel something in my gut about rightness and wrongness and I know at a base level that moving from C to Scheme is the right thing. Much of it is completely irrational and can be out of place in a lot of contexts -- like if you have a task to get done for a customer, you need to sit and think about minimal steps from here to the goal and the gut doesn't have much of a role to play in how you get there. But it's nice to have a project where you can do a thing in the way you'd like, and if it takes 10 years, that's fine.

But besides the ineffable motivations, there are concrete advantages to rewriting something in Scheme. I find Scheme code to be more maintainable, yes, and more secure relative to the common pitfalls of C, obviously. It decreases the amount of work I will have when one day I rewrite Guile's garbage collector. But also, Scheme code gets things that C can't have: tail calls, resumable delimited continuations, run-time instrumentation, and so on.

Taking delimited continuations as an example, five years ago or so I wrote a lightweight concurrency facility for Guile, modelled on Parallel Concurrent ML. It lets millions of fibers exist on a system. When a fiber would need to block on an I/O operation (read or write), instead it suspends its continuation, and arranges to restart it when the operation becomes possible.

A lot had to change in Guile for this to become a reality. Firstly, delimited continuations themselves. Later, a complete rewrite of the top half of the ports facility in Scheme, to allow port operations to suspend and resume. Many of the barriers to resumable fibers were removed, but the Fibers manual still names quite a few.

Scheme read, in Scheme

Which brings us to today's note: I just rewrote Guile's reader in Scheme too! The reader is the bit that takes a stream of characters and parses it into S-expressions. It was in C, and now is in Scheme.

One of the primary motivators for this was to allow read to be suspendable. With this change, read-eval-print loops are now implementable on fibers.

Another motivation was to finally fix a bug in which Guile couldn't record source locations for some kinds of datums. It used to be that Guile would use a weak-key hash table to associate datums returned from read with source locations. But this only works for fresh values, not for immediate values like small integers or characters, nor does it work for globally unique non-immediates like keywords and symbols. So for these, we just wouldn't have any source locations.

A robust solution to that problem is to return annotated objects rather than using a side table. Since Scheme's macro expander is already set to work with annotated objects (syntax objects), a new read-syntax interface would do us a treat.

With read in C, this was hard to do. But with read in Scheme, it was no problem to implement. Adapting the expander to expect source locations inside syntax objects was a bit fiddly, though, and the resulting increase in source location information makes the output files bigger by a few percent -- due somewhat to the increased size of the .debug_lines DWARF data, but also due to serialized source locations for syntax objects in macros.

Speed-wise, switching to read in Scheme is a regression, currently. The old reader could parse around 15 or 16 megabytes per second when recording source locations on this laptop, or around 22 or 23 MB/s with source locations off. The new one parses more like 10.5 MB/s, or 13.5 MB/s with positions off, when in the old mode where it uses a weak-key side table to record source locations. The new read-syntax runs at around 12 MB/s. We'll be noodling at these in the coming months, but unlike when the original reader was written, at least now the reader is mainly used only at compile time. (It still has a role when reading s-expressions as data, so there is still a reason to make it fast.)

As is the case with eval, we still have a C version of the reader available for bootstrapping purposes, before the Scheme version is loaded. Happily, with this rewrite I was able to remove all of the cruft from the C reader related to non-default lexical syntax, which simplifies maintenance going forward.

An interesting aspect of attempting to make a bug-for-bug rewrite is that you find bugs and unexpected behavior. For example, it turns out that since the dawn of time, Guile always read #t and #f without requiring a terminating delimiter, so reading "(#t1)" would result in the list (#t 1). Weird, right? Weirder still, when the #true and #false aliases were added to the language, Guile decided to support them by default, but in an oddly backwards-compatible way... so "(#false1)" reads as (#f 1) but "(#falsa1)" reads as (#f alsa1). Quite a few more things like that.

All in all it would seem to be a successful rewrite, introducing no new behavior, even producing the same errors. However, this is not the case for backtraces, which can expose the guts of read in cases where that previously wouldn't happen because the C stack was opaque to Scheme. Probably we will simply need to add more sensible error handling around callers to read, as a backtrace isn't a good user-facing error anyway.

OK enough rambling for this evening. Happy hacking to all and to all a good night!

April 10, 2021

Converting a project to Meson: the olc Pixel Game Engine

Meson's development has always been fairly practical, focusing on solving real-world problems people have. One simple way of doing that is taking existing projects, converting their build systems to Meson and seeing how long it takes, what pain points there are and whether there are missing features. Let's look at one such conversion.

We'll use the One Lone Coder Pixel Game Engine. It is a simple but fairly well featured game engine that is especially suitable for beginners. It also has a very informative YouTube channel teaching people how to write their own computer games. The engine is implemented as a single C++ header and the idea is that you can just copy it into your own project, #include it and start coding your game. Even though the basic idea is simple, there are still some build challenges:

  • On Windows there are no dependencies, it uses only builtin OS functionality but you need to set up a Visual Studio project (as that is what most beginners tend to use)
  • On Linux you need to use system dependencies for the X server, OpenGL and libpng
  • On macOS you need to use both OpenGL and GLUT for the GUI and in addition obtain libpng either via a prebuilt library or by building it yourself from source

The Meson setup

The Meson build definition that provides all of the above is 25 lines long. We'll only look at the basic setup, but the whole file is available in this repo for those who want all the details. The most important bit is looking up dependencies, which looks like this:

external_deps = [dependency('threads'), dependency('gl')]
cpp = meson.get_compiler('cpp')

if host_machine.system() == 'windows'
  # no platform specific deps are needed
elif host_machine.system() == 'darwin'
  external_deps += [dependency('libpng'),
                    dependency('appleframeworks', modules: ['GLUT'])]
else
  external_deps += [dependency('libpng'),
                    dependency('x11'),
                    cpp.find_library('stdc++fs', required: false)]
endif

The first two dependencies are the same on all platforms and use Meson's builtin functionality for enabling threading support and OpenGL. After that we add platform specific dependencies as described above. The stdc++fs one is only needed if you want to build on Debian stable or Raspberry Pi OS, as their C++ standard library is old and does not have the filesystem parts enabled by default. If you only support new OS versions, then that dependency can be removed.

The interesting bit here is libpng. As macOS does not provide it as part of the operating system, we need to build it ourselves. This can be accomplished easily by using Meson's WrapDB service for package management. Adding this dependency is a matter of going to your project's source root, ensuring that a directory called subprojects exists and running the following command:

meson wrap install libpng

This creates a wrap file that Meson can use to download and build the dependency as needed. That's all that is needed. Now anyone can fork the repo, edit the sample program source file and get going writing their own game.

Bonus Xcode support

Unlike the Ninja and Visual Studio backends, the Xcode backend has always been a bit ...  crappy, to be honest. Recently I have started work on bringing it up to feature parity with the other backends. There is still a lot of work to be done, but it is now good enough that you can build and debug applications on macOS. Here is an example of the Pixel engine sample application running under the debugger in Xcode.


This functionality will be available in the next release (no guarantees on feature completeness) but the impatient among you can try it out using Meson's Git trunk.

Calliope, slowly building steam

I wrote in December about Calliope, a small toolkit for building music recommendations. It can also be used for some automation tasks.

I added a bandcamp module which lists albums in your Bandcamp collection. I sometimes buy albums and then don’t download them, because maybe I forgot or I wasn’t at home when I bought them. So I want to compare my Bandcamp collection against my local music collection and check if something is missing. Here’s how I did it:

# Albums in your online collection that are missing from your local collection.

ONLINE_ALBUMS="cpe bandcamp --user ssssam collection"
LOCAL_ALBUMS="cpe tracker albums"
#LOCAL_ALBUMS="cpe beets albums"

cpe diff --scope=album <($ONLINE_ALBUMS | cpe musicbrainz resolve-ids -) <($LOCAL_ALBUMS) 


Like all things in Calliope this outputs a playlist as a JSON stream, in this case, a list of all the albums I need to download:

{
  "album": "Take Her Up To Monto",
  "bandcamp.album_id": 2723242634,
  "location": "https://roisinmurphy.bandcamp.com/album/take-her-up-to-monto",
  "creator": "Róisín Murphy",
  "bandcamp.artist_id": "423189696",
  "musicbrainz.artist_id": "4c56405d-ba8e-4283-99c3-1dc95bdd50e7",
  "musicbrainz.release_id": "0a79f6ee-1978-4a4e-878b-09dfe6eac3f5",
  "musicbrainz.release_group_id": "d94fb84a-2f38-4fbb-971d-895183744064"
}
{
  "album": "LA OLA INTERIOR Spanish Ambient & Acid Exoticism 1983-1990",
  "bandcamp.album_id": 3275122274,
  "location": "https://lesdisquesbongojoe.bandcamp.com/album/la-ola-interior-spanish-ambient-acid-exoticism-1983-1990",
  "creator": "Various Artists",
  "bandcamp.artist_id": "3856789729",
  "meta.warnings": [
    "musicbrainz: Unable to find release on musicbrainz"
  ]
}

There are some interesting complexities to this, and in 12 hours of hacking I didn’t solve them all. Firstly, Bandcamp artist and album names are not normalized. Some artist names have spurious “The”, some album names have “(EP)” or “(single)” appended, so they don’t match your tags. These details are of interest only to librarians, but how can software tell the difference?

The simplest approach is to use Musicbrainz, specifically cpe musicbrainz resolve-ids. By comparing IDs where possible we get mostly good results. There are many albums not on Musicbrainz, though, which for now turn up as false positives. Resolving Musicbrainz IDs is a tricky process, too — how do we distinguish Multi-Love (album) from Multi-Love (single) if we only have an album name?
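As a sketch of that matching logic (an illustration of the approach, not Calliope’s actual code – the normalisation rules and names are assumptions): compare Musicbrainz release IDs when both sides have one, and only fall back to a normalised album title otherwise.

#include <ctype.h>
#include <stdbool.h>
#include <string.h>

/* Lower-case the title and strip a couple of suffixes that Bandcamp
 * tends to add but local tags usually lack. */
static void normalise_title(const char *in, char *out, size_t out_len)
{
    size_t j = 0;
    for (size_t i = 0; in[i] != '\0' && j + 1 < out_len; i++)
        out[j++] = (char)tolower((unsigned char)in[i]);
    out[j] = '\0';

    const char *suffixes[] = { " (ep)", " (single)" };
    for (size_t s = 0; s < sizeof suffixes / sizeof suffixes[0]; s++) {
        size_t slen = strlen(suffixes[s]);
        size_t len = strlen(out);
        if (len >= slen && strcmp(out + len - slen, suffixes[s]) == 0)
            out[len - slen] = '\0';
    }
}

/* Prefer Musicbrainz release IDs when both records carry one; otherwise
 * fall back to comparing normalised titles, which is less reliable. */
static bool albums_match(const char *mbid_a, const char *mbid_b,
                         const char *title_a, const char *title_b)
{
    if (mbid_a && mbid_b)
        return strcmp(mbid_a, mbid_b) == 0;

    char na[256], nb[256];
    normalise_title(title_a, na, sizeof na);
    normalise_title(title_b, nb, sizeof nb);
    return strcmp(na, nb) == 0;
}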

If you want to try it out, great! It’s still aimed at hackers — you’ll have to install from source with Meson and probably fix some bugs along the way. Please share the fixes!

April 09, 2021

New Shortwave release

Ten months later, after 14,330 added and 8,634 deleted lines, Shortwave 2.0 is available! It sports new features and, as always, comes with many improvements and bugfixes.

Optimised user interface

The user interface got ported from GTK3 to GTK4. During this process, many elements were improved or recreated from scratch. For example, the station detail dialog window got completely overhauled:

New station details dialog

Huge thanks to Maximiliano, who did the initial port to GTK4!

Adaptive interface – taken to the next level

New mini player window mode

Shortwave has been designed from the beginning to handle any screen size. In version 2.0 we have been able to improve this even further: there is now a compact mini player for desktop screens, which still offers access to the most important functions in a tiny window.

Other noteworthy changes

  • New desktop notifications that inform you about new songs.
  • Improved keyboard navigation in the user interface.
  • Inhibit sleep/hibernate mode during audio playback.

Download

Shortwave is available to download from Flathub:

April 08, 2021

sign of the times

Hello all! There is a mounting backlog of things that landed in Guile recently and to avoid having to eat the whole plate in one bite, I'm going to try to send some shorter missives over the next weeks.

Today's is about a silly thing, dynamic-link. This interface is dlopen, but "portable". See, back in the day -- like, 1998 -- there were lots of kinds of systems and how to make and load a shared library portably was hard. You'd have people with AIX and Solaris and all kinds of weird compilers and linkers filing bugs on your project if you hard-coded a GNU toolchain invocation when creating loadable extensions, or hard-coded dlopen or similar to use them.

Libtool provided a solution to create portable loadable libraries, which involved installing .la files alongside the .so files. You could use libtool to link them to a library or an executable, or you could load them at run-time via the libtool-provided libltdl library.

But, the .la files were a second source of truth, and thus a source of bugs. If a .la file is present, so is an .so file, and you could always just use the .so file directly. For linking against an installed shared library on modern toolchains, the .la files are strictly redundant. Therefore, all GNU/Linux distributions just delete installed .la files -- Fedora, Debian, and even Guix do so.

Fast-forward to today: there has been a winnowing of platforms, and a widening of the GNU toolchain (in which I include LLVM as well, as it has a mostly-compatible interface). The only remaining toolchain flavors are GNU and Windows, from the point of view of creating loadable shared libraries. Whether you use libtool or not to create shared libraries, the result can be loaded either way. And from the user side, dlopen is the universally supported interface, outside of Windows; even Mac OS fixed their implementation a few years back.

So in Guile we have been in an unstable equilibrium: creating shared libraries by including a probably-useless libtool into the toolchain, and loading them by using a probably-useless libtool-provided libltdl.

But the use of libltdl has not been without its costs. Because libltdl intends to abstract over different platforms, it encourages you to leave off the extension when loading a library, instead promising to try a platform-specific set such as .so, .dll, .dylib etc as appropriate. In practice the abstraction layer was under-maintained and we always had some problems on Mac OS, for example.

Worse, as ltdl would search through the path for candidates, it would only report the last error it saw from the underlying dlopen interface. It was almost always the case that if A and B were in the search path, and A/foo.so failed to load because of a missing dependency, the error you would get as a user would instead be "file not found", because ltdl swallowed the first error and kept trucking to try to load B/foo.so which didn't exist.

In summary, this is a case where the benefits of an abstraction layer decline over time. For a few years now, libltdl hasn't been paying for itself. Libtool is dead, for all intents and purposes (last release in 2015); best to make plans to migrate away, somehow.

In the case of the dlopen replacement, in Guile we ended up rewriting the functionality in Scheme. The underlying facility is now just plain dlopen, for which we shim a version of dlopen on Windows, inspired by the implementation in cygwin. There are still platform-specific library extensions, but that is handled easily on the Scheme layer.

Looking forward, I think it's probably time to replace Guile's use of libtool to create its libraries and executables. I loathe the fact that libtool puts shell scripts in the place of executables in build directories and stashes the actual executables elsewhere -- like, visceral revulsion. There is no need for that nowadays. Not sure what to replace it with, nor on what timeline.

And what about autotools? That, my friends, would be a whole nother blog post. Until then, & probably sooner, happy hacking!

dLeyna updates and general project things

I have too many projects. That’s why I also picked up dLeyna, which was lying around looking a bit unloved, smacked fixes, GUPnP 1.2 and meson support on top, and made new releases. They are available at:

  • https://github.com/phako/dleyna-core/archive/refs/tags/v0.7.0.tar.gz
  • https://github.com/phako/dleyna-connector-dbus/archive/refs/tags/v0.4.0.tar.gz
  • https://github.com/phako/dleyna-server/archive/refs/tags/v0.7.0.tar.gz
  • https://github.com/phako/dleyna-renderer/archive/refs/tags/v0.7.0.tar.gz

Furthermore I have filed an issue on upstream’s dLeyna-core component asking for the project to be transferred to GNOME infrastructure officially (https://github.com/intel/dleyna-core/issues/55).

As for all the other things I do. I was trying to write many versions of this blog post and each one sounded like an apology, which sounded wrong. Ever since I changed jobs in 2018 I’m much more involved in coding during my work-time again and that seems to suck my “mental code reservoir” dry, meaning I have very little motivation to spend much time on designing and adding features, limiting most of the work on Rygel, GUPnP and Shotwell to the bare minimum. And I don’t know how and when this might change.

Deploy Strapi on VPS with Ubuntu, MySQL

So you have built your Strapi project, and the next thing you need to do is deploy it on a production server. In this blog, we will learn how to set up a Virtual Private Server (VPS) and then deploy our Strapi application. This guide applies to any VPS provider, such as Linode, DigitalOcean and many more.

Steps

We will be using Hostinger VPS Plan 1 with Ubuntu 20.04. Make sure to follow along step by step.

Replace all the values in <> with your own values.

1. Create a non-root user

It is a good idea to create a non-root user with sudo privileges. All the commands will be run through this user. The first step is to log in as the root user.

ssh root@<VPSIPADDRESS>

The first thing to do on a new machine is to update the packages and remove all the older ones.

sudo apt update -y && sudo apt upgrade -y && sudo apt autoremove -y

Now our machine is up to date and we need to create a new user and log out of the session.

adduser <NEWUSER>
usermod -aG sudo <NEWUSER>
exit

2. Create a Public-Private key pair

It is great to use Public-Private key pair to SSH into a remote server. We can create a new one using

ssh-keygen

or

copy the existing one’s public key onto our clipboard

xclip -selection clipboard -in ~/.ssh/hostinger_rsa.pub

3. SSH as a new user

Let us now SSH as a new user using password authentication.

ssh <NEWUSER>@<VPSIPADDRESS>

Once we are logged in, we need to create a directory for the new user identified by its name.

mkdir -p <NEWUSER>
cd <NEWUSER>

4. Add SSH key to authorized keys

Let us register our public key by adding it to our authorized keys so that we can log in using a private key.

mkdir -p ~/.ssh/
echo "<COPIED_PUBLIC_KEY>" >> ~/.ssh/authorized_keys

5. Configure Firewall

Now is the time to set up a firewall. A firewall is essential when setting up a VPS, to restrict unwanted traffic going into or out of it. Let us install ufw and configure the firewall to allow SSH connections.

sudo apt install ufw -y
sudo ufw allow OpenSSH
sudo ufw enable
sudo ufw status

6. Remove Apache and Install Nginx

Nginx is a much better server than Apache. It is lightweight, easy to set up and allows us to set up proxies. Before installing Nginx, we need to remove Apache, which is available by default in Ubuntu 20.04.

sudo systemctl stop apache2 && sudo systemctl disable apache2
sudo apt remove apache2 -y && sudo rm /var/www/html/index.html
sudo apt autoremove -y

Let us now install Nginx.

sudo apt install nginx -y
sudo systemctl start nginx
sudo systemctl status nginx

Let us configure the firewall to allow HTTP and HTTPS traffic to pass through it.

sudo ufw allow 'Nginx Full'
sudo ufw enable
sudo ufw status

We can also allow either HTTPS or HTTP by using

sudo ufw allow 'Nginx HTTP'
# or 
sudo ufw allow 'Nginx HTTPS'

To verify Nginx installation, visit the IP address of the VPS.
You can find IP address by curl -4 icanhazip.com.

7. Install Node using NVM

Let us install Node.js using nvm. The following commands find the latest nvm version and install it:

nvmversion=$(curl --silent "https://api.github.com/repos/nvm-sh/nvm/releases/latest" | grep '"tag_name":' | sed -E 's/.*"([^"]+)".*/\1/')
curl -o- "https://raw.githubusercontent.com/nvm-sh/nvm/$nvmversion/install.sh" | bash

Some npm packages need common build tooling to compile native code, so we will install build-essential.

sudo apt install build-essential -y

8. Install PM2

PM2 is a Process Manager built for production-level applications. It can help us run our Strapi application when the server restarts. It can watch for file changes and restart the server automatically for us.

npm i -g pm2@latest
cd ~
pm2 startup systemd

Follow the rest of the instructions as specified on the terminal and then do pm2 save.

9. Install Database (MariaDB/MySQL)

MariaDB is a fork of MySQL with lots of performance gains. Let us install our database server by running:

sudo apt install mariadb-server -y
sudo mysql_secure_installation

Follow all the instructions and complete the setup.

Once our database server is installed, we need to create a non-root user with root privileges for database operations.

sudo mariadb
GRANT ALL ON *.* TO '<NEWUSER>'@'localhost' IDENTIFIED BY '<PASSWORD>' WITH GRANT OPTION;
FLUSH PRIVILEGES;
EXIT;

Once this is done, we need to restart our database server.

sudo systemctl restart mariadb

10. Create Nginx Server Blocks

It is always a good idea to use server blocks rather than change the default Nginx configuration. This helps when we decide to host multiple websites on the same server. To create a server block, we need to do

sudo nano /etc/nginx/sites-available/<YOUR_DOMAIN>

Add the following config

upstream <YOUR_DOMAIN> {
  server 127.0.0.1:1337;
  keepalive 64;
}

server {
  server_name <YOUR_DOMAIN>;
  access_log /var/log/nginx/<YOUR_DOMAIN>-access.log;
  error_log /var/log/nginx/<YOUR_DOMAIN>-error.log;
  location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://<YOUR_DOMAIN>;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_cache_bypass $http_upgrade;
  }
}

server {
  listen 80;
  server_name <YOUR_DOMAIN>;
}

This configuration assumes that our Strapi application will be running at 127.0.0.1:1337.

Once our configuration is set up, we need to enable our website by creating a symbolic link for our configuration file.

sudo ln -s /etc/nginx/sites-available/<YOUR_DOMAIN> /etc/nginx/sites-enabled/

This is optional, but if we are serving multiple domains or subdomains from our server, we need to edit our nginx.conf by uncommenting the line server_names_hash_bucket_size 64; using

sudo nano /etc/nginx/nginx.conf

Let us quickly check that our configuration is error-free by doing

sudo nginx -t

The output will tell us whether an error exists. If any error comes up, we need to resolve it and then finally restart the server.

sudo systemctl restart nginx

11. Setup DNS

In our domain provider, we need to add an A record to point our subdomain to the VPS as follows:

Type    Name        Content           TTL
A       subdomain   <VPSIPADDRESS>    3600

DNS propagation can take up to 24 hours.

12. Install SSL

The final step is to issue an SSL certificate for our Strapi application. We can automate the process of issuing certificates to our domain using certbot. Running the following commands will help us issue a Let’s Encrypt SSL certificate for our domain.

sudo apt install certbot python3-certbot-nginx -y
sudo certbot --nginx -d <YOUR_DOMAIN>

Follow all the instructions. Use the Redirect option to redirect HTTP traffic to HTTPS.

The advantage of the above commands is that the certbot process runs twice a day to check if any certificates will expire within a month. It automatically renews the certificates so we don’t have to worry about certificate expiration. We can verify this by running:

sudo systemctl status certbot.timer

13. Run the app

Now our setup is complete, and it is time for the moment we have all been waiting for. Let us run our Strapi application using PM2.

cd ~
cd api
pm2 start ecosystem.config.js

We can now visit our domain and check that our Strapi application is running.

April 07, 2021

Governing Values-Centered Tech Non-Profits; or, The Route Not Taken by FSF

A few weeks ago, I interviewed my friend Katherine Maher on leading a non-profit under some of the biggest challenges an org can face: accusations of assault by leadership, and a growing gap between mission and reality on the ground.

We did the interview at the Free Software Foundation’s Libre Planet conference. We chose that forum because I was hopeful that the FSF’s staff, board, and membership might want to learn about how other orgs had risen to challenges like those faced by FSF after Richard Stallman’s departure in 2019. I, like many others in this space, have a soft spot for the FSF and want it to succeed. And the fact my talk was accepted gave me further hope.

Unfortunately, the next day it was announced at the same conference that Stallman would rejoin the FSF board. This made clear that the existing board tolerated Stallman’s terrible behavior towards others, and endorsed his failed leadership—a classic case of non-profit founder syndrome.

While the board’s action made the talk less timely, much of the talk is still, hopefully, relevant to any value-centered tech non-profit that is grappling with executive misbehavior and/or simply keeping up with a changing tech world. As a result, I’ve decided to present here some excerpts from our interview. They have been lightly edited, emphasized, and contextualized. The full transcript is here.

Sunlight Foundation: harassment, culture, and leadership

In the first part of our conversation, we spoke about Katherine’s tenure on the board of the Sunlight Foundation. Shortly after she joined, Huffington Post reported on bullying, harassment, and rape accusations against a key member of Sunlight’s leadership team.

[I had] worked for a long time with the Sunlight Foundation and very much valued what they’d given to the transparency and open data open government world. I … ended up on a board that was meant to help the organization reinvent what its future would be.

I think I was on the board for probably no more than three months, when an article landed in the Huffington Post that went back 10 years looking at … a culture of exclusion and harassment, but also … credible [accusations] of sexual assault.

And so as a board … we realized very quickly that there was no possible path forward without really looking at our past, where we had come from, what that had done in terms of the culture of the institution, but also the culture of the broader open government space.

Katherine

Practical impacts of harassment

Sunlight’s board saw immediately that an org cannot effectively grapple with a global, ethical technological future if the org’s leadership cannot grapple with its own culture of harassment. Some of the pragmatic reasons for this included:

The [Huffington Post] article detailed a culture of heavy drinking and harassment, intimidation.

What does that mean for an organization that is attempting to do work in sort of a progressive space of open government and transparency? How do you square those values from an institutional mission standpoint? That’s one [pragmatic] question.

Another question is, as an organization that’s trying to hire, what does this mean for your employer brand? How can you even be an organization that’s competitive [for hiring] if you’ve got this culture out there on the books?

And then the third pragmatic question is … [w]hat does this mean for like our funding, our funders, and the relationships that we have with other partner institutions who may want to use the tools?

Katherine

FSF suffers from similar pragmatic problems—problems that absolutely can’t be separated from the founder’s inability to treat all people as full human beings worthy of his respect. (Both of the tweets below lead to detailed threads from former FSF employees.)

Since the announcement of Stallman’s return, all top leadership of the organization have resigned, and former employees have detailed how the FSF staff has (for over a decade) had to deal with Richard’s unpleasant behavior, leading to morale problems, turnover, and even unionization explicitly to deal with RMS.

And as for funding, compare the 2018 sponsor list with the current, much shorter sponsor list.

So it seems undeniable: building a horrible culture has pragmatic impacts on an org’s ability to espouse its values.

Values and harassment

Of course, a values-centered organization should be willing to anger sponsors if it is important for their values. But at Sunlight, it was also clear that dealing with the culture of harassment was relevant to their values, and the new board had to ask hard questions about that:

The values questions, which … are just as important, were… what does this mean to be an organization that focuses on transparency in an environment in which we’ve not been transparent about our past?

What does it mean to be an institution that [has] progressive values in the sense of inclusion, a recognition that participation is critically important? … Is everyone able to participate? How can we square that with the institution that are meant to be?

And what do we do to think about justice and redress for (primarily the women) who are subjected to this culture[?]

Katherine

Unlike Sunlight, FSF is not about transparency, per se, but RMS at his best has always been very strong about how freedom had to be for everyone. FSF is an inherently political project! One can’t advocate for the rights of everyone if, simultaneously, one treats staff disposably and women as objects to be licked without their consent, and half the population (women) responds by actively avoiding the leadership of the “movement”.

So, in this situation, what is a board to do? In Sunlight’s case:

[Myself and fellow board member Zoe Reiter] decided that this was a no brainer, we had to do an external investigation.

The challenges of doing this… were pretty tough. [W]e reached out to everyone who’d been involved with the organization we also put not just as employees but also trying to find people who’ve been involved in transparency camps and other sorts of initiatives that Sunlight had had run.

We put out calls for participation on our blog; we hired a third party legal firm to do investigation and interviews with people who had been affected.

We were very open in the way that we thought about who should be included in that—not just employees, but anyone who had something that they wanted to raise. That produced a report that we then published to the general public, really trying to account for some of the things that have been found.

Katherine

The report Katherine mentions is available in two parts (results, recommendations) and is quite short (nine pages total).

While most of the report is quite specific to the Sunlight Foundation’s specific situation, the FSF board should particularly have read page 3 of the recommendations: “Instituting Board Governance Best Practices”. Among other recommendations relevant to many tech non-profits (not just FSF!), the report says Sunlight should “institute term limits” and “commit to a concerted effort to recruit new members to grow the Board and its capacity”.

Who can investigate a culture? When?

Katherine noted that self-scrutiny is not just something for large orgs:

[W]hen we published this report, part of what we were hoping for was that … we wanted other organizations to be able to approach this in similar challenges with a little bit of a blueprint for how one might do it. Particularly small orgs.

There were four of us on the board. Sunlight is a small organization—15 people. The idea that an even smaller organizations don’t have the resources to do it was something that we wanted to stand against and say, actually, this is something that every and all organizations should be able to take on regardless of the resources available to them.

Katherine

It’s also important to note that the need for critical self scrutiny is not something that “expires” if not undertaken immediately—communities are larger, and longer-lived, than the relevant staff or boards, so even if the moment seems to be in the relatively distant past, an investigation can still be valuable for rebuilding organizational trust and effectiveness.

[D]espite the fact that this was 10 years ago, and none of us were on the board at this particular time, there is an accounting that we owe to the people who are part of this community, to the people who are our stakeholders in this work, to the people who use our tools, to the people who advocated, who donated, who went on to have careers who were shaped by this experience.

And I don’t just mean, folks who were in the space still—I mean, folks who were driven out of the space because of the experiences they had. There was an accountability that we owed. And I think it is important that we grappled with that, even if it was sort of an imperfect outcome.

Katherine

Winding down Sunlight

As part of the conclusion of the report on culture and harassment, it was recommended that the Sunlight board “chart a new course forward” by developing a “comprehensive strategic plan”. As part of that effort, the board eventually decided to shut the organization down—not because of harassment, but because in many ways the organization had been so successful that it had outlived its purpose.

In Katherine’s words:

[T]he lesson isn’t that we shut down because there was a sexual assault allegation, and we investigated it. Absolutely not!

The lesson is that we shut down because as we went through this process of interrogating where we were, as an organization, and the culture that was part of the organization, there was a question of what would be required for us to shift the organization into a more inclusive space? And the answer is a lot of that work had already been done by the staff that were there…

But the other piece of it was, does it work? Does the world need a Sunlight right now? And the answer, I think, in in large part was not to do the same things that Sunlight had been doing. …

The organization spawned an entire community of practitioners that have gone on to do really great work in other spaces. And we felt as though that sort of national-level governmental transparency through tech wasn’t necessarily needed in the same way as it had been 15 years prior. And that’s okay, that’s a good thing.

Katherine

We were careful to say at Libre Planet that I don’t think FSF needs to shut down because of RMS’s terrible behavior. But the reaction of many, many people to “RMS is back on the FSF board” is “who cares, FSF has been irrelevant for decades”.

That should be of great concern to the board. As I sometimes put it—free licenses have taken over the world, and despite that the overwhelming consensus is that open won and (as RMS himself would say) free lost. This undeniable fact reflects very badly on the organization whose nominal job it is to promote freedom. So it’s absolutely the case that shutting down FSF, and finding homes for its most important projects in organizations that do not suffer from deep governance issues, should be an option the current board and membership consider.

Which brings us to the second, more optimistic topic: how did Wikimedia react to a changing world? It wasn’t by shutting down! Instead, it was by building on what was already successful to make sure they were meeting their values—an option that is also still very much available to FSF.

Wikimedia: rethinking mission in a changing world

Wikimedia’s vision is simple: “A world in which every single human can freely share in the sum of all knowledge.” And yet, in Katherine’s telling, it was obvious that there was still a gap between the vision, the state of the world, and how the movement was executing.

We turned 15 in 2016 … and I was struck by the fact that when I joined the Wikimedia Foundation, in 2014, we had been building from a point of our founding, but we were not building toward something.

So we were building away from a established sort of identity … a free encyclopedia that anyone can edit; a grounding in what it means to be a part of open culture and free and libre software culture; an understanding that … But I didn’t know where we were going.

We had gotten really good at building an encyclopedia—imperfect! there’s much more to do!—but we knew that we were building an encyclopedia, and yet … to what end?

Because “a free world in which every single human being can share in the sum of all knowledge”—there’s a lot more than an encyclopedia there. And there’s all sorts of questions:

About what does “share” mean?

And what does the distribution of knowledge mean?

And what does “all knowledge” mean?

And who are all these people—“every single human being”? Because we’ve got like a billion and a half devices visiting our sites every month. But even if we’re generous, and say, that’s a billion people, that is not the entirety of the world’s population.

Katherine

As we discussed during parts of the talk not excerpted here, usage by a billion people is not failure! And yet, it is not “every single human being”, and so WMF’s leadership decided to think strategically about that gap.

FSF’s leadership could be doing something similar—celebrating that GPL is one of the most widely-used legal documents in human history, while grappling with the reality that the preamble to the GPL is widely unheeded; celebrating that essentially every human with an internet connection interacts with GPL-licensed software (Linux) every day, while wrestling deeply with the fact that they’re not free in the way the organization hopes.

Some of the blame for that does in fact lie with capitalism and particular capitalists, but the leadership of the FSF must also reflect on their role in those failures if the organization is to effectively advance their mission in the 2020s and beyond.

Self-awareness for a successful, but incomplete, movement

With these big questions in mind, WMF embarked on a large project to create a roadmap, called the 2030 Strategy. (We talked extensively about “why 2030”, which I thought was interesting, but won’t quote here.)

WMF could have talked only to existing Wikimedians about this, but instead (consistent with their values) went more broadly, working along four different tracks. Katherine talked about the tracks in this part of our conversation:

We ran one that was a research track that was looking at where babies are born—demographics I mentioned earlier [e.g., expected massive population growth in Africa—omitted from this blog post but talked about in the full transcript.]

[Another] was who are our most experienced contributors, and what did they have to say about our projects? What do they know? What’s the historic understanding of our intention, our values, the core of who we are, what is it that motivates people to join this project, what makes our culture essential and important in the world?

Then, who are the people who are our external stakeholders, who maybe are not contributors in the sense of contributors to the code or contributors to the projects of content, but are the folks in the broader open tech world? Who are folks in the broad open culture world? Who are people who are in the education space? You know, stakeholders like that? “What’s the future of free knowledge” is what we basically asked them.

And then we went to folks that we had never met before. And we said, “Why don’t you use Wikipedia? What do you think of it? Why would it be valuable to you? Oh, you’ve never even heard of it. That’s so interesting. Tell us more about what you think of when you think of knowledge.” And we spent a lot of time thinking about what these… new readers need out of a project like Wikipedia. If you have no sort of structural construct for an encyclopedia, maybe there’s something entirely different that you need out of a project for free knowledge that has nothing to do with a reference—an archaic reference—to bound books on a bookshelf.

Katherine

This approach, which focused not just on the existing community but on data, partners, and non-participants, has been extensively documented at 2030.wikimedia.org, and can serve as a model for any organization seeking to re-orient itself during a period of change—even if you don’t have the same resources as Wikimedia does.

Unfortunately, this is almost exactly the opposite of the approach FSF has taken. FSF has become almost infamously insulated from the broader tech community, in large part because of RMS’s terrible behavior towards others. (The list of conference organizers who regret allowing him to attend their events is very long.) Nevertheless, given its important role in the overall movement’s history, I suspect that good faith efforts to do this sort of multi-faceted outreach and research could work—if done after RMS is genuinely at arms-length.

Updating values, while staying true to the original mission

The Wikimedia strategy process led to a vision that extended and updated, rather than radically changed, Wikimedia’s strategic direction:

By 2030, Wikimedia will become the essential infrastructure of the ecosystem of free knowledge, and anyone who shares our vision will be able to join us.

Wikipedia

In particular, the focus was around two pillars, which were explicitly additive to the traditional “encyclopedic” activities:

Knowledge equity, which is really around thinking about who’s been excluded and how we bring them in, and what are the structural barriers that enable that exclusion or created that exclusion, rather than just saying “we’re open and everyone can join us”. And how do we break down those barriers?

And knowledge as a service, which is without thinking about, yes, the technical components of what a service oriented architecture is, but how do we make knowledge useful beyond just being a website?

Katherine

I specifically asked Katherine about how Wikimedia was adding to the original vision and mission because I think it’s important to understand that a healthy community can build on its past successes without obliterating or ignoring what has come before. Many in the GNU and FSF communities seem to worry that moving past RMS somehow means abandoning software freedom, which should not be the case. If anything, this should be an opportunity to re-commit to software freedom—in a way that is relevant and actionable given the state of the software industry in 2021.

A healthy community should be able to handle that discussion! And if the GNU and FSF communities cannot, it’s important for the FSF board to investigate why that is the case.

Checklists for values-centered tech boards

Finally, at two points in the conversation, we went into what questions an organization might ask itself that I think are deeply pertinent for not just the FSF but virtually any non-profit, tech or otherwise. I loved this part of the discussion because one could almost split it out into a checklist that any board member could use.

The first set of questions came in response to a question I asked about Wikidata, which did not exist 10 years ago but is now central to the strategic vision of knowledge infrastructure. I asked if Wikidata had almost been “forced on” the movement by changes in the outside world, to which Katherine said:

Wikipedia … is a constant work in progress. And so our mission should be a constant work in progress too.

How do we align against a north star of our values—of what change we’re trying to effect in the world—while adapting our tactics, our structures, our governance, to the changing realities of the world?

And also continuously auditing ourselves to say, when we started, who, you know, was this serving a certain cohort? Does the model of serving that cohort still help us advance our vision today?

Do we need to structurally change ourselves in order to think about what comes next for our future? That’s an incredibly important thing, and also saying, maybe that thing that we started out doing, maybe there’s innovation out there in the world, maybe there are new opportunities that we can embrace, that will enable us to expand the impact that we have on the world, while also being able to stay true to our mission and ourselves.

Katherine

And to close the conversation, I asked how one aligns the pragmatic and organizational values as a non-profit. Katherine responded that governance was central, with again a great set of questions all board members should ask themselves:

[Y]ou have to ask yourself, like, where does power sit on your board? Do you have a regenerative board that turns over so that you don’t have the same people there for decades?

Do you ensure that funders don’t have outsize weight on your board? I really dislike the practice of having funders on the board, I think it can be incredibly harmful, because it tends to perpetuate funder incentives, rather than, you know, mission incentives.

Do you think thoughtfully about the balance of power within those boards? And are there … clear bylaws and practices that enable healthy transitions, both in terms of sustaining institutional knowledge—so you want people who are around for a certain period of time, balanced against fresh perspective.

[W]hat are the structural safeguards you put in place to ensure that your board is both representative of your core community, but also the communities you seek to serve?

And then how do you interrogate on I think, a three year cycle? … So every three years we … are meant to go through a process of saying “what have we done in the past three, does this align?” and then on an annual basis, saying “how did we do against that three year plan?” So if I know in 15 years, we’re meant to be the essential infrastructure free knowledge, well what do we need to clean up in our house today to make sure we can actually get there?

And some of that stuff can be really basic. Like, do you have a functioning HR system? Do you have employee handbooks that protect your people? … Do you have a way of auditing your performance with your core audience or core stakeholders so that you know that the work of your institution is actually serving the mission?

And when you do that on an annual basis, you’re checking in with yourself on a three year basis, you’re saying this is like the next set of priorities. And it’s always in relation to that that higher vision. So I think every nonprofit can do that. Every size. Every scale.

Katherine

The hard path ahead

The values that the FSF espouses are important and world-changing. And with the success of the GPL in the late 1990s, the FSF had a window of opportunity to become an ACLU of the internet, defending human rights in all their forms. Instead, under Stallman’s leadership, the organization has become estranged and isolated from the rest of the (flourishing!) digital liberties movement, and even from the rest of the software movement it was critical in creating.

This is not the way it had to be, nor the way it must be in the future. I hope our talk, and the resources I link to here, can help FSF and other value-centered tech non-profits grow and succeed in a world that badly needs them.

April 06, 2021

GNOME Internet Radio Locator 4.0.1 with KVRX on Fedora Core 33

GNOME Internet Radio Locator 4.0.1 with KVRX (Austin, Texas) features updated language translations and a new, improved map marker palette, with 125 other radio stations from around the world and live audio streaming implemented through GStreamer.

The project lives on www.gnomeradio.org and Fedora 33 RPM packages for version 4.0.1 of GNOME Internet Radio Locator are now also available:

gnome-internet-radio-locator.spec

gnome-internet-radio-locator-4.0.1-1.fc33.src.rpm

gnome-internet-radio-locator-4.0.1-1.fc33.x86_64.rpm

To install GNOME Internet Radio Locator 4.0.1 on Fedora Core 33 in Terminal:

sudo dnf install http://www.gnome.org/~ole/fedora/RPMS/x86_64/gnome-internet-radio-locator-4.0.1-1.fc33.x86_64.rpm


“Getting Things GNOME” 0.5 released!

It is time to welcome a new release of the Rebuild of EvanGTGelion: 0.5, “You Can (Not) Improve Performance”!

This release of GTG has been 9 months in the making after the groundbreaking 0.4 release. While 0.4 was a major “perfect storm” overhaul, 0.5 is also a very technology-intensive release, even though it was done in a comparatively short timeframe.

Getting Things GNOME 0.5 brings a truckload of user experience refinements, bugfixes, a completely revamped file format and task editor, and a couple of notable performance improvements. It doesn’t solve every performance problem yet (some remain), but it certainly improves a bunch of them for workaholics like me. If 0.4 felt a bit like a turtle, 0.5 is definitely a much faster turtle.

If that was not enough already, it has some killer new features too. It’s the bee’s knees!

One performance improvement in particular requires the new version of liblarch, 3.1, which we released this month. GTG with the latest liblarch is available all-in-one in a Flatpak update near you. 📦

This release announcement and all that led up to it was, as you can imagine, planned using GTG:

“Who’s laughing now eh, señor Ruiz?”

As you can see, I came prepared. So continue reading below for the summary of improvements; I guarantee it’ll be worth it.


Some brief statistics

Beyond what is featured in these summarized release notes below, GTG itself has undergone roughly 400 changes affecting over 230 files, and received hundreds of bug fixes, as can be seen here. In fact, through this development cycle, we have crossed the 6,000 commits mark!

If you need another indication of how active the project has been in the last 9 months, I have received over 950 GTG-related emails since the 0.4 release last July.

We are thankful to all of our supporters and contributors, past and present, who made GTG 0.5 possible. Check out the “About” dialog for a list of contributors for this release.


New features

Recurring Tasks

"Repeating task" UI screenshot

Thanks to Mohieddine Drissi and Youssef Toukabri, tasks can now be made recurring (repeating). This allows you to keep reminding yourself that you need to clean your room every Saturday, and then not do it because it’s more fun to spend your whole Saturday fixing GTG bugs!

  • In the task editor, recurrence can be toggled with the dedicated menubutton that shows a nice user interface to manage the recurrence (daily, every other day, weekly, monthly and yearly).
  • Recurring tasks can also be added from the main window, using the “quick add” entry’s syntax.

Emojis as tag emblems 🦄

The old tag icons have been replaced by emojis. I decided it would be best for us to switch to emojis instead of icon-based emblems, because:

  • The availability of emblem icons varies a lot from one system to another, and from one icon theme to another;
  • In recent years, emblem icons on Linux have become quite a mismatch of styles, and their numbers in standard themes have also decreased;
  • Emojis are scalable by default, which makes them better for HiDPI displays (while traditional pixmap icons are still hit-or-miss);
  • There is a much, much, much wider choice of emojis (for every conceivable situation!) than there are emblem icons on typical Linux systems;
  • It’s simply easier to deal with, in the long run. 😌

Unfortunately, this means that when you upgrade from 0.4 to 0.5, the old emblem icons will disappear and you will need to choose new emblems for your tasks.

Undead Hamster

The GTG plugin to interface with Hamster has been brought back to life by Francisco Lavin (issue #114 and pull request #465). I have not personally tested this. I’m still a bit worried about the ferrets rising from their graves.

Performance improvements
for power users

An elegant search trigger
for a more… civilized age

The filter-as-you-type global live search in GTG now uses a timeout approach (it waits a fraction of a second after you stop typing before running the search; a minimal sketch of the pattern follows the list below). This changes many things:

  • No more spamming the database and the UI’s treeview
  • The UI stays much more responsive, and you can see your characters being typed smoothly instead of appearing 15 seconds later
  • It won’t hog your CPU nearly as much
  • You get search results much faster: the results now tend to show up within a second after you stop typing, rather than after an amount of time that grows with the size of your task list. For example, with 1000 tasks, it is now over 5 times faster (e.g. bringing the search time down from 17+ seconds to 3 seconds), and the difference would be even more drastic if you have more tasks than that.
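
GTG itself is written in Python, but the underlying “debounce” pattern is easy to sketch against GLib’s C event loop, which sits underneath GTK. This is only an illustrative sketch with made-up function names, not GTG’s actual code:

#include <glib.h>

static guint search_timeout_id = 0;

static gboolean run_search(gpointer user_data) {
    const char *query = user_data;
    g_print("searching for: %s\n", query);  /* the expensive query would run here */
    search_timeout_id = 0;
    return G_SOURCE_REMOVE;                 /* one-shot source: do not reschedule */
}

/* Called on every keystroke: cancel any pending search and re-arm a short
 * timer, so the actual query only runs once the user pauses typing. */
static void on_search_changed(const char *query) {
    if (search_timeout_id != 0)
        g_source_remove(search_timeout_id);
    search_timeout_id = g_timeout_add(300, run_search, (gpointer) query);
}

static gboolean quit_cb(gpointer loop) {
    g_main_loop_quit(loop);
    return G_SOURCE_REMOVE;
}

int main(void) {
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);

    /* Simulate three quick keystrokes: only the final query is searched. */
    on_search_changed("g");
    on_search_changed("gt");
    on_search_changed("gtg");

    g_timeout_add(1000, quit_cb, loop);
    g_main_loop_run(loop);
    g_main_loop_unref(loop);
    return 0;
}

Compiled against GLib (pkg-config glib-2.0), this prints a single “searching for: gtg” line rather than three, which is the whole point of the debounce.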

Optimizations to avoid processing all views all the time

As a result of the changes proposed in pull request #530 and follow-up PR #547, for issue #402, most operations switching between various representations of your tasks will now be 20% to 200% faster (depending on how your tasks are structured):

  • Faster startup
  • Faster switching between tags (particularly when inside the “Actionable” view)
  • Faster mid-night refreshing
  • It most likely makes the global search feature faster, too

However, changing between global view modes (Open/Actionable/Closed) is now a bit slower the first time around (but is instantaneous afterwards, until a filtering change occurs).

It’s a performance tradeoff, but it seems to be worth it. Especially when you have lots of tasks:

Note that to benefit from these changes, GTG 0.5 depends on the newly released version of liblarch (3.1) we have published this month.

Faster read/write operations on the data file 🚀

The switch to LXML as our parser ensures any operations on the local file format now happen instantly (less than a millisecond for normal/lightweight task lists, and up to 25 milliseconds for heavy-duty 1000-task lists like mine). This was originally such a big deal that we thought we’d nickname this release “¿Que parser, amigo?” … but other major improvements kept happening since then, so this release is not just about the pasa!

Less micro-freezes when focusing task windows

Mr. Bean waiting
Pictured: me, waiting for the task editor windows to un-freeze when I focus them.

Users with a thousand tasks will probably have noticed that GTG 0.4 had a tendency to painfully lock up for a couple of seconds when focusing/defocusing individual task editor windows.

Sure, “It doesn’t catastrophically lock up in your face multiple times per day anymore” doesn’t sound like a feature in theory, but it is a usability feature in practice.

Now, even when typing inside the new Task Editor and making lots of changes, it remains butter-smooth.

Part of it might be because it now uses a timer-based approach to parsing, and part of it is probably due to the code having been refactored heavily and being of higher quality overall. It just tends to be better all around now, and I’ve tried fairly hard to break it during my smoke-testing, especially since a lot of the improvements have also been tied to the major file format redesign that Diego implemented.

All in all, I’m not sure exactly how it accomplishes better reliability, to be honest, so we’ll just assume Diego secretly is a warlock.

50% more awesome task editing

Oh yeah, speaking of the task editor…

Rewriting the Task Editor’s content area and its natural language parser brought many user-visible improvements in addition to the technological improvements:

  • The new timeout-based parser is faster (because we’re not spamming it like barbarians), more reliable, and now gains the ability to recognize subtasks (lines starting with “-“) live, instead of “after pressing Enter.” This also makes copy-pasting from a dashed list easier, as it will be converted into many subtasks in one go
  • Clicking on a tag inside the task editor’s text selects the tag in the main window
  • Tags’ text highlight color now uses the tag’s chosen emblem color
  • Subtasks now have GTK checkboxes!
  • Completed subtasks show up in normal strikethrough text style
  • Opening a subtask takes care of closing the parent task, and the other way around (opening a parent task closes the subtask window), minimizing the amount of utility windows cluttering your view (and reducing the chances of making conflicting changes)
  • Now supports subheadings, in Markdown format (type # title). Now that it supports some basic rich-text formatting features, even Grumpy Sri will be happy with this release:

Here is a short demonstration video that lets you see the improved “parsing as you type” engine in action:

Other user interface improvements

General bug fixes

Technological changes

  • Eliminate the pyxdg dependency in favor of directly using GLib
  • dbus-python dependency dropped in favor of Gio (except in the Hamster plugin)
  • New and improved file format (see issue 279, “Better XML format [RFC]“, and my previous spooky blog post):
    • Automatically converts upon first startup, nothing to do manually
    • More consistent
    • Uses a single file instead of three separate ones
    • Uses valid XML
    • Fully documented (see docs/contributors/file format.md)
    • Adds file format versioning to make it future-proof
    • Uses LXML as the parser, which is reportedly 40 times faster (see issue #285)
    • Fundamentally allows the task editor to be a lot more reliable for parsing, as we ditch the XML-in-XML matryoshka approach
    • Solves a few other bugs related to (de)serializing
  • Rewrote the Task Editor’s “TaskView” (content area) and its natural language parser, paving the way for more improvements and increased reliability:
    • The timeout-based approach mentioned earlier means that parsing happens less often, “when it needs to”, and is also less error-prone as a result.
    • With the implementation of subheadings, the code now has the foundations for more Markdown features in the future.
    • The taskview supports invisible characters that turn visible when the cursor passes over them, which could be useful for more text tags in the future.
    • Easier to maintain improved performance in the future: When running in debug mode (launch.sh -d), the process function now prints the time it took to process the buffer. This will help us stay conscious of the performance.
    • Support for linking between tasks with gtg://[TASK_ID] (no UI for this yet, though)
    • Better code overall, a lot more extensible and easier to work on
  • Rewrote the Quick Add entry’s parsing and added unit tests
  • Command line parsing ported to GLib, so GLib/GTK-specific options such as --display are now available
  • GTG is now dbus-activatable. Although no API that other applications can use has been implemented yet (outside of “standard” GTK ones), this opens up possibilities for the future, such as GNOME Shell search integration.

Developer Experience improvements

Want to be part of a great team? Join BAHRAM GTG!

Here are some changes to primarily benefit users crazy enough to be called developers.

  • New plugin: “Developer Console”.
    Play with GTG’s internals from the comfort of GTG itself! Remember: when exiting your Orbital Frame, wear… a helmet.
  • New parameter in launch.sh: -w to enable warnings
    (in case you still are not wearing a helmet, or want to catch more bugs)
  • Better help message for launch.sh, documenting the available options
  • Include Git commit version in the about window
  • Use the proper exit code when the application quits cleanly
  • We closed all the tickets on LaunchPad (over 300 of them) so there is no confusion when it comes to reporting bugs and the current state of the app’s quality and priorities, and to ensure that issues are reported in the currently used bug tracker where development actually happens.

Help out Diego!

It is no secret that Diego has been spending a lot of time and effort making this release happen, just like the 0.4 release. The amount and depth of his changes are quite amazing, so it’s no surprise he’s the top contributor here:

If you would like to show your appreciation and support his maintainership work with some direct monthly donations to him, please consider giving a couple of dollars per month on his gumroad page or his liberapay page!

More coming in the next release

Many new features, plugins, and big code changes have been held back from merging so that we could release 0.5 faster, so a bunch of other changes will land in the development version on the way to 0.6.

“This blog post is ridiculously long! And we still have 10 pull requests almost ready to merge!”
— Diego, five minutes before I published this blog post

Spreading this announcement

We have made some social postings on Twitter, on Mastodon and on LinkedIn that you can re-share/retweet/boost. Please feel free to link to this announcement on forums and blogs as well!

Permanent Revolution

10 years ago today was April 6, 2011.

Windows XP was still everywhere. Smartphones were tiny, and not everyone had one yet. New operating systems were coming out left and right. Android phones had physical buttons, and webOS seemed to have a bright future. There was general agreement that the internet would bring about a better world, if only we could give everyone unrestricted access to it.

This was the world into which GNOME 3.0 was released.

I can’t speak to what it was like inside the project back then; this is all way before my time. I was still in high school, and though I wasn’t personally contributing to any free software projects yet, I remember it being a very exciting moment.

Screenshot of the GNOME 3.0 live ISO showing Settings, Gedit, Calculator, and Evince in the overview

3.0 was a turning point. It was a clear sign that we’d not only caught up to, but confidently overtaken the proprietary desktops. It was the promise that everything old and crufty about computing could be overcome and replaced with something better.

As an aspiring designer and free software activist it was incredibly inspiring to me personally, and I know I’m not alone in that. There’s an entire generation of us who are here because of GNOME 3, and are proud to continue working in that tradition.

Here’s to permanent revolution. Here’s to the hundreds who worked on GNOME 3.

April 04, 2021

Linux tool for Logitech 27MHz wireless keyboard encryption setup

For a long time Logitech produced wireless keyboards using 27 MHz as their communications band. Although these have not been produced for a while now, they are still pretty common and a lot of them are still perfectly serviceable.

But when using them under Linux there is one downside: since the communication is one-way by default, the wireless link is unencrypted, which is kinda bad from a security point of view. These keyboards do support using an encrypted link, but this requires a one-time setup where the user manually enters a key on the keyboard.

I've written a small utility to do this setup under Linux, which should help give these keyboards an extra lease on life and stop them from unnecessarily becoming e-waste. Sometimes these keyboards appear to be broken, while the only problem is that the keys in the keyboard and receiver are not in sync; the README also contains instructions on how to reset the keyboard without needing the utility, restoring (unencrypted) functionality.

The 'lg-27MHz-keyboard-encryption-setup' utility is available on Fedora in the 'logitech-27mhz-keyboard-encryption-setup' package.

March 31, 2021

Never use environment variables for configuration

Suppose you need to create a function for adding two numbers together in plain C. How would you write it? What sort of an API would it have? One possible implementation would be this:

int add_numbers(int one, int two) {
    return one + two;
}

// to call it you'd do
int three = add_numbers(1, 2);

Seems reasonable? But what if it was implemented like this instead:

int first_argument;
int second_argument;

int add_numbers(void) {
    return first_argument + second_argument;
}

// to call it you'd do
first_argument = 1;
second_argument = 2;
int three = add_numbers();

This is, I trust you all agree, terrible. This approach is plain wrong, against all accepted coding practices and would get immediately rejected in any code review. It is left as an exercise to the reader to come up with ways in which this architecture is broken. You don't even need to look into thread safety to find correctness bugs.

And yet we have environment variables

Environment variables are exactly this: mutable global state. Envvars have some legitimate uses (such as enabling debug logging) but they should never, ever be used for configuring the core functionality of programs. Sadly they are used for this purpose a lot, and there are some people who think that this is a good thing. This causes no end of headaches due to weird corner, edge and even common cases.
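
To make the parallel explicit, here is the same “global arguments” anti-pattern expressed literally with environment variables; this is a minimal sketch, and the variable names are made up for illustration:

#include <stdio.h>
#include <stdlib.h>

static int add_numbers(void) {
    /* The "arguments" live in the process environment: mutable global state
     * that every caller must remember to set up first. If either variable
     * is unset, getenv() returns NULL and this blows up. */
    return atoi(getenv("FIRST_ARGUMENT")) + atoi(getenv("SECOND_ARGUMENT"));
}

int main(void) {
    setenv("FIRST_ARGUMENT", "1", 1);
    setenv("SECOND_ARGUMENT", "2", 1);
    printf("%d\n", add_numbers());  /* prints 3, but only because main() set both variables first */
    return 0;
}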

Persistence of state

For example suppose you run a command line program that has some sort of a persistent state.

$ SOME_ENVVAR=... some_command <args>

Then some time after that you run it again:

$ some_command <args>

The environment is now different. What should the program do? Use the old configuration that had the env var set or the new one where it is not set? Error out? Try to silently merge the different options into one? Something else?

The answer is that you, the end user, cannot know. Every program is free to do its own thing and most do. If you have ever spent ages wondering why the exact same commands work when run from one terminal but not the other, this is probably why.

Lack of higher order primitives

An environment variable can only contain a single null-terminated stream of bytes. This is very limiting. At the very least you'd want to have arrays, but it is not supported. Surely that is not a problem, you say, you can always do in-band signaling. For example the PATH environment variable has many directories which are separated by the : character. What could be simpler? Many things, it turns out.

First of all the separator for paths is not always :. On Windows it is ;. More generally every program is free to choose its own. A common choice is space:

CFLAGS='-Dfoo="bar" -Dbaz' <command>

Except what if you need to pass a space character as part of the argument? Depending on the actual program, shell and the phase of the moon, you might need to do this:

ARG='-Dfoo="bar bar" -Dbaz'

or this:

ARG='-Dfoo="bar\ bar" -Dbaz'

or even this:

ARG='-Dfoo="bar\\ bar" -Dbaz'

There is no way to know which one of these is the correct form. You have to try them all and see which one works. Sometimes, as an implementation detail, the string gets expanded multiple times so you get to quote quote characters. Insert your favourite picture of Xzibit here.

For comparison, using JSON configuration files this entire class of problems would not exist. Every application would read the data in the same way, because JSON provides primitives to express these higher level constructs. In contrast every time an environment variable needs to carry more information than a single untyped string, the programmer gets to create a new ad hoc data marshaling scheme and if there's one thing that guarantees usability it's reinventing the square wheel.

There is a second, more insidious part to this. If a decision is made to configure something via an environment variable then the entire design goal changes. Instead of coming up with a syntax that is as good as possible for the given problem, instead the goal is to produce syntax that is easy to use when typing commands on the terminal. This reduces work in the immediate short term but increases it in the medium to long term.

Why are environment variables still used?

It's the same old trifecta of why things are bad and broken:

  1. Envvars are easy to add
  2. There are existing processes that only work via envvars
  3. "This is the way we have always done it so it must be correct!"

The first explains why even new programs add configuration options via envvars (no need to add code to the command line parser, so that's a net win, right?).

The second makes it seem like envvars are a normal and reasonable thing as they are so widespread.

The third makes it all but impossible to improve things on a larger scale. Now, granted, fixing these issues would be a lot of work and the transition would unearth a lot of bugs but the end result would be more readable and reliable.

OBS Studio on Wayland

As of today, I’m happy to announce that all of the pull requests to make OBS Studio able to run as a native Wayland application, and capture monitors and windows on Wayland compositors, landed.

I’ve been blogging sparsely about my quest to make screencasting on Wayland a fluid and seamless experience for about a couple of years now. This required some work throughout the stack: from making Mutter able to hand DMA-BUF buffers to PipeWire; to improving the GTK desktop portal; to creating a plugin for OBS Studio; to fixing bugs in PipeWire; it was a considerable amount of work.

But I think none of it would matter if this feature weren’t easily accessible to everyone. The built-in screen recorder of GNOME Shell already works, but most importantly, we need to make sure applications are able to capture the screen properly. Sadly our hands are tied when it comes to proprietary apps; there’s just no way to contribute. But free and open source software allows us to do that! Fortunately, not only is OBS Studio distributed under the GPL, it is also a pretty popular app with an active community. That’s why, instead of creating a fork or just maintaining a plugin, I decided to go the long, hard route of proposing everything to OBS Studio itself.

The Road to Native Wayland

Making OBS Studio work on Wayland was a long road indeed, but fortunately other contributors attempted to do it before I did, and my pull requests were entirely based on their fantastic work. It took some time, but eventually the 3 big pull requests making OBS Studio able to run as a native Wayland application landed.

OBS Studio running as a native Wayland client. An important step, but not very useful without a way to capture monitors or windows.

After that, the next step was teaching OBS Studio how to create textures from DMA-BUF information. I wrote about this in the past, but the tl;dr is that implementing a monitor or window capture using DMA-BUFs means we avoid copying buffers from GPU memory to RAM, which is usually the biggest bottleneck when capturing anything. Exchanging DMA-BUFs is essentially about passing a few ids (integers) around, which is evidently much faster than copying dozens of megabytes of image data per second.

Fortunately for us, this particular feature also landed, introducing a new function gs_texture_create_from_dmabuf() which enables creating a gs_texture_t from DMA-BUF information. Like many other DMA-BUF APIs, it is pretty verbose since it needs to handle multiplanar buffers, but I believe this API is able to handle pretty much anything related to DMA-BUFs. This API is being documented and will freeze and become stable soon, with the release of OBS Studio 27, so make sure to check it out and see if there’s anything wrong with it!

This was the last missing piece of the puzzle to implement a Wayland-compatible capture.

PipeWire & Portals to the Rescue!

An interesting byproduct of the development of app sandboxing mechanisms is portals. Portals are D-Bus interfaces that provide various functionalities at runtime. For example, the Document portal allows applications with limited access to the filesystem to ask the user to select a file; the user is then presented with a file chooser dialog managed by the host system, and after selecting, the application will have access to that specific file only, and nothing else.

The portal that we’re interested in here is the Desktop portal, which provides, among others, a screencasting interface. With this interface, sandboxed applications can ask users to select either a window or a monitor, and a video stream is returned if the user selects something. The insecure nature of X11 allows applications to completely bypass this interface, but naturally it doesn’t work on Wayland. At most, Xwayland will give you an incomplete capture of some (or no) applications running through it.

It is important to notice that despite being born and hosted under the Flatpak umbrella, portals are mostly independent of Flatpak. It is perfectly possible to use portals outside of a Flatpak sandbox, even when running an app as a Snap or an AppImage. It’s merely a bunch of D-Bus calls, after all. Portals are also implemented by important Wayland desktops, such as GNOME, KDE, and wlroots, which should cover the majority of Wayland desktops out there.

Remember that I vaguely mentioned above that the screencast interface returns a video stream? This video stream is actually a PipeWire stream. PipeWire here is responsible for negotiating and exchanging buffers between the video producer (GNOME Shell, Plasma, etc) and the consumer (here, OBS Studio).

These mechanisms (portals, and PipeWire) were the basis of my obs-xdg-portal plugin, which was recently merged into OBS Studio itself as part of the built-in capture plugin! Fortunately, it landed just in time for the release of OBS Studio 27, which means soon everyone will be able to use OBS Studio on Wayland.

And, finally, capturing on Wayland works!

Meanwhile at Flatpakland…

While contributing these Wayland-related features, I sidetracked a bit and did some digging into a Flatpak manifest for OBS Studio.

Thanks to the fantastic work by Bilal Elmoussaoui, there is a GitHub action that allows creating CI workflows that build Flatpaks using flatpak-builder. This allowed proposing a new workflow to OBS Studio’s CI that generates a Flatpak bundle. It is experimental for now, but as we progress towards feature parity between Flatpak and non-Flatpak, it’ll eventually reach a point where we can propose it to be a regular non-experimental workflow.

In addition to that, Flatpak greatly helps me as a development tool, especially when used with GNOME Builder. The Flatpak manifest of OBS Studio is automatically detected and used to build and run it. Running OBS Studio is literally a one-click action:

Running OBS Studio with Flatpak on GNOME Builder


Next Steps

All these Wayland, PipeWire, and portals pull requests are only the first steps to make screencasting on Wayland better than on X11. There’s still a lot to do and fix, and contributions would be more than welcomed.

For a start, the way the screencast interface currently works doesn’t mix well with OBS Studio’s workflow. Each capture pops up a dialog to select a monitor or a window, and that’s not exactly a fantastic experience. If you have complex scenes with many window or screen captures, a swarm of dialogs will pop up. This is clearly not great UX, and improving this would be a good next step. Fortunately, portals are extensible enough to allow implementing a more suitable workflow that e.g. saves and restores the previous sessions.

Since this was tested in a relatively small number of hardware setups and environments, I’m sure we’ll need a round of bugfixes once people start using this code more heavily.

There’s also plenty of room for improvements on the Flatpak front. My long-term goal is to make OBS Studio’s CI publish stable releases to Flathub directly, and unstable releases to Flathub Beta, all automatically with flat-manager. This will require fixing some problems with the Flatpak package, such as the obs-browser plugin not working inside a sandbox (it uses CEF, Chromium Embedded Framework, which apparently doesn’t enjoy the sandbox PID remapping) nor on Wayland (Chromium barely supports Wayland for now).

Of course, I have no authority over what’s going to be accepted by the OBS Studio community, but these goals seem not to be controversial there, and might be great ways to further improve the state of screencasting on Wayland.

I’d like to thank everyone who has been involved in this effort; from the dozens of contributors that tested the Wayland PRs, to the OBS Studio community that’s been reviewing all this code and often helping me, to the rest of the Flatpak and GNOME communities that built the tooling that I’ve been using to improve OBS Studio.

March 30, 2021

Introducing Libadwaita

GNOME 41 will come with libadwaita, the GTK 4 port of libhandy that will play a central role in defining the visual language and user experience of GNOME applications.

GNOME and GTK

For the past 20 years, GNOME has had human interface guidelines — HIG for short — that are followed by applications targeting the platform.

Implementing the HIG is a lot of manual work for application developers. This led to lots of verbose copy-pasted UI code, full of slight interpretation variations and errors, making applications hard to maintain and full of visual and behavioral inconsistencies. The fact these guidelines are not set in stone and evolve every so often made these inconsistencies explode.

Following these guidelines can be streamlined via a library offering tailored widgets and styles. This role has been filled by GTK because of its strong bonds with the GNOME project: Adwaita is both GNOME’s visual language and GTK’s default theme. This is somewhat problematic though, as GTK serves multiple audiences and platforms, and this favors GNOME over them.

This also leads to life cycle mismatches, as GTK machinery must be extremely stable and can’t evolve at a rapid pace, while the GNOME guidelines and Adwaita evolve much faster in comparison, and can benefit from a library that can keep up with the latest designs.

Untangling Them

The need to give GNOME a faster moving library quickly grew in the community, and many widget libraries filled the gap: libdazzle, libegg, libgd, or libhandy to name a few. This improved the situation, but it only worked around the issues instead of solving them. GNOME needs a blessed library implementing its HIG rapidly, developed in collaboration with its design team. Such a library would define the visual language of GNOME by offering the stylesheet and the patterns in a single package.

This would allow GTK to grow independently from GNOME, at a pace fitting its needs. It could narrow its focus on more generic widgetry and on its core machinery, simplifying its theming support in the process to make it more flexible. This would in turn give other users of GTK an even playing field: from GTK’s POV, GNOME, elementary and Inkscape would be no different, and that hypothetical GNOME library would fill the same role as elementary’s Granite.

The introduction of that library should not make GTK less useful on other platforms, or make GTK applications harder to build (or uglier). It should simply be another library you can choose to link against if you want to make your application fit well into GNOME.

Adwaita

To solve both GTK’s need of independence and GNOME’s need to move faster, we are creating the libadwaita project.

Adwaita is the name of GNOME’s visual language and identity, and it is already used by two projects implementing it: the Adwaita GTK stylesheet and the Adwaita icon set. This new libadwaita library intends to extend that concept by being the missing code part of Adwaita. The library will be implemented as a direct GTK 4 continuation and replacement of libhandy, and it will be developed by libhandy’s current developers.

The Adwaita stylesheet will be moved to libadwaita, including its variants such as HighContrast and HighContrastInverse. GTK will keep a copy of that stylesheet renamed Default, that will focus on the needs of vanilla GTK applications. See gtk!3079 and gtk#3582 for more information. We want that to happen early in the development of libadwaita.

As it will implement the GNOME HIG, the library’s developers will work in close collaboration with the GNOME design team. The design team will also review the initial set of widgets and styles inherited from libhandy, ensuring they are aligned with the guidelines they developed and that they will refresh for GNOME 41.

This transition is being developed in close collaboration between the GTK developers, the libhandy developers, and the GNOME design team. It has also been discussed with various other GNOME and elementary developers.

Gimme

If you are a GNOME application developer, you likely want to port your application from GTK 3 (and libhandy) to GTK 4 and libadwaita in time for GNOME 41. You can find libadwaita on GNOME’s GitLab, and you can reach the developers on the Matrix room at #libadwaita:gnome.org. You can also join the room via IRC at #libadwaita on GIMPnet.

The libadwaita project will follow revisions of the HIG, and releases will follow GNOME’s schedule. Each version of the library will target a specific version of GNOME, the first stable version being released alongside GNOME 41.

The first stable version of the library will be 1.0. We won’t use the GNOME version number as major versions of the Adwaita library will span multiple GNOME releases, the contrary would be unmanageable for application developers. Major versions will be parallel-installable.

The library isn’t quite ready to be used yet: we have to fix some remaining issues, write a migration guide, and release a first alpha version you can safely target. We will then release several alpha versions and matching migration guides, so you can safely update your application as we stabilize libadwaita 1.0 without ever breaking your build.

Starting from now, libhandy’s pace of development will slow down tremendously as we will focus on developing libadwaita. Any improvement made to libhandy must now first be implemented in libadwaita.

We hope the GTK 4 users will enjoy the newly gained independence, and that GNOME application developers will feel empowered!

Thanks to the GTK developers, GNOME design team, and Alexander Mikhaylenko for helping me write this article.

GTK 4.2.0

GTK 4.2.0 is now available for download in the usual places.

This release is the result of the initial round of feedback from the application developers porting their projects to GTK4, so it mostly consists of bug fixes and improvements to the API—but we also added new features, like a new GL renderer; various improvements in how the toolkit handles Compose and dead key sequences; build system improvements for compiling GTK on Windows and macOS; and a whole new API reference, generated from the same introspection data that language bindings also consume.

For more information, you can check the previous blog post about the 4.1 development cycle.

The NGL renderer

Thanks to the hard work of Christian Hergert, the NGL renderer is now the default renderer for Linux, Windows, and macOS. We’ve had really positive feedback from users on mobile platforms using drivers like Lima, with noticeable improvements in frames per second, as well as power and CPU usage; the latter two will also positively impact desktop and laptop users. The NGL renderer is just at the beginning: the new code base will allow us even more improvements down the line.

For the time being, we have kept the old GL renderer available; you can use export GSK_RENDERER=gl in your environment to go back to the 4.0 GL renderer—but make sure to file an issue if you need to do so, to give us the chance to fix the NGL renderer.

Input

Matthias wrote a whole blog post about the handling of Compose and dead keys input sequences, so you can just read it. The dead key handling has seen a few iterations, to deal with oddities and workarounds that have been introduced in the lower layers of the input stack.

There is one known issue with the handling of dead acute accents vs apostrophes in some keyboard layouts, which is still being investigated. If you notice other problems with keyboard input, specifically around Compose sequences or dead keys, please file an issue.

Portability

One of the goals of GTK is to have a “turn key” build system capable of going from a clone of the Git repository to a fully deployable installation of the toolkit, without having to go through all the dependencies manually, or using weird contraptions. You can see how this works on Windows, using native tools, in this article from our friends at Collabora.

Additionally, we now ensure that you can use GTK as a Meson sub-project; this means you can build GTK and all its dependencies as part of your own application’s build environment, and you can easily gather all the build artifacts for you to distribute alongside your application, using the toolchain of your choice.

Documentation

One of the most notorious issues for newcomers to GTK has been the documentation. Application developers not acquainted with our API have often found it hard to find information in our documentation; additionally, the style and structure of the API reference hasn’t been refreshed in ages. To improve the first impression, and the use of our documentation, GTK has switched to a new documentation generator, called gi-docgen. This new tool adds new features to the API reference, like client-side search of terms in the documentation; as well as nice little usability improvements, like:

  • a “Copy to clipboard” button for code fragments and examples
  • a visual hierarchy of ancestors and interfaces for each class
  • the list of inherited properties, signals, and methods in a class
  • a responsive design, which makes it easier to use the API reference on small screens

An API is only as good as its ability to let developers use it in the most idiomatic way. GTK not only has a C API, it also exposes a whole API for language bindings to consume, through GObject-Introspection. The new documentation uses the same data, which not only allows us to cut the build time in half, but also generates common bits of documentation from the annotations in the source, making the API reference more consistent and reliable; finally, the C API reference matches what language binding authors see when consuming the introspection data, which means we are going to tighten the feedback loop between toolkit and bindings developers when introducing new API.

Pango and GdkPixbuf have also switched to gi-docgen, which allows us to build the API reference for various dependencies through our CI pipeline, and publish it to a whole new website: docs.gtk.org. You’ll always find the latest version of the GTK documentation there.

Odds and ends

Of course, alongside these visible changes we have smaller ones:

  • performance improvements all across the board, from GLSL shaders used to render our content, to the accessibility objects created on demand instead of upfront
  • sub-pixel positioning of text, when using a newer version of Cairo with the appropriate API
  • a responsive layout for the emoji chooser
  • improved rendering of shadows in popover widgets
  • localized digits in spin buttons
  • improved support for the Wayland input method protocol
  • improved scrolling performance of the text view widget

The numbers

GTK 4.2 is the result of four months of development, with 1268 individual changes from 54 developers; a total of 73950 lines were added, and 60717 removed.

Developers with the most changesets
Matthias Clasen 843 66.5%
Emmanuele Bassi 124 9.8%
Timm Bäder 87 6.9%
Christian Hergert 33 2.6%
Jakub Steiner 24 1.9%
Benjamin Otte 21 1.7%
Chun-wei Fan 15 1.2%
Alexander Mikhaylenko 14 1.1%
Fabio Lagalla 10 0.8%
Bilal Elmoussaoui 8 0.6%
Carlos Garnacho 6 0.5%
Ignacio Casal Quinteiro 6 0.5%
Michael Catanzaro 6 0.5%
Emmanuel Gil Peyrot 5 0.4%
Xavier Claessens 4 0.3%
David Lechner 4 0.3%
Jan Alexander Steffens (heftig) 4 0.3%
Kalev Lember 3 0.2%
wisp3rwind 3 0.2%
Mohammed Sadiq 2 0.2%
Developers with the most changed lines
Matthias Clasen 38475 42.6%
Emmanuele Bassi 15997 17.7%
Christian Hergert 13913 15.4%
Kalev Lember 9202 10.2%
Timm Bäder 5890 6.5%
Jakub Steiner 2397 2.7%
Benjamin Otte 902 1.0%
Chun-wei Fan 783 0.9%
Ignacio Casal Quinteiro 717 0.8%
Fabio Lagalla 292 0.3%
Marek Kasik 267 0.3%
Alexander Mikhaylenko 254 0.3%
Emmanuel Gil Peyrot 232 0.3%
Simon McVittie 214 0.2%
Jan Tojnar 83 0.1%
wisp3rwind 74 0.1%
Jan Alexander Steffens (heftig) 65 0.1%
Carlos Garnacho 62 0.1%
Michael Catanzaro 61 0.1%
Ungedummt 60 0.1%
Developers with the most lines removed
Emmanuele Bassi 8408 13.8%
Jakub Steiner 1890 3.1%
Timm Bäder 493 0.8%
Simon McVittie 203 0.3%
Emmanuel Gil Peyrot 146 0.2%
Chun-wei Fan 43 0.1%
Jan Tojnar 26 0.0%
Alexander Mikhaylenko 25 0.0%
Jonas Ådahl 17 0.0%
Luca Bacci 13 0.0%
Robert Mader 4 0.0%
Chris Mayo 3 0.0%
Bartłomiej Piotrowski 2 0.0%
Marc-André Lureau 2 0.0%
Jan Alexander Steffens (heftig) 1 0.0%
Tom Schoonjans 1 0.0%

Student applications for Google Summer of Code 2021 are now open!

It’s that time of the year when we see an influx of students interested in Google Summer of Code.

Some students may need some pointers on where to get started. I would like to ask GNOME contributors to be patient with the students’ questions and help them find their way.

Point them at https://wiki.gnome.org/Outreach/SummerOfCode/Students for overall information regarding the program and our project proposals. Also, help them find mentors through our communication channels.

Many of us have been Outreachy/GSoC interns and the positive experiences we had with our community were certainly an important factor making us long-term contributors.

If you have any doubts, you can ask them in the #soc channel or contact GNOME GSoC administrators.

Happy hacking!

March 29, 2021

GNOME Software performance in GNOME 40

tl;dr: Use callgrind to profile CPU-heavy workloads. In some cases, moving heap allocations to the stack helps a lot. GNOME Software startup time has decreased from 25 seconds to 12 seconds (-52%) over the GNOME 40 cycle.

To wrap up the sporadic blog series on the progress made with GNOME Software for GNOME 40, I’d like to look at some further startup time profiling which has happened this cycle.

This profiling has focused on libxmlb, which gnome-software uses extensively to query the appstream data which provides all the information about apps which it shows in the UI. The basic idea behind libxmlb is that it pre-compiles a ‘silo’ of information about an XML file, which is cached until the XML file next changes. The silo format encodes the tree structure of the XML, deduplicating strings and allowing fast traversal without string comparisons or parsing. It is memory mappable, so can be loaded quickly and shared (read-only) between multiple processes. It allows XPath queries to be run against the XML tree, and returns the results.

gnome-software executes a lot of XML queries on startup, as it loads all the information needed to show many apps to the user. It may be possible to eliminate some of these queries – and some earlier work did reduce the number by binding query parameters at runtime to pre-prepared queries – but it seems unlikely that we’ll be able to significantly reduce their number further, so it’s better to speed them up instead.

Profiling work which happens on a CPU

The work done in executing an XPath query in libxmlb is largely on the CPU — there isn’t much I/O to do as the compiled XML file is only around 7MB in size (see ~/.cache/gnome-software/appstream), so this time the most appropriate tool to profile it is callgrind. I ruled out using callgrind previously for profiling the startup time of gnome-software because it produces too much data, risks hiding the bigger picture of which parts of application startup were taking the most time, and doesn’t show time spent on I/O. However, when looking at a specific part of startup (XML queries) which are largely CPU-bound, callgrind is ideal.

valgrind --tool=callgrind --collect-systime=msec --trace-children=no gnome-software

It takes about 10 minutes for gnome-software to start up and finish loading the main window when running under callgrind, but eventually it’s shown, the process can be interrupted, and the callgrind log loaded in kcachegrind:

Here I’ve selected the main() function and the callee map for it, which shows a 2D map of all the functions called beneath main(), with the area of each function proportional to the cumulative time spent in that function.

The big yellow boxes are all memset(), which is being called on heap-allocated memory to set it to zero before use. That’s a low hanging fruit to optimise.

In particular, it turns out that the XbStack and XbOperands which libxmlb creates for evaluating each XPath query were being allocated on the heap. With a few changes, they can be allocated on the stack instead, and don’t need to be zero-filled when set up, which saves a lot of time — stack allocation is a simple increment of the stack pointer, whereas heap allocation can involve page mapping, locking, and updates to various metadata structures.
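
As a rough illustration of the difference (this is a generic sketch, not the actual libxmlb code; the type and function names are made up):

#include <stdlib.h>
#include <string.h>

#define STACK_DEPTH 32

typedef struct {
    int opcode;
    int operand;
} StackItem;

/* Heap version: every query pays for an allocation plus the zero-fill
 * that shows up as memset() in the callgrind profile. */
static int evaluate_query_heap(void) {
    StackItem *items = malloc(STACK_DEPTH * sizeof(StackItem));
    memset(items, 0, STACK_DEPTH * sizeof(StackItem));
    items[0].opcode = 1;
    int result = items[0].opcode;
    free(items);
    return result;
}

/* Stack version: "allocation" is a bump of the stack pointer, and no
 * zero-fill is needed because each slot is written before it is read. */
static int evaluate_query_stack(void) {
    StackItem items[STACK_DEPTH];
    items[0].opcode = 1;
    return items[0].opcode;
}

int main(void) {
    return (evaluate_query_heap() == evaluate_query_stack()) ? 0 : 1;
}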

The changes are here, and should benefit every user of libxmlb without further action needed on their part. With those changes in place, the callgrind callee map is a lot less dominated by one function:

There’s still plenty left to go at, though. Contributions are welcome, and we can help you through the process if you’re new to it.

What’s this mean for gnome-software in GNOME 40?

Overall, after all the performance work in the GNOME 40 cycle, startup time has decreased from 25 seconds to 12 seconds (-52%) when starting for the first time since the silo changed. This is the situation in which gnome-software normally starts, as it sits as a background process after that, and the silo is likely to change every day or two.

There are plans to stop gnome-software running as a background process, but we are not there yet. It needs to start up in 1–2 seconds for that to give a good user experience, so there’s a bit more optimisation to do yet!

Aside from performance work, there’s a number of other improvements to gnome-software in GNOME 40, including a new icon, some improvements to parts of the interface, and a lot of bug fixes. But perhaps they should be explored in a separate blog post.

Many thanks to my fellow gnome-software developers – Milan, Phaedrus and Richard – for their efforts this cycle, and my employer the Endless OS Foundation for prioritising working on this.

March 27, 2021

Setup Github Actions for a Dart project

When working in a team, or even as an individual, we humans often break rules. But sometimes breaking rules can result in poor quality code which over time grows messy. We can take advantage of linting and static analysis to check whether the written code adheres to our code styling rules. This can be automated using Github Actions. In this blog, we will see how we can set up Github Actions workflows for statically analyzing our code before merging it into our production codebase.

Prerequisites

Before getting started, we assume that we have set up the following:

1. Basic Analysis Options

Static analysis helps us to find problems before executing a single line of code. It’s a great tool that we can integrate into our development environment to prevent bugs and ensure that code conforms to style guidelines especially when we are working with a team. analysis_options.yaml is a YAML file that we can use to specify the lint rules. Below is a basic example with minimum configuration:

# Defines a default set of lint rules enforced for
# projects at Google. For details and rationale,
# see https://github.com/dart-lang/pedantic#enabled-lints.
include: package:pedantic/analysis_options.yaml

# For lint rules and documentation, see http://dart-lang.github.io/linter/lints.
# Uncomment to specify additional rules.
linter:
  rules:
    - camel_case_types

analyzer:
  exclude:
    - path/to/excluded/files/**

2. Setting up Github Actions

Let us now create a Github Actions workflow for doing a static analysis of our project’s source code and even run some tests. This workflow will run on each push and pull request made to the master branch. Let us create a ci.yml file in the .github/workflows/ directory and add the following code:

name: CI

on:
  push:
    branches: [master]
  pull_request:
    branches: [master]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - name: Setup Repository
        uses: actions/checkout@v2
      
      - name: Setup Dart
        uses: dart-lang/setup-dart@v1

      - name: Install Pub Dependencies
        run: dart pub get

      - name: Verify Formatting
        run: dart format --output=none --set-exit-if-changed .

      - name: Analyze Project Source
        run: dart analyze
      
      - name: Run tests
        run: dart test

The above steps are pretty self-explanatory. We are using the stable version of Dart. However, if we want to set up a different Dart SDK configuration, we can use the sdk input of the dart-lang/setup-dart action:

- name: Setup Dart
  uses: dart-lang/setup-dart@v1
  with:
    sdk: 2.10.3

- name: Setup Dart
  uses: dart-lang/setup-dart@v1
  with:
    sdk: dev

Results

We can make a push directly to the master branch and check out the result for our Github Action.

Github Actions for Dart
All steps passed successfully

So, we can see that we have been able to set up our lint pipeline in less than two minutes. This power of Github Actions can help any team achieve a better developer workflow and faster releases.

Three months of Inbox Zero

Email: A source of stress

Email has been a significant source of stress for me. Canonical, Red Hat, Debian and free software projects are all a firehose of announcements, discussions, bug reports, review requests, questions, and work tasks every day. I am fairly good and efficient at quickly deleting the 90% of email that is irrelevant, but the “10 seconds for the sender to write, half a day of work for me to act upon” emails pile up, break my motivation, and cause annoyance and refusal.

March 26, 2021

A Piece+Tree (Augmented B+Tree)

Most of my career I’ve been working on a text editor product in either a hobby or professional capacity. Years ago I had an idea to combine a B+Tree with a PieceTable and put together a quick prototype. However, it didn’t do the nasty part which was removal and compaction of the B+Tree (so just another unfinished side-project).

Now that we’re between GNOME cycles, I had the chance to catch up on that data structure and finish it off.

Just for a bit of background, a B+Tree is a B-Tree (N-ary tree) where you link the leaves (and often the branches) in a doubly-linked list from left-to-right. This is handy when you need to do in-order/reverse-order table-scans as you don’t need to traverse the internal nodes of the tree. Unsurprisingly, editors do this a lot. Since B+Trees only grow from the root, maintaining these linked-lists is pretty easy.

And a PieceTable is essentially an array of tuples where each tuple tells you the length of a run of text, what buffer it came from (read-only original buffer or append-only change buffer), and the offset in that buffer. They are very handy but can become expensive to manage as you gain a lot of changes over time. They are fantastic when you want to support infinite undo as you can keep an append-only file for individual changes along with one for the transaction log. You can use that for crash recovery as well.

This augmented B+Tree works by storing pointers to children branch-or-leaves along with their combined run-length. This is handy because as you mutate runs in the leaves, you only need to adjust lengths as you traverse back up the tree.

Another bit of fun trickery when writing B-trees of various forms is to break your branches-or-leaves into two sections. One section is for the items to be inserted. On the other, you have a fixed array of integers. After you insert an item into a free slot, you update a linked list of integers (as opposed to pointers) on the other end. Doing so allows you to do most inserts/removals in O(1) once you know the proper slot and avoid a whole series of memmove()s. Scanning requires traversing the integer linked-list instead of your typical for (i=0; i<n_items; i++) scenario. Easily resolved with a FOREACH macro.
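As a rough illustration of that layout (a hypothetical sketch in C, not the actual code), a leaf might look something like this:

/* Items live wherever there is a free slot; logical (document) order is
 * kept in a parallel array of small integer links, so inserts and removals
 * avoid memmove() once the right slot is known. */
#include <stdint.h>

#define NODE_CAPACITY 32
#define SLOT_NONE     UINT8_MAX

typedef struct {
  uint32_t length;   /* run length in characters */
  uint32_t offset;   /* offset into the source buffer */
  uint8_t  buffer;   /* 0 = read-only original, 1 = append-only changes */
} Piece;

typedef struct {
  Piece    items[NODE_CAPACITY]; /* stored in insertion order, holes allowed */
  uint8_t  next[NODE_CAPACITY];  /* index of the next slot in logical order */
  uint8_t  head;                 /* first slot in logical order */
  uint8_t  free_head;            /* head of the free-slot list */
  uint32_t run_length;           /* combined length, mirrored up the tree */
} Leaf;

/* Walk the pieces in logical order by following the integer links. */
#define LEAF_FOREACH(leaf, i) \
  for ((i) = (leaf)->head; (i) != SLOT_NONE; (i) = (leaf)->next[(i)])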

Anyway, here it is, and it seems to work. Finally I can move on from having that bit of data-structure on my mind.

March 25, 2021

Free Software Charities

I believe we have reached a point where it is time to discontinue donations to the Free Software Foundation, in light of the outrageously poor judgment shown by its board of directors in reinstating Richard Stallman to the board. I haven’t seen other free software community members calling for cutting off donations yet. Even the open letter doesn’t call for this. Edit: I’ve been pointed to the line “We urge those in a position to do so to stop supporting the Free Software Foundation,” which surely implies a call to stop donating, so I was wrong about that.

I have no doubt there will be follow-up blog posts explaining why cutting off donations is harmful to the community and the FSF’s staff, and will hurt the FSF in the long-term… but seriously, enough is enough. If we don’t draw the line here, there will never be any line anywhere. Continued support for the FSF is continued complicity, and is harming rather than helping advance the ideals of the free software community. So, where should you donate instead?

The Software Freedom Conservancy is a great choice. It hosts many member projects you’re probably familiar with, including Outreachy and git, among many others. It does some GPL compliance work and takes a strict view on software freedom. For US donors, it is a 501(c)(3), so your donations may be tax-deductible. This is where I send my free software donations not directed to GNOME. Read its statement on recent events at the FSF.

Another good option, especially for people in the EU, is the Free Software Foundation Europe, an independently-run sister organization of the FSF. If you’re not familiar with FSFE, think of it as a more moderate version of the FSF, with a special focus on Europe. It shares the same goals as the FSF, but with more reasonable leadership and much less popcorn. Read its statement on recent events at the FSF.

Most people reading this blog are likely GNOME users. Contributing to GNOME is a great way to support the desktop you use to get things done. Monthly sustaining donations are especially appreciated. Despite the historical association between GNOME and GNU, GNOME has had little to do with GNU for a long time now. The GNOME Foundation is run independently and has formally signed the open letter calling for the resignation of the FSF’s board of directors. For US donors, it is a 501(c)(3). (If you use KDE, donate to KDE here.)

It should be obvious, but this is a personal blog. None of the above organizations have endorsed cutting off donations to the FSF, to my knowledge. They would probably find it to be in poor taste to abuse a crisis to solicit funds. But I doubt they’d object if you send some money their way.

Chromium on Flathub

In December 2020, Chromium reached the Flathub stable channel. Assuming you have Flatpak 1.8.2 or newer, and your kernel is configured to allow unprivileged user namespaces, you can download it now.
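If you already have the Flathub remote configured, installation is a one-liner (the application ID matches the flathub/org.chromium.Chromium repository mentioned below):

flatpak install flathub org.chromium.Chromium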

Screenshot of Chromium showing the Chromium page on flathub.org

History

Endless OS is based on Debian, but rather than releasing as a bunch of .debs, it is released as an immutable OSTree snapshot, with apps added and removed using Flatpak.

For many years, we maintained a branch of Chromium as a traditional OS package which was built into the OS itself, and updated it together with our monthly OS releases. This did not match up well with Chromium, which has a new major version every 6 weeks and typically 2–4 patch versions in between. It’s a security-critical component, and those patch versions invariably fix some rather serious vulnerability. In some ways, web browsers are the best possible example of apps that should be updated independently of the OS. (In a nice parallel, it seems that the Chrome OS folks are also working on separating OS updates from browser updates on Chrome OS.)

Browsers are also the best possible example of apps which should use elaborate sandboxing techniques to limit the impact of security vulnerabilities, and Chromium is indeed a pioneer in this space. Flatpak applies much the same tools to sandbox applications, which ironically made it harder to ship Chromium as a Flatpak: when running in the Flatpak sandbox, it can’t use those same sandboxing APIs provided by the kernel to sandbox itself further.

Flatpak provides its own API for sandboxed applications to launch new instances of themselves with tighter sandboxing; what’s needed is a way to make Chromium use that…

Solution

Ryan Gonzalez has had a long-running project with us to enable Chromium-based apps to work well as Flatpaks. The first targets were apps built with Electron: his zypak project provides an LD_PRELOAD-able library that redirects Chromium’s sandbox to use Flatpak’s sub-sandboxing API. This avoids the need to modify the (often proprietary) apps themselves, and is now used by dozens of Electron apps on Flathub which would otherwise not be usable with Flatpak. There’s also a version of Chrome in the Flathub beta channel using this technique.

For Chromium, we can take a different approach. It’s open-source code, being compiled by Flathub, so Ryan prepared some patches to teach it to use the Flatpak sandboxing APIs directly, for better performance and robustness.

Once the sandbox integration was done, there was a long list of other changes needed to make the Chromium Flatpak work at least as well as our previous built-in version, which André Moreira Magalhães from Endless worked through with Ryan.

Some of these came from the old Endless OS package, such as using a royalty-free implementation of AAC, splitting encumbered codecs to a separate package so they can be excluded as needed for distribution, and discarding background tabs when the system is under memory pressure (which is useful on systems with limited RAM, but is disabled by default on desktop Linux builds).

Others were specific to Flatpak, such as dealing with udev not being available in the sandbox, restoring the ability to create app launchers for websites, integrating with Flatpak’s network proxy portal, and allowing Chromium policy files to be provided by the host system.

Over in Endless OS, we also needed to update users’ existing file associations and migrate their Chromium profiles to the new home.

Impact

Chart of 30 days of Chromium downloads, with three large spikes of around 20,000 daily downloads

The chart above is the Flathub download statistics for Chromium in the past 30 days. Counting the points between 14th March (when the most recent update was pushed) and 21st March, there have been nearly 60,000 downloads. The majority of these will be Endless OS users: our 3.9.2 release in January 2021 rolled this change out to all users, and Endless OS has automatic updates enabled by default. But Flathub has a broader reach than just Endless OS! I believe that users of System76’s Pop!_OS have been migrated from a .deb of Chromium to this Flatpak, and surely there are many users on other distributions, too. It’s also been used as the basis for other apps on Flathub, including ungoogled-chromium.

As an added bonus, the Flatpak is wired up to flatpak-external-data-checker, which now automatically opens a pull request when a new Chromium release is published. Typically, new major releases need manual intervention to refresh the Flatpak patches, but minor releases often build without issue: for these, one can just smoke-test the test build from the pull request, and then merge it, reducing what used to be days of effort rebasing the package in Endless OS to the work of minutes. I love it when a plan comes together.

A quick glance at the issues on the flathub/org.chromium.Chromium repo will show that there is always more work to be done. We would love to see other distributions getting involved, reducing the duplicated work of maintaining Chromium packages for each distro, and making it easier for users of long-term stable branches to get important browser updates quickly and easily.

March 24, 2021

Input, revisited

My last update talked about better visual feedback for Compose sequences in GTK’s input methods. I did not explicitly mention dead keys back then, but historically, X11 has treated dead keys and Compose sequences in exactly the same way.

Dead keys are a feature of certain keyboard layouts where you can hit a key that does not produce a character by itself, but modifies the next key you type. Typically, this is used for accents that can be combined with different base characters. For example, type <dead_acute> <a> to produce á or  <dead_acute> <o> to produce ó.

Traditionally, dead keys were really dead – you didn’t get any visual feedback before the final result appears. With the improvements described in the last update, we now show dead keys as they are entered:

That is a nice improvement. But as it turned out, not everybody was happy.

The shared treatment of Compose sequences and dead keys has some implications: one is that entering a non-existing sequence such as <dead_grave> <x> will produce a beep, and no output. That is acceptable for a Compose sequence that you explicitly started with the Compose key, but not so great when you maybe meant to enter `x.

The people who decided to use Compose sequences for dead keys foresaw the need to actually enter spacing accents every now and then, and added sequences such as <dead_grave> <space> and <dead_grave> <dead_grave> for producing a single ` character.

While that is a nice thought, it is still pretty inconvenient, since you need to type <dead_grave> six times to produce `‍`‍`, e.g. for entering code examples in markdown.

After thinking about this for a while and comparing what other systems do, we’ve made two changes that will hopefully make dead keys as convenient to use as any other key on your keyboard.

  • When a <dead key> <key> sequence does not match one of our Compose sequences, commit the individual keys
  • When a <dead key> follows another <dead key>, commit the first one, and treat the second as the beginning of a new Compose sequence

Together, this makes it so that typing <dead_acute> <a> produces á, typing <dead_grave> <x>  produces `x, and you only need to type <dead_grave> three times to enter `‍`‍`:

Much better!

March 23, 2021

Bringing FIDO2 device support to sandboxes

Hardening user logins with 2FA is becoming a must-have feature of Web services; most of the services I use daily (such as GitLab instances) already enable it. Although it’s a bit cumbersome to enter a secondary factor manually, using hardware tokens (such as FIDO2 authenticators) simplifies the process to a single tap, while also making the entire authentication more secure, based on public key cryptography.

On the client side, major browsers provide built-in support for hardware tokens (at least CTAP1), though sandboxed applications cannot benefit from this without allowing direct access to the host hardware. To improve the situation, we had several discussions in forums last year and somehow reached a rough consensus: we need a proxy for those authenticator devices.

Norbert Pócs in our team tackled this problem and has managed to create a D-Bus based proxy service that can bridge the device access to sandboxed applications. At DevConf.cz 2021, we presented our effort covering a proof-of-concept Firefox/Flatpak integration (special thanks to the people behind zbus, which made this pretty straightforward).

If you are interested in this topic, take a look at the recording of our presentation. Slides are also available. It’s still up in the air how to properly integrate this feature into browsers, but maybe the next step would be to finalize the protocol to allow different implementations.

What’s new in GVfs for GNOME 40?

Unfortunately, my contributions to various projects have been limited over the last months given the various coronavirus-related restrictions. Also, I recently took over gnome-autoar maintainership, which I spent some time on. So GVfs doesn’t bring as much news as I would like, but there are some changes worth mentioning.

Google Shared Drives and Shared with me

Definitely the biggest changes happened in the Google backend. It is now possible to browse files on Shared Drives (formerly Team Drives). Also, support for the Shared with me folder was added, so it is possible to browse files that are shared with the user but not added to their drive. These two features were merged together with various smaller bug fixes and improvements.

SFTP supports two-factor authentication

It is now possible to access SFTP shares that are guarded by two-factor authentication. This was not possible earlier: the backend hung during the mount procedure and the connection timed out. Now, two password prompts are shown instead.

Possibility to disable trash

In some cases, trash support is not desired, for example on shared mounts or for automounted locations. Now the x-gvfs-notrash mount option can be used, which causes the gvfsd-trash daemon to ignore that mount and the g_file_trash() function to return an error. This is actually already part of GNOME 3.38, but I didn’t make a separate post for that release.
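For example, for a share listed in /etc/fstab the option can sit alongside the other mount options (a hypothetical entry; the server, path and filesystem type are just placeholders):

server:/export/data  /mnt/data  nfs  defaults,x-gvfs-show,x-gvfs-notrash  0  0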


In the end, I would like to thank everyone who made contributions during this cycle, even small ones, and especially all the newcomers.

March 22, 2021

Use GLIB_VERSION_MIN_REQUIRED to avoid deprecation warnings

tl;dr: Define GLIB_VERSION_MIN_REQUIRED and GLIB_VERSION_MAX_ALLOWED in your meson.build to avoid deprecation warnings you don’t care about, or accidentally using GLib APIs which are newer than you want to use. You can add this to your library by copying gversionmacros.h and using its macros on public APIs in your header files.

With every new stable release, GLib adds and/or deprecates public API. If you are building a project against GLib, you probably don’t want to use the new APIs until you can reliably depend on a new enough version of GLib. Similarly, you want to be able to continue using the newly-deprecated APIs until you can reliably depend on the version of GLib which deprecated them.

In both cases, the alternative is for your project to add conditional compilation blocks which use some GLib symbols if building against the new version of GLib, and others if building against the old version. That’s lots of work, and no fun.
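For illustration, such a block might look like this (a hypothetical example built around the g_memdup()/g_memdup2() transition):

#include <glib.h>

static gpointer
copy_bytes (gconstpointer data, gsize len)
{
#if GLIB_CHECK_VERSION (2, 68, 0)
  /* g_memdup2() takes a gsize and was added in GLib 2.68 */
  return g_memdup2 (data, len);
#else
  /* the older g_memdup() only accepts a guint */
  return g_memdup (data, (guint) len);
#endif
}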

So, to prevent projects from using GLib APIs outside the range of GLib versions they are tested to build and work against, GLib can emit warnings at compile time if APIs which are too new – or too old – are used.

You can enable this by defining GLIB_VERSION_MIN_REQUIRED and GLIB_VERSION_MAX_ALLOWED in your build environment. With Meson, add the following to your top-level meson.build, typically just after you check for the GLib dependency using dependency():

add_project_arguments(
  '-DGLIB_VERSION_MIN_REQUIRED=GLIB_VERSION_2_46',
  '-DGLIB_VERSION_MAX_ALLOWED=GLIB_VERSION_2_66',
  language: 'c')

The GLIB_VERSION_x symbols are provided by GLib, and there’s one for each stable release series. You can also use GLIB_VERSION_CUR_STABLE or GLIB_VERSION_PREV_STABLE to refer to the stable version of the copy of GLib you’re building against.

If undefined, both symbols default to the current stable version, meaning you get all new APIs and all deprecation warnings by default. This is good for keeping up to date with developments in GLib, but not so good if you are targeting an older distribution and don’t want to see all the latest deprecation warnings.

None of this is new; this blog post is your periodic reminder that this versioning functionality exists and you may benefit from using it.

Add this to your library too

It’s easy enough to add to other libraries, and should be useful in situations where your library is unlikely to break API for the foreseeable future, but could add or deprecate API with every release.

Simply copy and adapt gversionmacros.h, and use its macros against every symbol in a public header. You can use them for functions, types, macros and enums.
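In a hypothetical library, the annotated header might then look something like this (all of the names below are made up):

/* mylib-widget.h: every public symbol carries a version macro */
#include "mylib-version-macros.h"   /* the adapted copy of gversionmacros.h */

MYLIB_AVAILABLE_IN_1_2
void        mylib_widget_set_title   (MylibWidget *self,
                                       const char  *title);

MYLIB_DEPRECATED_IN_1_4_FOR (mylib_widget_set_title)
void        mylib_widget_set_caption (MylibWidget *self,
                                       const char  *caption);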

The downside is that you will need to update the version macros header for each new version of your library, to add macros for the new version. There’s no way round this within the header file, as C macros may not define additional macros. It may be possible to generate the header using an external script from Meson, though, if someone wants to automate it.

Drawing GNOME App Mockups

I’ve written about designing GNOME apps at a high level before, but not about the actual process of drawing UI mockups the way we do on the GNOME design team. In this tutorial we’ll pick up the Read It Later example from previous tutorials again, and draw some mockups in Inkscape from scratch.

Before we start, let’s look at the sketches we’re going to base this on. I’ve re-drawn some of the sketches from my last app design blog post with just the parts we’ll need for this tutorial.

I recommend having a look at that other blog post before jumping into this one, as it will give you some background on the basic design patterns and show step by step how we got to this layout.

What’s in a Mockup?

After you’ve designed the basic structure of your app (e.g. as a sketch on paper) but before starting implementation, it’s good to check what your layout will look like with real UI elements.

Left: The initial mockup drawn in Inkscape, right: Screenshot of the real app

This doesn’t mean mockups need to recreate every gradient and highlight from the GTK stylesheet. Doing that would make mockups very hard to edit and keep in sync as the stylesheet evolves. However, things like spacing, border radii, button styling, etc. can be made to look very close to how they’ll look in the implemented version with relatively little effort. This is why on the GNOME design team we use a simplified style somewhere between a wireframe and a mockup, where sizes and metrics are mostly pixel-perfect, but UI visuals are not.

This level of fidelity is great for trying variations on layouts, placement of individual controls, different icon metaphors, etc. which are the most important things to validate before starting development. Once the implementation is in progress there’s usually additional rounds of iteration on different aspects of the design, but those don’t always require mockups as you can just iterate directly in code at that point.

Pre-Requisites

In order to be able to follow along with this tutorial, you’ll need to install a few apps:

  • Inkscape: The vector drawing app we’ll be using to draw our mockup
  • Icon Library: A handy app for finding symbolic icons to use in mockups

Next, you need the GNOME mockup template. This is an SVG file with many of our most common UI patterns, which enables you to make mockups by copying and adapting these existing components, rather than having to draw every element yourself. You can download the template from GNOME Gitlab.

Finally, you need a recent version of Cantarell, GNOME’s interface font. It’s possible that while you may have a font with that name installed, it’s not the right version, because some distributions and Google Fonts are still on an old version (which has only 2 weights, rather than 5). You can download the new version here.

Inkscape Basics

If you’ve never used Inkscape before it might be good to do some more general beginner tutorials as a first step. I’ll assume familiarity with navigation and object manipulation primitives such as selection, moving/scaling/rotating, duplicating, manipulating z-index, grouping/ungrouping, and navigating through group hierarchies.

Nevertheless, here’s a quick overview of the features we’ll be using primarily.

Controls

Inkscape has a lot of features, but we only need a small subset for what we’re doing. Most interfaces are just nested rectangles, after all ;)

Toolbox (the toolbar on the left edge):

  • Selection/movement/scaling tool (S)
  • Rectangle tool (R)
  • Ellipse tool (E)
  • Text tool (T)
  • Color picker (D)

Properties Sidebar (configuration dialogs docked to the right side):

  • Fill & Stroke (Ctrl + Shift + F)
  • Align & Distribute (Ctrl + Shift + A)
  • Export (Ctrl + Shift + E)
  • Document Properties (Ctrl + Shift + D)

Snap Controls (the toolbar on the right edge): Inkscape has very fine-grained snapping controls to configure what should be snapped to when you move items on the canvas (e.g. path nodes, object center, path intersections). It’s a bit fiddly, but very useful for making sure things are aligned to the grid. The icon tooltips are your friends :)

When aligning things or working with the pixel grid it’s very helpful to have the page grid visible. It can be toggled with the # shortcut or in the View menu.

Advanced: Partial Rounded Corners

One sort of advanced thing I started doing recently is using path effects to get rounded corners only on specific corners of a rectangle. This is handy compared to having the rounding baked into the geometry, because it keeps the rounding flexible, so the object can be scaled without affecting the rounded corners.

The feature is quite hidden and looks very complex, but once you know where it is it’s not that scary. You can find it in Path > Path Effects... > + > Corners (Fillet/Chamfer).

You’ll find that the mockup templates use this path effect technique for e.g. rounded bottom corners on windows.

Color Palette

It’s not as important for mockups as it is for app icons, but still nice to have: The GNOME color palette. Inkscape 1.0+ includes it by default, so you can just choose it from the arrow menu on the right.

Otherwise you can also get it via the dedicated color palette app, or download the .gpl from Gitlab and put it in ~/.var/app/org.inkscape.Inkscape/config/inkscape/palettes for Flatpak Inkscape or ~/.config/inkscape/palettes if it’s on the host.

Page Setup

With that out of the way, let’s get started making our mockup! First, we need a basic page to start from. We can start from the empty-page.svg file, by opening that file and saving it under the name we’ll ultimately want for our mockup, e.g. read-later.svg.

GNOME mockups usually consist of one or more views laid out on a “page” of whatever size is needed to make the content fit. There’s usually a title and description at the top, plus additional captions to explain things about the individual screens where it’s needed (example).

The page dimensions can be adjusted in Document Properties (Ctrl + Shift + D). One useful shortcut here is Ctrl+Shift+R, which resizes to the bounding box of the current selection (Note: Only use this shortcut if the thing you’re resizing to is e.g. a rectangle you set up for that purpose. Resizing the page to things off the pixel grid will break it, because it moves the document origin).

Let’s do that, and tweak the title and description for our case:

Desktop View

Now that the page is set up we can add our first screen. This is the sketch we’re going to be drawing first:

As a first step, let’s bring in the window template with view switcher from pattern-templates.svg. Open that file in Inkscape, select the top leftmost screen, copy it, and paste it into your mockup file.

After roughly positioning the window, check the exact position using the numeric position entries in the bar at the top, and make sure both X and Y positions are full integers (if they’re not integers it means your mockup is off the pixel grid and will look blurry).

It’s worth pointing out that while in this case we’re just starting from the plain default templates, it’s often faster to start from an existing mockup for another app. The app-mockups repository on GNOME Gitlab is full of existing mockups to borrow elements from or use as a starting point for a new mockup. That said, depending on their age those mockups might be using outdated patterns, so it’s good to check when a particular mockup was last updated :)

Headerbar

Let’s start by adjusting the headerbar to what we have in our sketch. Conveniently, several of the buttons don’t need to change at all from the template. All we need to do here is move the search button to the left, delete the add button, and adjust the switcher.

As a general rule, spacing between elements is a multiple of 6 pixels (so 6, 12, 18, 24…). For example, there are 6px of padding around buttons inside a headerbar, and 6px between the individual buttons.

When moving/placing elements, always make sure that snapping to bounding box is active (topmost group of snapping controls). As with the placement of the window earlier, it’s good to verify that sizes and positions are even integers in the toolbar up top.

Next up: The view switcher. You can start by changing the three labels, and then re-centering the icon + label groups on their respective containers. The inactive items have invisible containers, so they’re a bit fiddly to select.

For the icons, you can fire up Icon Library and search for the following icons:

  • Unread: view-paged
  • Archive: drawer *
  • Favorites: star-outline-thick

* This icon isn’t included yet, but will be in a future version

You can paste icons directly into Inkscape using the “Copy to Clipboard” feature. Before pasting, navigate into the relevant icon group (each of the icons is grouped with an invisible 16px rectangle, because that’s the icon canvas size). When placing an icon, make sure to center it on the invisible canvas rectangle, and place them on the pixel grid. You may want to turn on outline mode (Ctrl+5 cycles through display modes) to deal with the invisible rectangles more easily.

Once the icons are replaced, change the label strings with the text tool, and move the icon+label blocks horizontally so they look centered within their containers. I personally just do this manually by eye rather than using the alignment tool, so I don’t lose the icon’s alignment to the pixel grid, but you can also align and then manually move it to the closest position on the pixel grid.

With this, the headerbar is complete now:

Content

Let’s look at the content inside the window next. We can keep the basic structure of the listbox from the template, but obviously we want to change what’s in the list.

I’ve prepared some example content that we can just copy and paste into our mockup. Each article consists of three labels: one using the regular font size (10.5pt), and two using the small one (9pt). The metadata label also uses a lighter gray, to distinguish it from the body copy (you can get the color from the labels on the right side of the list in the template using the color picker tool).

The list from the template is grouped and has a clipping mask to cut off scrolling content at the bottom. Since our layout is different anyway we can remove the clipping by simply ungrouping (Ctrl + Shift + G). Next, we can delete all content except the first row, and change that to our first article.

In order to accommodate multiple pieces of content inline as part of the same label, a common pattern is to use middle dots (“·”) as dividers. Pro tip: The Typography app makes it easy to copy and paste typographic symbols like this one into your mockups.

After adding the articles in the first listbox, we need the “Show more” button at the bottom. For that we can just resize the list so it extends past the last article, and add a centered label on that area.

Some of the articles also have images associated with them (there are download URLs for the images in the example content file). Images that aren’t already square need to be clipped to a square shape for our layout. To do that in Inkscape:

  • Pull in the image via drag and drop from the file manager (or paste it directly from a website via “Copy Image”)
  • Place a rectangle with the right size/proportions where we want the image to go in the layout. In this case, images should be 80px squares, with 12px spacing all around.
  • Move and scale the image to cover the rectangle on all sides
  • Select both the image and the rectangle, and do Object > Clip > Set. Note that the clipping object (the rectangle) needs to be above the target (the image).
Left: semi-transparent clipping rectangle overlaying image, right: clipped image

With the first list done, we can move it down a bit and add a title above it. You can re-use one of the titles from the list for that, and just horizontally align it with the list container. Spacing between title baseline and list should also be 12px, and above the title 18px to the edge of the view.

After that, our first list is complete:

For the second list we can duplicate our first list, move it down below, and just change the content:

In order to have it cut off nicely at the bottom we can bring back a clipping mask. Group the list, duplicate the window background rectangle, and use that as a clipping mask for the list. To make the last article a little more visible I’m also resizing the window background to be a little bit taller first.

For the primary menu we can pretty much just re-use the menu from the template (top left, outside the canvas). Copy over the template popover and button, and vertically align it so the button is at the same height as the one in the window.

Since we don’t need many actions other than the default ones, all we need to do here is change a few labels, add another divider, and put in the name of the app.

That leaves us with a bunch of whitespace in the popover though, so let’s move up the actions and then resize the popover by selecting the bottom nodes and moving them up using the arrow keys:

And with that, our menu looks pretty good:

That completes our desktop view, and we can move on to mobile.

Mobile View

Now that we have a desktop view, let’s do a mobile version of it. Let’s have another look at our sketch:

We can re-use all the elements from the desktop view here, but we need to resize and move around a few things.

As a starting point for the layout, let’s bring in the mobile template. I like to align the mobile headerbar with the desktop one vertically, so all the headerbar buttons have the same vertical position.

Headerbar & Navigation

On mobile sizes there’s not enough horizontal space to keep the view switcher in the headerbar, so instead there’s just a title. The view switcher is in a separate bar at the bottom.

For the headerbar we can duplicate the buttons from the desktop mockup and use them to replace the placeholder buttons on the mobile template.

For the switcher, start by deleting all items except one. Then center the remaining one, and resize the background rectangle to a bit less than a third of the width of the view.

After that you can duplicate the item twice, and move the two additional switcher items to the sides:

Now you can change the icons and strings, and delete the backgrounds on two of the items, and we have a complete mobile switcher:

Content

Adapting the content is easy in this case. Duplicate the lists from the desktop mockup, left-align them with the lists from the template, and delete the template’s original lists.

Then we just need to resize the text boxes by moving the handle on the right of the baseline, truncate the text to two lines, move the images to 12px from the right edge, and center the “Show More” label.

Repeat the same thing for the second list, and we have a complete mobile layout!

Page Size & Export

If we zoom out and look at the whole thing together, we can see that this looks pretty much done now:

However, the canvas size is too big for the content we have. Open Document Properties (Ctrl + Shift + D) and change the size to about 1780×1100px.

This is what it looks like with the new page size:

Now that the mockup is ready, let’s export a PNG for easy sharing. Open the Export dialog (Ctrl+Shift+E), choose “Page” at the very top, make sure the DPI is 96, and set the file path. Then press Export, and try opening the PNG in an image viewer to check if there are any issues you missed.

On the design team we usually then push the finished mockup to a git repository, but that’s out of scope for this tutorial.

Conclusion

Congratulations, you made it all the way to the end! I hope this was useful, and that you’ll go on to make many great mockups using what you learned :)

You can download the SVG for the mockup I created for this tutorial from GNOME Gitlab. It might come in handy if you have problems with a specific part of the tutorial and want to see how I did it. By the way: The inkscape-tutorial-resources repository contains snapshots of all templates and resources used in this tutorial, in case the original ones change in the future.

Obviously for a real mockup we’d do additional screens (e.g. the article page, various other menus, settings screens, and the like), but I think we’ve covered most of the basics with just this one screen. If you have questions feel free to comment or get in touch!

March 20, 2021

scikit-survival 0.15 Released

I am proud to announce the release of version 0.15.0 of scikit-survival, which brings support for scikit-learn 0.24 and Python 3.9. Moreover, if you fit a gradient boosting model with loss='coxph', you can now predict the survival and cumulative hazard function using the predict_cumulative_hazard_function and predict_survival_function methods.
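For example, fitting such a model and obtaining the estimated functions could look roughly like this (a sketch assuming X_train, y_train and X_test are already prepared):

from sksurv.ensemble import GradientBoostingSurvivalAnalysis

est = GradientBoostingSurvivalAnalysis(loss="coxph")
est.fit(X_train, y_train)

# per-sample step functions that can be evaluated at arbitrary time points
surv_funcs = est.predict_survival_function(X_test)
chf_funcs = est.predict_cumulative_hazard_function(X_test)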

The other enhancement is that cumulative_dynamic_auc now supports evaluating time-dependent predictions. For instance, you can now evaluate the predicted time-dependent risk of a RandomSurvivalForest rather than just evaluating the predicted total number of events per instance, which is what RandomSurvivalForest.predict returns.

All you have to do is create an array where the columns are the predictions at the time points you want to evaluate. The snippet below summarizes the idea:

import numpy as np

from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import cumulative_dynamic_auc

rsf = RandomSurvivalForest()
rsf.fit(X_train, y_train)

chf_funcs = rsf.predict_cumulative_hazard_function(X_test)
time_points = np.array([30, 60, …])
risk_scores = np.row_stack([
    chf(time_points) for chf in chf_funcs
])
aucs, mean_auc = cumulative_dynamic_auc(
    y_train, y_test, risk_scores, time_points
)

For a complete example, please have a look at the User Guide.

If you want to know about all changes in scikit-survival 0.15.0, please see the release notes.

As usual, pre-built conda packages are available for Linux, macOS, and Windows via

 conda install -c sebp scikit-survival

Alternatively, scikit-survival can be installed from source following these instructions.

Extensions Rebooted: Porting your existing extensions to GNOME 40

This will be the first blog post in a series to help get extensions quickly updated after each release. While communications have been quiet, we have not been idle! For the past few months, we have been working on building the structure for a robust extensions community.

GNOME 40 will be released soon, and it is important to know what that means for extension developers. Since there have been significant changes in GNOME Shell, it is important to understand where those changes are and how they might affect the various extensions that are out there.

The changes in GNOME Shell have been primarily around the overview, as you can imagine if you’ve been keeping up with the GNOME Shell blog posts, and around preferences: preferences were previously GTK3-based and now require GTK4.

To help with updating your extensions, community member Just Perfection has created a porting guide that you can use to learn how to modify your extension to work with the GNOME 40 shell release. With the advent of the porting guide, we hope that porting will be a lot smoother than it has been in the past.

It’s also important to highlight another change that has taken place: GNOME Shell will once again perform strict version checking. Because there are numerous changes, and some distros will still be using GNOME 3.38, we will be enforcing version checking from here on for GNOME 40 compatible extensions. If you do not set your extension’s supported version to the correct GNOME Shell version, it will fail to load.

Testing Your Extension

To test your extension for GNOME 40, please download the GNOME OS image from [link] and then use GNOME Boxes from https://flathub.org/. Do not use your distro’s version of Boxes, as that will not work with the GNOME OS image.

GNOME Boxes needs to be compiled with UEFI support, but not all distros have compiled that support in, so it’s more consistent to use the Flatpak version as the canonical source.

Download the GNOME 40 release candidate from here – https://os.gnome.org/download/40.rc/gnome_os_installer_40.rc.iso  [~2.2GB].

Once you have downloaded the image, you can import it in GNOME Boxes. Go through the GNOME initial setup to create your account and then you’ll be fully logged in.

GNOME software will automatically notify you if there are any updates. You can manually keep up with changes with:

$ sudo ostree admin upgrade -r

In order to use the image for testing effectively, you will need to switch to the development tree. To do that:

$ sudo ostree admin switch gnome-os:gnome-os/40/x86_64-devel

This might require that you resize your partition so that there is enough space to do that. If so, resize your partition to have an extra 2G of storage to accommodate the developer toolchain.

You can then use git and other tools to copy your extension into the image. Please note that this image is not based on a distro and is built completely from source using buildstream and freedesktop-sdk and is managed through ostree. If your extensions have any external dependencies, you will need to bring them in manually.

Once you’ve got the extension working to your expectations, please make sure you update the metadata file to include the GNOME 40 version so that it will load properly, before packaging it and uploading it to https://extensions.gnome.org/.
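For example, a metadata.json declaring compatibility with both GNOME 3.38 and GNOME 40 might look like this (uuid, name, description and version here are placeholders):

{
  "uuid": "my-extension@example.org",
  "name": "My Extension",
  "description": "An example extension",
  "shell-version": ["3.38", "40"],
  "version": 7
}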

In the future, we will try to provide a better experience for testing – but for now, we will use this method. If you have questions or run into problems – you are welcome to use our community communication channels to ask them.

Porting your extension

For your convenience, we provide a porting guide, located at https://gjs.guide/ – inside the guide there is a porting section that identifies the various function calls that have changed and provides alternatives you can use to port your extension.

During the porting process, you are welcome to join our community communication channels and ask us for help.

Reaching out to the Community

There are several ways you can reach out to us. You’re welcome to ask your questions on discourse.gnome.org and/or join us on Matrix at https://matrix.to/#/#extensions:gnome.org. If you prefer IRC, you can use our IRC server irc.gnome.org and join #extensions.

To our users

We hope this is a beginning of a much better experience for those who use extensions on the GNOME platform. To aid us in this, we need your help. Since this is a relatively new initiative – it would be wonderful if you would aid us in outreach. If you have a favorite extension please politely call attention to this blog post. The greater the outreach the better we can hope to have most of your favorite extensions available for you when GNOME 40 arrives at your distribution.

March 17, 2021

More documentation changes

It’s been nearly a month since I’ve talked about gi-docgen, my little tool to generate API references from introspection data. In between my blog post and now, a few things have changed:

  • the generated API reference has had a few improvements, most notably the use of summaries in all index pages
  • all inheritable types now show the properties, signals, and methods inherited from their ancestors and from the implemented interfaces; this should hopefully make the reference much more useful for newcomers to GTK
  • we allow cross-linking between dependent namespaces; this is done using an optional URL map, with links re-written on page load. Websites hosting the API reference would need only to provide an urlmap.js file to rewrite those links, instead of doing things like parsing the HTML and changing the href attribute of every link cough library-web cough
  • we parse custom GIR attributes to provide better cross-linking between methods, properties, and signals.
  • we generate an index file with all the possible end-points, and a dictionary of terms that can be used for searching; the terms are stemmed using the Porter stemming algorithm
  • the default template will let you search using the generated index; the search supports scoping, so using method:show widget will look for all the symbols in which the term show appears in method descriptions, alongside the widget term
  • we also generate a DevHelp file, so theoretically DevHelp can load up the API references built by gi-docgen; there is still work to be done, there, but thanks to the help of Jan Tojnar, it’s not entirely hopeless

Thanks to all these changes, both Pango and GTK have switched from gtk-doc to gi-docgen for their API references in their respective main development branches.

Now, here’s the part where it gets complicated.

Using gi-docgen

Quick reminder: the first and foremost use case for gi-docgen is GTK (and some of its dependencies). If it works for you, I’m happy, but I will not go out of my way to make your use case work—especially if it comes at the expense of Job #1, i.e. generating the API reference for GTK.

Since gi-docgen is currently a slightly moving target, I strongly recommend using it as a Meson subproject. I also strongly recommend vendoring it inside your release tarballs, using:

meson dist --include-subprojects

when generating the distribution archive. Do not try and depend on an installed copy of gi-docgen.

Additionally, it’s possible to include the gi-docgen API reference into the Meson tarball by using a dist script. The API reference will be re-generated when building, but it can be extracted from the tarball, like in the good old gtk-doc-on-Autotools days.

Publishing your API reference

The tool we use to generate developer.gnome.org, library-web, is unmaintained and, quite frankly, fairly broken. It is a Python2 script that got increasingly more complicated without actually getting more reliable; it got progressively more broken once we started having more than two GTK modules, and then it got severely broken once we started using Meson and CMake, instead of Autotools. These days, you’ll be lucky to get your API reference uploaded to developer.gnome.org (as a separate archive), and you can definitely forget about cross-linking, because the tool will most likely get things wrong in its quest to restyle any HTML it finds, and then fix the references to what it thinks is the correct place:

The support for Doxygen (which is used by the C++ bindings) is minimal, and it ended up breaking a few times. Switching away from gtk-doc to gi-docgen is basically the death knell for the whole thing:

  • first of all, it cannot match the documentation module with the configuration for it, because gi-docgen does not have the concept of a “documentation module”; at most, it has a project configuration file.
  • additionally, we really don’t want library-web messing about with the generated HTML, especially if the end result breaks stuff.

So, the current solution is to try and make library-web detect if we’re using gi-docgen, by looking for toml and toml.in files in the release archive, and then upload the various files as they are. It’s a bad and fragile stopgap solution, but it’s the best we can do without breaking everything in even more terrible ways.

For GNOME 41 my plan is to sidestep the whole thing, and send library-web to a farm upstate. We’re going to use gnome-build-meta to build the API references of the projects we have in our SDK, and then publish them according to the SDK version.

My recommendation for library authors, in any case, is to build the API reference for the development branch of their project as part of their CI, and then publish it to the GitLab pages space. For instance:
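A minimal pages job might look roughly like this (a hypothetical sketch; the Meson option and the documentation output path depend on the project):

pages:
  stage: deploy
  script:
    - meson setup _build -Ddocs=true             # hypothetical option name
    - meson compile -C _build
    - mkdir -p public
    - cp -r _build/doc/mylib-1.0/* public/       # hypothetical output path
  artifacts:
    paths:
      - public
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'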

This way, you’ll always have access to the latest documentation.

Sadly, we can’t have per-branch references, because GitLab pages are nuked every time a branch gets built; for that, we’d have to upload the artifacts somewhere else, like an S3 bucket.


Things are going to get better in the near future, after 10 years of stagnation; sadly, this means we’re living in Interesting Times, so I ask of you to please be patient while we transition towards a new and improved way to document our platform.

March 16, 2021

GUADEC 2021 Keynotes and Updates

GUADEC is taking place this July!

A GNOME banner hung on a tower, with several people behind it.
“GUADEC 2010” by Francisco Rojas is licensed under CC BY 2.0

The GNOME Foundation is excited to announce that GUADEC 2021 will take place July 21–25. This year’s conference will be held online and last five days. The first two days of the conference, July 21–22, will be dedicated to presentations. July 23–24 will be Birds of a Feather sessions and workshops, and the last day will be for social activities.

Call for Papers 

The GUADEC 2021 theme is “Future-Proofing FOSS”. GNOME is always anticipating the future and looking to innovate and build better for whatever the future may hold, whether it’s with events, technologies, or ways to empower users and integrate free and open source software into our lives. We are interested in proposals that align with this theme as well as ones relating to free and open source software, technology, and community building. In the past, we’ve had talks on:

  • Application development
  • Privacy and security
  • Community and team building
  • Design of user and developer experience
  • Use of GNOME technologies outside the desktop
  • Newcomers initiatives
  • Project planning and governance

Proposals are due March 22, so submit one today!

Keynote Speakers

A photo of a person with long hair and a pink shirt.
Photo courtesy of Mario Behling. License: CC BY

Our amazing keynote speakers for this year are Hong Phuc Dang and Shauna Gordon-McKeon. 

Hong Phuc Dang chairs the annual FOSSASIA Summit and organizes Open Tech Summits in countries from Vietnam, India, China, and Sri Lanka, to Germany. She is a board member of the Open Source Business Alliance Europe and also serves as Vice President of the Open Source Initiative.

 

A person with long hair and a purple/blue shirt.
Photo courtesy of Shauna Gordon-McKeon. License: CC-BY-SA

Shauna Gordon-McKeon is a writer, programmer and community organizer who focuses on the intersection of technology and governance. She co-organizes the Washington DC chapter of the Tech Workers Coalition. She serves on the board of Tech Inquiry and is an advisor to Metagov.

Fixing the Sierra Wireless EM7345-LTE modem not working on Linux

I spent quite a bit of time getting a Sierra Wireless EM7345-LTE modem to work under Linux. So here are some quick instructions to help other people who may hit the same problem.

These modems are somewhat notorious for shipping with broken firmware. They work fine after a firmware upgrade, but under Windows they will only upgrade to "carrier approved" firmware versions, which requires being connected to the mobile network first so that the tool can identify the carrier. And with some carriers, connecting to the network does not work due to the broken firmware (ugh). There are a ton of forum threads on how to work around this under Windows, but they all require that you are at least able to register with the mobile network.

Luckily someone has figured out how to update these under Linux and posted instructions for this. The procedure is actually much more straightforward under Linux. The hardest part is extracting the firmware from the Windows driver download.

One problem is that the necessary Intel FlashTool download is no longer available for download from Intel. I needed this tool a while ago for something else, and back then I used the PhoneFlashTool_5.8.4.0.rpm file from https://androiddatahost.com/nm466 . The rpm-file in the zip there has a sha256sum of: c5b964ed4fae470d1234c9bf24e0beb15a088a73c7e8e6b9369a68697020c17a

I now see that Intel seems to be offering this for download again; you can find it here: https://github.com/projectceladon/tools and projectceladon seems to be an official Intel project. Note this is not the version which I used; I used the PhoneFlashTool_5.8.4.0.rpm version.

Once you have the Intel FlashTool installed, just follow the posted instructions, and after that your modem should start working under Linux.

March 15, 2021

Maps and GNOME 40

As we have just pushed out the release candidate before the GNOME 40 release next week, I thought it would be appropriate to give a little summary of the news for 40.

The first big visual update this cycle has been the new updated “place bubbles” redesign by James Westman:

 


This has already been covered in earlier postings, but still earns mentioning when summing up.

I also made some tuning to the way we present the information for places. The title will now show the place’s name in the user’s language, when that is available in the OpenStreetMap data. Also, it will show the place’s native name, as locally written, when they differ:


The other big update has been further improvements to the interface’s adaptiveness for mobile (narrower) displays, also by James. When selecting a place marker from a search result in mobile mode, instead of showing the popover style we use for desktop, a “place bar” at the bottom now appears:

And when clicking/tapping on this bar, a dialog (which will fill the entire space on smaller displays) appears:

Though not yet used by Maps, there’s also been a lot of progress made on the new map rendering library libshumate, based on GTK 4, which is set to replace libchamplain and will let us move away from depending on Clutter, thus also allowing us to migrate away from GTK 3.

This work has been done by James Westman and Georges Basile Stavracas Neto. The demos accompanying the library have been consolidated into a single demo application, which also gained a Flatpak manifest; this means the demo can be run directly from the GNOME Builder IDE, allowing you to “run” the library directly when making changes. Georges also implemented so-called “kinetic scrolling” (meaning the view keeps some inertia when you give it a “push”), which greatly improves the feeling of smoothness as it doesn’t stop abruptly, moving one step further towards providing a look-and-feel matching what we currently have with libchamplain.

 


James also implemented a test tile source, which can be showcased using the drop-down menu in the demo app. This test source shows the background tile indexes (the TMS tile coordinates, https://wiki.openstreetmap.org/wiki/TMS ) which can be handy for debugging, as well as serves as an example of writing a custom tile source implementation.

 

And as always, stay tuned for coming updates on the progress towards GNOME 41 and other stuff coming!

Cheers!

 

 

Exploring my doorbell

I've talked about my doorbell before, but started looking at it again this week because sometimes it simply doesn't send notifications to my Home Assistant setup - the push notifications appear on my phone, but the doorbell simply doesn't trigger the HTTP callback it's meant to[1]. This is obviously suboptimal, but it's also tricky to debug a device when you have no access to it.

Normally I'd just head straight in with a screwdriver, but the doorbell is shared with the other units in this building and it seemed a little anti-social to interfere with a shared resource. So I bought some broken units from ebay and pulled one of them apart. There's several boards inside, but one of them had a conveniently empty connector at the top with "TX", "RX" and "GND" labelled. Sticking a USB-serial converter on this gave me output from U-Boot, and then kernel output. Confirmation that my doorbell runs Linux, but unfortunately it didn't give me a shell prompt. My next approach would often be to just dump the flash and look for vulnerabilities that way, but this device uses TSOP-48 packaged NAND flash rather than the more convenient SPI NOR flash that I already have adapters to access. Dumping this sort of NAND isn't terribly hard, but the easiest way to do it involves desoldering it from the board and plugging it into something like a Flashcat USB adapter, and my soldering's not good enough to put it back on the board afterwards. So I wanted another approach.

U-Boot gave a short countdown to hit a key before continuing with boot, and for once hitting a key actually did something. Unfortunately it then prompted for a password, and giving the wrong one resulted in boot continuing[2]. In the past I've had good luck forcing U-Boot to drop to a prompt by simply connecting one of the data lines on SPI flash to ground while it's trying to read the kernel - the failed read causes U-Boot to error out. It turns out the same works fine on raw NAND, so I just edited the kernel boot arguments to append "init=/bin/sh" and soon I had a shell.

From here on, things were made easier by virtue of the device using the YAFFS filesystem. Unlike many flash filesystems, it's read/write, so I could make changes that would persist through to the running system. There was a convenient copy of telnetd included, but it segfaulted on startup, which reduced its usefulness. Fortunately there was also a copy of Netcat[3]. If you make a fifo somewhere on the filesystem, you can cat the fifo to a shell, pipe the shell to a netcat listener, and then pipe netcat's output back to the fifo. The shell's output all gets passed to whatever connects to netcat, and whatever's sent to netcat gets passed through the fifo back to the shell. This is, obviously, horribly insecure, but it was enough to get a root shell over the network on the running device.
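The incantation looks roughly like this (a sketch; the exact paths and the options supported by the device's netcat may differ):

# on the doorbell
mkfifo /tmp/pipe
cat /tmp/pipe | /bin/sh -i 2>&1 | nc -l -p 1234 > /tmp/pipe

# on another machine on the same network
nc <doorbell-ip> 1234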

The doorbell runs various bits of software, one of which is Lighttpd to provide a local API and access to the device. Another component ("nxp-client") connects to the vendor's cloud infrastructure and passes cloud commands back to the local webserver. This is where I found something strange. Lighttpd was refusing to start because its modules wanted library symbols that simply weren't present on the device. My best guess is that a firmware update went wrong and left the device in a partially upgraded state - and without a working local webserver, there was no way to perform any further updates. This may explain why this doorbell was sitting on ebay.

Anyway. Now that I had a shell, I could simply dump the flash by copying it directly off the /dev/mtdblock devices - since I had netcat, I could just pipe stuff through that back to my actual computer. With access to the filesystem dump, I could extract it locally and start digging into it more deeply. One incredibly useful tool for this is qemu-user. qemu is a general purpose hardware emulation platform, usually used to emulate entire systems. But in qemu-user mode, it instead only emulates the CPU. When a piece of code tries to make a system call to access the kernel, qemu-user translates that to the appropriate calling convention for the host kernel and makes that call instead. Combined with binfmt_misc, you can configure a Linux system to run Linux binaries from other architectures. One of the best things about this is that, because they're still using the host convention for making syscalls, you can run the host strace on them and see what they're doing.
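
In practice that looks something like this, assuming an ARM userspace and an illustrative path for nxp-client inside the extracted dump:

$ cd extracted-rootfs
$ qemu-arm -L . ./usr/bin/nxp-client
$ strace -f qemu-arm -L . ./usr/bin/nxp-client

The -L option points qemu-user at the dump so the dynamic linker and libraries are resolved from there rather than from the host; with binfmt_misc registered for the target architecture the explicit qemu-arm can be dropped entirely.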

What I found was that nxp-client was calling back to the cloud platform, setting up an encrypted communication channel (using ChaCha20 and a bunch of key setup stuff I couldn't be bothered picking apart) and then waiting for commands from the cloud. It would then proxy those through to the local webserver. Since I couldn't run the local lighttpd, I just wrote a trivial Python app using http.server and waited to see what requests I got. The first was a GET to a CGI script called editcgi.cgi, along with a path name. I mocked up a response to that GET using the corresponding file from the actual filesystem. The cloud then proceeded to POST to editcgi.cgi, with the same path name and with new file contents. editcgi.cgi is apparently able to read and write files on the filesystem.
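
The stand-in webserver really doesn't need to be much more than a request logger - something along these lines, with the port being a guess and the extra step of answering the GET with real file contents from the flash dump left out for brevity:

import http.server

class Logger(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Log what the cloud client asks for
        print("GET", self.path, dict(self.headers))
        self.send_response(200)
        self.end_headers()

    def do_POST(self):
        # Log what the cloud client tries to write
        length = int(self.headers.get("Content-Length", 0))
        print("POST", self.path, self.rfile.read(length)[:500])
        self.send_response(200)
        self.end_headers()

http.server.HTTPServer(("0.0.0.0", 8080), Logger).serve_forever()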

But this is on the interface that's exposed to the cloud client, so it didn't appear immediately useful - and, indeed, trying to hit the same CGI binary over the local network gave me a 401 unauthorized error. There's a local API spec for these doorbells, but its endpoints all refer to scripts in the bha-api namespace, and this script was in the plain cgi-bin namespace. But then I noticed that the bha-api namespace didn't actually exist in the filesystem - instead, lighttpd's mod_alias was configured to rewrite requests to bha-api through to files in cgi-bin. And by using the documented API to get a session token, I could call editcgi.cgi to read and write arbitrary files on the doorbell. Which means I can drop an extra script in /etc/rc.d/rc3.d and get a shell on my doorbell.

This all requires the ability to have local authentication credentials, so it's not a big security deal other than it allowing you to retain access to a monitoring device even after you've moved out and had your credentials revoked. I'm sure it's all fine.

[1] I can ping the doorbell from the Home Assistant machine, so it's not that the network is flaky
[2] The password appears to be hy9$gnhw0z6@ if anyone else ends up in this situation
[3] https://twitter.com/mjg59/status/654578208545751040

What to look for in Fedora Workstation 34

As we are heading towards April and the release of Fedora Workstation 34, I wanted to post an update on what we are working on for this release and what we are looking at going forward. 2020 was a year where we focused a lot on polishing what we had and getting things past the finish line, and Fedora Workstation 34 is going to be the culmination of that effort in many ways.

Wayland
The big ticket item we have wanted to close off on was Wayland, because while Wayland has been production ready for most of us for a while, there were still some cases it didn't cover as well as X.org. The biggest of these was of course the lack of accelerated XWayland support with the binary NVidia driver. Fixing that issue wasn't something we could do ourselves, but we have been working diligently with our friends at NVidia to help ensure everything was in place for them to enable that support in their driver, so I have been very happy to see the public reports confirming that NVidia will have accelerated 3D in the summer release of their driver. The other Wayland area we have put a lot of effort into has been the work undertaken by Jonas Ådahl to get headless display support working with Wayland. This is a critical feature for people who for instance want a desktop instance on their servers or in the cloud, accessed through things like VNC or RDP for sysadmin related tasks. Jonas spent a lot of time laying the groundwork for this over the course of last year, and we are now in the final stages of merging the patches to enable this feature in GNOME and Wayland in preparation for Fedora Workstation 34. Once those two items are out we consider our Wayland ramp-up/rollout to be complete, so while there will of course continue to be bugfixes and new features, that will be part of the natural evolution of Wayland and not part of a 'close gaps with X11' effort like the current one.

PipeWire
Another big ticket item we are hoping to ship fully in Fedora Workstation 34 is PipeWire. PipeWire, as most of you know, is the engine we use to handle video streams in a secure and shareable way in Fedora Workstation, so when you interact with your web camera(s), do screen casting or take screenshots it is all routed and handled by PipeWire. But in Fedora Workstation 34 we are aiming to also switch to using PipeWire for audio, replacing both PulseAudio and JACK. For those of you who have read any previous blog post from me, you will know what an important step forward this will be, as we would finally be making the pro-audio community first class citizens in Fedora Workstation and Linux in general. When we decided to try to switch to PipeWire in Fedora Workstation 34 I have to admit I was a little skeptical about whether we would be able to get everything ready in time, as there are so many things that need to be tested and fixed when you switch out such a critical component. Up to that point we had a lot of people interested in PipeWire, but only limited community involvement; I feel the announcement of bringing PipeWire into Fedora Workstation 34 galvanized the community around the project, and we now have a very active community around PipeWire in #pipewire on the freenode IRC network. Not only is Wim Taymans getting a ton of help with testing and verification, but we also see a steady stream of patches coming in, with for instance improved Bluetooth audio support being contributed. In fact I believe that PipeWire will be able to usher in better Bluetooth audio support in Fedora than we ever had before, with great support for high quality Bluetooth audio codecs like LDAC.

I am especially happy to see so many of the key members of the pro-audio Linux community taking part in this effort, and of course also happy to see many pro-audio folks testing Fedora Workstation for the first time because of it. The community is working closely with Wim to test and verify as many important pro-audio applications as possible and to update Fedora packaging as needed, to ensure they can transition from JACK to PipeWire without dependency conflicts or issues. One last item to mention here is that you might have seen that Red Hat is getting into the automotive space. I can't share a lot of details about that effort, but one thing I can say is that PipeWire will be a core part of it, and thus we will soon be looking to hire more engineers to work on PipeWire; if that is of interest to you, be sure to track my twitter feed or blog as I will announce our job openings there as they become available. For the community at large this should be great too, as it means we can get a lot of synergy between automotive and the desktop around audio and video handling.

It is still somewhat of an open question whether we end up actually switching to PipeWire in Fedora Workstation 34, but things are looking good at this point in time, and worst case it will be in place for Fedora Workstation 35.

Toolbox
Toolbox is another effort that is in a great spot now. Toolbox is our tool for making working with pet containers a breeze for developers. The initial version was prototyped quickly as a shell script, but we spent time last year rewriting it in Go in order to make it possible to keep expanding the project and implement all the things we envision for it. With that done, feature work is now in focus again, and Ondřej Michal has done some great work making it possible to set up RHEL containers in Toolbox. This means that you can run Fedora on your laptop and get the latest and greatest features that way, but do your development in a RHEL pet container, so you get an environment identical to what your applications will see once they are deployed into the cloud or onto company hardware. This gives you the best of both worlds in my opinion: the fast moving Fedora Workstation that gets the most out of your laptop and desktop hardware, but still easy access to the RHEL platform for development and testing. You can even test this today on Fedora Workstation 33, just open a terminal and type ‘toolbox create --distro rhel --release 8.3’. The resulting toolbox will then be based on RHEL and not Fedora and thus perfect for doing RHEL targeted development. You will need to use the subscription-manager tool to register it (be sure to register on developer.redhat.com for your free RHEL developer subscription). Over time we hope to integrate this into GNOME Online Accounts like we do for the RHEL virtual machines you can set up with GNOME Boxes, so that once you set up your RHEL account you can create RHEL virtual machines and RHEL containers easily on Fedora Workstation.
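
For reference, the full flow looks roughly like this - the name Toolbox gives the container varies between versions (‘toolbox list’ will show it), and the registration step follows the usual subscription-manager documentation:

$ toolbox create --distro rhel --release 8.3
$ toolbox list
$ toolbox enter <name-of-the-rhel-container>
$ sudo subscription-manager register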

Toolbox with RHEL

Toolbox pet container with RHEL UBI

Flatpak
Owen Taylor has been doing some incredible work behind the scenes for the last year to ensure the infrastructure we have in RHEL and Fedora provides a great integrated Flatpak experience. As we move forward we expect Flatpaks to be the primary packaging format that Fedora users consume their applications in, but to make that a reality we needed to ensure the experience is good both for Fedora maintainers and for end users. So one of the big ticket items Owen has been working on is getting incremental updates working in Fedora. If you have used applications from Flathub you probably noticed that their updates are small and nimble despite being packaged as Flatpak containers, while the Fedora Flatpaks cause big updates each time. The reason for this is that the Fedora Flatpaks are shipped as OCI (Open Container Initiative) images, while the Flatpaks on Flathub are shipped as OSTree repositories (if you don't know OSTree, think of it as git for binaries). Shipping the Flatpaks as OCI images has the advantage of being the same format we at Red Hat use for our kubernetes/docker/OpenShift containers, and thus it allows us to reuse a lot of the work Red Hat has put into providing and keeping such containers up to date and secure, but the downside until now has been that these containers were shipped in a way which caused each update, no matter how small the change, to be a full re-download of the whole image. Owen Taylor and Alex Larsson worked together to resolve this and came up with a method to allow incremental updates of such containers and thus bring the update sizes in line with what you see on Flathub. This should be deployed in time for Fedora Workstation 34, and we also hope to eventually deploy it for kubernetes/docker containers too. Finally, to make even more applications available, we are doing work to let people get access to Flathub.org out of the box in Fedora when you enable 3rd party repositories in initial setup, so that your out of the box application selection will be even bigger.

Flathub frontpage

Flathub webpage

GNOME 40
Another major change in Fedora Workstation 34 is GNOME 40, which contains a revamp of the GNOME 3 user interface. This was a collaborative effort between a lot of GNOME stakeholders, with Allan Day representing Red Hat. This was also an effort by the GNOME design community to up their game, and as part of the development process the GNOME Foundation paid a professional company to do user testing on the proposed changes and some of the alternatives. This means the changes were verified to actually be experienced as an improvement by the experienced GNOME users participating, and to feel intuitive for new users of GNOME. One advantage we have in Fedora is that we don't do major tweaking of the GNOME user interface, which means that once Fedora Workstation 34 ships you are set to enjoy the new GNOME experience from day one. For long time GNOME users I hope and expect that the updates will be a welcome refresh, and at the same time that the changes provide an easier onramp for new GNOME and Fedora Workstation users. Some of the early versions did leave some long term fans of how multi-monitor support worked in GNOME 3 a bit concerned, but be assured that multi-monitor is a critical usecase in our opinion and something we will keep looking at and improving. In fact Allan Day wrote a great blog post about GNOME 40 multi-monitor support recently to explain what we are doing and how we see it evolving going forward.

Input
Another area where we keep putting in a lot of effort is input. Thanks to Peter Hutterer and Benjamin Tissoires we keep making sure Fedora Workstation and the world of Linux has access to the newest and best in input. The latest effort they are working on is enabling haptic touchpads. Haptic touchpads should be familiar to people who have tried Apple hardware, and they are expected to appear in force on laptops in general this year, so we have been putting in the effort to ensure that we can support this new type of device as it comes out. So if you see a laptop you want with a haptic touchpad, Fedora Workstation should be ready for it, but of course until these devices are commonplace and we have had a chance to test and verify I can make no guarantees.

Another major effort we undertook in relation to input was moving GNOME input handling to a separate thread. Carlos Garnacho worked on the patch to make that happen. This should provide a smoother experience in Fedora Workstation 34, as it means the mouse should not stall when the main thread running the Wayland compositor is busy. This was done as part of the overall performance work we have been doing continuously over the last years to address performance issues and give Fedora and GNOME the best possible performance and performance-related behaviour.

Lenovo Laptops
So one of the major announcements of last year was Lenovo laptops with Fedora Linux pre-installed. There are currently two models available with Linux, the X1 Carbon and the Lenovo P1. Between them they cover the two most common requests we see: an ultralight laptop with the X1 and a more powerful 'portable workstation' model with the P1. We are working on a couple more models and also on getting them sold globally, which was a key goal of the effort. Be aware that both models are on sale as I am writing this (hopefully still true when you read this), so it is a good time to grab a great laptop with a great OS.

Lenovo P1

Lenovo P1

Vision
So one thing I wanted to do is tie the work we do in Fedora Workstation together by articulating what we are trying to achieve. Fedora has for the longest time been the place where Linux as an operating system is evolving and being developed. There are very few major innovations that have come to Linux that haven't been developed and incubated in Fedora by Fedora contributors, including of course Red Hat. This includes things like the Linux Vendor Firmware Service, Wayland, Flatpak, Silverblue, PipeWire, systemd, flicker free boot, HiDPI support, gaming mouse support and so much more. We have always done this work in close cooperation with the upstreams we collaborate with, which is why the patch delta in any given Fedora release is small. We work hard to get improvements into the upstream kernel and into GNOME and so on right away, to avoid needing to ship downstream patches in Fedora. That of course saves us from having to maintain temporary forks, but more importantly it is the right way to collaborate with an open source community.

So looking back to when we launched Fedora Workstation, we realized that being at the front like that had come at the cost of not being stable and user friendly. So the big question we asked ourselves when launching Fedora Workstation, and the question that still drives a lot of our decision making and focus, is: how can we preserve being the heart and center of Linux OS development, but at the same time provide end users with a stable and well functioning system? To achieve that we have made a lot of changes over the last years, ranging from policy changes in terms of how and when we bring changes into Fedora to, maybe even more importantly, figuring out ways to reduce the challenges caused by a rapidly evolving OS: the introduction of Flatpaks to allow applications to be developed and released without strong ties to the host system libraries, the concepts we are maturing in Silverblue around image based operating systems, and how we are looking at pet container development with Toolbox. All of these things combined remove a lot of the fragility we have seen in Linux up to this point and instead let us treat the rapidly evolving Linux landscape as a strength.

So where we are today is that I think we are very close to realizing the vision of letting Fedora be the place where exciting new stuff happens, while at the same time providing the robustness and polish that end users need to be able to use it as their daily driver. It has been my daily driver for many years now, and judging by the rapid growth of users we have seen in Fedora over the last 5 years I think that is true for a lot of other people too. The goal is to allow the wider community around Linux, especially the developers, sysadmins and creators relying on Linux to do their work, to come to Fedora and interact and collaborate with the developers working on the OS itself to the benefit of all. You are all probably better judges than me of whether we are succeeding with that, but I do take the increased chatter about and adoption of Fedora by a lot of people doing Linux related podcasts, news sites and so on as a sign that we are succeeding. And PipeWire for me is a perfect example of how this can look: we want to bring the pro-audio creators to Fedora Workstation and let them interact and work closely with Wim Taymans and the PipeWire development team to make the experience even better for themselves and their fellow creators, and at the same time give them a great stable platform to create their music on.

packpath: A command line utility to upload Signal stickers from a simple config file

I have been using Signal a lot and, besides the privacy features, I have come to really enjoy custom sticker packs. Naturally, this led me to upload and maintain a lot of them. To keep things under control I wrote packpath, a small command line utility to easily upload and update Signal sticker packs from a simple config file.

Consider this mosaic a far more interesting screenshot than a terminal running packpath.

Although the Signal stickers API is only kinda public, some developers have written libraries to interact with it. Concretely, signalstickers.com maintainer Romain Ricard has written signalstickers-client, a Python library to interact with the sticker API.

Thanks to signalstickers-client doing all of the heavy lifting, I was able to write packpath, a simple command line utility to create and publish sticker packs based on a YAML configuration file. It’s already available as packpath on PyPI:

$ pip3 install packpath

To use packpath, you need to create a directory with your sticker images and a config.yaml file. The YAML format is available in packpath --help, but it’s actually very simple:

pack:
  title: Ketnipz
  author: Harry Hambley
  cover: ketnipz_001.webp

stickers:
  ketnipz_001.webp: 💓️
  ketnipz_002.webp: 💃️
  ketnipz_003.webp: 💕️
  ketnipz_004.webp: 😐️
  ketnipz_005.webp: 🙃️

You’ll need your Signal “user and password”, which are available on the desktop app through the web developer tools (window.reduxStore.getState().items.uuid_id and window.reduxStore.getState().items.password). See the signalstickers-client README for details.
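
Under the hood this all goes through signalstickers-client; a minimal sketch of the kind of code involved, going by that library’s README (so treat the exact model and method names as approximate), looks like this:

import asyncio

from signalstickers_client import StickersClient
from signalstickers_client.models import LocalStickerPack, Sticker

def build_pack():
    # Hardcoded equivalent of the config.yaml above (cover handling omitted)
    pack = LocalStickerPack()
    pack.title = "Ketnipz"
    pack.author = "Harry Hambley"
    for i, (filename, emoji) in enumerate([("ketnipz_001.webp", "💓️"),
                                           ("ketnipz_002.webp", "💃️")]):
        sticker = Sticker()
        sticker.id = i
        sticker.emoji = emoji
        with open(filename, "rb") as f:
            sticker.image_data = f.read()
        pack._addsticker(sticker)  # helper used in the upstream README example
    return pack

async def main():
    async with StickersClient("UUID", "PASS") as client:
        pack_id, pack_key = await client.upload_pack(build_pack())
    print(f"https://signal.art/addstickers/#pack_id={pack_id}&pack_key={pack_key}")

asyncio.run(main())

packpath essentially wraps this in config parsing, progress output and a bit of validation.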

You can then call packpath on the directory:

$ packpath --user UUID --password PASS path/to/sticker/pack

You will get a progress report, and the “secret” URL to share your sticker pack:

$ packpath --user UUID --password PASS path/to/sticker/pack
[packpath] Configuring sticker pack Ketnipz by Harry Hambley
[packpath] Adding: 💓️ for ketnipz_001.webp
[packpath] Adding: 💃️ for ketnipz_002.webp
[packpath] Adding: 💕️ for ketnipz_003.webp
[packpath] Adding: 😐️ for ketnipz_004.webp
[packpath] Adding: 🙃️ for ketnipz_005.webp
Pack uploaded!

Pack uploaded. You can install it by visiting:
https://signal.art/addstickers/#pack_id=NNNNN&pack_key=MMMMM

You can also preview your pack at signalstickers.com:
https://signalstickers.com/pack/NNNNN?key=MMMMM

Note that visiting the above URL will (technically) make the pack ID and key visible in signalstickers.com's server logs.
This will NOT add your pack to signalstickers.com; see https://signalstickers.com/contribute for that.

The code is licensed as AGPL-3.0-only, and you can check it out in the packpath repo on GitHub.

Many thanks to Romain for writing signalstickers-client, which actually made this tool possible.