I have created a new low-volume lvfs-announce mailing list for the Linux Vendor Firmware Service, which will only be used to make announcements about new features and planned downtime. If you are interested in what’s happening with the LVFS you can subscribe here. If you need to contact me about anything LVFS-related, please continue to email me (not the mailing list) as normal.
March 23, 2018
I have started porting the code in librsvg that parses SVG's CSS properties from C to Rust. Many properties have symbolic values:
stroke-linejoin: miter | round | bevel | inherit
stroke-linecap: butt | round | square | inherit
fill-rule: nonzero | evenodd | inherit
StrokeLinejoin is the first property that I ported. First I had to
write a bit of machinery to allow CSS properties to be kept
in Rust-space instead of the main C structure that holds them
(upcoming blog post about that). But for now, I just want to show how
this boiled down to a macro after refactoring.
First cut at the code
The stroke-linejoin property can have the values miter, round,
bevel, or inherit. Here is an enum definition for those values,
and the conventional machinery which librsvg uses to parse property values:
#[derive(Debug, Copy, Clone)]
pub enum StrokeLinejoin {
Miter,
Round,
Bevel,
Inherit,
}
impl Parse for StrokeLinejoin {
type Data = ();
type Err = AttributeError;
fn parse(s: &str, _: Self::Data) -> Result<StrokeLinejoin, AttributeError> {
match s.trim() {
"miter" => Ok(StrokeLinejoin::Miter),
"round" => Ok(StrokeLinejoin::Round),
"bevel" => Ok(StrokeLinejoin::Bevel),
"inherit" => Ok(StrokeLinejoin::Inherit),
_ => Err(AttributeError::from(ParseError::new("invalid value"))),
}
}
}
We match the allowed string values and map them to enum values. No
big deal, right?
Properties also have a default value. For example, the SVG spec says
that if a shape doesn't have a stroke-linejoin property specified,
it will use miter by default. Let's implement that:
impl Default for StrokeLinejoin {
fn default() -> StrokeLinejoin {
StrokeLinejoin::Miter
}
}
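To see how these pieces fit together, here is a minimal usage sketch of my own (the Parse trait and AttributeError are librsvg's internal types shown above; this snippet is illustrative, not librsvg code):
// Parse a property value; fall back to the spec default on bad input.
// unwrap_or_default() is available because we implemented Default above.
let linejoin = StrokeLinejoin::parse("round", ()).unwrap_or_default();

// An unspecified property just uses Default::default(),
// which yields StrokeLinejoin::Miter per the SVG spec.
let unspecified = StrokeLinejoin::default();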
So far, we have three things:
- An enum definition for the property's possible values.
- impl Parse so we can parse the property from a string.
- impl Default so the property knows its default value.
Where things got repetitive
The next property I ported was stroke-linecap, which can take the
following values:
#[derive(Debug, Copy, Clone)]
pub enum StrokeLinecap {
Butt,
Round,
Square,
Inherit,
}
This is similar in shape to the StrokeLinejoin enum above;
it's just different names.
The parsing has exactly the same shape, and just different values:
impl Parse for StrokeLinecap {
type Data = ();
type Err = AttributeError;
fn parse(s: &str, _: Self::Data) -> Result<StrokeLinecap, AttributeError> {
match s.trim() {
"butt" => Ok(StrokeLinecap::Butt),
"round" => Ok(StrokeLinecap::Round),
"square" => Ok(StrokeLinecap::Square),
"inherit" => Ok(StrokeLinecap::Inherit),
_ => Err(AttributeError::from(ParseError::new("invalid value"))),
}
}
}
Same thing with the default:
impl Default for StrokeLinecap {
fn default() -> StrokeLinecap {
StrokeLinecap::Butt
}
}
Yes, the SVG spec has
default: butt
somewhere in it, much to the delight of the 12-year-old in me.
Refactoring to a macro
Here I wanted to define a make_ident_property!() macro that would
get invoked like this:
make_ident_property!(
StrokeLinejoin,
default: Miter,
"miter" => Miter,
"round" => Round,
"bevel" => Bevel,
"inherit" => Inherit,
);
It's called make_ident_property because it makes a property
definition from simple string identifiers. It has the name of the
property (StrokeLinejoin), a default value, and a few repeating
elements, one for each possible value.
In Rust-speak, the macro's basic pattern is like this:
macro_rules! make_ident_property {
($name: ident,
default: $default: ident,
$($str_prop: expr => $variant: ident,)+
) => {
... macro body will go here ...
};
}
Let's dissect that pattern:
macro_rules! make_ident_property {
($name: ident,
// ^^^^^^^^^^^^ will match an identifier and put it in $name
default: $default: ident,
// ^^^^^^^^^^^^^^^ will match an identifier and put it in $default
// ^^^^^^^^ arbitrary text
$($str_prop: expr => $variant: ident,)+
//                ^^ arbitrary text
// ^^ start of repetition ^^ end of repetition, repeats one or more times
) => {
...
};
}
For example, saying "$foo: ident" in a macro's pattern means that the
compiler will expect an identifier, and bind it to $foo within the
macro's definition.
Similarly, an expr means that the compiler will
look for an expression — in this case, we want one of the string
values.
In a macro pattern, anything that is not a binding is just arbitrary
text which must appear in the macro's invocation. This is how we can
create a little syntax of our own within the macro: the "default:"
part, and the "=>" inside each string/symbol pair.
Finally, macro patterns allow repetition. Anything within $(...)
indicates repetition. Here, $(...)+ indicates that the
compiler must match one or more of the repeating elements.
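As a toy illustration of repetition, independent of librsvg (this sum! macro is my own example, not part of the codebase):
macro_rules! sum {
    // Match one or more comma-separated expressions into $x
    ($($x:expr),+) => { 0 $(+ $x)+ };
}

fn main() {
    assert_eq!(sum!(1, 2, 3), 6); // expands to 0 + 1 + 2 + 3
}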
I pasted the duplicated code, and replaced the actual symbol names with the macro's bindings:
macro_rules! make_ident_property {
($name: ident,
default: $default: ident,
$($str_prop: expr => $variant: ident,)+
) => {
#[derive(Debug, Copy, Clone)]
pub enum $name {
$($variant),+
// ^^^^^^^^^^^^^ this is how we invoke a repeated element
}
impl Default for $name {
fn default() -> $name {
$name::$default
// ^^^^^^^^^^^^^^^ construct an enum::variant
}
}
impl Parse for $name {
type Data = ();
type Err = AttributeError;
fn parse(s: &str, _: Self::Data) -> Result<$name, AttributeError> {
match s.trim() {
$($str_prop => Ok($name::$variant),)+
// ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expand repeated elements
_ => Err(AttributeError::from(ParseError::new("invalid value"))),
}
}
}
};
}
Getting rid of duplicated code
Now we have a macro that we can call to define new properties. Librsvg now has this, which is much more readable than all the code written by hand:
make_ident_property!(
StrokeLinejoin,
default: Miter,
"miter" => Miter,
"round" => Round,
"bevel" => Bevel,
"inherit" => Inherit,
);
make_ident_property!(
StrokeLinecap,
default: Butt, // :)
"butt" => Butt,
"round" => Round,
"square" => Square,
"inherit" => Inherit,
);
make_ident_property!(
FillRule,
default: NonZero,
"nonzero" => NonZero,
"evenodd" => EvenOdd,
"inherit" => Inherit,
);
Etcetera. It's now easy to port similar symbol-based properties from C to Rust.
Eventually I'll need to refactor all the crap that deals with inheritable properties, but that's for another time.
Conclusion and references
Rust macros are a very powerful way to refactor repetitive code like this.
The Rust book has an introductory appendix to macros, and The Little Book of Rust Macros is a fantastic resource that really dives into what you can do.
March 22, 2018
I’ve had a little adventure with my Fedora Atomic Workstation this morning and almost missed a meeting because I couldn’t get to a desktop session.
I’ve been using the rawhide branch of Fedora Atomic Workstation to keep up to speed with the latest developments in Fedora. As is to be expected of rawhide, it recently stopped getting me to a login screen (much less a working desktop session). I just booted back into my working image and ignored the problem for a few days.
The Adventure begins
But since it didn’t go away by itself, yesterday I decided to see if I could debug it a bit. Looking at the journal for the last unsuccessful boot gave some hints:
gnome-shell[2934]: Failed to create backend: Failed to initialize renderer: Missing extension for GBM renderer: EGL_KHR_platform_gbm
gnome-session-binary[2920]: WARNING: App 'org.gnome.Shell.desktop' exited with code 1
gnome-session-binary[2920]: Unrecoverable failure in required component org.gnome.Shell.desktop
Poking the nearest graphics team member about this, I was asked to provide the output of eglinfo in this situation. Since I had an hour to spare before the meeting, I booted back into the broken image in runlevel 3, logged in on a vt, … and found that eglinfo is not in the OS image.
Well, that’s easy enough to fix on an Atomic system, using package layering:
rpm-ostree install egl-utils
After that, I proceeded to reboot to get to the OS image with the newly added layer, and when I got to the boot prompt, I realized my mistake: rpm-ostree never replaces the booted image, since it (reasonably) assumes that the booted image is ‘working’. But it only keeps two images around, so it had to replace the other one – which was the image which successfully boots to my desktop.
Now, at the boot prompt, I was faced with the choice between
- the broken image
- the broken image + egl-utils
Ugh. Not what I had hoped for. And my meeting starts in 50 minutes. Admittedly, this was entirely my fault. rpm-ostree behaved as it should and as documented. Since it is a snow day, I need to do the meeting from home and need a web browser for that.
So, what can be done? I remembered that ostree is ‘like git for binaries’, so there should be history, right? After some fiddling with the ostree commandline, I found the log command that shows me the history of my local repository. But sadly, the output was disappointing:
$ ostree log fedora/rawhide/x86_64/workstation
commit fa09fd6d2551a501bcd3670c84123a22e4c704ac30d9cb421fa76821716d8c20
ContentChecksum: 74ff34ccf6cc4b7554d6a8bb09591a42f489388ba986102f6726f9e662b06fcb
Date: 2018-03-20 10:27:42 +0000
Version: Rawhide.20180320.n.0
(no subject)
<< History beyond this commit not fetched >>
rpm-ostree defaults to only keeping the latest commit in the local repository, a bit like a shallow git clone. Thankfully, just like git, ostree is versatile, and a bit more searching brought me to the pull command and its --depth option:
# ostree pull --depth=5 onerepo fedora/rawhide/x86_64/workstation
Receiving metadata objects: 698/(estimating) 2.2 MB/s 23.7 MB
This command writes to the local repo in /sysroot/ostree/repo and thus needs to be run as root.
Now ostree log showed a few older commits. I had to bump the depth a few times to find the last working commit. Then I made that commit available for booting again, using the deploy command:
# ostree admin deploy 76723f34b8591434fd9ec0
where that hex string is a prefix of the commit ID of the last working commit. This command also needs to be run as root.
Now a quick reboot, and… the boot loader menu had an entry for the working image again. I made it back to my desktop with 5 minutes to spare before the meeting. Phew!
Update: Since you might be wondering, the output of eglinfo was:
eglinfo: eglInitialize failed
Let's say you're making a video of someone's birthday party with an app on your phone. Once the recording starts, you don't care when the app starts writing it to disk—as long as everything is there in the end.
However, if you're having a Skype call with your friend, it matters a whole lot how long it takes for the video to reach the other end and vice versa. It's impossible to have a conversation if the lag (latency) is too high.
The difference is, do you need real-time feedback or not?
Other examples, in order of increasingly strict latency requirements, are: live video streaming, security cameras, augmented reality games such as Pokémon Go, multiplayer video games in general, audio effects apps for live music recording, and many many more.
“But Nirbheek”, you might ask, “why doesn't everyone always ‘immediately’ send/store/show whatever is recorded? Why do people have to worry about latency?” and that's a great question!
To understand that, check out my previous blog post, Latency in Digital Audio. It's also a good primer on analog vs digital audio!
Low latency on consumer operating systems
- Linux has alsa-lib (old), Pulseaudio (standard), JACK (pro-audio), and Pipewire (under development)
- macOS and iOS have CoreAudio (standard, pro-audio)
- Android has AudioFlinger (Java API, android.media), OpenSL ES (C/C++ API), and AAudio (C/C++ API, new, pro-audio)
- Windows has DirectSound (deprecated), WASAPI (standard), and ASIO (proprietary, old, pro-audio).
- BSDs still use OSS
GStreamer has plugins for most of these¹. However, the DirectSound API was deprecated in Windows XP, and with Vista it was removed and replaced with an emulation layer on top of the newly-released WASAPI. As a result, the GStreamer DirectSound plugin can't be configured to have less than 200ms of latency, which makes it unsuitable for all the low-latency use-cases mentioned above. The DirectSound API is quite crufty and unnecessarily complex anyway.
GStreamer is rarely used in video games, but it is widely used for live streaming, audio/video calls, and other real-time applications. Worse, the WASAPI GStreamer plugins were effectively untouched and unused since the initial implementation in 2008 and were completely broken².
This left no way to achieve low-latency audio capture or playback on Windows using GStreamer.
The situation became particularly dire when GStreamer added a new implementation of the WebRTC spec in this release cycle. People trying it out on Windows would see much higher latencies than they should.
Luckily, I rewrote most of the WASAPI plugin code in January and February, and it should now work well on all versions of Windows from Vista to 10! You can get binary installers for GStreamer or build it from source.
Shared and Exclusive WASAPI
WASAPI allows applications to open sound devices in two modes: shared and exclusive. As the name suggests, shared mode allows multiple applications to output to (or capture from) an audio device at the same time, whereas exclusive mode does not.
Almost all applications should open audio devices in shared mode. It would be quite disastrous if your YouTube videos played without sound because Spotify decided to open your speakers in exclusive mode.
In shared mode, the audio engine has to resample and mix audio streams from all the applications that want to output to that device. This increases latency because it must maintain its own audio ringbuffer for doing all this, from which audio buffers will be periodically written out to the audio device.
In theory, hardware mixing could be used if the sound card supports it, but very few sound cards implement that now since it's so cheap to do in software. On Windows, only high-end audio interfaces used for professional audio implement this.
Another option is to allocate your audio engine buffers directly in the sound card's memory with DMA, but that complicates the implementation and relies on good drivers from hardware manufacturers. Microsoft has tried similar approaches in the past with DirectSound and been burned by it, so it's not a route they took with WASAPI³.
On the other hand, some applications know they will be the only ones using a device, and for them all this machinery is a hindrance. This is why exclusive mode exists. In this mode, if the audio driver is implemented correctly, the application's buffers will be directly written out to the sound card, which will yield the lowest possible latency.
Audio latency with WASAPI
So what kind of latencies can we get with WASAPI?
That depends on the device period that is being used. The term device period is a fancy way of saying buffer size; specifically the buffer size that is used in each call to your application that fetches audio data.
This is the same period with which audio data will be written out to the actual device, so it is the major contributor of latency in the entire machinery.
If you're using the AudioClient interface in WASAPI to initialize your streams, the default period is 10ms. This means the theoretical minimum latency you can get in shared mode would be 10ms (audio engine) + 10ms (driver) = 20ms. In practice, it'll be somewhat higher due to various inefficiencies in the subsystem.
When using exclusive mode, there's no engine latency, so the same number goes down to ~10ms.
These numbers are decent for most use-cases, but like I explained in my previous blog post, this is totally insufficient for pro-audio use-cases such as applying live effects to music recordings. You really need latencies that are lower than 10ms there.
Ultra-low latency with WASAPI
Starting with Windows 10, WASAPI removed most of its aforementioned inefficiencies, and introduced a new interface: AudioClient3. If you initialize your streams with this interface, and if your audio driver is implemented correctly, you can configure a device period of just 2.67ms at 48KHz.
The best part is that this is the period not just in exclusive mode but also in shared mode, which brings WASAPI almost on par with JACK and CoreAudio.
So that was the good news. Did I mention there's bad news too? Well, now you know.
The first bit is that these numbers are only achievable if you use Microsoft's implementation of the Intel HD Audio standard for consumer drivers. This is mostly fine; you follow some badly-documented steps and it works.
Then you realize that if you want to use something more high-end than an Intel HD Audio sound card, you will still see 10ms device periods unless you use one of the rare pro-audio interfaces whose drivers use the new WaveRT driver model instead of the old WaveCyclic model.
It seems the pro-audio industry made the decision to stick with ASIO since it already provides <5ms latency. They don't care that the API is proprietary, and that most applications can't actually use it because of that. All the apps that are used in the pro-audio world already work with it.
The strange part is that all this information is nowhere on the Internet and seems to lie solely in the minds of the Windows audio driver cabals across the US and Europe. It's surprising and frustrating for someone used to working in the open to see such counterproductive information asymmetry, and I'm not the only one.
This is where I plug open-source and talk about how Linux has had ultra-low latencies for years since all the audio drivers are open-source, follow the same ALSA driver model⁴, and are constantly improved. JACK is probably the most well-known low-latency audio engine in existence, and was born on Linux. People are even using Pulseaudio these days to work with <5ms latencies.
But this blog post is about Windows and WASAPI, so let's get back on track.
To be fair, Microsoft is not to blame here. Decades ago they made the decision of not working more closely with the companies that write drivers for their standard hardware components, and they're still paying the price for it. Blue screens of death were the most user-visible consequences, but the current audio situation is an indication that losing control of your platform has more dire consequences.
There is one more bit of bad news. In my testing, I wasn't able to get glitch-free audio capture in the source element using the AudioClient3 interface at the minimum configurable latency in shared mode, even with critical thread priorities, unless there was nothing else running on the machine.
As a result, this feature is disabled by default on the source element. This is unfortunate, but not a great loss since the same device period is achievable in exclusive mode without glitches.
Measuring WASAPI latencies
Now that we're back from our detour, the executive summary is that the GStreamer WASAPI source and sink elements now use the latest recommended WASAPI interfaces. You should test them out and see how well they work for you!
By default, a device is opened in shared mode with a conservative latency setting. To force the stream into the lowest latency possible, set low-latency=true. If you're on Windows 10 and want to force-enable/disable the use of the AudioClient3 interface, toggle the use-audioclient3 property.
To open a device in exclusive mode, set exclusive=true. This will ignore the low-latency and use-audioclient3 properties since they only apply to shared mode streams. When a device is opened in exclusive mode, the stream will always be configured for the lowest possible latency by WASAPI.
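For illustration, here is a hypothetical pair of test pipelines exercising those properties (the pipelines are my sketch; the element and property names are the ones described above):
# Shared mode at the lowest configurable latency; on Windows 10 this can
# use the AudioClient3 interface
gst-launch-1.0 audiotestsrc ! wasapisink low-latency=true use-audioclient3=true

# Exclusive mode; low-latency and use-audioclient3 are ignored, and WASAPI
# picks the lowest latency it can
gst-launch-1.0 audiotestsrc ! wasapisink exclusive=true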
To measure the actual latency in each configuration, you can use the new audiolatency plugin that I wrote to get hard numbers for the total end-to-end latency including the latency added by the GStreamer audio ringbuffers in the source and sink elements, the WASAPI audio engine (capture and render), the audio driver, and so on.
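A measurement run might look like this sketch (it assumes your microphone can pick up your speakers; check the audiolatency documentation for the exact properties):
# audiolatency emits periodic pulses towards the sink and listens for them
# coming back in from the source, printing the measured round-trip latency
gst-launch-1.0 wasapisrc ! audiolatency print-latency=true ! wasapisink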
I look forward to hearing what your numbers are on Windows 7, 8.1, and 10 in all these configurations! ;)
1. The only ones missing are AAudio, because it's very new, and ASIO, which is a proprietary API with licensing requirements.
2. It's no secret that although lots of people use GStreamer on Windows, the majority of GStreamer developers work on Linux and macOS. As a result, the Windows plugins haven't always gotten a lot of love. It doesn't help that building GStreamer on Windows can be a daunting task. This is actually one of the major reasons why we're moving to Meson, but I've already written about that elsewhere!
3. My knowledge about the history of the decisions behind the Windows Audio API is spotty, so corrections and expansions on this are most welcome!
4. The ALSA drivers in the Linux kernel should not be confused with the ALSA userspace library.
March 21, 2018
GTask is super handy, but it’s important you’re very careful with it when threading is involved. For example, the normal threaded use case might be something like this:
/* Gather all needed state up front, on the caller's thread */
state = g_slice_new0 (State);
state->frob = get_frob_state (self);
state->baz = get_baz_state (self);

task = g_task_new (self, cancellable, callback, user_data);
g_task_set_task_data (task, state, state_free);
g_task_run_in_thread (task, state_worker_func);
The idea here is that you create your state upfront, and pass that state to the worker thread so that you don’t race accessing self-> fields from multiple threads. The “shared nothing” approach, if you will.
However, even this isn’t safe if self has thread usage requirements. For example, if self is a GtkWidget or some other object that is expected to only be used from the main-thread, there is a chance your object could be finalized in a thread.
Furthermore, the task_data you set could also be finalized in the thread. If your task data also holds references to objects which have thread requirements, those too can be unref’d from the thread (thereby cascading through the object graph should you hit this undesirable race).
Such can happen when you call g_task_return_pointer() or any of the other return variants from the worker thread. That call will queue the result to be dispatched to the GMainContext that created the task. If your CPU task-switches to that context before the worker thread has released its reference, you run the risk that the worker thread ends up holding the last reference to the task.
In that situation self and task_data will both be finalized in that worker thread.
Addressing this in Builder
We already have various thread pools in Builder for work items so it would be nice if we could both fix the issue in our usage as well as unify the thread pools. Additionally, there are cases where it would be nice to “chain task results” to avoid doing duplicate work when two subsystems request the same work to be performed.
So now Builder has IdeTask, which is very similar in API to GTask but provides some additional guarantees that would be very difficult to introduce back into the GTask implementation (without breaking semantics). We do this by passing the result and the thread's last ownership reference to the IdeTask back to the GMainContext at the same time, ensuring the last unref happens in the expected context.
While I was at it, I added a bunch of debugging tools for myself which caught some bugs in my previous usage of GTask. Bugs were filed, GTask has been improved, yadda yadda.
But I anticipate the threading situation to remain in GTask and you should be aware of that if you’re writing new code using GTask.
For SVG rendering we have a few options: librsvg, the most popular one, and Lasem; maybe others. Both take an SVG file, parse it, and render it over a specified Cairo.Context.
GSVGtk uses librsvg by default, but I have started to experiment with a new renderer, written in Vala and using GSVG objects directly.
Using GSVG objects directly means you can create an SVG image programmatically, then pass it directly to GSVGtk's renderer to render it to the screen, create an image, or produce a PDF, without writing it out to a string and parsing it back in the renderer.
GSVGtk's renderer is not fully compliant with the SVG 1.1 specification, but it is enough for basic shapes and text. It needs at least transformations to be a more useful renderer for a canvas user like libgtkcanvas.
GSVGtk's new renderer is in the render branch if you want to try it.
For some reason librsvg creates a translucent area while GSVGtk's renderer does not, so in the video you can see how each shape is rendered clearly when drawn inside a Clutter.Actor. With this in shape, it is now possible to think about adding controls for shape editing in GSVGtk, since invalidating each actor's drawing area renders just the object it cares about, without a textual intermediate format.
Five or More
This year I proposed a Google Summer of Code idea (we are in the student application period) for modernizing Five or More, a game left out of the last games modernization round, when most of the games were ported to Vala.
[Image: Five or More]
Several people asked what this project was about, and some of them started contributing, so I had trouble finding tasks that would not be made obsolete by the full Vala rewrite. I started filing tasks, and they were solved at a pace I had a hard time keeping up with in reviews. Here are the most important updates so far, already on master:
- migration from intltool to gettext
- migration from autotools to meson
- split the monolithic five-or-more.c file into several components
- migration from custom games score tracking to libgnome-games-support
Atomix
[Image: The current default theme (32x32 images), created by jimmac in 2000]
Initially I was thinking about using "this many" colors, but... we are talking about a game; it cannot, and probably should not, strictly follow the color scheme used by applications. In my wildest dreams games would also use different theming for their components, something more playful, colorful, relaxing.
[Image: Light theme, darker background, lighter walls, more visible connections]
[Image: Dark theme, lighter background, thinner walls, visible connections]
Consider this a call for help: if you are a designer, or just want to have fun doing some artwork for a small game, contact me to fine-tune the current theme towards a more modern look (the old one is already grown up, at 18 years old).
Let's start with the netstats (hard)work @antares has done (still under review for merging into libgtop master, the #1 merge request on libgtop's GitLab): she investigated a lot to find the best way to get per-process network statistics into libgtop, something both Usage and System Monitor should benefit from. It is currently implemented as a root daemon using libpcap to capture packets and sum their sizes, exposing a D-Bus interface. Congratulations to her for the great job and the tremendous patience she has shown enduring all my reviews and nitpicking comments.
In the long term I would also like to support the gtop daemon used on *BSDs on Linux, which we couldn't get to work, but Ben Dejean has already come up with a solution to our problems, and with his help I'm sure we will have a libgtop Linux daemon. For the internal network statistics calculation we are investigating alternatives to libpcap (an eBPF-based one, suggested by libgtop senior Ben Dejean, mostly running in-kernel but also needing root; and another suggestion from Philip Withnall using cgroups + iptables + netfilter, which could get away with fewer privileges). Any ideas, other options or help on that front would be welcome; these ideas are only sketched out, so replacing the currently working libpcap-based implementation will take some work.
[Image: System Monitor using libdazzle CPU chart]
Today, gksu was removed from Debian unstable. It was already removed 2 months ago from Debian Testing (which will eventually be released as Debian 10 “Buster”).
It’s not been decided yet if gksu will be removed from Ubuntu 18.04 LTS. There is one blocker bug there.
The base setup
Rust makes it trivial to write any kind of tests for your project. But what good are they if you do not run them? In this blog series I am going to explore the capabilities of Gitlab-CI and document how it is used in Librsvg.
First things first. What’s CI? It stands for Continuous Integration; basically, it makes sure that what you push to your repository continues to build and pass the tests. Even if someone committed something without testing it, or the tests happened to pass on their machine but not in a clean environment, we can know without having to clone and build manually.
CI also can have other uses, like enforcing a coding style or running resource heavy tests.
What’s Librsvg?
As the README.md file puts it:
It’s a small library to render Scalable Vector Graphics(SVG), associated with the GNOME Project. It renders SVG files to Cairo surfaces. Cairo is the 2D, antialiased drawing library that GNOME uses to draw things to the screen or to generate output for printing.
Basic test case
First of all we will add a .gitlab-ci.yml file in the repo.
We will start off with a simple case: a single stage and a single job. A job is a single action that can be done. A stage is a collection of jobs. Jobs of the same stage can be run in parallel.
Minor things were omitted, such as the full list of dependencies. The original file is here.
stages:
- test
opensuse:tumbleweed:
image: opensuse:tumbleweed
stage: test
before_script:
- zypper install -y gcc rust ... gtk3-devel
script:
- ./autogen.sh --enable-debug
- make check
Lines 1 and 2 define our stages. If a stage is defined but has no jobs attached it is skipped.
Line 3 defines our job, with the name opensuse:tumbleweed.
Line 4 will fetch the opensuse:tumbleweed OCI image from Docker Hub.
In line 5 we specify that that job is part of the test stage that we defined in line 2.
before_script: is something like a setup phase. In our case we will install our dependencies.
after_script: accordingly is what runs after every job including failed ones. We are not going to use it yet though.
Then in line 11 we write our script: the commands that would have to be run to build librsvg, as if we were doing it from a shell. Indeed, the script: part is like a shell script.
If everything went well, hopefully it will look like this.
Testing Multiple Distributions
Builds on opensuse based images work, but we can do better. We can test multiple distros!
Let’s add Debian testing and Fedora 27 builds to the pipeline.
fedora:latest:
image: fedora:latest
stage: test
before_script:
- dnf install -y gcc rust ... gtk3-devel
script:
- ./autogen.sh --enable-debug
- make check
debian:testing:
image: debian:testing
stage: test
before_script:
- apt install -y gcc rust ... libgtk-3-dev
script:
- ./autogen.sh --enable-debug
- make check
Similar to what we did for opensuse. Notice that the only things that change are the names of the container images and the before_script: specific to each distro's package manager. This will work even better when we add caching and artifact extraction into the template. But that's for a later post.
We could refactor the above by using a template (YAML anchors). Here is how our file looks after that.
stages:
- test
.base_template: &distro_test
stage: test
script:
- ./autogen.sh --enable-debug
- make check
opensuse:tumbleweed:
image: opensuse:tumbleweed
before_script:
- zypper install -y gcc rust ... gdk-pixbuf-devel gtk3-devel
<<: *distro_test
fedora:latest:
image: fedora:latest
before_script:
- dnf install -y gcc rust ... gdk-pixbuf-devel gtk3-devel
<<: *distro_test
debian:testing:
image: debian:testing
before_script:
- apt install -y gcc rust ... libgdk-pixbuf2.0-dev libgtk-3-dev
<<: *distro_test
And Failure :(. I mean Success!
Apparently the librsvg test-suite was failing on anything other than opensuse. Later we found out that this was the result of Freetype being a bit outdated on the system Federico used to generate the reference “good” results. In Freetype 2.8/2.9 there was a bugfix that affected how the test cases were rendered. Thankfully this wasn’t librsvg‘s code misbehaving, but rather a bug only in the test-suite. After regenerating the reference results with a newer version of Freetype, everything worked.
Adding Rust Lints
Rust has its own style formatting tool, rustfmt, which is highly configurable. We will use it to make sure our codebase style stays consistent. By adding a test to the Gitlab-CI we can be sure that merge requests will be properly formatted before reviewing and merging them.
There’s also clippy, an amazing collection of lints for Rust code! If we had used it sooner, it would probably have caught a couple of bugs occurring when comparing floating point numbers. We haven’t decided yet on what lints to enable/deny, so it has a manual trigger for now and won’t be run unless explicitly triggered by someone. I hope that will change soon.
First we will add another stage called lint.
stages:
- test
- lint
Then we will add 2 jobs, one for each tool. Both tools require the nightly Rust toolchain.
# Configure and run rustfmt on nightly toolchain
# Exits and the build fails if the formatting is bad
rustfmt:
image: "rustlang/rust:nightly"
stage: lint
script:
- rustc --version && cargo --version
- cargo install rustfmt-nightly --force
- cargo fmt --all -- --write-mode=diff
# Configure and run clippy on nightly toolchain
clippy:
image: "rustlang/rust:nightly"
stage: lint
before_script:
- apt update -yqq
- apt-get install -y libgdk-pixbuf2.0-dev ... libxml2-dev
script:
- rustc --version && cargo --version
- cargo install clippy --force
- cargo clippy --all
when: manual
And that’s it**, with the only caveat that it would take 40-60min for each pipeline run to complete. There are a couple of ways this could be sped up though, which will be the topic of part 2 and part 3.
** During the first experiments, rustfmt was set as a manual trigger (enabled by default later) and cross-distro tests were grouped into their own stage. But it’s functionally identical to the setup described in the post.
Caching stuff
Generally 5min/job does not seem like a terribly long time to wait, but it can add up really quickly when you add a couple of jobs to the pipeline. First let’s take a look at where most of the time is spent. Jobs currently are spawned in a clean environment, which means that each time we want to build the Rust part of librsvg, we download the whole cargo registry and all of the cargo dependencies. That’s our first low-hanging fruit! Apart from that, another side-effect of the clean environment is that we build librsvg from scratch each time, meaning we don’t make use of the incremental compilation that modern compilers offer. So let’s get started.
Cargo Registry
According to the cargo docs, the registry cache is stored in $CARGO_HOME (by default under $HOME/.cargo). Gitlab-CI though only allows you to cache things that exist inside your project’s root (they do not need to be tracked by git). So we have to somehow relocate $CARGO_HOME to somewhere we can extract it from. Thankfully that’s as easy as setting $CARGO_HOME to our desired path.
.test_template: &distro_test
before_script:
- mkdir -p .cargo_cache
# Only stuff inside the repo directory can be cached
# Override the CARGO_HOME variable to force it location
- export CARGO_HOME="${PWD}/.cargo_cache"
script:
- echo foo
cache:
paths:
- .cargo_cache/
What’s new in our template above compared to part 1 are the before_script: and cache: blocks. In the before_script: we first create the .cargo_cache folder if it does not exist (cargo is probably smart enough to not need this, but ccache isn’t! So better safe than sorry I guess), and then we export the new $CARGO_HOME location. Then in the cache: block we set which folder we want to cache. That’s it; now our cargo registry and downloaded crates should persist across builds!
Caching Rust Artifacts
The only thing needed to cache the rustc build artifacts is to add target/ to the cache: block. That’s it, I am serious.
cache:
paths:
- target/
Caching C Artifacts with ccache
C and ccache, on the other hand, are sadly a completely different story. It did not help that my knowledge of C and build systems approximates zero. Thankfully, while searching I found another post, from Ted Gould, where he describes how ccache was set up for Inkscape. The following config ended up working for librsvg‘s current autotools setup.
.test_template: &distro_test
before_script:
# ccache Config
- mkdir -p ccache
- export CCACHE_BASEDIR=${PWD}
- export CCACHE_DIR=${PWD}/ccache
- export CC="ccache gcc"
script:
- echo foo
cache:
paths:
- ccache/
I got stuck on how to actually call gcc through ccache since it depends on the build system you use (see export CC). Shout out to Christian Hergert for showing me how to do it!
Cache behavior
One last thing: we want each of our jobs to have an independent cache, as opposed to one shared across the pipeline. This can be achieved by using the key: directive. I am not sure how it works and I wish the docs would elaborate a bit more. In practice, the following line will make sure that each job on each branch has its own cache. For more complex configurations I suggest looking at the gitlab docs.
cache:
# JOB_NAME - Each job will have its own cache
# COMMIT_REF_SLUG = Lowercase name of the branch
# ^ Keep different caches for each branch
key: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
Final config and results
So here is the cache config as it exists today on Librsvg‘s master branch. This brought the build time of each job from the ~4-5min where we left it in part 2 down to ~1-1:30min. Pretty damn fast! But what if you wanted to do a clean build, or rule out the possibility that the cache is causing bugs and failed runs? Well, if you happen to use gitlab 10.4 or later (and GNOME does) you can do it from the Web GUI. If not, you probably have to contact a gitlab administrator.
.test_template: &distro_test
before_script:
# CCache Config
- mkdir -p ccache
- mkdir -p .cargo_cache
- export CCACHE_BASEDIR=${PWD}
- export CCACHE_DIR=${PWD}/ccache
- export CC="ccache gcc"
# Only stuff inside the repo directory can be cached
# Override the CARGO_HOME variable to force it location
- export CARGO_HOME="${PWD}/.cargo_cache"
script:
- echo foo
cache:
# JOB_NAME - Each job will have its own cache
# COMMIT_REF_SLUG = Lowercase name of the branch
# ^ Keep different caches for each branch
key: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
paths:
- target/
- .cargo_cache/
- ccache/
Gitlab has a fairly conventional Continuous Integration system: you push some commits, the CI pipelines build the code and presumably run the test suite, and later you can know whether this succeeded or failed.
But by the time something fails, the broken code is already in the public repository.
The Rust community uses Bors, a bot that prevents this from happening:
- You push some commits and submit a merge request.
- A human looks at your merge request; they may tell you to make changes, or they may tell Bors that your request is approved for merging.
- Bors looks for approved merge requests. It merges each into a temporary branch and waits for the CI pipeline to run there. If CI passes, Bors automatically merges to master. If CI fails, Bors annotates the merge request with the failure, and the main repository stays working.
Bors also tells you if the mainline has moved forward and there's a merge conflict. In that case you need to do a rebase yourself; the repository stays working in the meantime.
This leads to a very fair, very transparent process for contributors and for maintainers. For all the details, watch Emily Dunham's presentation on Rust's community automation (transcript).
For a description of where Bors came from, read Graydon Hoare's blog.
Bors evolved into Homu, which is what Rust and Servo currently use. However, Homu depends on GitHub.
I just found out that there is a port of Homu for Gitlab. Would anyone care to set it up?
For the past year or so I have mostly worked at home or remotely. Currently I’m engaged in my master’s thesis and need to manage my daily time and energy to work on it. It is no surprise to many of us that working at home on your internet-connected personal computer can make you prone to many distractions. However, managing your own time is not just about whipping and self-discipline. It is about setting yourself up in a structure which rewards you for hard work and gives your mind the breaks it needs. Based on reflections and experimentation with many scheduling systems and tools, I finally feel I have arrived at a set of principles I really like, and that’s what I’ll be sharing with you today.
Identifying the distractions
Here’s a typical scenario I used to experience: I would wake up, and often the first thing I’d do is turn on my computer, check my e-mail, check social media, check the news. I’d then eat my breakfast and start working. After a while I would find myself returning to check mail and social media, even though nothing particularly important had happened. But it’s fairly easy for me to press “Super”, type “Gea” and press “Enter” (and Geary will show my e-mail inbox). It’s also fairly easy to press “Ctrl+L” to focus the address bar in Firefox and write “f” (and Facebook.com is autocompleted). Firefox is by default so (ironically) helpful to suggest facebook.com. At other times, a distraction can simply be an innocent line of thought that hits you, fx “oh it would be so cool if I started sorting my pictures folder, let me just start on that quickly before I continue my work“.
From speaking with friends I am fairly sure this type of behavior is not uncommon at all. The first step in trying to combat it myself was to identify the scope of it. I don’t blame anyone for dealing with this – I see it more as an unfortunate design consequence of the way our personal computers are “universal” and aren’t context-aware enough. After all, GNOME Shell was just trying to be helpful, and Firefox was also just trying to be helpful, although in some respects they make it easier for me to distract myself like that.
Weapons against distractions
Let me start with a few practical suggestions, which helped me initially break the worst patterns (using big hammers).
- Stylish: using inspection tools and CSS hacks I remove endlessly scrolling news feeds, and news content from websites that I might otherwise open up and read on reflex when distracted. The CSS hacks are easy to turn off again of course, but it adds an extra step and makes it purposely less appealing for me to do unless it’s for something important.
- BlockSite: I use BlockSite in “Whitelist mode” and turn it on while I work. This is a big hammer which essentially blocks all of internet except for whitelisted websites I use for work. Knowing that you can’t access anything really had a positive initial psychological effect for me.
- Minimizing shell notifications: While I don’t have the same big hammer to “block access to my e-mail” here, I decided to change the order of my e-mail inboxes in Geary so my more relevant (and far less activity prone) student e-mail inbox appears first. I also turned off the background e-mail daemon and turned off notification banners in GNOME Shell.
- Putting Phone in Ultra Battery Saving Mode: I restrict my phone to calls and SMS so that I don’t receive notifications from various chat apps which are irrelevant whilst working. This also saves the battery nicely.
My final weapon is The Work Schedule. This doesn’t sound new or surprising, and we have probably all tried it, with more or less success.
…Schedules can be terrible.
I’m actually not that big a fan of microscheduling my life. Traditional time schedules are too focused on doing things from timestamp X to timestamp Y. They require that you “judge” how fast you work, and their structure just feels super inflexible. The truth is that in real life my day never looks like how I planned it. In fact, I sometimes found myself even more demotivated (and distracted) because I was failing to live up to my own schedule, and by the end of the day I never really managed to complete that “ideal day”. The traditional time schedule ended up completely missing what it was supposed to fix and help against.
But on the other hand, working without a schedule often results in:
- Forgetting to take breaks from work which is unhealthy and kills my productivity later.
- No sense of progress except from the work itself but if the work is ongoing for longer time this will feel endless and exhausting.
- The lack of a set work duration meant that my productivity continued to fluctuate between overwork and underwork, since it is hard to judge when it is okay to stop.
The resulting system
For the past couple of weeks I have been using a system which is a bit like a “semi-structured time schedule”. To you it might just seem like a list of checkboxes and in some sense it is! However, the simplicity in this system has some important principles behind it I have learned along the way:
- Checking the checkboxes give a sense of progress as I work throughout my day.
- The schedule supports adding breaks in-between work sessions and puts my day in an order.
- The schedule makes no assumptions about “what work” I will be doing or reaching that day. Instead it specifies that I work for 1 hour, and this enables me to funnel my energy. I use GNOME Clocks’ timer function and let it count down for 1 hour, until there’s a nice simple “ding” to be heard when it finishes. It’s up to you whether you then take the break or continue a bit longer.
- The schedule makes no assumptions about “When” I will do work and only approximates for how long. In reality I might wake up at 7:00, 8:00 or 9:00 AM and it doesn’t really matter. What’s important is that I do as listed and take my breaks in the order presented.
- If there are aspects of the order I end up changing, the schedule permits it – it is possible to tick off tasks independently of the order.
- If I get ideas for additional things I need to do (banking, sending an important e-mail, etc) I can add them to the bottom of the list.
- The list is made the day before. This makes it easier to follow it straight after waking up.
- I always use the breaks for something which does not involve computers. I use dancing, going for a walk or various house duties (Interestingly house duties become more exciting for me to do as work break items, than as items I do in my free time).
- At the start you won’t have much feeling for how much work you can manage, and it is easy to overestimate, run out of breath, or fail to complete everything. It works much better for me to underestimate my performance (fx 2 hours of focused work before lunch instead of 3 hours) and feel rewarded that I did everything I had planned, and perhaps even more than that.
- I insert items I want to do in my free time into my scheduling after I finish work. These items are purely there to give additional incentive and motivation to finish.
- The system is analog on purpose because I’m interested in keeping the list visually present on my desk at all times. I also think it is an advantage that making changes to the list doesn’t interfere with the work context I maintain on the computer.
Lastly, I want to give two additional tips. If you like listening to music while working, consider whether it might affect your productivity. For example, I found music with vocals distracting when I try to immerse myself in reading difficult literature. I can really recommend Doctor Turtle’s acoustic instrumental music for working, though (all free). Secondly, I find that different types of tasks require different postures. For abstract, high-level or vaguely formulated tasks (fx formulating goals, reviewing something or reflecting), I find interacting with the computer while standing up and walking around to really help gather my thoughts. On the other hand, with practical tasks or tasks which require immersion (fx programming tasks), I find sitting down to be much more comfortable.
Hopefully my experiences here might be useful or interesting for some of you. Let me know!
March 20, 2018
I know some fellows don’t read desktop-devel-list, so let me share here an email that is important for everyone to read: we have put in place the plan for the mass migration to GitLab, and the steps maintainers need to take.
Read about it in the email sent to the mailing list.
PS: What a historical moment, isn’t it?
It’s time (Well, long overdue) for a quick update on stuff I’ve been doing recently, and some things that are coming up. I’ve worked out a new way of doing these, so they should be more regular now, about every couple of weeks or so.
- The annual report is moving ahead. I’ve moved up the timelines a bit here from previous years, so hopefully, the people who very kindly help author this can remember what we did in the 2016/17 financial year!
- GUADEC/GNOME.Asia/LAS sponsorship – elements are coming together for the sponsorship brochure
- Some sponsors are lined up, and these will be announced by the usual channels – thanks to everyone who supports the project and our conferences!
- Shell Extensions – It’s been noticed that reviews of extensions have been taking quite some time recently, so I’ve stepped in to help. I still think that part of the process could be automated, but at the moment it’s quite manual. Help is very much appreciated!
- The Code of Conduct consultation has been useful, and there’s been a couple of points raised where clarity could be added. I’m getting those drafted at the moment, and hope to get the board to approve this soon.
- A couple of administrative bits:
- We now have a filing system for paperwork in NextCloud
- Reviewing the books for the end-of-year accounts – it’s the end of the tax year, so our finances need to go to the IRS
- Tracking of accounts receivable hasn’t been great in the past, probably not helped by GNUCash. I’m looking at alternatives at the moment.
- Helping out with a couple of trademark issues that have come up
- Regular working sessions for Flathub legal bits with our lawyers
- I’ll be at LibrePlanet 2018 this weekend, and I’m giving a talk on Sunday. With the FSF, we’re hosting a SpinachCon on Friday. This aims to do some usability testing and find those small things which annoy people.
Following the GStreamer 1.14 release and the new round of gtk-rs releases, there are also new releases for the GStreamer Rust bindings (0.11) and the plugin writing infrastructure (0.2).
Thanks also to all the contributors for making these releases happen and adding lots of valuable changes and API additions.
GStreamer Rust Bindings
The main changes in the Rust bindings were the update to GStreamer 1.14 (which brings in quite some new API, like GstPromise), a couple of API additions (GstBufferPool specifically) and the addition of the GstRtspServer and GstPbutils crates. The former allows writing a full RTSP server in a couple of lines of code (with lots of potential for customizations), the latter provides access to the GstDiscoverer helper object that allows inspecting files and streams for their container format, codecs, tags and all kinds of other metadata.
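To give an idea of the RTSP server API, a minimal sketch along the lines of the crate's own example looks roughly like this (method names are from the 0.11-era bindings and may differ in other versions):
extern crate glib;
extern crate gstreamer as gst;
extern crate gstreamer_rtsp_server as gst_rtsp_server;

use gst_rtsp_server::prelude::*;

fn main() {
    gst::init().unwrap();

    let main_loop = glib::MainLoop::new(None, false);
    let server = gst_rtsp_server::RTSPServer::new();

    // Every client connecting to rtsp://127.0.0.1:8554/test gets this pipeline
    let factory = gst_rtsp_server::RTSPMediaFactory::new();
    factory.set_launch("( videotestsrc ! x264enc ! rtph264pay name=pay0 pt=96 )");

    let mounts = server.get_mount_points().unwrap();
    mounts.add_factory("/test", &factory);

    server.attach(None);
    main_loop.run();
}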
The GstPbutils crate will also get other features added in the near future, like encoding profile bindings to allow using the encodebin GStreamer element (a helper element for automatically selecting/configuring encoders and muxers) from Rust.
But the biggest change, in my opinion, is some refactoring that was done to the Event, Message and Query APIs. Previously you would have to use a view on a newly created query to be able to use the type-specific functions on it:
let mut q = gst::Query::new_position(gst::Format::Time);
if pipeline.query(q.get_mut().unwrap()) {
match q.view() {
QueryView::Position(ref p) => Some(p.get_result()),
_ => None,
}
} else {
None
}
Now you can directly use the type-specific functions on a newly created query:
let mut q = gst::Query::new_position(gst::Format::Time);
if pipeline.query(&mut q) {
Some(q.get_result())
} else {
None
}
In addition, the views can now dereference directly to the event/message/query itself and provide access to their API, which simplifies some code even more.
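For example, here is a sketch of handling an error message with the new views (names as in the 0.11 bindings):
use gst::MessageView;

match msg.view() {
    // The Error view derefs to the underlying message, so the common
    // message API is available right next to the type-specific getters.
    MessageView::Error(err) => {
        println!("error from {:?}: {}", err.get_src(), err.get_error());
    }
    _ => (),
}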
Plugin Writing Infrastructure
While the plugin writing infrastructure did not see many changes apart from a couple of bugfixes and updates to the new versions of everything else, this does not mean that development on it has stalled. Quite the opposite: the existing code works very well already, and there was just no need to add anything new for the projects that I and others built on top of it; most of the required API additions were in the GStreamer bindings.
So the status here is the same as last time: get started writing GStreamer plugins in Rust. It works well!
March 19, 2018
In this post I will explain how GitLab, CI, Flatpak and GNOME apps come together into, in my opinion, a dream-come-true full flow for GNOME, a proposal to be implemented by all GNOME apps.
Needless to say, I enjoy seeing a plan that involves several moving pieces from different initiatives and people being put together into something bigger; I definitely had a good time.
Generated Flatpak for every work in progress
The biggest news: from now on, designers, testers and the curious can install any work in progress (a.k.a. ‘merge request‘) in an automated way, with a simple click and a few minutes’ wait. With the integrated GitLab CI, we now generate a Flatpak file for every merge request in Nautilus!
In case you are not familiar with Flatpak, this technology allows anyone using different Linux distributions to install an application that will use exactly the same environment as the developers are using, providing a seamless synchronized experience.
For example, do you want to try out the recent work done by Nikita that makes Nautilus views distribute the space between icons? Simply click here or download the artifacts of any merge request pipeline. It’s also possible to browse other artifacts, like build and test logs:
Notes: Due to a recent bug in Software you might need to install the 3.28 Flatpak Platform & Sdk manually; this usually happens automatically. In the meantime, install the current master development Flatpak of Nautilus with a single click here. On Ubuntu you might need to install Flatpak first.
Parallel installation
Now, a way to quickly test the latest work in progress in Nautilus is a considerable improvement, but a user probably doesn’t want to mess with the system installation of Nautilus or other GNOME projects, especially since it’s a system component. So we have worked on a way to make a full parallel installation and full parallel run of Nautilus versions alongside the system installation possible. We have also provided support for this setup in the UI, to make it easily recognizable and to ensure the user is not confused about which version of Nautilus they are looking at. This is how it looks after installing any of the Flatpak files mentioned above:
We can see the Nautilus system installation and the developer preview running at the same time; the unstable version has a blue header bar and an icon with gears. As a side note, you can also see the work of Nikita I mentioned before: the developer version of the views now distributes the space between icons.
It’s possible to install more versions and run them all at the same time. You can see here how the different installed versions are found in the search of GNOME Shell, where I also have the stable Flatpak Nautilus installed:
Another positive note is that this also removes the need to close the system instance of the app when contributing to GNOME, it was one of the most reported confusing steps of our newcomers guide.
Issues templates
One of the biggest difficulties we have with people reporting issues is that they either have an outdated application, an application modified downstream, or an environment completely different from the one the developers are using, making the experience difficult and frustrating for both the reporter and the developer. Needless to say, all of us have had to deal with ‘worksforme‘ issues…
With Flatpak, GitLab and the work explained before we can fix this and boost considerably our success with bugs.
We have created a “bug” template where reporters are instructed to download the Flatpaked application in order to test and reproduce in the exact same environment and version as the developers, testers, and everyone else involved is using. Here’s part of how the issue template looks:

When created, the issue renders as:

Which is considerably clearer.
Notes: The plan is to provide a Flatpak of the stable app too.
Full continuous integration
The last step to close this plan is to make sure that GNOME projects build on all the major distributions. After all, most of us work both upstream in GNOME and downstream in a Linux distribution. For that, we have set up a full array of builds that run weekly:

This also fixes another issue we have experienced for years: distribution packagers delivering some GNOME applications differently than intended, causing subtle, and sometimes major, issues. Now we can point to this graph, which contains the commands to build the application, as exact documentation on how to package GNOME projects, straight from the maintainer.
‘How to’ for GNOME maintainers
For the full CI and Flatpak file generation, take a look at the Nautilus GitLab CI. For the cross-distro weekly array, additionally create a scheduled pipeline like this. It's also possible to run the cross-distro array more often; however, keep in mind that resources are limited and that the important part is that every MR is buildable and the tests pass. Otherwise it can be confusing to contributors if the pipeline fails for one of the jobs and not for others. For non-app projects, you can pick a single distribution you are comfortable with; other ideas are welcome.
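For reference, here is a minimal sketch of what such a pipeline can look like; the image and file names are illustrative assumptions, not the actual Nautilus configuration:

stages:
  - build

flatpak:
  stage: build
  image: fedora:27      # assumes an image with flatpak and flatpak-builder available
  script:
    - flatpak-builder --repo=repo appdir org.gnome.Nautilus.json
    - flatpak build-bundle repo nautilus-dev.flatpak org.gnome.Nautilus
  artifacts:
    paths:
      - nautilus-dev.flatpak
    expire_in: 2 days

The artifacts section is what makes the resulting Flatpak file downloadable from the merge request pipeline page.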
A more complex CI setup is possible; take a look at the magic work of Jordan Petridis in librsvg. I hear Jordan will write a blog post about more CI magic soon, which will be interesting to read.
For parallel installation, it's mainly this MR for master and this commit for the stable version; however, there have been a couple of commits on top of each, so follow them up to today's date (19-03-2018).
For issue templates, take a look at the templates folder. We were discussing here a default template to be used for GNOME projects; however, there was not much input there, so for now I thought it better to experiment with this in Nautilus. Also, this will make more sense once we can put a default template in place, which is something GitLab will probably work on soon.
Finishing up…
Over the last 4 days Ernestas Kulik, Jordan Petridis, and I have been working to time-box this effort and come up with a complete proposal by today, each of us working on a part of the plan, and I think we can say we achieved it. Alex Larsson and other people around in #flatpak provided us with valuable help. Work by Florian Müllner and Christian Hergert was an inspiration for us too. Andrea Veri and Javier Jardon put a considerable amount of their time into setting up an AWS instance for CI so we can have fast builds. Big thanks to all of them.
As you may guess, this CI setup for an organization like GNOME, with more than 500 projects, is quite resource-consuming. The good news is that we have some help from sponsors happening; many thanks to them! Stay tuned for the announcements.
I hope you like the direction GNOME is going; for me it's exciting to modernize how GNOME development happens and to make it more dynamic. I can see we have come a long way since a year ago. If you have any thoughts, comments or ideas, let any of us know!
Multimedia applications based on GStreamer usually handle playback with the playbin element. I recently added support for playbin3 in WebKit. This post aims to document the changes needed on the application side to support this new-generation flavour of playbin.
So, first off, why is it named playbin3 anyway? The GStreamer 0.10.x series had a playbin element, but a first rewrite (playbin2) made it obsolete in the GStreamer 1.x series, so playbin2 was renamed to playbin. That's why a second rewrite is nicknamed playbin3, I suppose :)
Why should you care about playbin3? Playbin3 (and the elements it uses internally: parsebin, decodebin3 and uridecodebin3, among others) is the result of a deep redesign of playbin2 (along with decodebin2 and uridecodebin) to better support:
- gapless playback
- audio cross-fading support (not yet implemented)
- adaptive streaming
- reduced CPU, memory and I/O resource usage
- faster stream switching and full control over the stream selection process
This work was carried out mostly by Edward Hervey, who presented it in detail at 3 GStreamer conferences. If you want to learn more about this and the internals of playbin3, make sure to watch his awesome presentations at the 2015 gst-conf, 2016 gst-conf and 2017 gst-conf.
Playbin3 was added in GStreamer 1.10. It is still considered experimental, but in my experience it already works very well. Just keep in mind you should use at least the latest GStreamer 1.12 (or even the upcoming 1.14) release before reporting any issue in Bugzilla. Playbin3 is not a drop-in replacement for playbin; the two elements share only a subset of GObject properties and signals. However, if you don't want to modify your application source code just yet, it's very easy to try playbin3 anyway:
$ USE_PLAYBIN3=1 my-playbin-based-app
Setting the USE_PLAYBIN3 environment variable enables a code path inside the GStreamer playback plugin which swaps the playbin element for the playbin3 element. This trick gives the laziest people a glance at the playbin3 element :) The problem is that, depending on your use of playbin, you might get runtime warnings; here's an example with the Totem player:
$ USE_PLAYBIN3=1 totem ~/Videos/Agent327.mp4
(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'video-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'
(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'audio-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'
(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'text-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'
(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'video-tags-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'
(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'audio-tags-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'
(totem:22617): GLib-GObject-WARNING **: ../../../../gobject/gsignal.c:2523: signal 'text-tags-changed' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'
sys:1: Warning: g_object_get_is_valid_property: object class 'GstPlayBin3' has no property named 'n-audio'
sys:1: Warning: g_object_get_is_valid_property: object class 'GstPlayBin3' has no property named 'n-text'
sys:1: Warning: ../../../../gobject/gsignal.c:3492: signal name 'get-video-pad' is invalid for instance '0x556db67f3170' of type 'GstPlayBin3'
As mentioned previously, playbin and playbin3 don’t share the same set of GObject properties and signals, so some changes in your application are required in order to use playbin3.
If your application is based on the GstPlayer library, then you should set the GST_PLAYER_USE_PLAYBIN3 environment variable. GstPlayer already handles both playbin and playbin3, so no changes are needed in your application if you use GstPlayer!
Ok, so what if your application relies directly on playbin? Some changes are needed! If you previously used playbin's stream selection properties and signals, you will now need to handle the GstStream and GstStreamCollection APIs. Playbin3 will emit a stream collection message on the bus; this is very nice because the collection includes information (metadata!) about the streams (or tracks) the media asset contains. In playbin this was handled with a bunch of signals (audio-tags-changed, audio-changed, etc), properties (n-audio, n-video, etc) and action signals (get-audio-tags, get-audio-pad, etc). The new GstStream API provides a centralized and non-playbin-specific access point for all this information. To select streams with playbin3 you now need to send a select-streams event, so that the demuxer knows exactly which streams should be exposed to downstream elements. That means potentially improved performance! Once playbin3 has completed the stream selection, it will emit a streams-selected message; the application should handle this message and potentially update its internal state about the selected streams. This is also the best moment to update your UI regarding the selected streams (like audio track language, video track dimensions, etc).
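To make that concrete, here is a rough sketch (assuming the GStreamer 1.10+ APIs mentioned above) of handling the stream collection message on the bus and selecting every audio and video stream; a real application would pick only the tracks it wants:

static void
on_bus_message (GstBus *bus, GstMessage *msg, gpointer user_data)
{
  GstElement *playbin3 = user_data;

  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_STREAM_COLLECTION) {
    GstStreamCollection *collection = NULL;
    GList *selected = NULL;
    guint i;

    gst_message_parse_stream_collection (msg, &collection);

    for (i = 0; i < gst_stream_collection_get_size (collection); i++) {
      GstStream *stream = gst_stream_collection_get_stream (collection, i);
      GstStreamType type = gst_stream_get_stream_type (stream);

      /* Collect the stream IDs we want the demuxer to expose */
      if (type & (GST_STREAM_TYPE_AUDIO | GST_STREAM_TYPE_VIDEO))
        selected = g_list_append (selected,
                                  (gchar *) gst_stream_get_stream_id (stream));
    }

    gst_element_send_event (playbin3, gst_event_new_select_streams (selected));

    g_list_free (selected);
    gst_object_unref (collection);
  }
}

A streams-selected message will then confirm the selection, which is the right moment to update the UI.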
Another small difference between playbin and playbin3 concerns the source element setup. In playbin there is a read-only source GObject property and a source-setup GObject signal. In playbin3 only the latter is available, so your application should rely on source-setup instead of the notify::source GObject signal.
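A minimal sketch of what that looks like (the user-agent tweak is just an illustrative example, not something playbin3 requires):

static void
source_setup_cb (GstElement *playbin, GstElement *source, gpointer user_data)
{
  /* Configure the source element before it starts; for example, set a
   * custom user-agent if this particular source has such a property */
  if (g_object_class_find_property (G_OBJECT_GET_CLASS (source), "user-agent"))
    g_object_set (source, "user-agent", "my-player/1.0", NULL);
}

g_signal_connect (playbin3, "source-setup", G_CALLBACK (source_setup_cb), NULL);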
The gst-play-1.0 playback utility program already supports playbin3 so it provides a good source of inspiration if you consider porting your application to playbin3. As mentioned at the beginning of this post, WebKit also now supports playbin3, however it needs to be enabled at build time using the CMake -DUSE_GSTREAMER_PLAYBIN3=ON option. This feature is not part of the WebKitGTK+ 2.20 series but should be shipped in 2.22. As a final note I wanted to acknowledge my favorite worker-owned coop Igalia for allowing me to work on this WebKit feature and also our friends over at Centricular for all the quality work on playbin3.
March 18, 2018
With GTK4, we’ve been trying to find a better solution for image data. In GTK3 the objects we used for this were pixbufs and Cairo surfaces. But they don’t fit the bill anymore, so now we have GdkTexture and GdkPaintable.
GdkTexture
GdkTexture is the replacement for GdkPixbuf. Why is it better?
For a start, it is a lot simpler. The API looks like this:
int gdk_texture_get_width (GdkTexture *texture);
int gdk_texture_get_height (GdkTexture *texture);
void gdk_texture_download (GdkTexture *texture,
guchar *data,
gsize stride);
So it is a 2D pixel array and if you want to, you can download the pixels. It is also guaranteed immutable, so the pixels will never change. Lots of constructors exist to create textures from files, resources, data or pixbufs.
But the biggest difference between textures and pixbufs is that they don’t expose the memory that they use to store the pixels. In fact, before gdk_texture_download() is called, that data doesn’t even need to exist.
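For example, downloading the pixels could look like this; a minimal sketch, assuming the 4-bytes-per-pixel format that gdk_texture_download() writes:

int width = gdk_texture_get_width (texture);
int height = gdk_texture_get_height (texture);
gsize stride = (gsize) width * 4;
guchar *data = g_malloc (stride * height);

/* This is the point where the pixel data actually gets materialized */
gdk_texture_download (texture, data, stride);
/* ... use the pixels ... */
g_free (data);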
And this is used by the GL texture. The GtkGLArea widget, for example, uses this method to pass data around. GStreamer is expected to pass video in the form of GL textures, too.
GdkPaintable
But sometimes, you have something more complex than an immutable bunch of pixels. For example you could have an animated GIF or a scalable SVG. That’s where GdkPaintable comes in.
In abstract terms, GdkPaintable is an interface for objects that know how to render themselves at any size. Inspired by CSS images, they can optionally provide intrinsic sizing information that GTK widgets can use to place them.
So the core of the GdkPaintable interface is the function that makes the paintable render itself, plus the 3 functions that provide sizing information:
void gdk_paintable_snapshot (GdkPaintable *paintable,
GdkSnapshot *snapshot,
double width,
double height);
int gdk_paintable_get_intrinsic_width (GdkPaintable *paintable);
int gdk_paintable_get_intrinsic_height (GdkPaintable *paintable);
double gdk_paintable_get_intrinsic_aspect_ratio (GdkPaintable *paintable);
On top of that, the paintable can emit the “invalidate-contents” and “invalidate-size” signals when its contents or size changes.
To make this more concrete, let’s take a scalable SVG as an example: The paintable implementation would return no intrinsic size (returning 0 from those sizing functions achieves that) and whenever it is drawn, it would draw itself pixel-exact at the given size.
Or take the example of the animated GIF: It would provide its pixel size as its intrinsic size and draw the current frame of the animation scaled to the given size. And whenever the next frame of the animation should be displayed, it would emit the “invalidate-contents” signal.
And last but not least, GdkTexture implements this interface.
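To illustrate the shape of an implementation, here is a minimal sketch of a solid-color paintable with no intrinsic size (assuming, as described above, that unimplemented sizing vfuncs fall back to returning 0):

#include <gtk/gtk.h>

typedef struct {
  GObject parent_instance;
  GdkRGBA color;
} SolidPaintable;

typedef struct {
  GObjectClass parent_class;
} SolidPaintableClass;

static void solid_paintable_iface_init (GdkPaintableInterface *iface);

G_DEFINE_TYPE_WITH_CODE (SolidPaintable, solid_paintable, G_TYPE_OBJECT,
                         G_IMPLEMENT_INTERFACE (GDK_TYPE_PAINTABLE,
                                                solid_paintable_iface_init))

static void
solid_paintable_snapshot (GdkPaintable *paintable,
                          GdkSnapshot  *snapshot,
                          double        width,
                          double        height)
{
  SolidPaintable *self = (SolidPaintable *) paintable;

  /* Render pixel-exact at whatever size we are given */
  gtk_snapshot_append_color (GTK_SNAPSHOT (snapshot), &self->color,
                             &GRAPHENE_RECT_INIT (0, 0, width, height));
}

static void
solid_paintable_iface_init (GdkPaintableInterface *iface)
{
  /* Only snapshot() is implemented; the sizing vfuncs are left alone,
   * so the paintable reports no intrinsic size */
  iface->snapshot = solid_paintable_snapshot;
}

static void solid_paintable_class_init (SolidPaintableClass *klass) { }
static void solid_paintable_init (SolidPaintable *self) { }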
We’re currently in the process of changing all the code that in GTK3 accepted GdkPixbuf to now accept GdkPaintable. The GtkImage widget has of course been changed already, as have the drag’n’drop icons and GtkAboutDialog. Experimental patches exist to let applications provide paintables to the GTK CSS engine.
And if you now put all this information together about GStreamer potentially providing textures backed by GL images and creating paintables that do animations that can then be uploaded to CSS, you can maybe see where this is going…
Shotwell 0.28 “Braunschweig” is out.

Half a year later than I was expecting it to be, sorry. This release fixes 60 bugs! Get it at GNOME’s download server, from Git, or in the Shotwell PPA really soon. A big thank you to all the contributors that make up all the bits and pieces for such a release.
Notable features:
- The viewer can now be started in full screen mode
- We have added better support for images with transparent backgrounds
- The image processing pipeline got much faster
- Importing RAW images got much faster
- Finally got rid of one of the “Cannot claim USB device” errors for camera import
- Tumblr got promoted to be in the main export plugin set
- It can be built using Meson
Things we have lost:
- We removed F-Spot import. I believe this is no longer necessary.
- The Rajce and Yandex plugins are no longer in the plugin set that is built by default
Last week I attended the Web Engines Hackfest. The event was sponsored by Igalia (also hosting the event), Adobe and Collabora.
As usual I spent most of the time working on the WebKitGTK+ GStreamer backend, and Sebastian Dröge kindly joined and helped out quite a bit; make sure to read his post about the event!
We first worked on the WebAudio GStreamer backend, Sebastian cleaned up various parts of the code, including the playback pipeline and the source element we use to bridge the WebCore AudioBus with the playback pipeline. On my side I finished the AudioSourceProvider patch that was abandoned for a few months (years) in Bugzilla. It’s an interesting feature to have so that web apps can use the WebAudio API with raw audio coming from Media elements.
I also hacked on GstGL support for video rendering. It’s quite interesting to be able to share the GL context of WebKit with GStreamer! The patch is not ready for landing yet, but thanks to the reviews from Sebastian, Matthew Waters and Julien Isorce I’ll improve it and hopefully commit it soon in WebKit ToT.
Sebastian also worked on Media Source Extensions support. We had a very basic, non-working backend that required… a rewrite, basically :) I hope we will have this reworked backend in trunk soon. Sebastian already has it working on YouTube!
The event was interesting in general, with discussions about rendering engines, rendering and JavaScript.
Sometimes you simply do not want to bundle everything in a single package, such as optional plugins with large dependencies or third-party plugins that are not supported. In this post I’ll show you how to handle this with Flatpak, using HexChat as an example.
Flatpak has a feature called extensions that allows a package to be mounted within another package. This is used in a variety of ways, but it can be used by any application as a way to insert any optional bits. So let’s see how to define one (details omitted for brevity):
{
"app-id": "io.github.Hexchat",
"add-extensions": {
"io.github.Hexchat.Plugin": {
"version": "2",
"directory": "extensions",
"add-ld-path": "lib",
"merge-dirs": "lib/hexchat/plugins",
"subdirectories": true,
"no-autodownload": true,
"autodelete": true
}
},
"modules": [
{
"name": "hexchat",
"post-install": [
"install -d /app/extensions"
]
}
]
}
The exact details of these are best documented in the Extension section of man flatpak-metadata
but I’ll go over the ones used here:
- io.github.Hexchat.Plugin is the name of the extension point, and all extensions will have the same prefix.
- version allows you to have parallel installations of extensions if you break ABI or API; for example, 2 refers to HexChat 2.x in case it makes a 3.0 with an API break. (It is probably smart in the future to add the runtime version here also, since that breaks ABI too.)
- directory sets a subdirectory where everything is mounted relative to your prefix, so /app/extensions is where they will go.
- subdirectories allows you to have multiple extensions, and each one will get its own subdirectory. So io.github.Hexchat.Plugin.Perl is mounted at /app/extensions/Perl.
- merge-dirs will merge the contents of subdirectories that match these paths (relative to their prefix). So for this case the contents of /app/extensions/Perl/lib/hexchat/plugins and /app/extensions/Python/lib/hexchat/plugins will both be in /app/extensions/lib/hexchat/plugins. This allows limiting the complexity of your loader to only need to look in one directory (applications will need to be configured/patched to look there).
- add-ld-path adds a path, relative to the extension's prefix, to the library path so that, for example, /app/extensions/Python/lib/libpython.so can be loaded.
- no-autodownload will not automatically install all extensions, which is otherwise the default.
- autodelete will remove all extensions when the application is removed.
So now that we defined an extension point lets make an extension:
{
"id": "io.github.Hexchat.Plugin.Perl",
"branch": "2",
"runtime": "io.github.Hexchat",
"runtime-version": "stable",
"sdk": "org.gnome.Sdk//3.26",
"build-extension": true,
"separate-locales": false,
"appstream-compose": false,
"build-options": {
"prefix": "/app/extensions/Perl",
"env": {
"PATH": "/app/extensions/Perl/bin:/app/bin:/usr/bin"
}
},
"modules": [
{
"name": "perl"
},
{
"name": "hexchat-perl",
"post-install": [
"install -Dm644 plugins/perl/perl.so ${FLATPAK_DEST}/lib/hexchat/plugins/perl.so",
"install -Dm644 --target-directory=${FLATPAK_DEST}/share/metainfo data/misc/io.github.Hexchat.Plugin.Perl.metainfo.xml",
"appstream-compose --basename=io.github.Hexchat.Plugin.Perl --prefix=${FLATPAK_DEST} --origin=flatpak io.github.Hexchat.Plugin.Perl"
]
}
]
}
So again, going over some key points quickly: id has the correct prefix, branch refers to the extension version, build-extension should be obvious, and runtime is what defines the extension point. A less obvious thing to note is that your extension's prefix will not be in $PATH or $PKG_CONFIG_PATH by default, so you may need to set them (see build-options in man flatpak-manifest). $FLATPAK_DEST is also defined as your extension's prefix, though not everything expands variables.
While not required, you should also install AppStream metainfo for easy discoverability. For example:
<?xml version="1.0" encoding="UTF-8"?>
<component type="addon">
<id>io.github.Hexchat.Plugin.Perl</id>
<extends>io.github.Hexchat.desktop</extends>
<name>Perl Plugin</name>
<summary>Provides a scripting interface in Perl</summary>
<url type="homepage">https://hexchat.github.io/</url>
<project_license>GPL-2.0+</project_license>
<metadata_license>CC0-1.0</metadata_license>
<update_contact>tingping_AT_fedoraproject.org</update_contact>
</component>
Which will be shown in GNOME Software:

March 16, 2018
In my recent posts, I’ve mostly focused on finding my way around with GNOME Builder and using it to do development in Flatpak sandboxes. But I am not really the easiest target audience for an IDE like GNOME Builder, having spent most of my life on the commandline with tools like vim and make.
So, what about the commandline in an Atomic Workstation environment? There are many container tools, like buildah, atomic, oc, podman, and so on. I am not going to talk about these, since I don’t know them very well, and they are covered, e.g. on www.projectatomic.io.
But there are a few commands that are essential to life on the Atomic Workstation: rpm-ostree and flatpak.
rpm-ostree
First of all, there’s rpm-ostree, which is the commandline frontend to the rpm-ostreed daemon that manages the OS image(s) on the Atomic Workstation.
You can run
rpm-ostree status
to get some information about your OS image (and the other images that may be present on your system). And you can run
rpm-ostree upgrade
to get the latest update for your OS image (the terminology clash here is a bit unfortunate; rpm-ostree calls an upgrade what most Linux distros and packaging tools call an update).
You can run this command as a normal user in a terminal, and rpm-ostreed will present you with a polkit dialog for its privileged operations. Recently, rpm-ostreed has also gained the ability to check for and deploy upgrades automatically.
An important thing to keep in mind is that rpm-ostree never changes your running system. You have to reboot into the new image to see the changes, so
systemctl reboot
should be in your repertoire of commands as well. Alternatively, you can use the --reboot option to tell rpm-ostree to reboot when the upgrade command completes.
flatpak
The other essential command is flatpak. Where rpm-ostree controls your OS image, flatpak rules the applications. flatpak has many commands that are worth exploring, I’ll only mention the most important ones here.
It is quite common to have more than one source for flatpaks enabled.
flatpak remotes
lists them all. If you want to find applications, then
flatpak search
will do that for you, and
flatpak install
will let you install what you found. An important detail to point out here is that applications can be installed system-wide (in /var) or per-user (in ~/.local/share). You can choose the location with the --user and --system options. If you choose to install system-wide, you will get a polkit prompt, since this is a privileged operation.
After installing applications, you should keep them up-to-date by installing updates. The most straightforward way to do so is to just run
flatpak update
which will install available updates for all applications. To just check if updates are available, you can use
flatpak remote-ls --updates
Launching applications
Probably the most important thing you will want to do with flatpak is to run applications. Unsurprisingly, the command to do so is called run, and it expects you to specify the unique application ID:
flatpak run org.gnome.gitg
This is certainly a departure from the traditional commandline, and could be considered cumbersome (even though it has bash completion for the application ID).
Thankfully, flatpak has recently gained a way to recover the familiar interface. It now installs shell wrappers for the flatpak run command in ~/.local/share/flatpak/bin. After adding that directory to your PATH, you can run gitg like this:
org.gnome.gitg
If (like me), you are still not satisfied with this, you can add a shell alias to get the traditional command name back:
export PATH=$PATH:$HOME/.local/share/flatpak/bin
alias gitg=org.gnome.gitg
Now gitg works again, as it used to. Nice!
March 15, 2018
This is something of a frequently asked question, as it comes up every once in a while. The pkg-config documentation is fairly terse, and even pkgconf hasn’t improved on that.
The problem
Let’s assume you maintain a project that has a dependency using pkg-config.
Let’s also assume that the project you are depending on loads some files from a system path, and your project plans to install some files under that path.
The questions are:
- how can the project you are depending on provide an appropriate way for you to discover where that path is
- how can the project you maintain use that information
The answer to both questions is: by using variables in the pkg-config file. Sadly, there’s still some confusion as to how those variables work, so this is my attempt at clarifying the issue.
Defining variables in pkg-config files
The typical preamble stanza of a pkg-config file is something like this:
prefix=/some/prefix
libdir=${prefix}/lib
datadir=${prefix}/share
includedir=${prefix}/include
Each variable can reference other variables; for instance, in the example
above, all the other directories are relative to the prefix variable.
Those variables can be extracted via pkg-config itself:
$ pkg-config --variable=includedir project-a
/some/prefix/include
As you can see, the --variable command line argument will automatically
expand the ${prefix} token with the content of the prefix variable.
Of course, you can define any and all variables inside your own pkg-config
file; for instance, this is the definition of the giomoduledir variable
inside the gio-2.0 pkg-config file:
prefix=/usr
libdir=${prefix}/lib
…
giomoduledir=${libdir}/gio/modules
This way, the giomoduledir variable will be expanded to
/usr/lib/gio/modules when asking for it.
If you are defining a path inside your project’s pkg-config file, always make sure you’re using a relative path!
We’re going to see why this is important in the next section.
Using variables from pkg-config files
Now, this is where things get complicated.
As I said above, pkg-config will expand the variables using the definitions
coming from the pkg-config file; so, in the example above, getting the
giomoduledir will use the prefix provided by the gio-2.0 pkg-config
file, which is the prefix into which GIO was installed. This is all well and
good if you just want to know where GIO installed its own modules, in the
same way you want to know where its headers are installed, or where the
library is located.
What happens, though, if your own project needs to install GIO modules in a shared location? More importantly, what happens if you’re building your project in a separate prefix?
If you’re thinking: “I should install it into the same location as specified
by the GIO pkg-config file”, think again. What happens if you are building
against the system’s GIO library? The prefix into which it has been
installed is only going to be accessible by the administrator user; or it
could be on a read-only volume, managed by libostree, so sudo won’t save you.
Since you’re using a separate prefix, you really want to install the files provided by your project under the prefix used to configure your project. That does require knowing all the possible paths used by your dependencies, hard coding them into your own project, and ensuring that they never change.
This is clearly not great, and it places additional burdens on your role as a maintainer.
The correct solution is to tell pkg-config to expand variables using your own values:
$ pkg-config \
> --define-variable=prefix=/your/prefix \
> --variable=giomoduledir \
> gio-2.0
/your/prefix/lib/gio/modules
This lets you rely on the paths as defined by your dependencies, and does not attempt to install files in locations you don’t have access to.
Build systems
How does this work, in practice, when building your own software?
If you’re using Meson, you can use the
get_pkgconfig_variable() method of the dependency object, making sure to
replace variables:
gio_dep = dependency('gio-2.0')
giomoduledir = gio_dep.get_pkgconfig_variable(
'giomoduledir',
define_variable: [ 'libdir', get_option('libdir') ],
)
This is the equivalent of the --define-variable/--variable command line arguments.
If you are using Autotools, sadly, the PKG_CHECK_VAR m4 macro won’t be
able to help you, because it does not allow you to expand variables. This
means you’ll have to deal with it in the old fashioned way:
giomoduledir=`$PKG_CONFIG --define-variable=libdir=$libdir --variable=giomoduledir gio-2.0`
Which is annoying, and yet another reason why you should move away from Autotools and to Meson. 😃
Caveats
All of this, of course, works only if paths are expressed as locations relative to other variables. If that does not happen, you’re going to have a bad time. You’ll still get the variable as requested, but you won’t be able to make it relative to your prefix.
If you maintain a project with paths expressed as variables in your
pkg-config file, check them now, and make them relative to existing
variables, like prefix, libdir, or datadir.
If you’re using Meson to generate your pkg-config file, make sure that the paths are relative to other variables, and file bugs if they aren’t.
March 14, 2018
I just flipped the switch for the 3.28 Release Video. I’m really excited about all the awesome new features the community has landed, but I am a bit sad that I didn’t have time to put more effort into the video this time around. A busy schedule collided with technical difficulties in recording some of the apps. When I was staring at my weekly schedule on Monday, there didn’t seem to be much chance of a release video being published at all…
However, in the midst of all that I decided to take this up as a challenge and see what I could come up with given the 2-3 days time. In the end, I identified some time/energy demanding issues I need to find solutions to:
- Building GNOME Apps before release and recording them is painful and prone to error and frustration. I hit errors when upgrading Fedora Rawhide, and even after updating, many apps were not on the latest version. Flatpak applications are fortunately super easy for me to deal with, but not all applications are available as flatpaks. Ideally I would also need to set up a completely clean environment, since many apps draw on content in the home folder. And currently I need to post-process all the raw material to get the transparent window films.
- I ran out of (8GB of) memory several times, and it’s almost faster to hold the power button down and boot again than to wait for Linux memory handling to deal with it. I will definitely need to find a solution to this; it builds up a lot of frustration for me.
I am already working on a strategy for the first problem. A few awesome developers have helped me record some of the apps in the past, and this has been really helpful. I’m making a list of contacts I need to get in touch with to get these recordings done, and I need to send out emails in time with the freezes in the release cycle. It makes my work and the musician’s work much easier if we know exactly what will go in the video and for how long. I also had a chat with Felipe about maybe making a GNOME Shell extension which could take care of setting the wallpaper, recording in the right resolution, and uploading to a repository somewhere. As for the second problem, I think I’m going to need a new laptop or to upgrade my current one. I definitely have motivation to look into that based on this experience now, hehe…
“Do you have time for the next release video?” you might ask, and that is a valid question. I don’t see the problem as being time, but more as a problem of spending my contribution energy effectively. I really like making these videos – but mainly the animation and video editing parts. Building apps, working around errors and bugs, post-processing and all that, just to get the recording assets I need: that’s the part that currently takes up most of my contribution energy. If I can minimize that, I think I will have much more creative energy to spend on the video itself. Honestly, all the awesome contributions in our GNOME Apps and components really deserve that much extra polish.
Thanks everyone for helping with the video this time around!
One of the great aspects of the Flatpak model, apart from separating apps from the OS, is that you can have multiple versions of the same app installed concurrently. You can rely on the stable release while trying things out in the development or nightly built version. This creates a need to easily identify the two versions apart when launching it with the shell.
I think Mozilla has set a great precedent on how to manage multiple version identities.
Thus came the desire to spend a couple of nights working on the Builder nightly app icon. While we’ve generally tried to simplify app icons, to match what’s happening on the mobile platforms and trickling down to the older desktop OSes, I’ve decided to retain the 3D workflow for the Builder icon. Mainly because I want to get better at it, but also because it’s a perfect platform for kit bashing.
For Builder specifically I’ve identified some properties I think should describe the ‘nightly’ icon:
- Dark (nightly)
- Modern (new stuff)
- Not as polished – dangling cables, open panels, dirty
- Unstable / indicating it can move (wheels, legs …)
Next up is taking a stab at a few more apps, and then it’s time to develop some guidelines for these nightly app icons and emphasize them with some Shell styling. Overlaid emblems haven’t particularly worked in the past, but perhaps some tag style for the label could do.
Compared to analog audio, digital audio processing is extremely versatile, is much easier to design and implement than analog processing, and also adds effectively zero noise along the way. With rising computing power and dropping costs, every operating system has had drivers, engines, and libraries to record, process, playback, transmit, and store audio for over 20 years.
Analog vs Digital
Electricity is fast: an electrical signal takes 0.001 milliseconds to travel 300 metres (984 feet). This number is ridiculously small, especially when compared to the speed of sound: sound takes 874 milliseconds (almost a second) to cover the same distance.
All analog effects and filters obey similar equations. If you're using, say, an analog pedal with an electric guitar, the signal is transformed continuously by an electrical circuit, so the latency is a function of the wire length (plus capacitors/transistors/etc), and is almost always negligible.
Digital audio is transmitted in "packets" (buffers) of a particular size, like a bucket brigade, but at the speed of electricity. Since the real world is analog, this means that to record audio, you must use an Analog-Digital Converter (ADC). The ADC quantizes the signal into digital measurements (samples), packs multiple samples into a buffer, and sends it forward. This means your latency is now:

latency = signal transmission time (at the speed of electricity) + time to fill one buffer with samples

We saw above that the first part is insignificant; what about the second part?
Latency is measured in time, but buffer size is measured in bytes. For 16-bit integer audio, each measurement (sample) is stored as a 16-bit integer, which is 2 bytes. That's the theoretical lower limit on the buffer size. The sample rate defines how often measurements are made; these days it is usually 48kHz. This means each sample contains 0.021ms of audio. To go lower, you need to increase the sample rate to 96kHz or 192kHz.
However, when general-purpose computers are involved, the buffer size is almost never lower than 32 bytes, and is usually 128 bytes or larger. For 16-bit integer audio at 48kHz, a 32-byte buffer is 0.67ms, and a 128-byte buffer is 2.67ms. This is our buffer size, and hence the base latency while recording (or playing) digital audio.
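As a sanity check: the latency contributed by one buffer is just the number of samples in it divided by the sample rate. A tiny sketch (note that the figures above line up when the buffer sizes are read as sample counts):

/* Latency contributed by one buffer, in milliseconds */
static double
buffer_latency_ms (unsigned int samples_per_buffer, unsigned int sample_rate)
{
  return 1000.0 * samples_per_buffer / sample_rate;
}

/* buffer_latency_ms (32, 48000)  ->  ~0.67 ms
   buffer_latency_ms (128, 48000) ->  ~2.67 ms */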
Digital effects operate on individual buffers, and will add an additional amount of latency depending on the delay added by the CPU processing required by the effect. Such effects may also add latency if the algorithm used requires that, but that's the same with analog effects.
The Digital Age
So everyone's using digital. But isn't 2.67ms a lot of additional latency?
It might seem that way till you think about it in real-world terms. Sound travels less than a meter (3 feet) in that time, and that sort of delay is completely unnoticeable by humans—otherwise we'd notice people's lips moving before we heard their words.
In fact, 2.67ms is too small for the majority of audio applications!
To process such small buffer sizes, you'd have to wake the CPU up 375 times a second, just for audio. This is highly inefficient and wastes a lot of power. You really don't want that on your phone or your laptop, and it is completely unnecessary in most cases anyway.
For instance, your music player will usually use a buffer size of ~200ms, which is just 5 CPU wakeups per second. Note that this doesn't mean that you will hear sound 200ms after hitting "play". The audio player will just send 200ms of audio to the sound card at once, and playback will begin immediately.
Of course, you can't do that with live playback such as video calls—you can't "read-ahead" data you don't have. You'd have to invent a time machine first. As a result, apps that use real-time communication have to use smaller buffer sizes because that directly affects the latency of live playback.
That brings us back to efficiency. These apps also need to conserve power, and 2.67ms buffers are really wasteful. Most consumer apps that require low latency use 10-15ms buffers, and that's good enough for things like voice/video calling, video games, notification sounds, and so on.
Ultra Low Latency
There's one category left: musicians, sound engineers, and other folk that work in the pro-audio business. For them, 10ms of latency is much too high!
You usually can't notice a 10ms delay between an event and the sound for it, but when making music, you can hear it when two instruments are out-of-sync by 10ms or if the sound for an instrument you're playing is delayed. Instruments such as drum snare are more susceptible to this problem than others, which is why the stage monitors used in live concerts must not add any latency.
The standard in the music business is to use buffers that are 5ms or lower, down to the 0.67ms number that we talked about above.
Power consumption is absolutely no concern, and the real problems are the accumulation of small amounts of latencies everywhere in your stack, and ensuring that you're able to read buffers from the hardware or write buffers to the hardware fast enough.
Let's say you're using an app on your computer to apply digital effects to a guitar that you're playing. This involves capturing audio from the line-in port, sending it to the application for processing, and playing it from the sound card to your amp.
The latencies while capturing and outputting audio are both multiples of the buffer size, so they add up very quickly. The effects app itself will also add a variable amount of latency, and at 2.67ms buffer sizes you will find yourself quickly approaching 10ms of latency from line-in to amp-out. The only way to lower this is to use a smaller buffer size, which is precisely what pro-audio hardware and software enables.
The second problem is that of CPU scheduling. You need to ensure that the threads that are fetching/sending audio data to the hardware and processing the audio have the highest priority, so that nothing else will steal CPU-time away from them and cause glitching due to buffers arriving late.
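A minimal sketch of that scheduling step on a POSIX system (the priority value is an arbitrary assumption, and SCHED_FIFO usually requires CAP_SYS_NICE or an RLIMIT_RTPRIO grant):

#include <pthread.h>
#include <sched.h>

/* Give the audio thread real-time (SCHED_FIFO) scheduling so that other
 * workloads can't steal CPU time and make buffers arrive late */
static int
make_thread_realtime (pthread_t thread)
{
  struct sched_param param = { .sched_priority = 80 };

  return pthread_setschedparam (thread, SCHED_FIFO, &param);
}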
This gets harder as you lower the buffer size because the audio stack has to do more work for each bit of audio. The fact that we're doing this on a general-purpose operating system makes it even harder, and requires implementing real-time scheduling features across several layers. But that's a story for another time!
I hope you found this dive into digital audio interesting! My next post will be about my journey in implementing ultra low latency capture and render on Windows in the WASAPI plugin for GStreamer. This was already possible on Linux with the JACK GStreamer plugin and on macOS with the CoreAudio GStreamer plugin, so it will be interesting to see how the same problems are solved on Windows. Tune in!
March 13, 2018
Hey there! If you are reading this, network stats are probably of some interest to you; and even if they aren't, just recall that while requesting this page you had your share of packets transferred over the vast network and delivered to your system. I guess now you'd like to check out the work which has been going on in Libgtop and exploit the network stats details for your personal use.
This post is going to be a brief update about what's new in Libgtop.
Crux of the NetStats Implementation
The implementation I've used makes intensive use of pcap handles to start a capture on your system and segregate the packets into the processes they belong to. The following part is a detailed explanation of this, so in case you'd rather skip the details, jump to the next part.
Flow of the setup is as follows:
- Initialize pcap handles for the different interfaces on a system.
- Start packet dispatching on each pcap handle. Note: these handles have been set to capture packets in non-blocking mode.
- As we get any packet, start processing it.
- This capture is repeated every time period, which is set in the DBus interface.
Assigning Packets to their Respective Processes
In my opinion this was the coolest part of the entire project, which gave me the liberty to filter the packets until I'd finally atomized them. It felt like a recursive butchering of the packets without having a formal medical science qualification. So bear in mind that you too can flaunt that you can operate like a doctor, but only on inanimate objects 😜. Well, jokes aside, coming to the technical aspect:
What the packet says is: keep discarding the headers prepended to it until you reach the desired header, in our case the TCP header. The flow of parsing was somewhat simple; it required checking the following headers, in the sequence below (a rough sketch in C follows the list):
- Linktype: Ethernet
- EtherType:
- IP
- IPv6
- TCP
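Here is a rough sketch of that sequence (not the actual Libgtop code; the IPv6 branch is elided for brevity):

#include <pcap.h>
#include <arpa/inet.h>
#include <net/ethernet.h>
#include <netinet/ip.h>
#include <netinet/tcp.h>

static void
packet_cb (u_char *user, const struct pcap_pkthdr *hdr, const u_char *bytes)
{
  const struct ether_header *eth = (const struct ether_header *) bytes;

  /* Discard the Ethernet header; only IP traffic is interesting */
  if (ntohs (eth->ether_type) != ETHERTYPE_IP)
    return;

  const struct ip *iph = (const struct ip *) (bytes + sizeof (*eth));

  /* Discard the IP header; only TCP segments are interesting */
  if (iph->ip_p != IPPROTO_TCP)
    return;

  const struct tcphdr *tcph =
      (const struct tcphdr *) ((const u_char *) iph + iph->ip_hl * 4);

  /* The addresses in the IP header plus the ports in the TCP header
   * identify the connection; hdr->len is the packet length used for
   * the stats */
  (void) tcph;
  (void) user;
  (void) hdr;
}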
A bit of Network Stats design dosage
Just so that you are in sync with how my implementation does what it should do, the design is as follows:
We know that every process creates various connections. Any general socket detail looks like
src ip (fixed): src port (variable) - dest ip (fixed) : dest port (variable)
These sockets are listed in /proc/net/tcp. You might want a more detailed explanation about this.
The packet headers will give us the connection details; we'll just have to assign them to the correct process.
This means each process has a list of connections , and each connection has a list of packets which in turn has all the relevant packets.
So getting network stats for a process is as simple as summing up the packet length details in the packet list for each connection in the connection list for that process.
Note: Stale packets aren't considered while summing up.
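In code, that summation boils down to something like the following sketch, with hypothetical Process/Connection/Packet types mirroring the design described above:

guint64
process_total_bytes (Process *process)
{
  guint64 total = 0;

  for (GList *c = process->connections; c != NULL; c = c->next) {
    Connection *connection = c->data;

    for (GList *p = connection->packets; p != NULL; p = p->next) {
      Packet *packet = p->data;

      if (!packet->stale)     /* stale packets aren't considered */
        total += packet->length;
    }
  }

  return total;
}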
This is what the design looks like:

TCP parsing
The parameters passed to this callback are:
- The TCP header has information like the source port and the destination port.
- The pcap packet header has the length of the packet.
- The source and destination IP are stored as members of the packet_handle struct.
Given that we have all the necessary details, the packet is now initialized and all its fields are assigned values using the parameters passed to the TCP parsing callback mentioned above. Next we check which connection to put this new packet into; for this we make use of a reference packet bound to each connection. After adding the packet to the connection, in case we had to create a new connection, we also need to add the connection to the relevant process. For this we use the Inode-to-PID mapping explained in the earlier post.
Getting stats
In every refresh cycle getting stats is as simple as just summing up packet lengths for a given process.
Choosing interface to expose the libgtop API
The next thing of concern was how to make this functionality available to other applications.
- Daemon already in libgtop for other OSs
Libgtop already had a daemon to get details that require escalated privileges, although it wasn't configured to run on Linux. Starting the packet capture needs root permissions, which meant we required something like that to enable us to do so. After discussions with Felipe and Robert it was decided to start the daemon and access the stats over DBus.
- Using DBus
We encountered some issues launching the capture as a root process, hence we decided to switch to running a systemd service and launching a DBus interface, through which not only do we get the stats but the invocation of the capture is also exposed to the application. Later we'll move to using the daemon provided by libgtop on Linux, so that we are able to get historical stats too.
API exposed through DBus

- GetStats: returns a dictionary of PIDs with the corresponding bytes sent and received.
- InitCapture: sets up the pcap handles before starting the capture and then invokes a refresh of the capture every second, though this is subject to change.

- During every refresh cycle the stats are logged into a singleton GPtrArray which is used by GetStats to send the stats over DBus.

- Set and Reset capture: these just start or stop the capture initiated by InitCapture. On the Usage end, this is invoked based on whether the network subview is active or not: while the network subview is active the capture is kept ongoing; otherwise we break out of the capture.
Setting up the system bus
To setup the System bus these two files had to be added to the following paths
- org.gnome.GTopNetStats.service in
/usr/share/dbus-1/system-services

- org.gnome.GTop.NetStats.conf in
/etc/dbus-1/system.d

Inspecting using D-Feet
Here’s what we see on inspecting the interface using D-Feet:

GetStats output : {pid:(bytes sent, bytes recv)}

You might be done with reading all these implementation details, but the most important thing I haven't mentioned until now is everyone who has been behind helping me do all this.
(Keeping it to be in lexical order)
I’m extremely grateful to Felipe Borges and Robert Roth for their constant support and reviews.
Felipe's design-related corrections, like switching the implementation to singletons, and, on the Libgtop end, Robert Roth helping me with those quick hacks with the daemon and DBus even after his working hours, and working on weekends to finally get things done, are what make me indebted to them.
I have a tinge of guilt for pinging you all on weekends too.
Did I just forget to mention the entire community in general? Members like Alberto and Carlos were also among those I sought help from.
If you did reach this fag end without getting bored, let me tell you that I'm yet to post the details about the Usage integration.
Feel free to check the work.
Stay tuned !🙂
March 12, 2018
- Practice with E. scales decaying over the weekend; hmm. Mail chew, sync with Miklos. Lunch with J. Sync with kendy, call with Jeroen. TDF call. Stories in the evening - Asimov for E. Max Lucado for H.
March 11, 2018
- Up, cooked breakfast in bed for J. with M. - presents, cards, a banner, etc. for the sweetheart. Out to All Saints, played bass; enjoyed Max's sermon. Home for roast lunch.
- Slugged; digested; played guitar much of the afternoon while babes played lego (a break from minetest), and Smash-Up. Julie over for tea.
March 10, 2018
webkitgtk is the GTK+ port of WebKit. webkitgtk provides web functionality for many things including GNOME Online Accounts’ login panels; Evolution’s HTML email editor and viewer; and the engine for the Epiphany web browser (also known as GNOME Web).
Last year, I announced here that Debian 9 “Stretch” included the latest version of webkitgtk (Debian’s package is named webkit2gtk). At the time, I hoped that Debian 9 would get periodic security and bugfix updates. Nine months later, let’s see how we’ve been doing.
Release History
Debian 9.0, released June 17, 2017, included webkit2gtk 2.16.3 (up to date).
Debian 9.1 was released July 22, 2017 with no webkit2gtk update (2.16.5 was the current release at the time).
Debian 9.2, released October 8, 2017, included 2.16.6 (There was a 2.18.0 release available then but for the first stable update, we kept it simple by not taking the brand new series.)
Debian 9.3 was released December 9, 2017 with no webkit2gtk update (2.18.3 was the current release at the time).
Debian 9.4 released March 10, 2018 (today!), includes 2.18.6 (up to date).
Release Schedule
webkitgtk development follows the GNOME release schedule and produces new major updates every March and September. Only the current stable series is supported (although sometimes there can be a short overlap; 2.14.6 was released at the same time as 2.16.1). Distros need to adopt the new series every six months.
Like GNOME, webkitgtk uses even numbers for stable releases (2.16 is a stable series, 2.16.3 is a point release in that series, but 2.17.3 is a development release leading up to 2.18, the next stable series).
There are webkitgtk bugfix releases, approximately monthly. Debian stable point releases happen approximately every two or three months (the first point release was quicker).
In a few days, webkitgtk 2.20 will be released. Debian 9.5 will need to include 2.20.1 (or 2.20.2) to keep users on a supported release.
Report Card
From five Debian 9 releases, we have been up to date in 2 or 3 of them (depending on how you count the 9.2 release).
Using a letter grade scale, I think I'd give Debian a B or B- so far. But this is significantly better than Debian 8, which offered no webkitgtk updates at all except through backports. In my grading, Debian could get an A- if we consistently updated webkitgtk in these point releases.
To get a full A, I think Debian would need to push the new webkitgtk updates (after a brief delay for regression testing) directly as security updates without waiting for point releases. Although that proposal has been rejected for Debian 9, I think it is reasonable for Debian 10 to use this model.
If you are a Debian Developer or Maintainer and would like to help with webkitgtk updates, please get in touch with Berto or me. I, um, actually don’t even run Debian (except briefly in virtual machines for testing), so I’d really like to turn over this responsibility to someone else in Debian.
Appendix
I find the Repology webkitgtk tracker to be fascinating. For one thing, I find it humorous how the same package can have so many different names in different distros.
March 09, 2018
Flathub is a new distribution channel for Linux desktop apps: truly distro-agnostic, unifying across the abundance of Linux distributions. I had been planning for a long time to add an application to Flathub and see what the experience is like, especially compared to traditional distro packaging (I'm a Fedora packager). And I finally got to it last week.

In Fedora I maintain PhotoQt, a very fast image viewer with a very unorthodox UI. Its developer is very responsive and open to feedback. Already back in 2016 I suggested he provide PhotoQt as a flatpak. He did so and found making a flatpak really easy. However, that was before Flathub, so he had to host his own repo.
Last week I was notified about a new release of PhotoQt, so I prepared updates for Fedora and noticed that the Flatpak support became “Coming soon” again. So I was like “hey, let’s get it back and to Flathub”. I picked up the two-year-old flatpak manifest, and started rewriting it to successfully build with the latest Flatpak and meet Flathub requirements.
First I updated the dependencies. You add dependencies to the manifest in a pretty elegant way, but what's really time-consuming is getting checksums of the official archives. Most projects don't offer them at all, so you have to download the archive and generate the checksum yourself. And you have to do it with every update of that dependency. I'd love to see some repository of modules. Many apps share the same dependencies, so why do the same work again and again with every manifest?
Need to bundle the latest LibRaw? Go to the repository and pick the module info for your manifest:
{
"name": "libraw",
"cmake": false,
"builddir": true,
"sources": [ { "type": "archive", "url": "https://www.libraw.org/data/LibRaw-0.18.8.tar.gz", "sha256":"56aca4fd97038923d57d2d17d90aa11d827f1f3d3f1d97e9f5a0d52ff87420e2" } ]
}
And on top of such a repo you could actually build really nice tooling. You could let the authors of apps add dependencies simply by picking them from the list, and you could generate the starting manifest for them. You could also check for dependency updates for them. LibRaw has a new version, wanna bundle it and see how your app builds with it? The LibRaw module section of your manifest would be replaced by the new one and a build triggered.
Of course such a repo of modules would have to be curated because one could easily sneak in a malicious module. But it would make writing manifests even easier.
Besides updating dependencies I also had to change the required runtime. Back in 2016 KDE only had a testing runtime without any versioning. Flathub now includes KDE runtime 5.10, so I used it. PhotoQt also uses "photoqt" in all file names, and Flatpak/Flathub now requires the reverse-DNS format: org.qt.photoqt. Fortunately flatpak-builder can do the renaming for you; you just need to state it in the manifest:
"rename-desktop-file": "photoqt.desktop",
"rename-appdata-file": "photoqt.appdata.xml",
"rename-icon": "photoqt",
Once I was done with the manifest, I looked at the appdata file. PhotoQt has it in pretty good shape; it was submitted by me when I packaged it for Fedora. But there were still a couple of things missing which are required by Flathub: OARS and release info. So I added them.
I proposed all the changes upstream and at this point PhotoQt was pretty much ready for submitting to Flathub. I never intended to maintain PhotoQt in Flathub myself. There should be a direct line between the app author and users, so apps should be maintained by app authors if possible. I knew that upstream was interested in adding PhotoQt to Flathub, so I contacted the upstream maintainer and asked him whether he wanted to pick it up and go through the Flathub review process himself or whether I should do it and then hand over the maintainership to him. He preferred the former.
The review was pretty quick, and it only took 2 days between submitting the app and its acceptance to Flathub. There were three minor issues: 1. the reviewer asked if it's really necessary to give the app access to the whole host; 2. the app-id didn't match the app name in the manifest (case sensitivity); 3. by copy-pasting I added some spaces which broke the appdata file, and of course I was too lazy to run validation before submitting it.
And that was it. Now PhotoQt is available in Flathub. I don't remember exactly how much time it took me to get PhotoQt into Fedora, but I think it was definitely more, and the spec file is more complex than the flatpak manifest, although I prefer the format of spec files to JSON.
Is your favorite app not available in Flathub? Just go ahead, flatpak it, then talk to upstream and try to hand the maintainership over to them.
I’ve been working on some groundwork features to sneak into Builder 3.28 so that we can build upon them for 3.30. In particular, I’ve started to land device abstractions. The goal around this is to make it easier to do cross-architecture development as well as support devices like phones, tablets, and IoT.
For every class and kind of device we want to support, some level of integration work will be required for a great experience. But a lot of it shares some common abstractions. Builder 3.28 will include some of that plumbing.
For example, we can detect various Qemu configurations and provide cross-architecture building for Flatpak. This isn’t using a cross-compiler, but rather a GCC compiled for aarch64 as part of the Flatpak SDK. This is much slower than a proper cross-compiler toolchain, but it does help prove some of our abstractions are working correctly (and that was the goal for this early stage).
Some of the big things we need to address as we head towards 3.30 include running the app on a remote device as well as hooking up gdb. I’d like to see Flatpak gain the support for providing a cross-compiler via an SDK extension too.
I’d like to gain support for simulators in 3.30 as well. My personal effort is going to be based around GNU-based systems. However, if others join in and want to provide support for other sorts of devices, I’d be happy to get the contributions.
Anyway, Builder 3.28 can do some interesting things if you know where to find them. So for 3.30 we’ll polish them up and make them useful to vastly more people.
March 08, 2018
I dig online maps like everyone else, but sharing a location is somewhat clumsy. The W3W service addresses the issue by chunking up the whole world into 3x3m squares and assigning each a name (supposedly around 57 trillion of them). Sometimes it's a bit of a tongue twister, but most of the time it's fun to say to meet at a "massive message chuckle" for some fpv flying. I'm really surprised this didn't take off.
March 07, 2018
All of this has worked flawlessly for tens of years. However, recently I have noticed that many programs on multiple platforms seem to alter their display language based on keyboard layout, which is just plain wrong. Display language should be chosen based on the display language setting and nothing else.
I first noticed this in the output of ls, which one would imagine to have reached stability ages ago.
Here we see that ls has chosen to print months in Finnish. Why? I have no idea. This was weird on its own, but then the issue spread to other operating systems as well. For no reason at all, the existing Gimp install switched its display language to Finnish.
Let me reiterate: no setting was changed and the version of Gimp was exactly the same. One day it just decided to change its language to Finnish.
Then the issue spread to Windows.
VLC on Windows has chosen, on my behalf, to show its menus in Finnish on a completely English Windows 7 install. The only things it could be using for language detection are geolocation and the keyboard settings, and both of these are terrible ideas. The OS has a language. It is very clearly specified. All other applications obey it; VLC should too.
The real kicker here is that Gimp on Windows displays English text correctly, as does VLC on macOS.
The newest case is the new GNOME-ified Ubuntu, whose lock screen stubbornly displays dates in the wrong language. It also does not conjugate the words correctly, and uses that weird American month/day date ordering, which is wrong for Finnish.
What is causing this?
For those of you who aren't in the U.S., we have a game here called basketball. And every year in March, the governing body for university sports (the National Collegiate Athletic Association, or NCAA) holds a single-elimination championship among the different university teams. If you follow basketball, you can fill out a bracket to "predict" the outcome of each round of the championship.
Some people really get into creating their brackets, and they closely follow the teams over the season to see how they are doing. I used to work in an office that cared a lot about March Madness (although, in honesty, the office I'm in now doesn't really follow the games that much; forgive me that I didn't update my Linux Journal article to say so). I didn't want to miss out on the fun of following March Madness.
I don't follow basketball, but one year I decided to write a little random-number generator to predict the games for me. I used this program to fill out my brackets. And a few years ago, I started writing about it.
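If you're curious what such a predictor can look like, here is a toy sketch in Python. It is not the PHP program from the article, and the seed-based bias is just for illustration:
import random

def pick_winner(seed_a, seed_b):
    # Lower seed number means a stronger team, so bias the coin
    # toss: here a 1-seed beats a 16-seed 16 times out of 17.
    p_a = seed_b / (seed_a + seed_b)
    return seed_a if random.random() < p_a else seed_b

# Predict four hypothetical first-round matchups.
matchups = [(1, 16), (8, 9), (5, 12), (4, 13)]
print([pick_winner(a, b) for a, b in matchups])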
Read the article, and use the PHP program to fill out your own March Madness bracket! Let me know how your bracket fared in this year's games.
March 06, 2018
So this week I started submitting a branch some seventy commits long, where every commit was machine-generated (but hand-reviewed) with the amazing commit message of "component: refresh patches". Whilst this was easy to automate, the message isn't acceptable to merge, and I was facing the prospect of copy/pasting the same commit message over and over during an interactive rebase. That did not sound like fun. I ended up writing a tiny tool to do this, and thought I'd do my annual blog post about it, mainly so I can find it again when I need to do it again next year...
Wise readers will know that Git can rewrite all sorts of things in commits programmatically using git-filter-branch, and this has a --msg-filter argument which sounds like just what I need. But first a note: git-filter-branch can destroy your branches if you're not careful!
git filter-branch --msg-filter has a simple behaviour: give it a command to be executed by the shell, the old commit message is piped in via standard input, and whatever appears on standard output becomes the new commit message. Sounds simple, but in a way it's too simple, as even the example in the documentation has a glaring problem.
Anyway, this should work. I have a commit message in a predictable format (component: refresh patches) and a text editor containing a longer message suitable for submission. I could write a bundle of shell/sed/awk to munge from one to the other, but I decided to simply glue a few pieces of Python together instead:
#!/usr/bin/env python3
import sys, re

# Usage: rewriter.py <pattern-file> <template-file>
# git filter-branch pipes the old commit message in on stdin and
# takes whatever we print on stdout as the new message.
input_re = re.compile(open(sys.argv[1]).read())
template = open(sys.argv[2]).read()

original_message = sys.stdin.read()
match = input_re.match(original_message)
if match:
    # Substitute the named groups from the match into the template.
    print(template.format(**match.groupdict()), end="")
else:
    # Not a machine-generated message: pass it through unchanged.
    print(original_message, end="")
Invoke this with two filenames: a regular expression to match against the input, and a template for the new commit message. If the regular expression matches, any named groups are extracted and passed to the template, which is output using the new-style format() operation. If it doesn't match, the input is simply passed through, preserving the commit message.
This is my input regular expression:
^(?P<recipe>.+): refresh patches
And this is my output template:
{recipe}: refresh patches
The patch tool will apply patches by default with "fuzz", which is where if the
hunk context isn't present but what is there is close enough, it will force the
patch in.
Whilst this is useful when there's just whitespace changes, when applied to
source it is possible for a patch applied with fuzz to produce broken code which
still compiles (see #10450). This is obviously bad.
We'd like to eventually have do_patch() rejecting any fuzz on these grounds. For
that to be realistic the existing patches with fuzz need to be rebased and
reviewed.
Signed-off-by: Ross Burton <ross.burton@intel.com>
A quick run through filter-branch and I'm ready to send:
git filter-branch --msg-filter 'rewriter.py input output' origin/master...HEAD
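Before turning filter-branch loose on seventy commits, it's worth sanity-checking the rewriter on a single message. A quick sketch, assuming the script above is saved as an executable rewriter.py, and the pattern and template shown earlier as input and output:
import subprocess

# Pipe one machine-generated message through the rewriter and
# print whatever comes out the other side.
result = subprocess.run(["./rewriter.py", "input", "output"],
                        input="gdb: refresh patches\n",
                        capture_output=True, text=True)
print(result.stdout)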
GTK’s support for loadable modules dates back to the beginning of time, which is why GTK has a lot of code to deal with GTypeModules and with search paths, etc. Much later on, Alex revisited this topic for GVfs, and came up with the concept of extension points and GIO modules, which implement them. This is a much nicer framework, and GTK 4 is the perfect opportunity for us to switch to using it.
Changes in GTK+ 4
Therefore, I’ve recently spent some time on the module support in GTK. The major changes here are the following:
- We no longer support general-purpose loadable modules. One of the few remaining users of this facility is libcanberra, and we will look at implementing ‘event sound’ functionality directly in GTK+ instead of relying on a module for it. If you rely on loading GTK+ modules, please come and talk to us about other ways to achieve what you are doing.
- Print backends are now defined using an extension point named “gtk-print-backend”, which requires the type GtkPrintBackend. The existing print backends have been converted to GIO modules implementing this extension point. Since we have never supported out-of-tree print backends, this should not affect anybody else.
- Input methods are also defined using an extension point, named “gtk-im-module”, which requires the GtkIMContext type. We have dropped all the non-platform IM modules, and moved the platform IM modules into GTK+ proper, while also implementing the extension point.
Adapting existing input methods
Since we still support out-of-tree IM modules, I want to use the rest of this post to give a quick sketch of how an out-of-tree IM module for GTK+ 4 has to look.
There are a few steps to convert a traditional GTypeModule-based IM module to the new extension point. The example code below is taken from the Broadway input method.
Use G_DEFINE_DYNAMIC_TYPE
We are going to load a type from a module, and G_DEFINE_DYNAMIC_TYPE is the proper way to define such types:
G_DEFINE_DYNAMIC_TYPE (GtkIMContextBroadway,
                       gtk_im_context_broadway,
                       GTK_TYPE_IM_CONTEXT)
Note that this macro defines a gtk_im_context_broadway_register_type() function, which we will use in the next step.
Note that dynamic types are expected to have a class_finalize function in addition to the more common class_init, which can be trivial:
static void
gtk_im_context_broadway_class_finalize (GtkIMContextBroadwayClass *class)
{
}
Implement the GIO module API
In order to be usable as a GIOModule, a module must implement three functions: g_io_module_load(), g_io_module_unload() and g_io_module_query() (strictly speaking, the last one is optional, but we’ll implement it here anyway).
void
g_io_module_load (GIOModule *module)
{
  /* Keep the module loaded while the type it provides is in use */
  g_type_module_use (G_TYPE_MODULE (module));

  /* Register the dynamic type defined with G_DEFINE_DYNAMIC_TYPE */
  gtk_im_context_broadway_register_type (G_TYPE_MODULE (module));

  /* Implement the extension point under the name "broadway",
   * with priority 10 */
  g_io_extension_point_implement (GTK_IM_MODULE_EXTENSION_POINT_NAME,
                                  GTK_TYPE_IM_CONTEXT_BROADWAY,
                                  "broadway",
                                  10);
}
void
g_io_module_unload (GIOModule *module)
{
}
char **
g_io_module_query (void)
{
  /* Announce which extension points this module can implement */
  char *eps[] = {
    GTK_IM_MODULE_EXTENSION_POINT_NAME,
    NULL
  };

  return g_strdupv (eps);
}
Install your module properly
GTK+ will still look in $LIBDIR/gtk-4.0/immodules/ for input methods to load, but GIO only looks at shared objects whose name starts with “lib”, so make sure you follow that convention (libfoo.so will be found, foo.so will not).
Debugging
Now GTK+ 4 should load your input method, and if you run a GTK+ 4 application with GTK_DEBUG=modules, you should see your module show up in the debug output.
And that’s it!
March 05, 2018
It’s been one of those weeks when gnome-terminal and vte keep stumbling on some really weird edge cases, so it was a happy moment when I saw this on Fedora 27 Workstation.
[Screenshots of gnome-terminal on Fedora 27 Workstation]